Posts Tagged ‘logging’
[NodeCongress2021] Logging, Metrics, and Tracing with Node.js – Thomas Hunter II
Logs, metrics, and traces are the three pillars of observability; without them, a Node.js service in production is a black box. Thomas Hunter II, author of “Distributed Systems with Node.js,” adapts chapters from the book to show how each pillar works and how they complement one another when scrutinizing services.
Thomas frames logging as console.log grown up for the cloud: structured JSON captures application state, and severity levels (from error down to silly) filter the verbosity. Winston orchestrates the pipeline, with transports serializing to stdout or files; Pino trades flexibility for speed with asynchronous flushes. Conventions call for correlation IDs and timestamps, and aggregators like the ELK stack ingest everything for faceted searches.
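To make that concrete, here is a minimal sketch of such a structured logger using Winston (the service name and correlation ID are illustrative, not taken from the talk):

const winston = require('winston');

// Structured JSON logger writing to stdout; entries below "info" are dropped.
const logger = winston.createLogger({
  level: 'info',
  format: winston.format.combine(
    winston.format.timestamp(), // ISO timestamp on every entry
    winston.format.json()       // one JSON object per line
  ),
  defaultMeta: { service: 'example-service' }, // hypothetical service name
  transports: [new winston.transports.Console()]
});

// A correlation ID ties log entries back to the request that produced them.
logger.info('order created', { correlationId: 'req-1234' });
logger.error('payment failed', { correlationId: 'req-1234' });

Swapping the Console transport for a File transport, or Winston for Pino, changes the plumbing but not the convention.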
Metrics quantify aggregates: counters tally invocations, histograms bin latencies into buckets. Prometheus scrapes them via prom-client, and Grafana visualizes the trends, where a sudden spike often foreshadows a failure. Thomas codes a registry live: a gauge tracks heap usage, a histogram times request handlers, and alerts fire on deviations.
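A sketch of that registry idea with prom-client (the metric names, buckets, and port are my own; the talk’s exact code may differ):

const http = require('http');
const client = require('prom-client');

const registry = new client.Registry();

// Gauge sampled at scrape time: current heap usage.
new client.Gauge({
  name: 'nodejs_heap_used_bytes',
  help: 'Heap memory in use',
  registers: [registry],
  collect() { this.set(process.memoryUsage().heapUsed); }
});

// Histogram binning handler latencies into buckets (seconds).
const duration = new client.Histogram({
  name: 'http_request_duration_seconds',
  help: 'Request handler latency',
  buckets: [0.01, 0.05, 0.1, 0.5, 1],
  registers: [registry]
});

// Expose /metrics for Prometheus to scrape; Grafana charts the result.
http.createServer(async (req, res) => {
  const end = duration.startTimer(); // time this handler
  if (req.url === '/metrics') {
    res.setHeader('Content-Type', registry.contentType);
    res.end(await registry.metrics());
  } else {
    res.end('ok');
  }
  end();
}).listen(9100);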
Tracing reconstructs the causal chain of a request: spans encapsulate individual operations, and propagators thread the trace context across services. OpenTelemetry standardizes the instrumentation; a self-hosted Jaeger displays the span hierarchies, with timeline views dissecting a 131 ms journey across hops such as Memcache and Yelp. Datadog APM auto-instruments the same services, and its flame graphs zoom into Postgres and AWS latencies.
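A minimal sketch of manual span creation with the OpenTelemetry API (the names, attribute, and stub call are mine; the exporter wiring to Jaeger or Datadog, e.g. via @opentelemetry/sdk-node, is assumed to be configured elsewhere, and without it these calls are harmless no-ops):

const { trace, SpanStatusCode } = require('@opentelemetry/api');

const tracer = trace.getTracer('example-service'); // hypothetical tracer name

// Stand-in for a real downstream call (hypothetical).
const fetchBusiness = async (id) => ({ id, name: 'example' });

async function lookupBusiness(id) {
  // startActiveSpan makes this span the parent of spans created inside it,
  // so downstream calls show up nested in Jaeger's timeline view.
  return tracer.startActiveSpan('lookup-business', async (span) => {
    try {
      span.setAttribute('business.id', id);
      return await fetchBusiness(id);
    } catch (err) {
      span.setStatus({ code: SpanStatusCode.ERROR, message: err.message });
      throw err;
    } finally {
      span.end(); // close the span so its duration appears in the trace
    }
  });
}

lookupBusiness(42).then(console.log);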
Instrumentation Patterns and Visualization Nuances
Thomas prototypes the plumbing himself: async_hooks provides namespaced contexts, and CLS-style tracers (such as cls-rtracer) carry request IDs across async boundaries. Zipkin’s dependency graphs and Datadog’s stacked span timelines, demonstrated live on Lob.com postcard fetches, make the call depths easy to read.
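The same context-bridging can be sketched with Node’s built-in AsyncLocalStorage, which grew out of async_hooks (the port and field names are my own):

const http = require('http');
const { AsyncLocalStorage } = require('async_hooks');
const { randomUUID } = require('crypto');

const als = new AsyncLocalStorage();

// Any log call inside a request can recover that request's ID,
// even across awaits and callbacks.
function log(msg) {
  const store = als.getStore();
  console.log(JSON.stringify({ requestId: store && store.requestId, msg }));
}

http.createServer((req, res) => {
  als.run({ requestId: randomUUID() }, async () => {
    log('request received');
    await new Promise((r) => setTimeout(r, 10)); // context survives the await
    log('request done');
    res.end('ok');
  });
}).listen(3000);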
Thomas’s blueprint: Winston for logs, Prometheus for metrics, Jaeger for traces. Together they equip Node.js developers to see clearly into otherwise opaque distributed systems.
Sniffing RMI Traffic… Rather log it!
Suspecting a thread leak, I wanted to track some traffic on my JOnAS server: above all, the calling IPs, together with the methods and parameters sent. Lacking a dedicated tool, I first tried to sniff the network traffic.
Two programs could have done the job, but I discarded them for several reasons. First, both have compatibility issues with Windows 7. Moreover, the RMI traffic is SSL-encrypted… therefore not easy to read.
In the end, I abandoned the sniffing idea and decided to intercept calls with loggers instead.
To enable RMI logging, add the following system properties to your JVM (e.g. to the JAVA_OPTS parameters):
- on the client side: -Dsun.rmi.client.logCalls=true
- on the server side: -Djava.rmi.server.logCalls=true
More details and options are available in Oracle’s documentation: RMI Implementation Logging.
By default, the logs look like this:
FINEST: RMI TCP Connection(5)-10.76.35.25: [10.76.35.25: sun.rmi.transport.DGCImpl[0:0:0, 2]: java.rmi.dgc.Lease dirty(java.rmi.server.ObjID[], long, java.rmi.dgc.Lease)]
2012-05-03 15:35:51,053 : Log$LoggerLog.log : RMI TCP Connection(5)-10.76.35.25: [10.76.35.25: sun.rmi.transport.DGCImpl[0:0:0, 2]: java.rmi.dgc.Lease dirty(java.rmi.server.ObjID[], long, java.rmi.dgc.Lease)]
(Don’t be fooled: this is java.util.logging, not Log4j!)
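Since it is plain java.util.logging, the output can also be redirected and filtered through a standard logging.properties file. A minimal sketch, with logger names taken from Oracle’s RMI Implementation Logging page and the file name my own choice:

# Pass to the JVM with -Djava.util.logging.config.file=logging.properties
handlers = java.util.logging.FileHandler
java.util.logging.FileHandler.pattern = %h/rmi-calls.log
java.util.logging.FileHandler.formatter = java.util.logging.SimpleFormatter

# RMI call loggers (see Oracle's "RMI Implementation Logging")
sun.rmi.server.call.level = FINE
sun.rmi.client.call.level = FINE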
By the way, I got a very interesting side result: monitoring and profiling tools such as JVisualVM or JProfiler “ping” the RMI server themselves, disturbing the measurements. Consider it the Heisenberg uncertainty principle applied to software 😉