Modern mobile apps benefit from using gRPC with Protobuf to reduce boilerplate code in their client-server
networking implementations. While not directly implemented by gRPC, all features necessary for efficient file
transfers can easily be built within the library.
gRPC with Protobuf is a framework that efficiently simplifies the client-server networking requirements of modern
applications. One use-case where the low-level simplicity of pure HTTP maintains an advantage over gRPC is handling file
transfers: the uploading and downloading of contiguous binary data. But gRPC can efficiently replicate all of this HTTP
functionality within its Protobuf message framework, making it unnecessary to host separate gRPC and HTTP servers for an
application.
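As an illustration of the idea (not necessarily the article's exact implementation), a large file can be re-chunked into fixed-size Protobuf messages and sent over a client-streaming upload call. The `FileChunk` case class and the 1 MB chunk size below are assumptions, mirroring a hypothetical `message FileChunk { bytes body = 1; }` definition:

```scala
import zio.Chunk
import zio.stream.ZStream

// Hypothetical message mirroring a Protobuf definition such as:
//   message FileChunk { bytes body = 1; }
final case class FileChunk(body: Chunk[Byte])

object FileTransfer {
  // Assumed chunk size; gRPC messages are commonly kept well below the
  // default 4 MB message size limit.
  val ChunkSize: Int = 1024 * 1024

  // Re-chunk an arbitrary byte stream into fixed-size messages suitable
  // for a gRPC client-streaming upload call.
  def toMessages(bytes: ZStream[Any, Throwable, Byte]): ZStream[Any, Throwable, FileChunk] =
    bytes.rechunk(ChunkSize).chunks.map(FileChunk(_))
}
```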
Expanding on the realtime Firebase implementation in the previous article, this article extends the functionality to
allow the server to fetch data on-demand from an external datasource. Additionally, periodically refreshing the active
data subscribed to by connected clients transforms this database into an efficient cache over evolving external
data which can only be obtained by polling.
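A minimal sketch of the refresh idea, assuming a hypothetical `ExternalSource` that must be polled and a `Hub` backing client subscriptions (the names and the 30-second interval are illustrative):

```scala
import zio._

// Hypothetical external datasource that can only be polled.
trait ExternalSource {
  def fetch(key: String): Task[String]
}

// Periodically re-fetch every key with at least one active subscriber and
// publish the fresh value to the hub that backs client subscriptions.
def refreshLoop(
    source: ExternalSource,
    hub: Hub[(String, String)],
    activeKeys: Ref[Set[String]]
): UIO[Fiber.Runtime[Throwable, Long]] =
  (for {
    keys <- activeKeys.get
    _    <- ZIO.foreachDiscard(keys) { key =>
              source.fetch(key).flatMap(value => hub.publish((key, value)))
            }
  } yield ())
    .repeat(Schedule.spaced(30.seconds))
    .fork
```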
Job queues are critical parts of enterprise workloads. Complex queues use distributed nodes, state machines, and
complex scheduling to trigger and track running jobs. But when simplicity allows, the best approach is to create small
idempotent jobs. The smaller the unit of work, the easier progress can be tracked, jobs can be restarted or rerun with
minimal waste, composability and reuse are increased, and logic is easier to reason about. These are the same arguments
made for Functional Programming and its effect systems, such as ZIO. Effect systems are congruent to the enterprise job
queue, with ZIO fibers performing work and ZIO resource management
forming the scheduling and supervision backbone. An efficient job queue can be written using ZIO constructs in a
surprisingly minimal amount of code.
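For example, a queue along these lines can be sketched as a bounded `Queue` drained by a pool of worker fibers; the worker count and retry policy below are illustrative assumptions, not the article's final design:

```scala
import zio._

// A small idempotent unit of work; since reruns are safe, failed jobs can
// simply be retried.
final case class Job(id: String, run: Task[Unit])

// Create a bounded queue and fork `workers` fibers that drain it forever.
def startJobQueue(workers: Int): UIO[Queue[Job]] =
  for {
    queue <- Queue.bounded[Job](1024)
    _     <- ZIO.foreachDiscard(1 to workers) { _ =>
               queue.take
                 .flatMap(job => job.run.retry(Schedule.recurs(3)).ignore)
                 .forever
                 .fork
             }
  } yield queue
```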
Realtime push-based databases such as Google Firebase conveniently ensure
clients are synchronized with the server. Data updates stream to clients immediately as they happen, and if a client
disconnects, pending updates are processed immediately after reconnecting.
gRPC server streaming
and ZIO Hub can implement this functionality, replicating an expensive paid
Firebase service while allowing greater extensibility.
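The core of the mechanism can be sketched as follows, where each connected client's server-streaming response is an independent subscription to a shared `Hub` (the `Update` type is an assumption standing in for the actual Protobuf message):

```scala
import zio._
import zio.stream.ZStream

// Stand-in for the Protobuf message pushed to clients.
final case class Update(key: String, value: String)

// Each call creates an independent subscription; the resulting stream can be
// returned as a gRPC server-streaming response.
def subscribe(hub: Hub[Update]): ZStream[Any, Nothing, Update] =
  ZStream.fromHub(hub)

// The server pushes a change to every connected subscriber.
def push(hub: Hub[Update], update: Update): UIO[Boolean] =
  hub.publish(update)
```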
Streaming is the primary mechanism for reducing the memory required to process large datasets. The approach is to
view only a small window of data at a time, allowing data to stream through in manageable amounts by matching the
window size to the available RAM. A practical example is file upload, where multi-GB file streams
can be handled by MBs of server RAM. However, enforcing streaming in software is prone to errors: misuse or
incompatible method implementations will break stream semantics and ultimately lead to OOM exceptions. This
article focuses on streams within the context of file uploads, using the Http4s library for examples.
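To make the idea concrete, here is a minimal Http4s route sketch (the upload path and target file are assumptions) that pipes the request body to disk without ever materializing it in memory:

```scala
import cats.effect.IO
import fs2.io.file.{Files, Path}
import org.http4s._
import org.http4s.dsl.io._

// The request body is an fs2 Stream[IO, Byte], so it can be written to disk
// in constant memory instead of being loaded whole with a body decoder.
val uploadRoute: HttpRoutes[IO] = HttpRoutes.of[IO] {
  case req @ POST -> Root / "upload" =>
    req.body                                          // Stream[IO, Byte]
      .through(Files[IO].writeAll(Path("/tmp/upload.bin")))
      .compile
      .drain *> Ok("uploaded")
}
```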
Scala Native is an ahead-of-time compiler and lightweight runtime with the goal of removing Scala’s dependency on the
JVM. It isn’t meant to achieve higher performance than the JVM, and it targets a specialized use-case not considered
typical Scala development today. Its competitors are Rust and Go, not GraalVM, Java or Kotlin. This article goes
through common steps and challenges encountered when compiling Scala Native for Linux with a GitHub Action.
The free tier of GitHub Packages has limited bandwidth for downloading private artifacts, which can make it unsuitable
for use in a CI/CD pipeline for projects on a budget. In an effort to increase GitHub Packages’ usability, this article
develops an alternative approach that minimizes the dependency on GitHub Packages as hot storage while preserving it as
a viable, durable cold storage solution.
In-memory caches mapping Key => Value are a simple and versatile tool to reduce the number of calls to an origin
datasource. Many use-cases require multiple cache calls at once, preferring a Seq[Key] => Seq[Value] interface. Can a
standard cache implementation be expanded to efficiently handle this scenario?
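As a sketch of one possible answer (the `origin` batch-fetch function and `Ref`-based store are assumptions for illustration), a batched lookup can serve hits from the cache, fetch only the misses in a single call, and write them back:

```scala
import zio._

// Serve cache hits, batch-fetch only the misses from the origin, and
// store the fetched values for subsequent lookups.
def getAll[K, V](
    cacheRef: Ref[Map[K, V]],
    origin: Seq[K] => Task[Map[K, V]]
)(keys: Seq[K]): Task[Map[K, V]] =
  for {
    cached  <- cacheRef.get.map(cache => keys.flatMap(k => cache.get(k).map(k -> _)).toMap)
    misses   = keys.filterNot(cached.contains)
    fetched <- if (misses.isEmpty) ZIO.succeed(Map.empty[K, V]) else origin(misses)
    _       <- cacheRef.update(_ ++ fetched)
  } yield cached ++ fetched
```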
Modern software design requires understanding the different layers of concurrency and parallelism that can exist.
Abstractions exposed by libraries and frameworks can inadvertently hide layers of parallelism when their focus is the
simplification of others; and libraries trying to treat all levels of parallelism equally can be limited to low-level
concepts for their common interface. In order to design optimally and avoid errors, all levels of concurrency and
parallelism need to be understood no matter which framework is chosen.