Just as the cloud has become mainstream, always-on applications are driving the demand for new, smarter networks. Hear opposing viewpoints on new approaches for building and using Smart Networks and “Fog Computing” to address scale and latency, improve reliability, and overcome bandwidth and cost constraints.
As software continues to evolve from a request/response architecture to an always-on data stream architecture, the traditional three-tier architecture is dying. This session will explore:
- New architecture designs that address the new complexities that emerge from the Streaming Web.
- The software design patterns that will emerge as architectures change to support a world of always-on data streams.
- The industry trends and thought leadership that are helping define architecture for the streaming web.
In a digital era where users are bombarded left and right with competing web startups, the difference between a second and a millisecond can mean the difference between bankruptcy and success. What started at the beginning of 2016 as a neat little experiment using AWS Lambda to stream processed data to RDS clusters has led to a slew of discoveries about major databases, cost-effectiveness, and the delicate balance between computation and I/O speeds. This talk will cover a few battle-tested strategies for designing and implementing streaming to data layers.
Sun Microsystems coined the phrase “The Network is the Computer” in the 1990s, though it was more aspirational than reality. But today, the explosive demand for realtime software and always-on experiences is forcing the network to get smarter. The new trend in “Serverless Infrastructure” is gaining steam, but doesn’t always help deliver scalable realtime experiences. To truly meet the needs of realtime, always-on apps, serverless platforms must evolve from the cloud to the network. This presentation will explore:
– Categories of realtime software and always-on use cases that need a distributed infrastructure to scale
– Embedding application logic into a programmable network
– The software design patterns that will emerge to leverage smarter networks
– The security considerations of data streams and programmable networks
Streaming data is all around us and yet it is generally limited to updating automated systems. Organizations can unlock even more value out of their streaming data with custom dashboard applications designed for human consumption.
However, building dashboards for real-time streams of information comes with a unique set of challenges. This Lightning Talk will walk through those challenges and practical solutions to them.
Businesses in every industry, including finance, transportation, energy, the military, and emergency response, are seeing a massive increase in data volume and data collection. The demand for immediate decision-making means that realtime processing of this flood of data is also mandatory. This session will address the products, strategies, and ideas around “firehose” data collection, processing, and triggering, and the pros and cons of what’s deployed today.
Doing meaningful things with streaming data that enable us to make decisions when we want to (now)—trending, statistical analysis, dynamic thresholding, ad-hoc exploration, etc.—is hard, because of TIME. When we have many sources of data (emitters) streaming a ton of data, every little bit of irregularity, skew, lag, jitter, and burstiness can throw everything off. This talk will look at what happens, thanks to the inevitable timing issues of the real world, when we try to run computations against streaming data, and how to engineer for them.
We’ll touch on topics such as:
– Time series in theory and practice
– Batch vs streaming vs batch+streaming
– Strategies for managing timing issues
– How to make decisions about resolution
– Using constraint solvers
Telemetry streaming is nothing new, but the businesses growing out of this space recently certainly are. As computing and communications costs have plummeted, we now have vast amounts of real-time data available for creating and delivering innovative value-added services. This talk will address the economic impact of the technological innovation the cloud has brought to the streaming space, highlighting how PaaS offerings are driving down costs and freeing engineering teams to focus on value, speed of delivery, and time to market.
Weak or faulty security could be the fly in the ointment of Internet of Things adoption. In just one indication of that, a report by the U.S. Department of Commerce’s National Telecommunications and Information Administration (NTIA) in May found that public faith in the Internet has dimmed in the wake of data breaches, cybersecurity incidents, and reports critical of the privacy practices of online services. The biggest threat came in the form of “negative personal experience,” the report found.
What is the best approach to securing a fast-growing population of connected “stuff” before it is too late? Join moderator Paul Roberts as he speaks with some of the top experts working in areas such as hardware-based security, embedded systems, and automotive security to talk about stepping back from the security cliff that looms before the Internet of Things.
In a data-streaming web world, things happen fast. In less than the blink of an eye, MediaMath’s digital marketing systems host real-time auctions and serve ads across the world to the tune of 2.3 million queries per second. This session will dive into MediaMath’s data stream processing architecture and how the company is building the next generation of real-time, high performance systems in Go. Using Go, MediaMath is able to scale its systems on a minimal resource footprint. This session will cover why Go is a game-changer for building services, how to maximize channel performance and real-time data processing throughput, how to minimize the effects of garbage collection, and more.
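As one hedged illustration of the channel patterns such systems rely on (not MediaMath’s actual code), here is the common fan-out idiom in Go: a fixed pool of goroutines draining a shared channel, which bounds concurrency and avoids spawning a goroutine per message:

```go
package main

import (
	"fmt"
	"sync"
)

// fanOut distributes work from a shared input channel across a fixed pool of
// worker goroutines, closing the output channel once all workers finish.
func fanOut(in <-chan int, workers int, fn func(int) int) <-chan int {
	out := make(chan int, workers)
	var wg sync.WaitGroup
	wg.Add(workers)
	for i := 0; i < workers; i++ {
		go func() {
			defer wg.Done()
			for v := range in {
				out <- fn(v)
			}
		}()
	}
	// Close out only after every worker has drained the input.
	go func() { wg.Wait(); close(out) }()
	return out
}

func main() {
	in := make(chan int, 100)
	for i := 1; i <= 100; i++ {
		in <- i
	}
	close(in)
	sum := 0
	for v := range fanOut(in, 8, func(v int) int { return v * 2 }) {
		sum += v
	}
	fmt.Println(sum) // 2 * (1 + ... + 100) = 10100
}
```

Buffered channels and a fixed worker count also keep allocation steady, which helps limit the garbage-collection pressure the session discusses.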
With all the investment opportunities in realtime apps, infrastructure, and business models, prominent Silicon Valley investors share their many opinions about where this is all going, where it’s been, who they’re investing in, and who are the likely winners and losers.
Modern applications are moving away from the traditional request/response loop toward more event-driven patterns. The concept of “serverless” computing gets floated around in discussions of microservices and modern architectural design patterns. But what is serverless, and how – if at all – does it differ from microservices?
We’re entering a world where developers won’t have to think about provisioning resources to run their code. While serverless computing sounds like nirvana, there’s an upfront investment in changing development and operational patterns to allow for such a level of automated orchestration. Meanwhile, innovative teams have embraced the use of microservices as a direct response to the costly and cumbersome monolithic methodologies of the past, but new questions have arisen around use cases, planning, and best practices for implementation.
What makes a modern, traffic-serving application resilient? It must handle unexpected load, manage tail latencies, and degrade gracefully in the presence of component failure. For microservices, which introduce inter-service communication as a fundamental component of application behavior, this requirement for application resilience becomes a requirement for resilient communication.
In this talk, we introduce the notion of a service mesh: a reification of inter-service communication handling into an infrastructure “mesh”, separate from application code. We demonstrate why this approach becomes increasingly necessary for polyglot microservices and how it fits into the existing OSI network model. We describe an open source implementation, linkerd, used in this capacity today at companies ranging from banks to artificial intelligence startups to government labs, and detail its multi-layered approach to handling failure (and its pernicious cousin, latency), including latency-aware load balancing, failure accrual, deadline propagation, retry budgets, and negative acknowledgement. Finally, we present a unified model for service naming, inspired by the concepts of symbolic naming and dynamic linking in operating systems, which extends this model of failure handling across service cluster and datacenter boundaries.
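Of the failure-handling mechanisms listed, retry budgets are easy to sketch. The following is a simplified illustration of the idea only, not linkerd’s implementation (linkerd tracks requests over a decaying time window; this toy uses plain counters, and all names here are assumptions for the example):

```go
package main

import "fmt"

// RetryBudget caps retries at a fixed ratio of regular requests, so a
// struggling backend is not buried under a retry storm.
type RetryBudget struct {
	ratio      float64 // extra retries allowed per regular request
	minRetries int     // retries always allowed, even with no recent traffic
	requests   int
	retries    int
}

func NewRetryBudget(ratio float64, minRetries int) *RetryBudget {
	return &RetryBudget{ratio: ratio, minRetries: minRetries}
}

// RecordRequest notes a regular (non-retry) request, growing the budget.
func (b *RetryBudget) RecordRequest() { b.requests++ }

// CanRetry spends one unit of budget if any remains.
func (b *RetryBudget) CanRetry() bool {
	allowed := b.minRetries + int(b.ratio*float64(b.requests))
	if b.retries >= allowed {
		return false
	}
	b.retries++
	return true
}

func main() {
	b := NewRetryBudget(0.2, 0) // at most 20% of traffic may be retries
	for i := 0; i < 10; i++ {
		b.RecordRequest()
	}
	fmt.Println(b.CanRetry(), b.CanRetry(), b.CanRetry()) // true true false
}
```

The point of budgeting by ratio rather than by fixed retry count per request is that total load on a failing service stays bounded no matter how many callers are retrying.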
Advances in machine learning, augmented/virtual reality, robotics, and analytics are fostering significant changes in how enterprises operate. This session explores what is needed to build monetization strategies on the opportunities arising from the proliferation of sensors and connected devices in the enterprise.
A typical microservice architecture comprises 5 tiers:
– web proxy (Nginx)
– application web server (Apache, Django, etc)
– caching layer (Memcached, Redis, etc)
– database proxy (to abstract away sharding and replication from an application)
– database management system
You need a web proxy to handle slow clients and to serve static files. You need an application web server to run your application logic. You need a caching layer to cache frequently accessed data. You need a database proxy to take care of all the sharding and replication. And finally, you need a DBMS to store your data.
Five tiers — isn’t that too many? We think so. So we came up with an idea: shrink the whole stack down to just two. The web proxy and the database proxy are packed into a single proxy tier; the cache, the database, and the application server are packed into the other. Fewer tiers mean fewer problems, plus bonuses like data consistency and better performance. In my session, attendees will learn how to do this magic and see how it works under the hood.
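For contrast, the read path the five-tier design implies is the classic cache-aside pattern: check the cache, fall back to the database on a miss, repopulate the cache. A minimal sketch, with in-memory maps standing in for Memcached and the DBMS (all names here are illustrative):

```go
package main

import "fmt"

// Store models the cache-aside read path across separate cache and database
// tiers. In a real deployment each map would be a network hop.
type Store struct {
	cache, db         map[string]string
	cacheHits, dbHits int
}

func NewStore(db map[string]string) *Store {
	return &Store{cache: map[string]string{}, db: db}
}

// Get checks the cache first, falls back to the database on a miss, and
// repopulates the cache. A write landing between these steps is where
// staleness creeps in; a merged data tier removes that window entirely.
func (s *Store) Get(key string) (string, bool) {
	if v, ok := s.cache[key]; ok {
		s.cacheHits++
		return v, true
	}
	v, ok := s.db[key]
	if ok {
		s.dbHits++
		s.cache[key] = v
	}
	return v, ok
}

func main() {
	s := NewStore(map[string]string{"user:42": "alice"})
	s.Get("user:42") // miss: database, then cached
	s.Get("user:42") // hit: cache
	fmt.Println(s.cacheHits, s.dbHits) // 1 1
}
```

Collapsing cache, application logic, and storage into one tier turns these two hops into one local lookup, which is the performance and consistency argument the session makes.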