continuous integration


pages: 540 words: 103,101

Building Microservices by Sam Newman


airport security, Amazon Web Services, anti-pattern, business process, call centre, continuous integration, create, read, update, delete, defense in depth, don't repeat yourself, Edward Snowden, fault tolerance, index card, information retrieval, Infrastructure as a Service, inventory management, job automation, load shedding, loose coupling, platform as a service, premature optimization, pull request, recommendation engine, social graph, software as a service, source of truth, the built environment, web application, WebSocket, x509 certificate

data
    encryption of backup, Encrypt Backups
    retrieval via service calls, Data Retrieval via Service Calls
    securing at rest, Securing Data at Rest (see also security)
    shared, Example: Shared Data
    shared static, Example: Shared Static Data
data encryption, Go with the Well Known
data pumps
    backup, Backup Data Pump
    data retrieval via, Data Pumps
    event, Event Data Pump
    serial use of, Alternative Destinations
database decomposition, The Database-Understanding Root Causes
    breaking foreign key relationships, Example: Breaking Foreign Key Relationships
    incremental approach to, Cost of Change
    overview of, Summary
    refactoring databases, Staging the Break
    selecting separation points, Getting to Grips with the Problem
    selecting separation timing, Understanding Root Causes
    shared data, Example: Shared Data
    shared static data, Example: Shared Static Data
    shared tables, Example: Shared Tables
    transactional boundaries, Transactional Boundaries
database integration, The Shared Database
database scaling
    Command-Query Responsibility Segregation (CQRS), CQRS
    for reads, Scaling for Reads
    for writes, Scaling for Writes
    service availability vs. data durability, Availability of Service Versus Durability of Data
    shared infrastructure, Shared Database Infrastructure
decision-making guidelines, A Principled Approach-A Real-World Example
    customized approach to, Combining Principles and Practices
    practices for, Practices
    principles for, Principles
    real-world example, A Real-World Example
    strategic goals, Strategic Goals
decompositional techniques
    databases (see database decomposition)
    identifying/packaging contexts, Breaking Apart MusicCorp
    modules, Modules
    seam concept, It’s All About Seams
    selecting separation points, The Reasons to Split the Monolith
    selecting separation timing, Understanding Root Causes
    shared libraries, Shared Libraries
decoupling, Autonomous, Orchestration Versus Choreography
degrading functionality, Degrading Functionality
delivery bottlenecks, Delivery Bottlenecks
deployment
    artifacts, images as, Images as Artifacts
    artifacts, operating system, Operating System Artifacts
    artifacts, platform-specific, Platform-Specific Artifacts
    automation, Automation
    blue/green deployment, Separating Deployment from Release
    build pipeline, Build Pipelines and Continuous Delivery
    bundled service release, And the Inevitable Exceptions
    continuous integration basics, A Brief Introduction to Continuous Integration
    continuous integration checklist, Are You Really Doing It?
    continuous integration in microservices, Mapping Continuous Integration to Microservices
    custom images, Custom Images
    environment definition, Environment Definition
    environments to consider, Environments
    immutable servers, Immutable Servers
    interfaces, A Deployment Interface
    microservices vs. monolithic systems, Ease of Deployment, Deployment
    overview of, Summary
    separating from release, Separating Deployment from Release
    service configuration, Service Configuration
    virtualization approach, From Physical to Virtual
    virtualization, hypervisors, Traditional Virtualization
    virtualization, traditional, Traditional Virtualization
    virtualization, type 2, Traditional Virtualization
deputy problem, The Deputy Problem
design principles, Principles, Bringing It All Together-Highly Observable (see also architectural principles)
design/delivery practices
    development of, Practices
    real-world example, A Real-World Example
directory service, Common Single Sign-On Implementations
DiRT (Disaster Recovery Test), The Antifragile Organization
distributed systems
    fallacies of, Local Calls Are Not Like Remote Calls, Failure Is Everywhere
    key promises of, Composability
distributed transactions, Distributed Transactions
DNS service, DNS
Docker, Docker
documentation
    HAL (Hypertext Application Language), HAL and the HAL Browser
    importance of, Documenting Services
    self-describing systems, HAL and the HAL Browser
    Swagger, Swagger
domain-driven design, Microservices
Dropwizard, Tailored Service Template
DRY (Don’t Repeat Yourself), DRY and the Perils of Code Reuse in a Microservice World
dummies, Mocking or Stubbing
durability, How Much Is Too Much?

If you don’t approach deployment right, it’s one of those areas where the complexity can make your life a misery. In this chapter, we’re going to look at some techniques and technology that can help us when deploying microservices into fine-grained architectures. We’re going to start off, though, by taking a look at continuous integration and continuous delivery. These related but different concepts will help shape the other decisions we’ll make when thinking about what to build, how to build it, and how to deploy it. A Brief Introduction to Continuous Integration Continuous integration (CI) has been around for a number of years at this point. It’s worth spending a bit of time going over the basics, however, as especially when we think about the mapping between microservices, builds, and version control repositories, there are some different options to consider.

synchronous vs. asynchronous, Synchronous Versus Asynchronous
compensating transactions, Abort the Entire Operation
composability, Composability
configuration drift, Immutable Servers
configuration, service, Service Configuration
confused deputy problem, The Deputy Problem
consistency
    in CAP theorem, CAP Theorem
    sacrificing, Sacrificing Consistency
constraints, Constraints
Consul, Consul
consumer-driven contracts (CDCs), Consumer-Driven Tests to the Rescue-It’s About Conversations
content delivery network (CDN), Client-Side, Proxy, and Server-Side Caching
content management systems (CMS), Example: CMS as a service
continuous delivery (CD), Microservices, Build Pipelines and Continuous Delivery
continuous integration (CI)
    basics, A Brief Introduction to Continuous Integration
    checklist for, Are You Really Doing It?
    mapping to microservices, Mapping Continuous Integration to Microservices
Conway’s law
    evidence of, Evidence
    in reverse, Conway’s Law in Reverse
    statement of, Conway’s Law and System Design
    summary of, Summary
coordination process, Distributed Transactions
core team, Role of the Custodians
CoreOS, Docker
correlation IDs, Correlation IDs
CP system, AP or CP?

pages: 224 words: 48,804

The Productive Programmer by Neal Ford


anti-pattern, business process, continuous integration, database schema, domain-specific language, don't repeat yourself, Firefox, general-purpose programming language, knowledge worker, Larry Wall, Ruby on Rails, side project, type inference, web application, William of Occam

You know you have configuration problems when you ship a laptop to a consulting company to figure out how to build your own software!

Use a Canonical Build Machine

The other process required in every development shop is continuous integration. Continuous integration is a process where you build the entire project, run tests, generate documentation, and do all the other activities that make software, on a regular basis (the more often the better; generally, you should build every time you check in code to version control). Continuous integration is supported by software of the same name. Ideally, the continuous integration server runs on a separate machine, monitoring your check-ins to version control. Every time you perform a code check-in, the continuous integration server springs to life, running a build command that you specify (in a build file like Ant, Nant, Rake, or Make) that usually includes performing a full build, setting up the database for testing, running the entire suite of unit tests, running code analysis, and deploying the application to perform a “smoke test.”
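The loop described above (monitor version control, and on each check-in run the build file and the tests) can be sketched in a few lines. This is only an illustration of the idea, not any particular CI product; the `git` and `make test` commands are placeholder assumptions for whatever version control and build file a project actually uses:

```python
import subprocess
import time

def git_head(repo="."):
    """Ask version control (git, as one example) for the newest commit ID."""
    out = subprocess.run(["git", "rev-parse", "HEAD"], cwd=repo,
                         capture_output=True, text=True, check=True)
    return out.stdout.strip()

def run_build(repo="."):
    """Run the project's build command; 'make test' is a placeholder."""
    return subprocess.run(["make", "test"], cwd=repo).returncode == 0

def check_once(last_seen, get_commit, build):
    """One polling step: if a new check-in appeared, trigger a build.
    Returns (newest commit, build result, or None if nothing changed)."""
    commit = get_commit()
    if commit != last_seen:
        return commit, build()
    return last_seen, None

def watch(repo=".", interval=60):
    """Loop forever, building every time the repository head moves."""
    seen = None
    while True:
        seen, result = check_once(seen, lambda: git_head(repo),
                                  lambda: run_build(repo))
        if result is not None:
            print("build", "passed" if result else "FAILED", "at", seen)
        time.sleep(interval)
```

Real CI servers add queuing, notification, and reporting on top of this core, but the essential behavior is exactly this poll-and-build cycle.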

In engineering terms, these little scraps are “jigs” or “shims.” As developers, we create too few of these little throwaway tools, frequently because we don’t think of tools in this way. Software development has lots of obvious automation targets: builds, continuous integration, and documentation. This chapter covers some less obvious but no less valuable ways to automate development chores, from the single keystroke all the way to little applications.

Don’t Reinvent Wheels

General infrastructure setup is something you have to do for every project: setting up version control, continuous integration, user IDs, etc. Buildix is an open source project (developed by ThoughtWorks) that greatly simplifies this process for Java-based projects. Many Linux distributions come with a “Live CD” option, allowing you to try out a version of Linux right off the CD.

Every time you perform a code check-in, the continuous integration server springs to life, running a build command that you specify (in a build file like Ant, Nant, Rake, or Make) that usually includes performing a full build, setting up the database for testing, running the entire suite of unit tests, running code analysis, and deploying the application to perform a “smoke test.” The continuous integration server redirects build responsibilities from individual machines and creates a canonical build location. The canonical build machine should not include the development tool you use to create the project, only the libraries and other frameworks needed to build the application. This prevents subtle dependencies on tools from creeping into your build process. Unlike Bob and his hapless coworkers, you want to make sure that everyone builds the same thing. Having a canonical build server makes it the only “official” build for the project. Changes to development tools don’t affect it. Even single developers benefit from having a continuous integration server as the lone build machine. It prevents you from inadvertently allowing tool dependencies to creep into your project.

pages: 290 words: 119,172

Beginning Backbone.js by James Sugrue


Airbnb, continuous integration, don't repeat yourself, Firefox, Google Chrome, loose coupling, MVC pattern, node package manager, single page application, web application, Y Combinator

// tests for DOM manipulation
module('Fixture Test');

test('Check for paragraph', function(){
  var results = fixtureEl.find('#myparagraph').length;
  console.log(fixtureEl);
  console.log(results);
  ok(results === 1, 'Found the correct paragraph');
});

As you can see, it’s pretty straightforward to have JavaScript tests that validate the integrity of your DOM.

Recording QUnit Results

If you are running your tests on a continuous integration server, as we will look at in the next chapter, you will need the capability of recording the results of your tests without needing to refresh the HTML page that runs the tests. While you can roll your own solution based around the QUnit callbacks, there are plug-ins available that provide the ability to store the results in the most common format across continuous integration systems: JUnit style. As the leading unit test framework in the Java world, JUnit’s report style became the de facto standard. To incorporate this reporting into your own test suite, you’ll just need to add another include in your suite.
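The JUnit-style format mentioned above is plain XML, which is why it travels so well between continuous integration systems. As a rough sketch of the shape such a reporter produces (a hand-rolled illustration in Python, not the actual QUnit plug-in), consider:

```python
from xml.sax.saxutils import quoteattr

def junit_xml(suite_name, results):
    """Render test results as minimal JUnit-style XML.
    `results` is a list of (test_name, error_message) pairs,
    where error_message is None for a passing test."""
    failures = sum(1 for _, err in results if err)
    out = ['<?xml version="1.0" encoding="UTF-8"?>',
           '<testsuite name=%s tests="%d" failures="%d">'
           % (quoteattr(suite_name), len(results), failures)]
    for name, err in results:
        if err is None:
            out.append('  <testcase name=%s/>' % quoteattr(name))
        else:
            out.append('  <testcase name=%s>' % quoteattr(name))
            out.append('    <failure message=%s/>' % quoteattr(err))
            out.append('  </testcase>')
    out.append('</testsuite>')
    return "\n".join(out)
```

Because the report is just a `<testsuite>` of `<testcase>` elements with optional `<failure>` children, any CI server that understands JUnit output can display QUnit results with no extra work.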

In this chapter, we’ll introduce you to Grunt and explain how you can use it to improve quality, generate production-ready code, and much more. Continuing from the previous chapter, you will also see how you can automate the execution of your unit tests with Grunt.

An Introduction to Grunt

The aim of continuous integration is to improve software quality and reduce the time it takes to deliver production-ready applications by ensuring tests are executed as code changes. Of course, many other tasks happen in a continuous process, but running unit tests is considered the most important. Grunt (see Figure 9-1), created by Ben Alman in 2012, has become the default continuous integration tool for JavaScript web applications. Rather than writing your own custom build scripts to execute from the shell or adopting another language’s build tools, Grunt scripts are written in JavaScript.

grunt.registerTask('default', ['jshint', 'uglify', 'cssmin', 'qunit_junit', 'qunit', 'jasmine']);

Creating Different Task Sets

The additional tasks we have created are not always suitable for all machines. A developer workspace will need to run a different set of tasks than a continuous integration server. This can be controlled quite simply with the registerTask function in Grunt. For example, let’s say the developers would run grunt dev on the command line, while the continuous integration server would run grunt build. We’ll create two different task sets for each of these.

grunt.registerTask('dev', ['jshint', 'qunit_junit', 'qunit', 'jasmine']);
grunt.registerTask('build', ['jshint', 'uglify', 'cssmin', 'qunit_junit', 'qunit', 'jasmine']);
grunt.registerTask('default', ['jshint', 'uglify', 'cssmin', 'qunit_junit', 'qunit', 'jasmine']);

On developer machines, the static analysis and test tasks are run, while the build machine will also run the minification tasks.

pages: 834 words: 180,700

The Architecture of Open Source Applications by Amy Brown, Greg Wilson


8-hour work day, anti-pattern, bioinformatics, cloud computing, collaborative editing, combinatorial explosion, computer vision, continuous integration, create, read, update, delete, David Heinemeier Hansson, Debian, domain-specific language, Donald Knuth, fault tolerance, finite state, Firefox, friendly fire, Guido van Rossum, linked data, load shedding, locality of reference, loose coupling, Mars Rover, MVC pattern, peer-to-peer, Perl 6, premature optimization, recommendation engine, revision control, Ruby on Rails, side project, Skype, slashdot, social web, speech recognition, the scientific method, The Wisdom of Crowds, web application, WebSocket

ISBN 978-1-257-63801-7

Chapter 6. Continuous Integration
C. Titus Brown and Rosangela Canino-Koning

Continuous Integration (CI) systems are systems that build and test software automatically and regularly. Though their primary benefit lies in avoiding long periods between build and test runs, CI systems can also simplify and automate the execution of many otherwise tedious tasks. These include cross-platform testing, the regular running of slow, data-intensive, or difficult-to-configure tests, verification of proper performance on legacy platforms, detection of infrequently failing tests, and the regular production of up-to-date release products. And, because build and test automation is necessary for implementing continuous integration, CI is often a first step towards a continuous deployment framework wherein software updates can be deployed quickly to live systems after testing.

Testing CMake Any new CMake developer is first introduced to the testing process used in CMake development. The process makes use of the CMake family of tools (CMake, CTest, CPack, and CDash). As the code is developed and checked into the version control system, continuous integration testing machines automatically build and test the new CMake code using CTest. The results are sent to a CDash server which notifies developers via email if there are any build errors, compiler warnings, or test failures. The process is a classic continuous integration testing system. As new code is checked into the CMake repository, it is automatically tested on the platforms supported by CMake. Given the large number of compilers and platforms that CMake supports, this type of testing system is essential to the development of a stable build system.

Continuous integration is a timely subject, not least because of its prominence in the Agile software methodology. There has been an explosion of open source CI tools in recent years, in and for a variety of languages, implementing a huge range of features in the context of a diverse set of architectural models. The purpose of this chapter is to describe common sets of features implemented in continuous integration systems, discuss the architectural options available, and examine which features may or may not be easy to implement given the choice of architecture. Below, we will briefly describe a set of systems that exemplify the extremes of architectural choices available when designing a CI system.

pages: 203 words: 14,242

Ship It!: A Practical Guide to Successful Software Projects by Jared R. Richardson, William A. Gwaltney


continuous integration, David Heinemeier Hansson, Donald Knuth, index card, MVC pattern, place-making, Ruby on Rails, web application

You can’t automate a process that doesn’t exist. Once you can build your product automatically, how often should you do so? Ideally, you will rebuild every time the code changes. That way you’ll know immediately if any change broke your build. Add a lightweight set of smoke tests to this system, and you also get a basic level of functional insurance as well. This type of system is called Continuous Integration. A Continuous Integration (or CI) tool sits on a clean, nondeveloper box (the build machine) and rebuilds your project every time someone commits code. Rebuilding each time code is committed keeps your code base clean by catching compile errors as soon as they occur. It also runs your test suites to catch functional errors. We use an open-source CI tool called CruiseControl because it’s well-supported, it scales well, and it’s free!

We believe that the benefit comes from having a constant “virtual build monitor” that catches every bad code commit almost immediately. It always flags code that doesn’t compile. It also catches the new files that you forgot to add or the existing files you modified. Automated build systems are great at catching the details that we humans are so good at missing.

TIP 6: Build continuously

[Figure 2.4: CruiseControl status report web page]

You can also move beyond “Does it compile?” and ask “Does it run?” With a well-selected test suite, basic functionality is retested and bugs are not allowed to be reintroduced (preventing bug regression). With this system, your development staff spends their time adding features instead of fixing compile failures or refixing the same bugs again and again.

We’re Never Done

A. Source Code Management
B. Build Scripting Tools
C. Continuous Integration Systems
D. Issue Tracking Software
E. Development Methodologies
F. Testing Frameworks
G. Suggested Reading List
    G.1 Bibliography
Index

Dogma does not mean the absence of thought, but the end of thought.
Gilbert Keith Chesterton (1874–1936)

Foreword

You may have noticed that this isn’t the only book about developing software sitting on the shelf.

pages: 372 words: 67,140

Jenkins Continuous Integration Cookbook by Alan Berg


anti-pattern, continuous integration, Debian, don't repeat yourself, Firefox, job automation, performance metric, revision control, web application, x509 certificate

Jenkins Continuous Integration Cookbook * * * Jenkins Continuous Integration Cookbook Copyright © 2012 Packt Publishing All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews. Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the author, nor Packt Publishing, and its dealers and distributors will be held liable for any damages caused or alleged to be caused directly or indirectly by this book. Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals.

Fully searchable across every book published by Packt Copy and paste, print, and bookmark content On demand and accessible via web browser Free Access for Packt account holders If you have an account with Packt at, you can use this to access PacktLib today and view nine entirely free books. Simply use your login credentials for immediate access. Preface Jenkins is a Java-based Continuous Integration (CI) server that supports the discovery of defects early in the software cycle. Thanks to over 400 plugins, Jenkins communicates with many types of systems, building and triggering a wide variety of tests. CI involves making small changes to software, and then building and applying quality assurance processes. Defects do not only occur in the code but also appear in the naming conventions, documentation, how the software is designed, build scripts, the process of deploying the software to servers, and so on. Continuous integration forces the defects to emerge early, rather than waiting for software to be fully produced. If defects are caught in the later stages of the software development lifecycle, the process will be more expensive.

Jenkins is an agile project; you can see numerous releases in the year, pushing improvements rapidly. There is also a highly stable long-term support release for the more conservative. Hence, there is a rapid pace of improvement. Jenkins pushes up code quality by automatically testing within a short period after code commit, and then shouting loudly if build failure occurs. Jenkins is not just a continuous integration server but also a vibrant and highly active community. Enlightened self-interest dictates participation. There are a number of ways to do this: participate on the mailing lists and Twitter. First, read the postings, and as you get to understand what is needed, participate in the discussions. Consistently reading the lists will generate many opportunities to collaborate.

pages: 351 words: 123,876

Beautiful Testing: Leading Professionals Reveal How They Improve Software (Theory in Practice) by Adam Goucher, Tim Riley


Albert Einstein, barriers to entry, Black Swan, call centre, continuous integration, Debian, Donald Knuth, Firefox, Grace Hopper, index card, Isaac Newton, natural language processing, p-value, performance metric, revision control, six sigma, software as a service, software patent, the scientific method, Therac-25, Valgrind, web application

Tools or scripts that analyze source code at or extremely close to compile time are essential for detecting many common programming errors such as these. Finally, it’s critical that test code is under source control and builds in a central location. Source control eases maintenance by enabling testers to investigate an entire history of changes to test code, overlaying infrastructure or “collateral” (noncode files used by the tests, such as media files or input data). Frequent builds or continuous integration are just as applicable to test code as they are to product code. Central builds allow the latest tests to run on every build of the production code, and ensure that tests are available from a central location. If testers build their own tests and are responsible for copying the tests to a server, mistakes are eventually bound to occur. A central build also enables the embedding of consistent version information into every test.

The implementation of the details, as well as the relevant cultural changes, won’t happen overnight. As with any large-scale change, changing and growing a little at a time is the best approach to turn an inefficient system into a beautiful one. Start by writing better tests; poor test automation is one of the biggest obstacles to automation success. Also make sure that test scripts and code are part of a source control system, and that tests go through some sort of continuous integration process to ensure some level of test quality and consistency. Then, set up a test lab for running automated tests. Start working out a way to distribute tests to these machines and execute the tests. Then, begin gathering and aggregating simple high-level test results. Next, find a way to log test failures in the bug database. After that, investigate how you can automatically investigate failures and separate new failures from known failures.
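That last step, separating new failures from known ones, boils down to comparing each failure against what is already filed in the bug database. A minimal sketch in Python (the failure-signature scheme and data shapes here are my own assumptions, not the authors’):

```python
def triage(failures, known_bugs):
    """Split a test run's failures into already-filed and genuinely new ones.
    `failures`: list of (test_name, failure_signature) from the latest run,
    where a signature might be a normalized error message or stack hash.
    `known_bugs`: dict mapping failure_signature -> bug ID in the database."""
    new, known = [], []
    for test, signature in failures:
        if signature in known_bugs:
            # Already investigated once; just annotate with the bug ID.
            known.append((test, known_bugs[signature]))
        else:
            # Never seen before; this is what a human should look at first.
            new.append((test, signature))
    return new, known
```

With a split like this, the automation can file new bugs for the `new` list and merely update occurrence counts for the `known` list, so testers spend their attention only on failures that have never been investigated.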

Knowing what should be tested is beautiful, and knowing what is being tested is beautiful. In this chapter we discuss some techniques that are associated with efficient testing methodologies. Testing activity encompasses all aspects of a software development life cycle, be it one of the traditional waterfall, incremental, or spiral models, or the modern agile and test-driven development (TDD) models. One of the recent project management trends is to set up continuous integration frameworks where changes are frequently integrated into the source code, the product is built, and a large part of existing tests are run to validate the correctness of the code change. Although this concept is akin to the old-school exhaustive testing methodology that was deemed very difficult, recent advances in hardware configurations and reduced cost make this approach financially feasible.

pages: 226 words: 17,533

Programming Scala: tackle multicore complexity on the JVM by Venkat Subramaniam


augmented reality, continuous integration, domain-specific language, don't repeat yourself, loose coupling, semantic web, type inference, web application

This can be very useful for logging results and processing them during continuous integration.3

12.5 Asserts

ScalaTest provides a simple assert( ) method.4 It checks whether the expression in the parameter evaluates to true.5 If the expression evaluates to true, the assert( ) method returns silently. Otherwise, it throws an AssertionError. Here is an example of assertion failure:

Download UnitTestingWithScala/AssertionFailureExample.scala

class AssertionFailureExample extends org.scalatest.Suite {
  def testAssertFailure() {
    assert(2 == List().size)
  }
}

(new AssertionFailureExample).execute()

3. See “Continuous Integration” in Appendix A, on page 211, and Mike Clark’s Pragmatic Project Automation [Cla04] and Continuous Integration [DMG07] by Duvall et al.
4. You can also import and use JUnit, TestNG, or Hamcrest matchers methods like assertEquals( ) and assertThat( ).

Canary Test: In this blog, Neal Ford discusses canary tests and the advantage of starting out small and simple.
Command Query Separation: In this blog, Martin Fowler discusses the term command query separation.
Continuous Integration: In this article, Martin Fowler discusses the practice of continuous integration.
Discussion Forum for This Book: This is the discussion forum for this book where readers share their opinions, ask questions, respond to questions, and interact with each other.
Essence vs. Ceremony: In this blog titled “Ending Legacy Code in Our Lifetime,” Stuart Halloway discusses essence vs. ceremony.

Effective Java Programming Language Guide. Addison Wesley Longman, Reading, MA, 2001. [Blo08] Joshua Bloch. Effective Java. Addison Wesley Longman, Reading, MA, second edition, 2008. [Cla04] Mike Clark. Pragmatic Project Automation: How to Build, Deploy, and Monitor Java Applications. The Pragmatic Programmers, LLC, Raleigh, NC, and Dallas, TX, 2004. [DMG07] Paul Duvall, Steve Matyas, and Andrew Glover. Continuous Integration: Improving Software Quality and Reducing Risk. Addison-Wesley, Reading, MA, 2007. [For08] Neal Ford. The Productive Programmer. O’Reilly & Associates, Inc, 2008. [Fri97] Jeffrey E. F. Friedl. Mastering Regular Expressions. O’Reilly & Associates, Inc, Sebastopol, CA, 1997. [GHJV95] Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides. Design Patterns: Elements of Reusable Object-Oriented Software.

Scala in Action by Nilanjan Raychaudhuri


continuous integration, create, read, update, delete, database schema, domain-specific language, don't repeat yourself, failed state, fault tolerance, general-purpose programming language, index card, MVC pattern, type inference, web application

As you start writing tests, you’re also building a test suite. If you don’t run them often, then you’re not extracting the benefits from them. The next section discusses setting up a continuous integration[9] environment to get continuous benefits from them.

9. Martin Fowler, “Continuous Integration,” May 1, 2006.

10.3.1. Setting up your environment for TDD

Once you and your team get comfortable with TDD, you need a tool that checks out the latest code from your source code repository and runs all the tests after every check-in of the source control system. This ensures that you always have a working software application. A continuous integration (CI) tool does that automatically for you. Almost all the existing CI tools will work for Scala projects. Table 10.1 shows some Scala tools that you could use in your Scala project.

The goal in chapter 10 is to make you comfortable writing automated tests in Scala so that you can build production-quality software. The path to writing well-crafted code is the path where you write tests for your code. Another goal is to dispel the common perception that writing tests is hard. Your first steps will be getting started with practices like test-driven development (TDD) and continuous integration for your Scala project. Chapter 6. Building web applications in functional style This chapter covers Building Scala projects with SBT (Simple Build Tool) Introduction to the Scalaz HTTP module Creating a web application in Scala called weKanban This second part of the book switches focus to more real-world applications of the Scala programming language, and what could be more practical than building a web application in Scala?

My goal for this chapter is to make you comfortable writing automated tests in Scala so that you can build production-quality software. The path to writing well-crafted code[1] is the path where you write tests for your code. The common perception about writing tests is that it’s hard, but this chapter will change that mindset. I’m going to show you how you can get started with practices like test-driven development and continuous integration for your Scala project. The idea of test-driven development (TDD) is to write the test before you write code. I know this seems backward, but I promise you that by the end of this chapter it will make sense. You’ll learn that writing tests is more like doing a design exercise than testing, and it makes sense to design your software. Your design tool will be code—more specifically, test code. 1 “Manifesto for Software Craftsmanship,”

pages: 313 words: 75,583

Ansible for DevOps: Server and Configuration Management for Humans by Jeff Geerling


Amazon Web Services, Any sufficiently advanced technology is indistinguishable from magic, cloud computing, continuous integration, database schema, Debian, defense in depth, DevOps, fault tolerance, Firefox, full text search, Google Chrome, inventory management, loose coupling, Minecraft, Ruby on Rails, web application

I hope you enjoy reading this book as much as I did writing it! — Jeff Geerling, 2015 Who is this book for? Many of the developers and sysadmins I work with are at least moderately comfortable administering a Linux server via SSH, and manage between 1-100 servers. Some of these people have a little experience with configuration management tools (usually with Puppet or Chef), and maybe a little experience with deployments and continuous integration using tools like Jenkins, Capistrano, or Fabric. I am writing this book for these friends who, I think, are representative of most people who have heard of and/or are beginning to use Ansible. If you are interested in both development and operations, and have at least a passing familiarity with managing a server via the command line, this book should provide you with an intermediate- to expert-level understanding of Ansible and how you can use it to manage your infrastructure.

To update all the CentOS servers, all that was needed was:

    $ ansible centos -m yum -a "name=bash state=latest"

You could even go further and create a small playbook that would patch the vulnerability, then run tests to make sure the vulnerability was no longer present, as illustrated in this playbook. This would also allow you to run the playbook in check mode or run it through a continuous integration system to verify the fix works in a non-prod environment. This infrastructure inventory is also nice in that you could create a top-level playbook that runs certain roles or tasks against all your infrastructure, others against all servers of a certain Linux flavor, and another against all servers in your entire infrastructure. Consider, for example, this example master playbook to completely configure all the servers:

    ---
    # Set up basic, standardized components across all servers.
    - hosts: all
      sudo: true
      roles:
        - security
        - logging
        - firewall

    # Configure web application servers.
    - hosts: servercheck-web
      roles:
        - nginx
        - php
        - servercheck-web

    # Configure database servers.
    - hosts: servercheck-db
      roles:
        - pgsql
        - db-tuning

    # Configure logging server.
    - hosts: servercheck-log
      roles:
        - java
        - elasticsearch
        - logstash
        - kibana

    # Configure backup server.
    - hosts: servercheck-backup
      roles:
        - backup

    # Configure Node.js application servers.
    - hosts: servercheck-nodejs
      roles:
        - servercheck-node

There are a number of different ways you can structure your infrastructure-management playbooks and roles, and we’ll explore some in later chapters, but for a simple infrastructure, something like this is adequate and maintainable.

Consider Etsy, a company whose engineers are deploying code to production up to 40 times per day, with no manual intervention from the operations team. The operations team is free to work on more creative endeavors, and the developers see their code go live in near-real-time! Etsy’s production deployment schedule is enabled by a strong DevOps-oriented culture (with robust code repository management, continuous integration, well-tested code, feature flags, etc.). While it may not be immediately possible to start deploying your application to production 20 times a day, you can move a long way towards effortless deployments by automating deployments with Ansible. Deployment strategies There are dozens of ways to deploy code to servers. For the simplest of applications, all that’s involved might be switching to a new tag in a code repository on the server and restarting a service.

pages: 292 words: 62,575

97 Things Every Programmer Should Know by Kevlin Henney


A Pattern Language, active measures, business intelligence, commoditize, continuous integration, crowdsourcing, database schema, deliberate practice, domain-specific language, don't repeat yourself, Donald Knuth, fixed income, general-purpose programming language, Grace Hopper, index card, inventory management, job satisfaction, loose coupling, Silicon Valley, sorting algorithm, The Wisdom of Crowds

Let Your Project Speak for Itself Daniel Lindner YOUR PROJECT PROBABLY HAS A VERSION CONTROL SYSTEM IN PLACE. Perhaps it is connected to a continuous integration server that verifies correctness by automated tests. That's great. You can include tools for static code analysis in your continuous integration server to gather code metrics. These metrics provide feedback about specific aspects of your code, as well as their evolution over time. When you install code metrics, there will always be a red line that you do not want to cross. Let's assume you started with 20% test coverage and never want to fall below 15%. Continuous integration helps you keep track of all these numbers, but you still have to check regularly. Imagine you could delegate this task to the project itself and rely on it to report when things get worse.
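The "red line you do not want to cross" can be enforced mechanically rather than checked by hand. A minimal sketch in Python: the 15% floor comes from the text, while the function name and the simulated coverage values are invented; a real CI job would read the current figure from the static-analysis report.

```python
COVERAGE_FLOOR = 15.0  # percent; the "red line" from the text

def check_coverage(current, floor=COVERAGE_FLOOR):
    """Return True if the build may proceed, False if the red line is crossed."""
    return current >= floor

# Simulate a healthy run and a regression; a CI server would fail the
# build (and notify the team) on the second one.
for pct in (20.0, 12.5):
    status = "ok" if check_coverage(pct) else "FAIL: below red line"
    print(f"coverage {pct}% -> {status}")
```

Wired into the CI server as a build step, a nonzero exit on the failing branch is exactly the "project reporting when things get worse" the author describes.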

Chapter 33 Clint Shank Clint Shank is a software developer, consultant, and mentor at Sphere of Influence, Inc., a company that leads with design-driven innovation to make curve-jumping, mouth-watering software that's awesome inside and out. His typical consulting focus is the design and construction of enterprise applications. He is particularly interested in agile practices such as continuous integration and test-driven development; the programming languages Java, Groovy, Ruby, and Scala; frameworks like Spring and Hibernate; and general design and application architecture. He keeps a blog at and was a contributor to the book 97 Things Every Software Architect Should Know. Chapter 18 Dan Bergh Johnsson Dan Bergh Johnsson is senior consultant, partner, and official spokesperson for Omegapoint AB.

pages: 157 words: 35,874

Building Web Applications With Flask by Italo Maia


continuous integration, create, read, update, delete, Debian,, Firefox, full stack developer, minimum viable product, MVC pattern, premature optimization, web application

You could, for example, change a piece of code to fix a bug and introduce another bug elsewhere in your code. Software tests also help with that, as they assure that your code does what it should do; if you change a piece of broken code and break another piece of code, you’ll also be breaking a test. In this scenario, if you make use of continuous integration, the broken code will never reach your production environment.

Tip: Don't know what continuous integration is? Refer to and

Tests are so important that there is a software development process called Test Driven Development (TDD), which states that the test should be written before the actual code, and that the actual code is ready only when the test itself is satisfied. TDD is quite common among senior developers and beyond.

pages: 757 words: 193,541

The Practice of Cloud System Administration: DevOps and SRE Practices for Web Services, Volume 2 by Thomas A. Limoncelli, Strata R. Chalup, Christina J. Hogan

active measures, Amazon Web Services, anti-pattern, barriers to entry, business process, cloud computing, commoditize, continuous integration, correlation coefficient, database schema, Debian, defense in depth, delayed gratification, DevOps, domain-specific language,, fault tolerance, finite state, Firefox, Google Glasses, information asymmetry, Infrastructure as a Service, intermodal, Internet of things, job automation, job satisfaction, load shedding, loose coupling, Malcom McLean invented shipping containers, Marc Andreessen, place-making, platform as a service, premature optimization, recommendation engine, revision control, risk tolerance, side project, Silicon Valley, software as a service, sorting algorithm, statistical model, Steven Levy, supply-chain management, Toyota Production System, web application, Yogi Berra

* * *

Case Study: RSS Feeds of Build Status

StackExchange has an internal chat room system. It has the ability to monitor an RSS feed and announce any new entries in a given room. The SRE chat room monitors an RSS feed of build completions. Every time a build completes, there is an announcement of what was built and whether it was successful, plus a link to the status page. This way the entire team has visibility to their builds.

* * *

9.5 Continuous Integration

Continuous integration (CI) is the practice of doing the build phase many times a day in an automated fashion. Each run of the build phase is triggered by some event, usually a code commit. All the build-phase steps then run in a fully automated fashion. All builds are done from the main trunk of the source code repository. All developers contribute code directly to the trunk. There are no long-lived branches or independent work areas created for feature development.
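The event-triggered build phase described above can be sketched as a small driver. The step names, commit ids, and pass/fail logic below are invented for illustration; a real CI server shells out to checkout, compile, and test tooling and publishes the result (for example, to a status page or feed).

```python
def run_build(commit_id, steps):
    """Run each build-phase step for one commit; stop at the first failure."""
    for name, step in steps:
        if not step(commit_id):
            return f"build {commit_id}: FAILED at {name}"
    return f"build {commit_id}: SUCCESS"

# Invented steps; each would really invoke external build tooling.
steps = [
    ("checkout", lambda commit: True),
    ("compile", lambda commit: True),
    ("unit tests", lambda commit: commit != "bad-commit"),
]

print(run_build("abc123", steps))      # build abc123: SUCCESS
print(run_build("bad-commit", steps))  # build bad-commit: FAILED at unit tests
```

The key property from the text is visible even in the toy version: every commit gets the same fully automated sequence, and a failure at any step is attributable to a specific commit.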

With each build, the testing environment is created, the automated testing runs, and the release is “delivered,” ready to be considered for use in other environments. This doesn’t mean every change is deployed to production, but rather that every change is proven to be deployable at any time. CD has similar benefits to continuous integration. In fact, it can be considered an extension of CI. CD makes it economical and low risk to work in small batches, so that problems are found sooner and, therefore, are easier to fix. (See Section 8.2.4.) CD incorporates all of continuous integration, plus system tests, performance tests, user acceptance tests, and all other automated tests. There’s really no excuse not to adopt CD once testing is automated. If some tests are not automated, CD can deliver the release to a beta environment used for manual testing.

10.6 Infrastructure as Code

Recall in Figure 9.1 that the service delivery platform (SDP) pattern flow has quadrants that represent infrastructure as well as applications.

We are optimistic that someday we will be able to do so, too.

11.10 Continuous Deployment

Continuous deployment means every release that passes tests is deployed to production automatically. Continuous deployment should be the goal of most companies unless constrained by external regulatory or other factors. As depicted in Figure 11.1, this requires continuous integration and continuous delivery, plus automating any other testing, approval, and code push processes.

Figure 11.1: Continuous integration, delivery, and deployment build on each other.

Recall that continuous delivery results in packages that are production ready, but whether to actually deploy each package into production is a business decision. Continuous deployment is a business decision to automate this approval and always deploy approved releases. It may sound risky to always be doing pushes, and it is.

pages: 210 words: 42,271

Programming HTML5 Applications by Zachary Kessin


barriers to entry, continuous integration, fault tolerance, Firefox, Google Chrome, mandelbrot fractal, QWERTY keyboard, web application, WebSocket

The test runner accepts input tests in HTML format, which can be created in the IDE. Finally, it is possible to write tests in a unit test framework in a programming language such as PHPUnit. Using the test runner or the programming language-based test suite allows testing with a full suite of browsers and can provide reporting and other functions. This procedure can also be integrated with continuous integration tools along with any other tests written in any of the xUnit frameworks. A Selenium test is constructed from an HTML file containing a table, with each step in the test being a row in the table. The row consists of three columns: the command to run, the element on which it will act, and an optional parameter used in some cases. For example, the third column contains the text to type into an input element while testing a form.

By using the API from a server-side programming language, it is possible to create a very rich environment for scripting the Web, and of course you have access to libraries on the server side to check data in a database or access web services. You can construct a test that will perform some actions in the browser, and then check the result in a database or against a logfile. Another advantage of server-side testing is that if you are using any form of continuous integration, such as CruiseControl or phpUnderControl, the Selenium tests appear to the test system as just more tests in whatever language the team is using. In a team that is using a test framework, this will leverage the existing experience of the team. Example 3-8 is a very simple Selenium test, written in PHP with the PHPUnit testing framework. It just opens up a web page, and after the page has loaded, it asserts that the title of the page is the string “Hello World.”

This is often helpful when developing a multistep wizard or similar user interface.

Running QUnit from Selenium

Selenium can run QUnit tests as well. To do so, load the QUnit page in Selenium and run the tests. It is also possible to run only a subset of tests by passing parameters to the URL string. By integrating Selenium with QUnit, you can export the results of browser tests in QUnit into a test runner for continuous integration. Selenium just opens the QUnit URL and then stands back and waits for the test to finish. To let the test runner know whether the tests passed or failed, QUnit provides a simple micro format (see Example 3-12) that shows how many tests were run and how many passed or failed. The unit test can then look for this data by an XPath selector and make sure all tests passed. In Example 3-11, the PHP program opens the QUnit test from the start of this chapter and then waits for the test to run.
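Whatever runner drives the browser, the final check is parsing QUnit's summary text. A hedged sketch in Python: the exact wording of the summary string below is an assumption modeled on classic QUnit output, not quoted from Example 3-12, and a real runner would first extract this text from the results element via an XPath selector.

```python
import re

# Assumed shape of QUnit's rendered summary (illustrative, not verbatim).
summary = "Tests completed in 32 milliseconds. 4 assertions of 4 passed, 0 failed."

match = re.search(r"(\d+) assertions of (\d+) passed, (\d+) failed", summary)
passed, total, failed = (int(g) for g in match.groups())

# A Selenium-driven CI job would fail the build on any failed assertion.
print("all green" if failed == 0 and passed == total else "failures detected")
```

Exposing the counts this way is what lets a continuous integration server turn an in-browser test page into an ordinary pass/fail build step.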

pages: 193 words: 46,550

Twisted Network Programming Essentials by Jessica McKellar, Abe Fettig


continuous integration, WebSocket

More Practice and Next Steps

This chapter discussed how to interact with databases in a non-blocking fashion using Twisted’s adbapi. adbapi provides an asynchronous interface to Python’s DB-API 2.0 specification, which is defined in PEP 249. The methods in the asynchronous interface map directly to methods in the blocking API, so converting a service from blocking database queries to adbapi is straightforward. For an example of how a large project uses Twisted’s relational database support, check out the Buildbot continuous integration framework. Twistar is a library that builds an object-relational mapper (ORM) on top of adbapi.

Chapter 9. Authentication

Twisted comes with a protocol-independent, pluggable, asynchronous authentication system called Cred that can be used to add any type of authentication support to your Twisted server. Twisted also ships with a variety of common authentication mechanisms that you can use off the shelf through this system.
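Returning to the adbapi model described above: its core idea, blocking DB-API 2.0 calls executed off the main thread with results delivered asynchronously, can be mimicked with the standard library alone. This is an analogy, not Twisted's actual API; adbapi hands back Deferreds, while this sketch hands back futures, and the class and method names are invented.

```python
import sqlite3
from concurrent.futures import ThreadPoolExecutor

class MiniPool:
    """Toy analogue of an adbapi-style pool: blocking DB-API 2.0 calls
    run on a worker thread; the caller gets a future instead of blocking."""

    def __init__(self, db_path):
        # One worker thread serializes all access to a single connection.
        self.conn = sqlite3.connect(db_path, check_same_thread=False)
        self.executor = ThreadPoolExecutor(max_workers=1)

    def run_query(self, sql, params=()):
        # Returns immediately; the query executes on the worker thread.
        return self.executor.submit(
            lambda: self.conn.execute(sql, params).fetchall())

pool = MiniPool(":memory:")
pool.run_query("CREATE TABLE users (name TEXT)").result()
pool.run_query("INSERT INTO users VALUES (?)", ("alice",)).result()
rows = pool.run_query("SELECT name FROM users").result()
print(rows)  # [('alice',)]
```

The one-to-one mapping between a blocking `execute`/`fetchall` and an asynchronous `run_query` is what makes porting a service to adbapi straightforward, as the chapter notes.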

For example, to see how Twisted Web’s Agent interface is tested, including mocking the transport, testing timeouts, and testing errors, have a look at twisted/web/test/ To see how to test a protocol like twisted.words.protocols.irc, check out twisted/words/tests/ You can read about Twisted’s test-driven development policy in detail on the Twisted website. Twisted publishes its own coverage information as part of its continuous integration. Help improve Twisted by writing test cases!

Part III. More Protocols and More Practice

Chapter 12. Twisted Words

Twisted Words is an application-agnostic chat framework that gives you the building blocks to build clients and servers for popular chat protocols and to write new protocols. Twisted comes with protocol implementations for IRC, Jabber (now XMPP, used by chat services like Google Talk and Facebook Chat), and AOL Instant Messenger’s OSCAR.

We finished by surveying client and server implementations for several popular protocols. You now have all of the tools you need to build and deploy event-driven clients and servers for any protocol, and I think you’ll find that to be a powerful tool to have in your back pocket. Twisted powers everything from networked game engines and streaming media servers to web crawling frameworks and continuous integration systems to BitTorrent clients and AMQP peers. The next time you need to programmatically download data from a website, test an HTTP client, process your email, or annoy your friends with an IRC bot, you know what to do. Thank you for reading! We’d love to hear your thoughts on this book. Please send feedback and technical questions to You can find more information about the book, and a list of errata, at

pages: 282 words: 79,176

Pro Git by Scott Chacon


Chris Wanstrath, continuous integration, creative destruction, Debian, distributed revision control, GnuPG, pull request, revision control

The line should look something like this:

    git:x:1000:1000::/home/git:/usr/bin/git-shell

Now, the ‘git’ user can only use the SSH connection to push and pull Git repositories and can’t shell onto the machine. If you try, you’ll see a login rejection like this:

    $ ssh git@gitserver
    fatal: What do you think I am? A shell?
    Connection to gitserver closed.

Public Access

What if you want anonymous read access to your project? Perhaps instead of hosting an internal private project, you want to host an open source project. Or maybe you have a bunch of automated build servers or continuous integration servers that change a lot, and you don’t want to have to generate SSH keys all the time — you just want to add simple anonymous read access. Probably the simplest way for smaller setups is to run a static web server with its document root where your Git repositories are, and then enable that post-update hook we mentioned in the first section of this chapter. Let’s work from the previous example.

The Git protocol is far more efficient and thus faster than the HTTP protocol, so using it will save your users time. Again, this is for unauthenticated read-only access. If you’re running this on a server outside your firewall, it should only be used for projects that are publicly visible to the world. If the server you’re running it on is inside your firewall, you might use it for projects that a large number of people or computers (continuous integration or build servers) have read-only access to, when you don’t want to have to add an SSH key for each. In any case, the Git protocol is relatively easy to set up. Basically, you need to run this command in a daemonized manner:

    git daemon --reuseaddr --base-path=/opt/git/ /opt/git/

--reuseaddr allows the server to restart without waiting for old connections to time out, the --base-path option allows people to clone projects without specifying the entire path, and the path at the end tells the Git daemon where to look for repositories to export.

You can use this hook to do things like make sure none of the updated references are non-fast-forwards; or to check that the user doing the pushing has create, delete, or push access or access to push updates to all the files they’re modifying with the push. The post-receive hook runs after the entire process is completed and can be used to update other services or notify users. It takes the same stdin data as the pre-receive hook. Examples include e-mailing a list, notifying a continuous integration server, or updating a ticket-tracking system — you can even parse the commit messages to see if any tickets need to be opened, modified, or closed. This script can’t stop the push process, but the client doesn’t disconnect until it has completed; so, be careful when you try to do anything that may take a long time.

update

The update script is very similar to the pre-receive script, except that it’s run once for each branch the pusher is trying to update.
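Hooks can be written in any executable language. As a sketch of the stdin handling that pre-receive and post-receive share, here is a Python parser for the "old-value new-value ref-name" lines those hooks receive; the printed notification is a stand-in for e-mailing a list or pinging a continuous integration server, and the sample SHAs are invented.

```python
def parse_ref_updates(stdin_text):
    """Parse the lines a pre-receive/post-receive hook gets on stdin.

    Each line has the form: <old-value> <new-value> <ref-name>
    """
    updates = []
    for line in stdin_text.splitlines():
        if not line.strip():
            continue
        old, new, ref = line.split()
        updates.append({"old": old, "new": new, "ref": ref})
    return updates

# Simulated hook input: one branch update, one tag creation.
sample = (
    "aaa111 bbb222 refs/heads/master\n"
    "0000000 ccc333 refs/tags/v1.0\n"
)
for u in parse_ref_updates(sample):
    # A real post-receive hook would notify a CI server or tracker here.
    print(f"{u['ref']}: {u['old'][:7]} -> {u['new'][:7]}")
```

In a real hook the text would come from `sys.stdin.read()`; since post-receive cannot stop the push, the parsed updates are only used for notification, never validation.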

pages: 448 words: 84,462

Testing Extreme Programming by Lisa Crispin, Tip House

Amazon:, continuous integration, data acquisition, database schema, Donner party, Drosophila, hypertext link, index card, job automation, web application

The programmers write the unit tests before they write the code, then add unit tests whenever one is found to be missing. No modification or refactoring of code is complete until 100% of the unit tests have run successfully. Acceptance tests validate larger blocks of system functionality, such as user stories. When all the acceptance tests pass for a given user story, that story is considered complete.

Continuous integration. Additions and modifications to the code are integrated into the system on at least a daily basis, and the unit tests must run 100% successfully, both before and after each integration.

Small releases. The smallest useful feature set is identified for the first release, and releases are performed as early and often as possible, with a few new features added each time.

Courage

Planning game.

Either the software is delivered late or the system- and acceptance-level testing is never completed, and the software suffers significant quality problems in operation. This is especially frustrating to testers, because they really can't do anything about it. No matter how thorough, efficient, and/or automated are the tests they develop, they can't put either functionality or quality into the system; they can only detect its presence or absence. XP avoids this scenario completely through the practices of 100% unit-test automation and continuous integration. Programmer pairs detect and correct unit and integration bugs during the coding sessions. By the time the code gets to acceptance testing, it's doing what the programmers intend, and the acceptance tests can focus on determining how well that intent matches the customer's expectations. Missing and Out-of-Date Requirements Moving upstream in the process, another common problem in developing tests occurs when requirements are missing or out of date.

Make sure a team is really practicing XP before you attempt to apply these XP testing techniques.

Summary

XP is a lightweight but disciplined approach to software development that has testing and quality at its core. XP is based on four values: communication, simplicity, feedback, and courage. Twelve practices comprise the rules of XP:

- Onsite customer
- Pair programming
- Coding standards
- Metaphor
- Simple design
- Refactoring
- Testing
- Continuous integration
- Small releases
- Planning game
- Collective code ownership
- Sustainable pace

XP solves three major testing and quality assurance problems:

- Unit and integration bugs during system and acceptance testing
- Lack of requirements from which to develop tests
- Large gaps between customer expectations and delivered product

Chapter 2. Why XP Teams Need Testers

Much of the published material on Extreme Programming is aimed at programmers, customers, and managers.

pages: 196 words: 58,122

AngularJS by Brad Green, Shyam Seshadri


combinatorial explosion, continuous integration, Firefox, Google Chrome, MVC pattern, node package manager, single page application, web application, WebSocket

    // test main require module last
    'test/spec/main.js'
    ];

    // list of files to exclude
    exclude = [];

    // test results reporter to use
    // possible values: dots || progress
    reporter = 'progress';

    // web server port
    port = 8989;

    // cli runner port
    runnerPort = 9898;

    // enable/disable colors in the output (reporters and logs)
    colors = true;

    // level of logging
    logLevel = LOG_INFO;

    // enable/disable watching file and executing tests whenever any file changes
    autoWatch = true;

    // Start these browsers, currently available:
    // - Chrome
    // - ChromeCanary
    // - Firefox
    // - Opera
    // - Safari
    // - PhantomJS
    // - IE if you have a windows box
    browsers = ['Chrome'];

    // Continuous Integration mode
    // if true, it captures browsers, runs tests, and exits
    singleRun = false;

We use a slightly different format to define our dependencies (the included: false is quite important). We also add the dependency on REQUIRE_JS and its adapter. The final thing to get all this working is main.js, which triggers our tests.

    // This file is test/spec/main.js
    require.config({
      // !! Karma serves files from '/base'
      // (in this case, it is the root of the project /your-project/app/js)
      baseUrl: '/base/app/scripts',
      paths: {
        angular: 'vendor/angular/angular.min',
        jquery: 'vendor/jquery',
        domReady: 'vendor/require/domReady',
        twitter: 'vendor/bootstrap',
        angularMocks: 'vendor/angular-mocks',
        angularResource: 'vendor/angular-resource.min',
        unitTest: '../../..

    ./';

    // list of files / patterns to load in the browser
    files = [
      ANGULAR_SCENARIO,
      ANGULAR_SCENARIO_ADAPTER,
      'test/e2e/*.js'
    ];

    // list of files to exclude
    exclude = [];

    // test results reporter to use
    // possible values: dots || progress
    reporter = 'progress';

    // web server port
    port = 8989;

    // cli runner port
    runnerPort = 9898;

    // enable / disable colors in the output (reporters and logs)
    colors = true;

    // level of logging
    logLevel = LOG_INFO;

    // enable / disable watching file and executing tests whenever any file changes
    autoWatch = true;

    urlRoot = '/_karma_/';

    proxies = {
      '/': 'http://localhost:8000/'
    };

    // Start these browsers, currently available:
    browsers = ['Chrome'];

    // Continuous Integration mode
    // if true, it capture browsers, run tests and exit
    singleRun = false;

Chapter 4. Analyzing an AngularJS App

We talked about some of the commonly used features of AngularJS in Chapter 2, and then dived into how your development should be structured in Chapter 3. Rather than continuing with similarly deep dives into individual features, Chapter 4 will look at a small, real-life application.

pages: 1,758 words: 342,766

Code Complete (Developer Best Practices) by Steve McConnell


Ada Lovelace, Albert Einstein, Buckminster Fuller, call centre, choice architecture, continuous integration, data acquisition, database schema, don't repeat yourself, Donald Knuth, fault tolerance, Grace Hopper, haute cuisine, if you see hoof prints, think horses—not zebras, index card, inventory management, iterative process, Larry Wall, late fees, loose coupling, Menlo Park, Perl 6, place-making, premature optimization, revision control, Sapir-Whorf hypothesis, slashdot, sorting algorithm, statistical model, Tacoma Narrows Bridge, the scientific method, Thomas Kuhn: the structure of scientific revolutions, Turing machine, web application

., India, Japan, and Europe found that only 20–25 percent of projects used daily builds at either the beginning or middle of their projects (Cusumano et al. 2003), so this represents a significant opportunity for improvement.

Continuous Integration

Some software writers have taken daily builds as a jumping-off point and recommend integrating continuously (Beck 2000). Most of the published references to continuous integration use the word "continuous" to mean "at least daily" (Beck 2000), which I think is reasonable. But I occasionally encounter people who take the word "continuous" literally. They aim to integrate each change with the latest build every couple of hours. For most projects, I think literal continuous integration is too much of a good thing. In my free time, I operate a discussion group consisting of the top technical executives from companies like, Boeing, Expedia, Microsoft, Nordstrom, and other Seattle-area companies.

Daily Build and Smoke Test

Whatever integration strategy you select, a good approach to integrating the software is the "daily build and smoke test." Every file is compiled, linked, and combined into an executable program every day, and the program is then put through a "smoke test," a relatively simple check to see whether the product "smokes" when it runs.

Further Reading: Much of this discussion is adapted from Chapter 18 of Rapid Development (McConnell 1996). If you've read that discussion, you might skip ahead to the "Continuous Integration" section.

This simple process produces several significant benefits. It reduces the risk of low quality, which is a risk related to the risk of unsuccessful or problematic integration. By smoke-testing all the code daily, quality problems are prevented from taking control of the project. You bring the system to a known, good state, and then you keep it there. You simply don't allow it to deteriorate to the point where time-consuming quality problems can occur.
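The daily build and smoke test loop reduces to: compile and link everything, run a quick smoke check, and report. A minimal Python sketch, with placeholder commands standing in for a real compile/link and smoke run:

```python
import subprocess
import sys

def daily_build_and_smoke(build_cmd, smoke_cmd):
    """Run the full build, then a quick smoke test; report the first failure."""
    for label, cmd in (("build", build_cmd), ("smoke test", smoke_cmd)):
        result = subprocess.run(cmd, capture_output=True)
        if result.returncode != 0:
            return f"{label} FAILED"
    return "build is in a known, good state"

# Placeholder commands; a real script would invoke the compiler/linker
# and then launch the product for its smoke run.
build = [sys.executable, "-c", "print('compiling...')"]
smoke = [sys.executable, "-c", "import sys; sys.exit(0)"]
print(daily_build_and_smoke(build, smoke))
```

Scheduled once a day (via cron or a CI server), a report of the first failed stage is enough to tell the team whether the system is still in the known, good state the text describes.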

In my free time, I operate a discussion group consisting of the top technical executives from companies like, Boeing, Expedia, Microsoft, Nordstrom, and other Seattle-area companies. In a poll of these top technical executives, none of them thought that continuous integration was superior to daily integration. On medium-sized and large projects, there is value in letting the code get out of sync for short periods. Developers frequently get out of sync when they make larger-scale changes. They can then resynchronize after a short time. Daily builds allow the project team rendezvous points that are frequent enough. As long as the team syncs up every day, they don't need to rendezvous continuously.

Checklist: Integration

Integration Strategy

- Does the strategy identify the optimal order in which subsystems, classes, and routines should be integrated?
- Is the integration order coordinated with the construction order so that classes will be ready for integration at the right time?

pages: 761 words: 80,914

Ansible: Up and Running: Automating Configuration Management and Deployment the Easy Way by Lorin Hochstein


Amazon Web Services, cloud computing, continuous integration, Debian, DevOps, domain-specific language, don't repeat yourself, general-purpose programming language, Infrastructure as a Service, job automation, pull request, side project, smart transportation, web application

Docker Application Life Cycle

Here’s what the typical life cycle of a Docker-based application looks like:

1. Create Docker images on your local machine.
2. Push Docker images up from your local machine to the registry.
3. Pull Docker images down to your remote hosts from the registry.
4. Start up Docker containers on the remote hosts, passing in any configuration information to the containers on startup.

You typically create your Docker image on your local machine, or on a continuous integration system that supports creating Docker images, such as Jenkins or CircleCI. Once you’ve created your image, you need to store it somewhere it will be convenient for downloading onto your remote hosts. Docker images typically reside in a repository called a registry. The Docker project runs a registry called Docker Hub, which can host both public and private Docker images, and where the Docker command-line tools have built-in support for pushing images up to a registry and for pulling images down from a registry.

For example, to build the Mezzanine image, I wrote:

    $ cd mezzanine
    $ docker build -t lorin/mezzanine .

Ansible does contain a module for building Docker images, called docker_image. However, that module has been deprecated because building images isn’t a good fit for a tool like Ansible. Image building is part of the build process of an application’s lifecycle; building Docker images and pushing them up to an image registry is the sort of thing that your continuous integration system should be doing, not your configuration management system.

Deploying the Dockerized Application

Note: We use the docker module for deploying the application. As of this writing, there are several known issues with the docker module that ships with Ansible. The volumes_from parameter does not work with recent versions of Docker. It does not support Boot2Docker, a commonly used tool for running Docker on OS X.

pages: 648 words: 108,814

Solr 1.4 Enterprise Search Server by David Smiley, Eric Pugh


Amazon Web Services, bioinformatics, cloud computing, continuous integration, database schema, domain-specific language,, fault tolerance, Firefox, information retrieval, Internet Archive, Ruby on Rails, web application, Y Combinator

You'll then use a boost to scale it up. This approach makes your function queries more comparable by using a common baseline, as the values will fit within the same range. • If your data changes in ways causing you to alter the constants in your function queries, then consider implementing a periodic automated test of your Solr data to ensure that the data fits within expected bounds. A Continuous Integration (CI) server might be configured to do this task. An approach is to run a search simply sorting by the data field in question to get the highest or lowest value. • As you tweak the boost, you'll want to look at the proportion of the function query's score contribution (which includes the raw function query, the boost, and the queryNorm) relative to the total score. You'll most likely want this component of the score to be small so that the other factors of the score are more prominent., field collapsing 193 See Blacklight OPAC, Ruby On Rails collapse.maxdocs, field collapsing 193 integrations collapse.threshold, field collapsing 193 Blacklight OPAC, Ruby On Rails collapse.type, field collapsing 192 integrations combined index 32 about 263 CommonsHttpSolrServer 235 data, indexing 263-267 complex systems, tuning Boolean operators about 271 AND 100 CPU usage 272 AND operator, combining with OR memory usage 272 operator 101 scale deep 273 AND or && operator 101 scale high 273 NOT 100 scale wide 273 NOT operator 101 system changes 272 OR 100 components OR or || operator 101 about 111, 159 bool element 92 solrconfig.xml 159 boost functions compressed, field option 41 boosting 137, 138 configuration files, Solr r_event_date_earliest field 138 <requestHandler> tag 25 boosting 70, 107 solrconfig.xml file 25 boost queries standard request handler 26 boosting 134-137 Configuration Management. See CM bq parameter(s) 134 ConsoleHandler 204 bucketFirstLetter 148 Content Construction Kit 252 buildOnCommit 174 Content Management System. 
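The periodic data-bounds check suggested above can be sketched generically in shell. This is a hypothetical sketch, not Solr-specific code: it assumes a CI job has already exported the field's values to a plain-text file, one value per line (for instance via a search sorted on that field), and the file name and bounds are made up for illustration.

```shell
#!/bin/sh
# Sanity-check that a numeric field stays within the bounds the
# function-query constants were tuned for. The file name and the
# bounds are hypothetical; a CI server could run this periodically.
FIELD_DUMP="field_values.txt"
MIN_EXPECTED=0
MAX_EXPECTED=100

# Demo data standing in for a real export of the field.
printf '12\n47\n99\n3\n' > "$FIELD_DUMP"

# Lowest and highest observed values.
lo=$(sort -n "$FIELD_DUMP" | head -n 1)
hi=$(sort -n "$FIELD_DUMP" | tail -n 1)

if [ "$lo" -lt "$MIN_EXPECTED" ] || [ "$hi" -gt "$MAX_EXPECTED" ]; then
    echo "FAIL: values [$lo, $hi] outside expected [$MIN_EXPECTED, $MAX_EXPECTED]"
    exit 1
fi
echo "OK: values [$lo, $hi] within expected bounds"
```

If the data drifts outside the tuned range, the CI build fails, prompting a review of the function-query constants before relevance quietly degrades.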

pages: 443 words: 51,804

Handbook of Modeling High-Frequency Data in Finance by Frederi G. Viens, Maria C. Mariani, Ionut Florescu


algorithmic trading, asset allocation, automated trading system, backtesting, Black-Scholes formula, Brownian motion, business process, continuous integration, corporate governance, discrete time, distributed generation, fixed income, Flash crash, housing crisis, implied volatility, incomplete markets, linear programming, mandelbrot fractal, market friction, market microstructure, martingale, Menlo Park, p-value, pattern recognition, performance metric, principal–agent problem, random walk, risk tolerance, risk/return, short selling, statistical model, stochastic process, stochastic volatility, transaction costs, value at risk, volatility smile, Wiener process

Then, we try to extend our results to the corresponding initial-value problem in the unbounded domain R_T^{d+1} = R^d × (0, T): u_t − Lu = F(x, t, u, ∇u) in R_T^{d+1}, u(x, 0) = u_0(x) on R^d. (13.34) Here, L = L(x, t) is a second-order elliptic operator in nondivergence form, namely, L(x, t) := Σ_{i,j=1}^d a_ij(x, t) ∂²/∂x_i∂x_j + Σ_{i=1}^d b_i(x, t) ∂/∂x_i + c(x, t). The integro-differential operator is defined by F(x, t, u, ∇u) = ∫ f(x, t, y, u(x, t), ∇u(x, t)) dy. (13.35) This integro-differential operator will be a continuous integral operator like the ones defined in Equations 13.28 and 13.32 modeling the jump. The case in which f is decreasing with respect to u and all jumps are positive corresponds to the evolution of a call option near a crash. Throughout this section, we impose the following assumptions: A(1) The coefficients a_ij(x, t), b_i(x, t), c(x, t) belong to the Hölder space C^{δ,δ/2}(Q_T). A(2) For some 0 < λ < Λ, a_ij(x, t) satisfies the inequality λ|v|² < Σ_{i,j=1}^d a_ij(x, t) v_i v_j < Λ|v|², for all (x, t) ∈ Q_T, v ∈ R^d.

See also Value at risk (VaR) 423 Conditional variances, 203, 206, 208 of the GARCH(1,1) process, 180 Confidence intervals, for forecasts, 187–188 Consecutive trades, 129 Consensus indicators, 62 Constant coefficient case, 311 Constant default correlation, 79–81 Constant default correlation model, 76 Constant rebalanced portfolio technical analysis (CRP-TA) trading algorithm, 65–66 Constant variance, 181 Constant volatility, 353 Constructed indices, comparison of, 106–107 Constructed volatility index (VIX). See also Volatility index (VIX) comparing, 105–106 convergence of, 105 Contaminated returns, variance and covariance of, 257 Continuous integral operator, 367 Continuous semimartingales, 246, 253 Continuous-time long-memory stochastic volatility (LMSV) model, 220 Continuous-time stochastic modeling, 3 Continuous-time vintage, 78 Convergence-of-interests hypothesis, 54 Convex duality method, 296 Copula models, 77 Copulas, 75–76 CorpInterlock, 62, 63 Corporate governance, 53–54 of S&P500 companies, 54–60 Corporate governance best practices, 59 Corporate governance scorecards, 51–52 Corporate governance variables, 69 interpreting S&P500 representative ADTs with, 58–59 Corporate performance, predicting, 69 Correlation coefficient, 400 Correlation fluctuations impact on securitized structures, 75–95 products and models related to, 77–79 Cost structures, 392 424 Covariance(s) estimating, 244 forecasting, 280–285 Covariance function, 252 Covariance matrix, 170 Covariance stationarity, 177, 179, 181 Covariation-realized covariance estimator, 266 Covolatility function, 249 Covolatility measurement/forecasting, as a key issue in finance, 243 Cox, Ingersoll, Ross (CIR) square-root model, 257 cpVIX, 103 Crash imminence, precautions against, 121 Creamer, Germán, xiii, 47 Crisis detection, 131 Crisis-related equity behavior, 150 Cubic-type kernels, 261, 263 Cumulative abnormal return, 62, 63 Cumulative consumption process, 297, 305–306 Cumulative distribution curve, 346 
Cumulative distribution function, 176 Current market volatility distribution, estimating, 115 Current weighting, 49 Customer perspective, 51 Cutting frequency, 258, 259 cVIX-1, 101, 102.

pages: 394 words: 118,929

Dreaming in Code: Two Dozen Programmers, Three Years, 4,732 Bugs, and One Quest for Transcendent Software by Scott Rosenberg


A Pattern Language, Benevolent Dictator For Life (BDFL), Berlin Wall, call centre, collaborative editing, conceptual framework, continuous integration, Donald Knuth, Douglas Engelbart, Douglas Hofstadter, Dynabook, Firefox, Ford paid five dollars a day, Francis Fukuyama: the end of history, George Santayana, Grace Hopper, Guido van Rossum, Gödel, Escher, Bach, Howard Rheingold, index card, Internet Archive, inventory management, Jaron Lanier, John Markoff, John von Neumann, knowledge worker, Larry Wall, life extension, Loma Prieta earthquake, Menlo Park, Merlin Mann, new economy, Nicholas Carr, Norbert Wiener, pattern recognition, Paul Graham, Potemkin village, RAND corporation, Ray Kurzweil, Richard Stallman, Ronald Reagan, Ruby on Rails, semantic web, side project, Silicon Valley, Singularitarianism, slashdot, software studies, source of truth, South of Market, San Francisco, speech recognition, stealth mode startup, stem cell, Stephen Hawking, Steve Jobs, Stewart Brand, Ted Nelson, Therac-25, thinkpad, Turing test, VA Linux, Vannevar Bush, Vernor Vinge, web application, Whole Earth Catalog, Y2K

So as the date for 0.2 drew closer, the developers began to try to take the work they had been pursuing on their own—like Andi Vajda’s new repository and Andy Hertzfeld’s work on agents—and add it to the main trunk of Chandler code. Most projects today embrace the idea of continuous integration: The programmers always keep their latest code checked in to the main trunk of the code tree, and everyone is responsible for making sure that their new additions haven’t thrown a spanner in the works. Later on, OSAF would end up achieving a higher level of continuous integration, but for 0.2 the process was more like what software-development analysts call “big-bang integration”: all the programmers try to integrate their code at the end, and everything breaks. Toy knew the dangers, but he felt they had made a collective commitment to release a version of Chandler every three months and ought to meet that deadline.

From Airline Reservations to Sonic the Hedgehog: A History of the Software Industry by Martin Campbell-Kelly


Apple II, Apple's 1984 Super Bowl advert, barriers to entry, Bill Gates: Altair 8800, business process, card file, computer age, computer vision, continuous integration, deskilling, Donald Knuth, Grace Hopper, information asymmetry, inventory management, John Markoff, John von Neumann, linear programming, Menlo Park, Network effects, popular electronics, RAND corporation, Robert X Cringely, Ronald Reagan, Silicon Valley, software patent, Steve Jobs, Steve Wozniak, Steven Levy, Thomas Kuhn: the structure of scientific revolutions

These programs were sophisticated, containing hundreds of thousands of lines of code, and were comparable in complexity to real-time airline reservation systems. Because these software developments were so capital intensive, the two market leaders, 154 Chapter 5 Anacomp and Hogan Systems, were both funded through non-market sources. Anacomp, established as a programming services operation in 1968, received funding from 30 major banks to develop its CIS (Continuous Integrated System) real-time banking software product. By 1983, Anacomp had pulled ahead of all its competitors and was among the top 10 software products firms, with annual sales of $172 million. However, the company’s sales plummeted in 1984 as a result of product-development delays with CIS. After losing $116 million on sales of $132 million, “for all practical purposes, CIS was dead.”49 Anacomp’s software was acquired by EDS in 1985, and Anacomp faded from sight.

., 80, 84 Computer Information Management, 148 Computer Leasing Company, 80 Computer Machinery Group, 77 Computer Sciences Corporation, 5, 12, 50–54, 57, 70–73, 76, 83, 84, 115, 117, 195, 205, 304, 305 Computer Sciences International, 76 Computer Services and Software Industry, 57, 59 Computer Services Association, 77 Computer Space, 272 Computer Usage Company, 5, 24, 31, 50–52, 57, 59, 71, 73 Computer Vision, 128, 129, 160–162, 244, 245 ComputerLand, 209, 255 Computers, numbers of, 89, 90 Computers and Software Inc., 62 Computicket, 73 Compuware, 168, 172 Comshare, 63 Configuration Utilization Evaluator, 102 Connecticut Mutual Insurance Company, 219 Consolidated Edison, 149 Continental Software, 227, 294 Continuous Integrated System, 154 Control Data Corporation, 62, 82, 102, 109, 250 CDC 1604 computer, 80 Cook, Scott, 261, 294, 295 Copyright, See Intellectual property issues Corel, 261, 263 Cornfeld, Bernie, 83 COSMIC, 130 CP/M, 206, 215–218, 239–241, 249 Creative Strategies International, 26 Cricket Software, 183 Cross-platform software, 262 Cullinane Corporation, 86, 122, 126, 148, 165 Cullinane, John, 148, 169, 189, 190 Cullinet, 168, 184, 187–190 Cyrix, 239 Cytation Inc., 291 Data Decisions, 132 Data General, 159, 160, 254 Data Pro, 132 Data Products Corporation, 80 Data Systems Analysts, 58, 59 364 Index Data Transmission Company, 84 Databases, 6, 101, 113, 116, 123, 133, 134, 145–149, 156, 176, 184–191, 203, 210, 213–216, 219–221, 234, 244, 251, 256–259, 263, 301, 307 relational, 31, 149, 168, 169, 185–191, 198, 307, 310 DATACOM/DB, 116, 146–148, 151, 184 Dataskil, 78 Datasolv, 78 Datran, 84, 85 dBase II–IV, 7, 203, 210–212, 215, 219, 220, 254–257, 259, 263 Defense Advanced Research Projects Agency, 41 Desktop publishing software, 236, 261, 263 DESQ, 249 DeVries, John, 82 Diana project, 48, 49 Digital Computer Association, 32, 33 Digital Equipment Corporation, 143–145, 159, 160, 173, 188, 195, 213, 306 Digital Research, 203, 206, 207, 217, 234, 239, 241, 
248, 249, 264, 289 Disruptive technologies, 186, 187 DistribuPro, 183 Dorling Kindersley, 292 Dow Chemical, 195 Drake, Dan, 243 Draw!

pages: 184 words: 12,922

Pragmatic Version Control Using Git by Travis Swicegood


continuous integration, David Heinemeier Hansson, Firefox, George Santayana, revision control

Using a file outside the repository to drive your tests isn’t a necessity but can help avoid any problems. git bisect is one of those commands that you may never have to use, but if you do, it can be a lifesaver. Coupled with an automated test suite that can be run from the command line, you can use it to trace bugs quickly. The enterprising development team could even tie it into their automated build system to help isolate bugs. This can be a tremendous time-saver when a project moves too rapidly to be tested with a traditional continuous integration system where each change kicks off a new build. This brings us to the end of the basics chapter and the end of Part II. Whether you’ve just started your journey as a developer and are learning your first VCS or you are an old hand looking to pick up Git, you now know what you need to start being productive with Git. Part III covers administrative tasks. It’s required reading only if you plan on migrating to Git from Subversion or CVS—covered in Chapter 10, Migrating to Git, on page 136—or are going to set up your own remote repository—covered in Chapter 11, Running a Git Server with Gitosis, on page 147.

pages: 274 words: 58,675

Puppet 3 Cookbook by John Arundel


Amazon Web Services, cloud computing, continuous integration, Debian, defense in depth, don't repeat yourself, GnuPG, Larry Wall, place-making, Ruby on Rails, web application

However, desktop cloud has really taken off with the arrival of Vagrant, a tool for managing and provisioning VM environments automatically. Vagrant drives VirtualBox or another virtualization layer to automate the process of creating a VM, provisioning it with Chef or Puppet, setting up networking, port forwarding, and packaging running VMs into images for others to use. You can use Vagrant to manage your development VMs on your own desktop, or on a shared machine such as a continuous integration server. For example, you might use a CI tool such as Jenkins to boot a VM with Vagrant, deploy your app, and then run your tests against it as though it were in production. Getting ready… In order to use Vagrant, you'll need to install VirtualBox (which actually runs the virtual machine) and Vagrant (which controls VirtualBox). Currently you can't run VirtualBox VMs on an EC2 instance (which is itself a virtual machine), so for this example I'm using my laptop. 1.
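A minimal Vagrantfile along these lines might look as follows; the box name, forwarded port, and manifest path are hypothetical stand-ins, not taken from the recipe, and merely illustrate the pieces Vagrant automates — the base box, networking and port forwarding, and Puppet provisioning.

```ruby
# Hypothetical minimal Vagrantfile: the box name, forwarded port,
# and manifest location are illustrative placeholders.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu-precise64"          # base box to build the VM from
  config.vm.network :forwarded_port, guest: 80, host: 8080
  config.vm.provision :puppet do |puppet|
    puppet.manifests_path = "manifests"       # directory containing site.pp
    puppet.manifest_file  = "site.pp"
  end
end
```

With a file like this checked in, `vagrant up` on a developer laptop or a CI server produces the same provisioned VM, which is what makes the Jenkins-style test-against-a-fresh-VM workflow practical.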

pages: 179 words: 43,441

The Fourth Industrial Revolution by Klaus Schwab


3D printing, additive manufacturing, Airbnb, Amazon Mechanical Turk, Amazon Web Services, augmented reality, autonomous vehicles, barriers to entry, Baxter: Rethink Robotics, bitcoin, blockchain, Buckminster Fuller, call centre, clean water, collaborative consumption, commoditize, conceptual framework, continuous integration, crowdsourcing, disintermediation, distributed ledger, Edward Snowden, Elon Musk, epigenetics, Erik Brynjolfsson, future of work, global value chain, Google Glasses, income inequality, Internet Archive, Internet of things, invention of the steam engine, job automation, job satisfaction, John Maynard Keynes: Economic Possibilities for our Grandchildren, John Maynard Keynes: technological unemployment, life extension, Lyft, mass immigration, megacity, meta analysis, meta-analysis, more computing power than Apollo, mutually assured destruction, Narrative Science, Network effects, Nicholas Carr, personalized medicine, precariat, precision agriculture, Productivity paradox, race to the bottom, randomized controlled trial, reshoring, RFID, rising living standards, Second Machine Age, secular stagnation, self-driving car, sharing economy, Silicon Valley, smart cities, smart contracts, software as a service, Stephen Hawking, Steve Jobs, Steven Levy, Stuxnet, supercomputer in your pocket, The Future of Employment, The Spirit Level, total factor productivity, transaction costs, Uber and Lyft, Watson beat the top human players on Jeopardy!, WikiLeaks, winner-take-all economy, women in the workforce, working-age population, Y Combinator, Zipcar

In today’s disruptive, fast-changing world, thinking in silos and having a fixed view of the future is fossilizing, which is why it is better, in the dichotomy presented by the philosopher Isaiah Berlin in his 1953 essay about writers and thinkers, to be a fox than a hedgehog. Operating in an increasingly complex and disruptive environment requires the intellectual and social agility of the fox rather than the fixed and narrow focus of the hedgehog. In practical terms, this means that leaders cannot afford to think in silos. Their approach to problems, issues and challenges must be holistic, flexible and adaptive, continuously integrating many diverse interests and opinions. Emotional intelligence – the heart As a complement to, not a substitute for, contextual intelligence, emotional intelligence is an increasingly essential attribute in the fourth industrial revolution. As management psychologist David Caruso of the Yale Center for Emotional Intelligence has stated, it should not be seen as the opposite of rational intelligence or “the triumph of heart over head – it is the unique intersection of both.”71 In academic literature, emotional intelligence is credited with allowing leaders to be more innovative and enabling them to be agents of change.

pages: 255 words: 55,018

Architecting For Scale by Lee Atchison

Amazon Web Services, business process, cloud computing, continuous integration, DevOps, Internet of things, platform as a service, risk tolerance, software as a service, web application

Knowing what you can do when your availability begins to slip will help you to avoid falling into a vicious cycle of problems. The ideas in this chapter will help you manage your application and your team to avoid this cycle and keep your availability high. 1 According to Werner Vogels, CTO of Amazon, in 2014 Amazon did 50 million deploys to individual hosts. That’s about one every second. 2 This could be, but does not need to be a modern continuous integration and continuous deploy (CI/CD) process. Part II. Risk Management You cannot possibly manage the risk in your system if you cannot identify the risk in your system. …but there are also unknown unknowns—the ones we don’t know we don’t know. And if one looks throughout the history of our country and other free countries, it is the latter category that tend to be the difficult ones. Donald Rumsfeld Chapter 5.

pages: 536 words: 73,482

Programming Clojure by Stuart Halloway, Aaron Bedra


continuous integration, general-purpose programming language, Gödel, Escher, Bach, Paul Graham, Ruby on Rails, type inference, web application

Of course, you should still do validation, because it will help you narrow down where a problem happened. (With no validation, the regression error might just be telling you that the old code was broken and the new code fixed it.) How hard is it to write a program that should produce exactly the same output? Call only pure functions from the program, which is exactly what our score-inputs function does. Wiring this kind of regression test into a continuous integration build is not difficult. If you do it, think about contributing it to whatever testing framework you use. Now we have partially answered the question, “How do I make sure that my code is correct?” In summary: Build with small, composable pieces (most should be pure functions). Test forms from the inside out at the REPL. When writing test code, keep input generation, execution, and output validation as separate steps.
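In shell rather than Clojure, the idea of wiring such a regression test into a CI build can be sketched as a golden-file comparison; the program (a one-line awk script standing in for a pure function) and the file names are hypothetical.

```shell
#!/bin/sh
# Golden-file regression test: because the program under test calls
# only pure functions, the same input must always yield byte-identical
# output. The program and file names are hypothetical.
set -e

# Stand-ins for a real program and its recorded-good output.
printf '1\n2\n3\n' > input.txt
awk '{ total += $1 } END { print total }' input.txt > expected.txt  # golden file

# The CI step: rerun the program and diff against the golden file.
awk '{ total += $1 } END { print total }' input.txt > actual.txt
if diff -u expected.txt actual.txt; then
    echo "regression test passed"
else
    echo "regression test FAILED" >&2
    exit 1
fi
```

In a real build, expected.txt would be committed to version control rather than regenerated, so any behavioral change in the pure code shows up as a diff that fails the build.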

pages: 239 words: 68,598

The Vanishing Face of Gaia: A Final Warning by James E. Lovelock


Ada Lovelace, butterfly effect, carbon footprint, Clapham omnibus, cognitive dissonance, continuous integration, David Attenborough, decarbonisation, discovery of DNA, Edward Lorenz: Chaos theory, Henri Poincaré, Intergovernmental Panel on Climate Change (IPCC), mandelbrot fractal, mass immigration, megacity, Northern Rock, oil shale / tar sands, phenotype, Pierre-Simon Laplace, planetary scale, short selling, Stewart Brand, University of East Anglia

As if this were not enough to damn wind energy, the construction of a 1 GW wind farm would use a quantity of concrete, 2 million tons, sufficient to build a town for 100,000 people living in 30,000 homes; making and using that quantity of concrete would release about 1 million tons of carbon dioxide into the air. For us to survive as a civilized nation our cities need that safe, secure and constant supply of electricity that only coal, gas or nuclear can provide, and only with nuclear can we be assured of a constant supply of fuel. We have already seen how vulnerable gas supplies are to the continued integrity of pipelines perhaps a thousand miles long, and to the aggressive politics of autocrats. Coal is expensive in the UK and imports are insecure. Wind farms are hopelessly inadequate to the UK as a source of energy and as I have indicated can do little to halt global heating even when used on a global scale; moreover, experience in Western Europe shows them to be costly and inefficient sources of electricity.

Raw Data Is an Oxymoron by Lisa Gitelman


collateralized debt obligation, computer age, continuous integration, crowdsourcing, Drosophila, Edmond Halley, Filter Bubble, Firefox, fixed income, Google Earth, Howard Rheingold, index card, informal economy, Isaac Newton, Johann Wolfgang von Goethe, knowledge worker, liberal capitalism, lifelogging, Louis Daguerre, Menlo Park, optical character recognition, peer-to-peer, RFID, Richard Thaler, Silicon Valley, social graph, software studies, statistical model, Stephen Hawking, Steven Pinker, text mining, time value of money, trade route, Turing machine, urban renewal, Vannevar Bush

True interpellation—in his terms “a complicated configuration of unconsciousness, indirection, automation, and absentmindedness”—requires a coercive system, a “superpanopticon,” capable of rendering us as both subjects of and subjects to that particular assemblage that David Mitchell, in a fictional context, calls a corpocracy.22 For Kevin Robins and Frank Webster, this is the essence of “cybernetic capitalism,” by which they mean the whole of the socioeconomic control system that is in part dependent on the capacity of state and corporate entities to collect and aggregate personal data to the extent that each individual can be easily monitored, managed, and hence controlled.23 As my epigraphs indicate, Robins and Webster are far from alone in their concern with our dynamic incorporation within a totalizing technological system of data management. 24 Greg Elmer also explicates the techniques by which consumer profiles are developed and individuals are “continuously integrated into a larger information economy and technological apparatus.”25 But for Elmer and Lyon and others, a crucial aspect of this incorporation is our voluntary participation: the composition of consumer profiles in part results from solicitation—whether in the form of a request for feedback or personal data so as to be granted access to a particular service or program—which means we are interpellated as “self-communicating” actors.26 To be sure, to participate in the project of modernity has arguably always meant that one becomes a calculable subject by voluntarily surrendering data.

pages: 304 words: 22,886

Nudge: Improving Decisions About Health, Wealth, and Happiness by Richard H. Thaler, Cass R. Sunstein


Al Roth, Albert Einstein, asset allocation, availability heuristic, call centre, Cass Sunstein, choice architecture, continuous integration, Daniel Kahneman / Amos Tversky, desegregation, diversification, diversified portfolio, endowment effect, equity premium, feminist movement, fixed income, framing effect, full employment, George Akerlof, index fund, invisible hand, late fees, libertarian paternalism, loss aversion, Mahatma Gandhi, Mason jar, medical malpractice, medical residency, mental accounting, meta analysis, meta-analysis, Milgram experiment, money market fund, pension reform, presumed consent, profit maximization, rent-seeking, Richard Thaler, Right to Buy, risk tolerance, Robert Shiller, Robert Shiller, Saturday Night Live, school choice, school vouchers, transaction costs, Vanguard fund, Zipcar

The architect can also help reduce latent incentive conflicts between advantaged and disadvantaged parents during the choice process. Despite the attention they receive in the media, market-based programs like vouchers are available to relatively few students nationwide. One popular alternative is a policy known as controlled choice, which emerged in the wake of 1970s court rulings prohibiting busing for the purpose of achieving desegregation. The idea was to continue integration by guaranteeing students a priority space at a nearby school or a school that a sibling attended, while giving them the option to apply for enrollment somewhere else. School administrators in Boston adopted a computer algorithm designed to assign as many students as possible to their first-choice schools, while still giving priority to the neighborhood students. It is hard to know exactly how many districts use the so-called Boston system, because administrators do not always explain controlled-choice policies in detail, but some of the larger metropolitan districts that employ that algorithm or something similar include Denver, Tampa, Minneapolis, Louisville, and Seattle.

pages: 509 words: 92,141

The Pragmatic Programmer by Andrew Hunt, Dave Thomas


A Pattern Language, Broken windows theory, business process, buy low sell high, combinatorial explosion, continuous integration, database schema, domain-specific language, don't repeat yourself, Donald Knuth, general-purpose programming language, George Santayana, Grace Hopper, if you see hoof prints, think horses—not zebras, index card, loose coupling, Menlo Park, MVC pattern, premature optimization, Ralph Waldo Emerson, revision control, Schrödinger's Cat, slashdot, sorting algorithm, speech recognition, traveling salesman, urban decay, Y2K

Tests that run with every build are much more effective than test plans that sit on a shelf. The earlier a bug is found, the cheaper it is to remedy. "Code a little, test a little" is a popular saying in the Smalltalk world,[6] and we can adopt that mantra as our own by writing test code at the same time (or even before) we write the production code. [6] eXtreme Programming [URL 45] calls this concept "continuous integration, relentless testing." In fact, a good project may well have more test code than production code. The time it takes to produce this test code is worth the effort. It ends up being much cheaper in the long run, and you actually stand a chance of producing a product with close to zero defects. Additionally, knowing that you've passed the test gives you a high degree of confidence that a piece of code is "done."

pages: 278 words: 83,468

The Lean Startup: How Today’s Entrepreneurs Use Continuous Innovation to Create Radically Successful Businesses by Eric Ries


3D printing, barriers to entry, call centre, Clayton Christensen, clean water, cloud computing, commoditize, Computer Numeric Control, continuous integration, corporate governance, experimental subject, Frederick Winslow Taylor, Lean Startup, Marc Andreessen, Mark Zuckerberg, Metcalfe’s law, minimum viable product, Network effects, payday loans, Peter Thiel, Ponzi scheme, pull request, risk tolerance, selection bias, Silicon Valley, Silicon Valley startup, six sigma, skunkworks, stealth mode startup, Steve Jobs, the scientific method, Toyota Production System, transaction costs

Proportional investment: Bennett and Jim—Write a unit or functional test in the API and CMS that will catch this in the future. Why do we add additional gems that we don’t intend to use right away? Answer: In preparation for a code push we wanted to get all new gems ready in the production environment. Even though our code deployments are fully automated, gems are not. Proportional investment: Bennett—Automate gem management and installation into Continuous Integration and Continuous Deployment process. Bonus—Why are we doing things in production on Friday nights? Answer: Because no one says we can’t and it was a convenient time for the developer to prepare for a deployment we’d be doing on Monday. Proportional investment: Tony—Make an announcement to the team. There will be no production changes on Friday, Saturday, or Sunday unless an exception has been made and approved by David (VP Engineering).

pages: 314 words: 94,600

Business Metadata: Capturing Enterprise Knowledge by William H. Inmon, Bonnie K. O'Neil, Lowell Fryman


affirmative action, bioinformatics, business intelligence, business process, call centre, carbon-based life, continuous integration, corporate governance, create, read, update, delete, database schema, informal economy, knowledge economy, knowledge worker, semantic web, The Wisdom of Crowds, web application

Appendix

Metadata System of Record Example

[Table 1, "Metadata object system of record," is flattened beyond reliable reconstruction in this extract. Its rows list the metadata objects (Entity Name, Entity Type, Entity Definition, Entity Scope, through Report Element Business Rule) and its columns flag, with C, U, or R marks, which tool (Strategic Modeling Tool, Tactical Modeling Tool, DBMS Tool, Data Integration Tool, or Reporting Tool) serves as the system of record for each object.]

Metadata Usage Matrix Example

The following table summarizes the metadata objects and the anticipated usage of each object in the following functions:

[Table 1, "Summary of metadata objects usage," is likewise flattened; it marks each metadata object with a Y under the functions it supports: Data Lineage, Impact Analysis, and Definition and/or Glossary.]

Index

Abstraction, linking structured and unstructured data, 230–231 Accuracy, metadata information, 33–34 Administration, infrastructure issues functionality requirements, 169 history keeping, 169–170 BI, see Business intelligence Broader term, definition component, 61, 63–64 Business Glossary features, 111–112 integrated technical and business metadata delivery, 152–153 Business intelligence (BI), business metadata delivery infrastructure, 163 Business metadata capture, see Capture, business metadata components, 13 definition, 38 delivery, see Delivery, business metadata funding, see Funding, business metadata historical perspective, 3–11 importance, 274–275 locations corporate forms, 15–16 reports, 14, 29–31 screens, 13–14 origins, 19–20 repository construction, see Metadata project resources, 281–282 technical metadata comparison, 12–13 conversion, 135–136, 182–186 infrastructure for integration, 165–166 separation, 140 tracking over time, 20–21 types, 158–160 Business rules business management, 238–239 business metadata, 237, 245 capturing rationale, 238–239 definition, 235–236 maintenance, 242 management, 243 metadata repository, 244–245 ruleflow, 242–243 sources, 237–238 systems, 239, 242–243 Call center volume, search problem quantification, 70 Capability Maturity Model Integration (CMMI), note-taking as asset producing, 265 Capture, business metadata barriers, 275 corporate knowledge base, 93–94 culture, 95–96 editing automation, 128–129 expansion of definition and descriptions, 129–131 granularization, 129 homonym resolution, 132–134 manual editing, 134–135 staging area, 134 synonym resolution, 131–132 Governance Lite™, 107–109, 111 individual documentation problem, 114–115 knowledge socialization, see Knowledge socialization metadata sources comparison of sources, 127 data warehouse, 126 database management system system catalogs, 124 documents, 123 enterprise resource planning applications, 122 extract-transform-load, 124–125 legacy systems, 125–126 on-line analytical processing tools, 124 on-line transaction processing, 125–126 reports, 122–123 spreadsheets, 123 principles, 95–96 publicity, 112–113 rationale, 90–93 technical metadata conversion to business metadata, 135–136 technology search, 109–111 Web 2.0 folksonomy, 118–119 mashups, 115–116 overview, 115 Card catalog, see Library card catalog CDC, see Centers for Disease Control Centers for Disease Control (CDC), linking structured and unstructured data, 231 CIF, see Corporate Information Factory C-map, see Concept Map CMMI, see Capability Maturity Model Integration Collective intelligence,
knowledge socialization, 97 Communications audits, 251 clarity problems bad business decisions, 57 English language limitations, 59 everyday communications, 56–57 faulty rollups, 57–58 units of measure differences, 59 classification, see Taxonomy definitions components, 61–62 guidelines, 60–61 importance, 59–60 miscellaneous guidelines, 64 usage notes, 62–64 historic library creation, 253 human/computer communication problem, 276–278 screening, 251–253 search problem information and knowledge workers, 65–66 information provider guidelines, 71–72 quantification, 67–70 search techniques, 71 tracking down information, 66–67 Compliance communications audits, 251 historic library creation, 253 screening, 251–253 data profiling, 254–256 financial audit metadata utility, 250 transaction background activities, 250–251 prospects, 280 Sarbanes-Oxley Act provisions, 248–240 types, 249–251 Concept Map (C-map), semantic framework, 205, 209 Conceptual model, semantic framework, 204 Controlled vocabulary (CV), semantic framework, 200–201 Corporate forms, business metadata content, 15–16 Corporate Information Factory (CIF), implementation, 81–82 Corporate knowledge base, components, 93–94 Create Read Update Delete (CRUD), conflict resolution, 167 CRM, see Customer relationship management Cross selling, business metadata capture rationale, 92 CRUD, see Create Read Update Delete Customer relationship management (CRM), business metadata capture rationale, 92 Customer, definition, 56 CV, see Controlled vocabulary DASD, see Direct access storage device Data, definition, 176 Database management system (DBMS) historical perspective, 5 metadata storage, 19 system catalog as metadata resource, 124 Data Flux, data quality presentation, 190–191 Data Governance Council, metadata stewardship, 42–43 Data quality continuum, 190 Data Warehousing report, 49 definition, 177 presentation, 189–190 Data Stewardship Council, metadata stewardship, 43–44 Data warehouse historical perspective, 7–9 
infrastructure, 160–161 metadata resource, 126 metadata warehouse features, 161–162 DBMS, see Database management system Decision table, business rule representation, 239–242 Decision tree, business rule representation, 239, 241 Definition components, 61–62 guidelines, 60–61 importance, 59–60 miscellaneous guidelines, 64 usage notes, 62–64 Delivery, business metadata examples corporate dictionary, 147–148 integrated technical and business metadata delivery, 152–153 mashups, 149–150 technical use, 151–152 training, 148–149 visual analytic techniques, 150–151 indirect usage accessibility from multiple places, 142–143 application access, 147 interactive reports, 145–146 overview, 141–142 Web delivery, 143–144 information quality business metadata, 188–190 infrastructure considerations business intelligence environments, 163 graphical affinity, 163–164 legacy environment, 162–163 mashups, 164 principles, 140–141 prospects, 280 Description Logics (DL), semantic framework, 206 Dictionary, see Glossary Direct access storage device (DASD), historical perspective, 4–6 Disk storage, historical perspective, 4–6 DL, see Description Logics Documents, metadata resource, 123 Editing, metadata automation, 128–129 expansion of definition and descriptions, 129–131 granularization, 129 homonym resolution, 132–134 manual editing, 134–135 staging area, 134 synonym resolution, 131–132 Employee turnover, business metadata capture rationale, 91–92 Enterprise resource planning (ERP), metadata resource, 85, 122 Entity/relationship (ER) model, semantic framework, 203–204, 208, 210–211 ER model, see Entity/relationship model ERP, see Enterprise resource planning ETL, see Extract-transform-load Extract-transform-load (ETL), metadata resource, 85, 124–125 Federated metadata, integrated metadata management, 168 Financial audit metadata utility, 250 transaction background activities, 250–251 First-order logic (FOL), semantic framework, 206
FOL, see First-order logic Folksonomy knowledge capture, 118–119 self-organizing tags, 77 Forms, see Corporate forms Fourth generation language historical perspective, 6–7 metadata handling, 11 Funding, business metadata advantages and disadvantages of approaches, 52–53 centralized implementation, 51 localized implementation, 51–52 overview, 50–51 Glossary business functions, 60 information quality role, 186 semantic framework, 201 Governance Lite™, knowledge capture, 107–109, 111 Granularization, metadata, 129 Graphical affinity, business metadata delivery infrastructure, 163–164 Grid, metadata representation, 18 Groupware, knowledge socialization, 100–103, 279 Homonyms, resolution, 132–134 Industrial recognition, text, 227 Information quality business and technical metadata interaction, 177–186 business metadata delivery, 188–190 definition, 177 dictionary role, 186 methodology, 187–188 Information Technology (IT) department challenges, 278–279 metadata responsibility, 38–39 Information, definition, 177 IT department, see Information Technology department KB, see Knowledge base KM, see Knowledge management Knowledge base (KB) definition, 267 building, 267 Knowledge management (KM) business metadata intersection artifact generation, 262–263 corporate dictionary example, 263 definition, 260 goals, 260–261 importance, 261 social issues graying work force, 269–270 socialization effect on knowledge, 270 tacit knowledge, see Tacit knowledge techniques, 267–268 Knowledge socialization collective intelligence, 97 experts, 97–98 groupware, 100–103, 279 knowledge management, 268, 270 technology fostering portal and collaboration servers, 100–103 social networking, 99 wikis, 103–106 Knowledge worker metadata capture, 94 search problem, 65–66 Legacy systems business metadata delivery infrastructure, 162–163 metadata resource, 125–126 Library card catalog, metadata analogy, 27–29 Life cycle, metadata, 45–48 Magnetic tape data storage, 4 languages for data reading, 10 Mashup business metadata delivery, 149–150, 164 knowledge capture, 115–116 Master data management (MDM) conflict resolution, 167 overview, 22 MDM, see Master data management Metadata definition, 9, 26–27 examples, 9 grid representation, 18 management importance, 32 metamodel, 158–160 system of record example, 283–284 usage matrix example, 285–286 Metadata project business metadata versus technical metadata, 83–84 buying versus building, 170–172 classification, 82–83 funding, see Funding, business metadata iterations of development, 84 local metadata tools, 85 metadata sources, 86–87 preexisting repositories, 172–173 rationale, 80–82 scope defining, 85–87 Metadata Stewardship Council, responsibilities, 44–45 Narrower term, definition component, 61, 63–64 National Cancer Institute (NCI), semantic vocabulary implementation, 214–216 NCI, see National Cancer Institute Null, data profiling, 183, 185–186 ODS, see Operational data store OLAP, see Online analytical processing On-line analytical processing, metadata resource, 85, 124 On-line transaction processing, metadata resource, 125–126 Ontology, semantic framework, 207 Operational data store (ODS), data warehousing, 8 Opportunity cost, search problem quantification, 69 OWL, see Web Ontology Language Ownership, definition, 40 Patterns, identification, 183–184, 186 PC, see Personal computer Personal computer (PC) historical perspective, 7 metadata handling, 11 Preferred term, definition component, 62 Punch cards historical perspective, 4 metadata, 9–10 Quality, see Data quality; Information quality Range of values, data profiling, 183, 186 RDF, see Resource Definition Framework Reference file overview, 21–22 updating, 22 Regulations, see Compliance Related term, definition component, 61 Reports interactive reports, 145–146 metadata resource, 14, 29–31, 122–123 Repository, see Metadata project Resource Definition Framework (RDF), semantic framework, 204–205 Reuse, metadata, 32–33
Sales, search problem quantification, 70 Sarbanes-Oxley Act, provisions, 248–240 Screen, business metadata content, 13–14 Search problem enterprise search, 279 information and knowledge workers, 65–66 information provider guidelines, 71–72 quantification, 67–70 search techniques, 71 tracking down information, 66–67 Self-organizing map, linking structured and unstructured data, 233 Self-organizing tags, taxonomy, 77 Semantics business metadata delivering definitions and relationships, 208–209 exposing semantics to business, 210–211 expression, 209–210 overview, 207–208 context sensitivity, 197–199 framework Concept Map, 205, 209 conceptual model, 204 controlled vocabulary, 200–201 Description Logics, 206 entity/relationship model, 203–204, 208, 210–211 first-order logic, 206 glossary, 201 ontology, 207 Resource Definition Framework, 204–205 taxonomy, 202 thesauri, 203 topic map, 205 UML, 205–206 Web Ontology Language, 204–205, 210 human/computer concept, 199–200, 278 importance, 196–197 practical issues integration, 211–212 National Cancer Institute semantic vocabulary implementation, 214–216 service-oriented architecture, 214 Web Services, 212–213 prospects for integration and discovery, 280 semantic Web, 195–196 spectrum, 200–201, 208 Semistructured data, examples and technologies, 222–223 Serial transfer, knowledge management, 267–268 Service-oriented architecture (SOA) integrated metadata management, 168 semantics integration, 214 SOA, see Service-oriented architecture SOAP, semantic interface, 212 Socialization, see Knowledge socialization Social networking, knowledge socialization, 99 Spreadsheets, metadata resource, 123 Stemmed words, text distillation, 225 Stewardship business metadata approaches, 44–45 artifacts, 44 Data Governance Council, 42–43 Data Stewardship Council, 43–44 historical perspective, 41–42 Metadata Stewardship Council, 44–45 definition, 40–41 Structured metadata characteristics, 16–18 examples and technologies, 219–220 linking structured and unstructured data abstraction, 230–231 examples, 231, 233 integration, 230 unstructured data comparison, 221 Synonyms, resolution, 131–132 Tacit knowledge definition, 94, 264 note-taking as asset producing, 265–266 transfer nurturing, 266 Taxonomy basic rules, 73, 75 document categorization, 76 governance and taxonomy, 77 language and vocabulary, 75–76 lowest common denominator, 75 overview, 72 self-organizing tags, 77 semantic framework, 202 simplicity, 76 Team Room, knowledge socialization, 100–103 Technical metadata business metadata comparison, 12–13 conversion, 135–136 infrastructure for integration, 165–166 separation, 140 categories, 2 Technical metadata conversion to business metadata, 135–136, 182–186 profiling, 179–182 Text business metadata terms, 227–228 communications audits, 252–253 distillation, 224–229 extraneous words, 225 industrial recognition, 227 pulling, 223 relationship recognition, 228–229 stemmed words, 225 word counting, 226 Thesauri, semantic framework, 203 Topic map, semantic framework, 205 UML, semantic framework, 205–206 Unstructured metadata characteristics, 16–18 examples and technologies, 220–221 mining prospects, 281 structured data comparison, 221 text business metadata terms, 227–228 distillation, 224–229 extraneous words, 225 industrial recognition, 227 pulling, 223 relationship recognition, 228–229 stemmed words, 225 word counting, 226 Value/frequency report, data profiling, 181, 184–186 Web 2.0 knowledge capture folksonomy, 118–119 mashups, 115–116 overview, 115 semantic Web, 195–196 Web Ontology Language (OWL), semantic framework, 204–205, 210 Web Services, semantics interface, 212–213 Wiki governance, 106 knowledge capture, 104 limitations, 105–106 portal collaboration comparison, 105 wikinomics, 104 Wikipedia, 103–104, 212 Words, see Text

pages: 313 words: 34,042

Tools for Computational Finance by Rüdiger Seydel


bioinformatics, Black-Scholes formula, Brownian motion, commoditize, continuous integration, discrete time, implied volatility, incomplete markets, interest rate swap, linear programming, London Interbank Offered Rate, mandelbrot fractal, martingale, random walk, stochastic process, stochastic volatility, transaction costs, value at risk, volatility smile, Wiener process, zero-coupon bond

.: An Invitation to von Neumann Algebras Polster, B.: A Geometrical Picture Book Tamme, G.: Introduction to Étale Cohomology Tondeur, P.: Foliations on Riemannian Manifolds Toth, G.: Finite Möbius Groups, Minimal Immersions of Spheres, and Moduli Verhulst, F.: Nonlinear Differential Equations and Dynamical Systems Wong, M. W.: Weyl Transforms Xambó-Descamps, S.: Block Error-Correcting Codes Zaanen, A.C.: Continuity, Integration and Fourier Theory Zhang, F.: Matrix Theory Zong, C.: Sphere Packings Zong, C.: Strange Phenomena in Convex and Discrete Geometry Zorich, V. A.: Mathematical Analysis I Zorich, V. A.: Mathematical Analysis II

pages: 462 words: 172,671

Clean Code: A Handbook of Agile Software Craftsmanship by Robert C. Martin


continuous integration, database schema, domain-specific language, don't repeat yourself, Donald Knuth, Eratosthenes, finite state, Ignaz Semmelweis: hand washing, iterative process, place-making, Rubik’s Cube, web application, WebSocket

And, more importantly, how can we write tests that will demonstrate failures in more complex code? How will we be able to discover if our code has failures when we do not know where to look? Here are a few ideas: • Monte Carlo Testing. Make tests flexible, so they can be tuned. Then run the test over and over—say on a test server—randomly changing the tuning values. If the tests ever fail, the code is broken. Make sure to start writing those tests early so a continuous integration server starts running them soon. By the way, make sure you carefully log the conditions under which the test failed. • Run the test on every one of the target deployment platforms. Repeatedly. Continuously. The longer the tests run without failure, the more likely that – The production code is correct or – The tests aren’t adequate to expose problems. • Run the tests on a machine with varying loads.
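The Monte Carlo idea in the excerpt can be sketched as a small harness that re-runs a tunable test with randomized tuning values and carefully records any failing configuration; a minimal sketch, where `system_under_test` and its two tuning parameters are hypothetical stand-ins, and a continuous integration server would be what invokes `monte_carlo_test` on a schedule:

```python
import random

def system_under_test(batch_size, timeout_ms):
    # Hypothetical code under test; returns True when it behaves correctly.
    return batch_size <= 500 and timeout_ms >= 10

def monte_carlo_test(trials=1000, seed=None):
    """Run the tunable test over and over with random tuning values,
    logging the conditions under which it failed."""
    rng = random.Random(seed)
    failures = []
    for _ in range(trials):
        params = {"batch_size": rng.randint(1, 500),
                  "timeout_ms": rng.randint(10, 5000)}
        if not system_under_test(**params):
            failures.append(params)  # record the failing tuning values
    return failures

print(len(monte_carlo_test(seed=42)))  # a broken system would yield > 0
```

Returning the failing parameter sets, rather than just a pass/fail flag, is what makes the failures reproducible later, which is the point of the book's advice to log the conditions of each failure.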

pages: 470 words: 109,589

Apache Solr 3 Enterprise Search Server by Unknown


bioinformatics, continuous integration, database schema, fault tolerance, Firefox, full text search, information retrieval, Internet Archive, natural language processing, performance metric, platform as a service, Ruby on Rails, web application

You might be inclined to use a spreadsheet like Microsoft Excel, but that's really not the right tool. With luck, you may find some websites that will suffice, perhaps If your data changes in ways causing you to alter the constants in your function queries, then consider implementing a periodic automated test of your Solr data to ensure that the data fits within expected bounds. A Continuous Integration (CI) server might be configured to do this task. An approach is to run a search simply sorting by the data field in question to get the highest or lowest value. Formula: Logarithm The logarithm is a popular formula for inputs that grow without bounds, but the output is also unbounded. However, the growth of the curve is stunted for larger numbers. This in practice is usually fine even when you ideally want the output to be capped.
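The logarithm's stunted growth for larger numbers is easy to see numerically; a minimal sketch in plain Python rather than a Solr function query, with the `+ 1` shift being a common convention (not taken from the excerpt) so that an input of 0 maps to 0:

```python
import math

def log_scale(x):
    """Dampen an unbounded input, e.g. a popularity count.
    The output still grows without bound, but ever more slowly."""
    return math.log10(x + 1)

# Multiplying the input by roughly 100 only adds about 2 to the output:
for count in (9, 999, 99999):
    print(round(log_scale(count), 2))  # 1.0, then 3.0, then 5.0
```

This is why, as the excerpt notes, an uncapped logarithmic output is usually fine in practice: by the time the raw value has grown a hundredfold, the scaled value has barely moved.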

Version Control With Git: Powerful Tools and Techniques for Collaborative Software Development by Jon Loeliger, Matthew McCullough


continuous integration, Debian, distributed revision control, GnuPG, Larry Wall, peer-to-peer, peer-to-peer model, pull request, revision control, web application, web of trust

One Git-using development team has cryptographic code that had licensing constraints permitting only a handful of developers to see it. That code was stored as a Git submodule and when the superproject was cloned, the permissions denied the majority of developers from being able to clone that submodule. The build system for this project was carefully constructed to adapt to the missing source of the cryptographic component, outputting a developer-only build. The SSH key of the continuous integration server, on the other hand, does have permission to retrieve the cryptography submodule, thus producing the feature-complete builds that customers will ultimately receive. Multilevel Nesting of Repos The use of submodules discussed thus far can be extended to another level of recursion. submodules can in turn be superprojects, and thus contain submodules. This proliferated the use of custom automation scripts to recursively apply behavior to every nested submodule.

Common Knowledge?: An Ethnography of Wikipedia by Dariusz Jemielniak


Andrew Keen, barriers to entry, Benevolent Dictator For Life (BDFL), citation needed, collaborative consumption, collaborative editing, conceptual framework, continuous integration, crowdsourcing, Debian, deskilling, digital Maoism, Filter Bubble, Google Glasses, Guido van Rossum, Hacker Ethic, hive mind, Internet Archive, invisible hand, Jaron Lanier, jimmy wales, job satisfaction, Julian Assange, knowledge economy, knowledge worker, Menlo Park, moral hazard, online collectivism, pirate software, RFC: Request For Comment, Richard Stallman, selection bias, Silicon Valley, Skype, slashdot, social software, Stewart Brand, The Nature of the Firm, The Wisdom of Crowds, transaction costs, WikiLeaks, wikimedia commons, zero-sum game

K. (2009). Wikitruth through wikiorder. Emory Law Journal, 59(1), 151–209. Hoffmann, W. H., Neumann, K., & Speckbacher, G. (2010). The effect of interorganizational trust on make-or-cooperate decisions: Disentangling opportunism-dependent and opportunism-independent effects of trust. European Management Review, 7(2), 101–115. Holck, J., & Jørgensen, N. (2007). Continuous integration and quality assurance: A case study of two open source projects. Australasian Journal of Information Systems, 11(1), 40–53. Hollander, E. P. (1958). Conformity, status, and idiosyncrasy credit. Psychological Review, 65(2), 117–127. Hollander, E. P. (1992). The essential interdependence of leadership and followership. Current Directions in Psychological Science, 1(2), 71–75. Horn, L. (2012, April 20).

pages: 405 words: 117,219

In Our Own Image: Savior or Destroyer? The History and Future of Artificial Intelligence by George Zarkadakis


3D printing, Ada Lovelace, agricultural Revolution, Airbnb, Alan Turing: On Computable Numbers, with an Application to the Entscheidungsproblem, anthropic principle, Asperger Syndrome, autonomous vehicles, barriers to entry, battle of ideas, Berlin Wall, bioinformatics, British Empire, business process, carbon-based life, cellular automata, Claude Shannon: information theory, combinatorial explosion, complexity theory, continuous integration, Conway's Game of Life, cosmological principle, dark matter, dematerialisation, double helix, Douglas Hofstadter, Edward Snowden, epigenetics, Flash crash, Google Glasses, Gödel, Escher, Bach, income inequality, index card, industrial robot, Internet of things, invention of agriculture, invention of the steam engine, invisible hand, Isaac Newton, Jacquard loom, Jacquard loom, Jacques de Vaucanson, James Watt: steam engine, job automation, John von Neumann, Joseph-Marie Jacquard, liberal capitalism, lifelogging, millennium bug, Moravec's paradox, natural language processing, Norbert Wiener, off grid, On the Economy of Machinery and Manufactures, packet switching, pattern recognition, Paul Erdős, post-industrial society, prediction markets, Ray Kurzweil, Rodney Brooks, Second Machine Age, self-driving car, Silicon Valley, speech recognition, stem cell, Stephen Hawking, Steven Pinker, strong AI, technological singularity, The Coming Technological Singularity, The Future of Employment, the scientific method, theory of mind, Turing complete, Turing machine, Turing test, Tyler Cowen: Great Stagnation, Vernor Vinge, Von Neumann architecture, Watson beat the top human players on Jeopardy!, Y2K

If the brain is made out of base logic units that process information, then intelligence ought to emerge from the interconnectedness between these units. The brain is therefore a cybernetic system. As Dehaene’s research into consciousness has shown, the brain uses feedback loops that pass information from neuron to neuron and from groups of neurons to groups of neurons. Sensory inputs from the nervous system are continuously integrated at a neural level. These integrations affect internal states in the brain, such as memories and thoughts. Intelligence is an emergent behaviour as the brain instructs the body how to react and respond to external stimuli. If this hypothesis is true then the brain can be replicated in any medium that can process information in a similar, granular, logic unit base fashion. This medium could be made up of gears, nuts and pulleys, silicon chips, or water pipes – it does not matter what it is made of as long as it can process information in a similar manner to that of the brain.

pages: 298 words: 151,238

Excession by Iain M Banks - Culture 05


continuous integration, gravity well, hive mind, place-making

I still find it hard to believe that the rogue ship which tricked the ship store at Pittance was acting alone and that you merely took advantage of the ruse, despite your assurances. However, I have no evidence to the contrary. I have given my word and I will not go public with all this, but I will consider that agreement dependent on the continued well-being and freedom from persecution of both the Serious Callers Only and the Shoot Them Later, as well, of course, as being contingent upon my own continued integrity. I don't doubt you will think me either paranoid or ridiculous for systematising this arrangement with various other friends and colleagues, particularly given the hostilities which commenced yesterday. I am thinking of taking some sabbatical time myself soon, and going off course-schedule. I shall, in any event, be quitting the Group. oo [ stuttered tight point, M32, tra. @4.28.883.2182 ] xGSV Sabbaticaler No Fixed Abode oGSV Anticipation Of A New Lover's Arrival, The I understand completely.

pages: 496 words: 174,084

Masterminds of Programming: Conversations With the Creators of Major Programming Languages by Federico Biancuzzi, Shane Warden


Benevolent Dictator For Life (BDFL), business intelligence, business process, cellular automata, cloud computing, commoditize, complexity theory, conceptual framework, continuous integration, data acquisition, domain-specific language, Douglas Hofstadter, Fellow of the Royal Society, finite state, Firefox, follow your passion, Frank Gehry, general-purpose programming language, Guido van Rossum, HyperCard, information retrieval, iterative process, John von Neumann, Larry Wall, linear programming, loose coupling, Mars Rover, millennium bug, NP-complete, Paul Graham, performance metric, Perl 6, QWERTY keyboard, RAND corporation, randomized controlled trial, Renaissance Technologies, Ruby on Rails, Sapir-Whorf hypothesis, Silicon Valley, slashdot, software as a service, software patent, sorting algorithm, Steve Jobs, traveling salesman, Turing complete, type inference, Valgrind, Von Neumann architecture, web application

New development is just a special case, changing from nothing to something. This view should penetrate everything you do and the practices that you deploy when developing software. There are basically two approaches to managing legacy systems and improve them. The first is to just deploy practices that don’t really change the product but improve the way you work, such as iterative development, continuous integration, test-driven development, use-case-driven development, user stories, pair programming, and cross-cutting teams. The cost and risks of introducing such practices are marginal, but for big companies still substantial. The second approach is more fundamental: change the actual product via practices such as architecture (at a simple level), enterprise architecture, product-line architecture, components, etc.

pages: 999 words: 194,942

Clojure Programming by Chas Emerick, Brian Carper, Christophe Grand


Amazon Web Services, Benoit Mandelbrot, cloud computing, continuous integration, database schema, domain-specific language, don't repeat yourself, failed state, finite state, Firefox, game design, general-purpose programming language, Guido van Rossum, Larry Wall, mandelbrot fractal, Paul Graham, platform as a service, premature optimization, random walk, Ruby on Rails, Schrödinger's Cat, semantic web, software as a service, sorting algorithm, Turing complete, type inference, web application

Clojure web applications will run side by side with your other Java web applications without a hitch, you can call existing Java libraries from code you write in Clojure, and you can call Clojure functions and create instances of Clojure types from code you write in Java (or any other language that runs on the JVM, including JRuby, Jython, JavaScript [via Rhino], Scala, Groovy, and so on). Everything you’ve learned about builds, packaging, continuous integration, and JVM operations and tuning applies to the work you’ll do with Clojure. Clojure is an incremental addition to your existing JVM investment, not a radical departure. “Clojure is just another .jar file”. A corollary of the fact that Clojure lets you reuse your investments in Java is that Clojure really is “just another .jar file.” This means that you can package it as one dependency among others within a delivered application that includes Clojure source (or class files obtained from AOT-compiling such source), and your customers and clients will be none the wiser.

pages: 348 words: 39,850

Data Scientists at Work by Sebastian Gutierrez


Albert Einstein, algorithmic trading, Bayesian statistics, bioinformatics, bitcoin, business intelligence, chief data officer, clean water, cloud computing, commoditize, computer vision, continuous integration, correlation does not imply causation, creative destruction, crowdsourcing, data is the new oil, DevOps, domain-specific language, Donald Knuth, follow your passion, full text search, informal economy, information retrieval, Infrastructure as a Service, Intergovernmental Panel on Climate Change (IPCC), inventory management, iterative process, lifelogging, linked data, Mark Zuckerberg, microbiome, Moneyball by Michael Lewis explains big data, move fast and break things, move fast and break things, natural language processing, Network effects, nuclear winter, optical character recognition, pattern recognition, Paul Graham, personalized medicine, Peter Thiel, pre–internet, quantitative hedge fund, quantitative trading / quantitative finance, recommendation engine, Renaissance Technologies, Richard Feynman, Richard Feynman, self-driving car, side project, Silicon Valley, Skype, software as a service, speech recognition, statistical model, Steve Jobs, stochastic process, technology bubble, text mining, the scientific method, web application

So even though we may hire people who come in with very little programming experience, we work very hard to instill in them very quickly the importance of engineering, engineering practices, and a lot of good agile programming practices. This is helpful to them and us, as these can all be applied almost one-to-one to data science right now. If you look at dev ops right now, they have things such as continuous integration, continuous build, automated testing, and test harnesses—all of which map very well from the dev ops world to the data ops (a phrase I stole from Red Monk) world very easily. I think this is a very powerful notion. It is important to have testing frameworks for all of your data, so that if you make a code change, you can go back and test all of your data. Having an engineering mindset is essential to moving with high velocity in the data science world.
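The "testing frameworks for all of your data" idea in the excerpt reads naturally as assertions that re-run against the data whenever the code changes; a minimal sketch, where the `orders` records, field names, and price bounds are all hypothetical:

```python
def out_of_bounds(rows, field, lo, hi):
    """Return the rows whose value for `field` falls outside [lo, hi].
    An empty result means the data still fits expectations."""
    return [r for r in rows if not (lo <= r[field] <= hi)]

orders = [
    {"id": 1, "price": 19.99},
    {"id": 2, "price": 250.00},
    {"id": 3, "price": -5.00},  # bad record the harness should flag
]

bad = out_of_bounds(orders, "price", 0.0, 10_000.0)
assert [r["id"] for r in bad] == [3]  # fail the build when bad data appears
```

Wiring a check like this into the same continuous-build pipeline that runs the unit tests is what carries the dev-ops practice over to the "data ops" world the interviewee describes.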

Understanding Power by Noam Chomsky


anti-communist, Ayatollah Khomeini, Berlin Wall, Bretton Woods, British Empire, Burning Man, business climate, cognitive dissonance, continuous integration, Corn Laws, cuban missile crisis, dark matter, David Ricardo: comparative advantage, deindustrialization, Deng Xiaoping, deskilling, European colonialism, Fall of the Berlin Wall, feminist movement, global reserve currency, Howard Zinn, labour market flexibility, liberation theology, Mahatma Gandhi, Mikhail Gorbachev, Monroe Doctrine, mortgage tax deduction, Paul Samuelson, Ralph Nader, reserve currency, Ronald Reagan, Rosa Parks, school choice, strikebreaker, structural adjustment programs, the scientific method, The Wealth of Nations by Adam Smith, union organizing, wage slave, women in the workforce

It would be like asking the New York City police force whether they would like to turn Harlem over to local mercenaries to patrol, while they hold on to Wall Street, the Upper East Side, Madison Avenue, and so on—if you asked the New York City police force that, I’m sure they’d be delighted. Who wants to patrol Harlem? Well, that’s in effect what’s happening in the Occupied Territories right now: the idea is, see if you can get local mercenaries, who are still always under your whip, to run the place for you, while you continue integrating the area into Israel. Actually, some Israeli commentators have used the term “neocolonialism” to describe what’s being done with the Territories, and that’s essentially correct, I think. 114 In fact, I think what’s been taking place in the Middle East is really just a part of something much broader that’s happened throughout the West in recent years, particularly since the Gulf War: there’s been a real revival of traditional European racism and imperialism, in a very dramatic way.

pages: 823 words: 206,070

The Making of Global Capitalism by Leo Panitch, Sam Gindin


accounting loophole / creative accounting, active measures, airline deregulation, anti-communist, Asian financial crisis, asset-backed security, bank run, banking crisis, barriers to entry, Basel III, Big bang: deregulation of the City of London, bilateral investment treaty, Branko Milanovic, Bretton Woods, BRICs, British Empire, call centre, capital controls, Capital in the Twenty-First Century by Thomas Piketty, Carmen Reinhart, central bank independence, collective bargaining, continuous integration, corporate governance, creative destruction, Credit Default Swap, crony capitalism, currency manipulation / currency intervention, currency peg, dark matter, Deng Xiaoping, disintermediation, ending welfare as we know it, eurozone crisis, facts on the ground, financial deregulation, financial innovation, Financial Instability Hypothesis, financial intermediation, floating exchange rates, full employment, Gini coefficient, global value chain, guest worker program, Hyman Minsky, imperial preference, income inequality, inflation targeting, interchangeable parts, interest rate swap, Kenneth Rogoff, land reform, late capitalism, liberal capitalism, liquidity trap, London Interbank Offered Rate, Long Term Capital Management, manufacturing employment, market bubble, market fundamentalism, Martin Wolf, means of production, money market fund, money: store of value / unit of account / medium of exchange, Monroe Doctrine, moral hazard, mortgage debt, mortgage tax deduction, Myron Scholes, new economy, non-tariff barriers, Northern Rock, oil shock, precariat, price stability, quantitative easing, Ralph Nader, RAND corporation, regulatory arbitrage, reserve currency, risk tolerance, Ronald Reagan, seigniorage, shareholder value, short selling, Silicon Valley, sovereign wealth fund, special drawing rights, special economic zone, structural adjustment programs, The Chicago School, The Great Moderation, the payments system, The Wealth of Nations by Adam Smith, too big to fail, trade liberalization, transcontinental railway, trickle-down economics, union organizing, very high income, Washington Consensus, Works Progress Administration, zero-coupon bond, zero-sum game

This pattern did not change all that much until the 1980s, when the political conditions were established—in the North as well as increasingly in the South—that laid the grounds for a truly global capitalism. Integrating Europe The accelerated push towards economic and monetary union in the 1980s, emerging at a time when Europe was mired in internal stagnation, needs to be understood in the context of the continuing integration of European and American capitalism. The abandonment of the Bretton Woods framework, wherein all European currencies had been fixed in a hub-and-spokes relationship to the dollar, was initially compensated for by the European states adopting a “currency snake” designed to prevent competitive devaluations; but “the instability of the snake reflected the fact that the 1970s was a low point for European cooperation.”1 The formation of the European Monetary System in 1979—“the most significant development in the EC arising out of the long crisis from 1969 to 1983”—was the first major step towards the common currency, a development that the US did not oppose.2 What it was much more concerned about, even in the wake of the election that brought Mrs.

pages: 719 words: 181,090

Site Reliability Engineering by Betsy Beyer, Chris Jones, Jennifer Petoff, Niall Richard Murphy

Air France Flight 447, anti-pattern, barriers to entry, business intelligence, business process, Checklist Manifesto, cloud computing, combinatorial explosion, continuous integration, correlation does not imply causation, crowdsourcing, database schema, defense in depth, DevOps, fault tolerance, Flash crash, George Santayana, Google Chrome, Google Earth, job automation, job satisfaction, linear programming, load shedding, loose coupling, meta-analysis, minimum viable product, MVC pattern, performance metric, platform as a service, revision control, risk tolerance, side project, six sigma, the scientific method, Toyota Production System, trickle-down economics, web application, zero day

This functionality again reduces repetition, thus reducing the likelihood of bugs in the configuration. Of course, any high-level programming environment creates the opportunity for complexity, so Borgmon provides a way to build extensive unit and regression tests by synthesizing time-series data, in order to ensure that the rules behave as the author thinks they do. The Production Monitoring team runs a continuous integration service that executes a suite of these tests, packages the configuration, and ships the configuration to all the Borgmon in production, which then validate the configuration before accepting it. In the vast library of common templates that have been created, two classes of monitoring configuration have emerged. The first class simply codifies the emergent schema of variables exported from a given library of code, such that any user of the library can reuse the template of its varz.
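The idea described above — unit and regression tests that synthesize time-series data and check that alerting rules behave as their author intends, run by a continuous integration service before the configuration ships — can be sketched as follows. This is a minimal illustration, not Borgmon's actual rule language or API; all function names here (`synthesize_series`, `error_ratio`, `fires`) are hypothetical.

```python
# Sketch: unit-testing a monitoring rule against synthesized time series,
# the way a CI suite might validate config before shipping it to production.
# All names are hypothetical, not Borgmon's real interface.

def synthesize_series(values, start=0, step=60):
    """Build (timestamp, value) points at a fixed interval."""
    return [(start + i * step, v) for i, v in enumerate(values)]

def error_ratio(errors, total):
    """Rule under test: fraction of requests that failed at each point."""
    return [(t, e / max(tot, 1))
            for (t, e), (_, tot) in zip(errors, total)]

def fires(series, threshold, for_points):
    """Alert fires if the value exceeds threshold for N consecutive points."""
    run = 0
    for _, v in series:
        run = run + 1 if v > threshold else 0
        if run >= for_points:
            return True
    return False

# Regression test: a transient spike must not page; sustained errors must.
errors = synthesize_series([0, 50, 0, 0, 30, 30, 30])
total = synthesize_series([100] * 7)
ratio = error_ratio(errors, total)

assert not fires(ratio[:4], threshold=0.1, for_points=3)  # brief spike
assert fires(ratio, threshold=0.1, for_points=3)          # sustained errors
```

A CI service would run a suite of such tests against every proposed rule change, and only package and ship configurations whose assertions all pass.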

pages: 578 words: 168,350

Scale: The Universal Laws of Growth, Innovation, Sustainability, and the Pace of Life in Organisms, Cities, Economies, and Companies by Geoffrey West

Alfred Russel Wallace, Anton Chekhov, Benoit Mandelbrot, Black Swan, British Empire, butterfly effect, carbon footprint, Cesare Marchetti: Marchetti’s constant, clean water, complexity theory, computer age, conceptual framework, continuous integration, corporate social responsibility, correlation does not imply causation, creative destruction, dark matter, Deng Xiaoping, double helix, Edward Glaeser, endogenous growth, Ernest Rutherford, first square of the chessboard / second half of the chessboard, Frank Gehry, Geoffrey West, Santa Fe Institute, Guggenheim Bilbao, housing crisis, Index librorum prohibitorum, invention of agriculture, invention of the telephone, Isaac Newton, Jane Jacobs, Jeff Bezos, Johann Wolfgang von Goethe, John von Neumann, Kenneth Arrow, laissez-faire capitalism, life extension, Mahatma Gandhi, mandelbrot fractal, Masdar, megacity, Murano, Venice glass, Murray Gell-Mann, New Urbanism, Peter Thiel, profit motive, publish or perish, Ray Kurzweil, Richard Feynman, Richard Florida, Silicon Valley, smart cities, Stephen Hawking, Steve Jobs, Stewart Brand, technological singularity, The Coming Technological Singularity, The Death and Life of Great American Cities, the scientific method, too big to fail, transaction costs, urban planning, urban renewal, Vernor Vinge, Vilfredo Pareto, Von Neumann architecture, Whole Earth Catalog, Whole Earth Review, wikimedia commons, working poor

These extraordinary regularities open a window onto underlying mechanisms, dynamics, and structures common to all cities and strongly suggest that all of these phenomena are in fact highly correlated and interconnected, driven by the same underlying dynamics and constrained by the same set of “universal” principles. Consequently, each of these urban characteristics, each metric—whether wages, the length of all the roads, the number of AIDS cases, or the amount of crime—is interrelated and interconnected with every other one and together they form an overarching multiscale quintessentially complex adaptive system that is continuously integrating and processing energy, resources, and information. The result is the extraordinary collective phenomenon we call a city, whose origins emerge from the underlying dynamics and organization of how people interact with one another through social networks. To repeat: cities are an emergent self-organizing phenomenon that has resulted from the interaction and communication between human beings exchanging energy, resources, and information.

pages: 870 words: 259,362

Austerity Britain: 1945-51 by David Kynaston

Alistair Cooke, anti-communist, British Empire, Chelsea Manning, collective bargaining, continuous integration, deindustrialization, deskilling, Etonian, full employment, garden city movement, hiring and firing, industrial cluster, invisible hand, job satisfaction, labour mobility, light touch regulation, mass immigration, moral panic, Neil Kinnock, occupational segregation, price mechanism, rent control, reserve currency, road to serfdom, Ronald Reagan, stakhanovite, strikebreaker, the market place, upwardly mobile, urban planning, urban renewal, very high income, wage slave, washing machines reduced drudgery, wealth creators, women in the workforce, young professional

Despite increasingly persistent, disobliging complaints from abroad that British cars were becoming a byword for unreliability, it would take a lot to shake the industry’s complacent assumption that British was still best.11 It was an industry clustered in five main places. Each had significantly different characteristics, but in all of them the conveyor-belt assembly line – ‘the track’ – was the relentless, remorseless, unforgiving nerve centre of operations. Dagenham was the British home of Ford – a Detroit in miniature since the early 1930s. The works put a premium on continuous, integrated production and included a blast furnace, coke ovens, a powerhouse, iron and steel foundries, and fully mechanised jetties for loading and unloading that reached out into the Thames. ‘Ford has always applied the principle that higher wages and higher standards of living for all depend on lower costs and lower selling prices through increasingly large-scale production’ was how the British chairman, Lord Perry, summed up the Ford philosophy soon after the war.