premature optimization

22 results

pages: 157 words: 35,874

Building Web Applications With Flask by Italo Maia


continuous integration, create, read, update, delete, Debian, Firefox, full stack developer, minimum viable product, MVC pattern, premature optimization, web application

The message here is: do not make your product more robust or complex than you know it needs to be, and do not waste time planning for what may, most likely, never happen.

Tip: Always plan for reasonable levels of safety, complexity, and performance.

Premature optimization

Is your software fast enough? Don't know? Then why are you optimizing that code, my friend? When you spend time optimizing software that you're not sure needs optimization (no one has complained about it being slow, and you do not notice it being slow in daily use), you're probably wasting time on premature optimization. And so, on to Flask.

Blueprints 101

So far, our applications have all been flat: beautiful, single-file web applications (templates and static resources not considered). In some cases, that's a nice approach: a reduced need for imports, easy to maintain with simple editors, and so on, but… as our applications grow, we identify the need to contextually arrange our code.
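The blueprint idea mentioned above can be shown with a minimal sketch. This is an illustration under assumptions, not code from the book: the "admin" blueprint name, its URL prefix, and the create_app factory are all hypothetical.

```python
# Minimal sketch of splitting a flat Flask app into blueprints,
# one per context (assumes Flask is installed; names are illustrative).
from flask import Blueprint, Flask

# A hypothetical "admin" area, grouped under its own URL prefix.
admin = Blueprint("admin", __name__, url_prefix="/admin")

@admin.route("/")
def admin_index():
    return "admin home"

def create_app():
    app = Flask(__name__)
    # The flat application becomes a composition of blueprints.
    app.register_blueprint(admin)
    return app
```

Each blueprint can live in its own module, so the single-file layout grows into a package without changing any route logic.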

SQLAlchemy Concepts, Hands on Flask-SQLAlchemy, MongoDB, MongoEngine, Flask-MongoEngine, Relational versus NoSQL, Summary
6. But I Wanna REST Mom, Now!: Beyond GET, Flask-Restless, Summary
7. If Ain't Tested, It Ain't Game, Bro!: What kinds of test are there?, Unit testing, Behavior testing, Flask-testing, LiveServer, Extra assertions, JSON handle, Fixtures, Extra – integration testing, Summary
8. Tips and Tricks or Flask Wizardry 101: Overengineering, Premature optimization, Blueprints 101, Oh God, please tell me you have the logs…, Debugging, DebugToolbar, and happiness, Flask-DebugToolbar, Sessions or storing user data between requests, Exercise, Summary
9. Extensions, How I Love Thee: How to configure extensions, Flask-Principal and Flask-Login (aka Batman and Robin), Admin like a boss, Custom pages, Summary
10. What Now?: You deploy better than my ex, Placing your code in a server, Setting up your database, Setting up the web server, StackOverflow, Structuring your projects, Summary
Postscript
Index

Building Web Applications with Flask. Copyright © 2015 Packt Publishing. All rights reserved.

You may purchase a nice Integrated Development Environment (IDE) such as PyCharm or WingIDE to improve your productivity, or hire third-party services to help you test your code or control your development schedule, but these can only do so much. Good architecture and task automation will be your best friends in most projects. Before discussing suggestions on how to organize your code and which modules will help you save some typing here and there, let's discuss premature optimization and overengineering, two terrible symptoms of an anxious developer/analyst/nosy manager.

Overengineering

Making software is like building a condo, in a few ways. You plan ahead what you want to create before starting, so that waste is kept to a minimum. Unlike a condo, where it's advisable to plan the whole project before you start, you do not have to plan out all of your software, because it will most likely change during development, and a lot of the planning may just go to waste.

pages: 1,758 words: 342,766

Code Complete (Developer Best Practices) by Steve McConnell


Ada Lovelace, Albert Einstein, Buckminster Fuller, call centre, choice architecture, continuous integration, data acquisition, database schema, don't repeat yourself, Donald Knuth, fault tolerance, Grace Hopper, haute cuisine, if you see hoof prints, think horses—not zebras, index card, inventory management, iterative process, Larry Wall, late fees, loose coupling, Menlo Park, Perl 6, place-making, premature optimization, revision control, Sapir-Whorf hypothesis, slashdot, sorting algorithm, statistical model, Tacoma Narrows Bridge, the scientific method, Thomas Kuhn: the structure of scientific revolutions, Turing machine, web application

This flies in the face of the folk wisdom that you can code like hell and then test all the mistakes out of the software. That idea is dead wrong. Testing merely tells you the specific ways in which your software is defective. Testing won't make your program more usable, faster, smaller, more readable, or more extensible. Premature optimization is another kind of process error. In an effective process, you make coarse adjustments at the beginning and fine adjustments at the end. If you were a sculptor, you'd rough out the general shape before you started polishing individual features. Premature optimization wastes time because you spend time polishing sections of code that don't need to be polished. You might polish sections that are small enough and fast enough as they are, you might polish code that you later throw away, and you might fail to throw away bad code because you've already spent time polishing it.

When you tune code, you're implicitly signing up to reprofile each optimization every time you change your compiler brand, compiler version, library version, and so on. If you don't reprofile, an optimization that improves performance under one version of a compiler or library might well degrade performance when you change the build environment. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. —Donald Knuth You should optimize as you go—false! One theory is that if you strive to write the fastest and smallest possible code as you write each routine, your program will be fast and small. This approach creates a forest-for-the-trees situation in which programmers ignore significant global optimizations because they're too busy with micro-optimizations.

Developers immerse themselves in algorithm analysis and arcane debates that in the end don't contribute much value to the user. Concerns such as correctness, information hiding, and readability become secondary goals, even though performance is easier to improve later than these other concerns are. Post hoc performance work typically affects less than five percent of a program's code. Would you rather go back and do performance work on five percent of the code or readability work on 100 percent? In short, premature optimization's primary drawback is its lack of perspective. Its victims include final code speed, performance attributes that are more important than code speed, program quality, and ultimately the software's users. If the development time saved by implementing the simplest program is devoted to optimizing the running program, the result will always be a program that runs faster than one developed with indiscriminate optimization efforts (Stevens 1981).
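McConnell's five-percent observation can be made concrete with Amdahl's law, which the excerpt does not state but which quantifies the same point; the function name and the example fractions below are ours, chosen for illustration.

```python
def overall_speedup(p, s):
    """Amdahl's law: overall speedup when a fraction p of total runtime
    is accelerated by a factor s. (Name and framing are ours.)"""
    return 1.0 / ((1.0 - p) + p / s)

# Making a section that accounts for 5% of runtime 10x faster
# barely moves the needle...
small_win = overall_speedup(0.05, 10)   # roughly 1.05x overall

# ...while merely doubling the speed of a genuine 40% bottleneck
# is clearly visible.
big_win = overall_speedup(0.40, 2)      # 1.25x overall
```

This is the arithmetic behind "lack of perspective": polishing code that is not a bottleneck cannot pay back the time spent on it.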

pages: 828 words: 205,338

Write Great Code, Volume 2 by Randall Hyde


complexity theory, Donald Knuth, locality of reference, NP-complete, premature optimization

Optimization can fine-tune the performance of a system, but it can rarely deliver a miracle. Although the quote is often attributed to Donald Knuth, who popularized it, it was Tony Hoare who originally said, “Premature optimization is the root of all evil.” This statement has long been the rallying cry of software engineers who avoid any thought of application performance until the very end of the software-development cycle—at which point the optimization phase is typically ignored for economic or time-to-market reasons. However, Hoare did not say, “Concern about application performance during the early stages of an application’s development is the root of all evil.” He specifically said premature optimization, which, back then, meant counting cycles and instructions in assembly language code—not the type of coding you want to do during initial program design, when the code base is rather fluid.

The following excerpt from a short essay by Charles Cook describes the problem with reading too much into this statement: I’ve always thought this quote has all too often led software designers into serious mistakes because it has been applied to a different problem domain to what was intended. The full version of the quote is “We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil.” and I agree with this. It’s usually not worth spending a lot of time micro-optimizing code before it’s obvious where the performance bottlenecks are. But, conversely, when designing software at a system level, performance issues should always be considered from the beginning. A good software developer will do this automatically, having developed a feel for where performance issues will cause problems.
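Cook's distinction between micro-tweaks and system-level design choices can be illustrated with a small Python sketch (not from Hyde's book; the sizes and repetition counts are arbitrary): choosing the right data structure up front, a design-level decision, typically outweighs any later micro-optimization of the same lookup.

```python
import timeit

# Same data, two representations: a design-level choice.
items = list(range(10_000))
as_list = items            # membership test scans: O(n)
as_set = set(items)        # membership test hashes: O(1) expected

# Worst case for the list: looking up the last element.
t_list = timeit.timeit(lambda: 9_999 in as_list, number=2_000)
t_set = timeit.timeit(lambda: 9_999 in as_set, number=2_000)
```

No amount of micro-polishing the list scan will close the gap that the data-structure decision opened.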

These chapters describe disassemblers, object code dump tools, debuggers, various HLL compiler options for displaying assembly language code, and other useful software tools. The remainder of the book, Chapter 7 through Chapter 15, describes how compilers generate machine code for different HLL statements and data types. Armed with this knowledge, you will be able to choose the most appropriate data types, constants, variables, and control structures to produce efficient applications. While you read, keep Dr. Hoare’s quote in mind: “Premature optimization is the root of all evil.” It is certainly possible to misapply the information in this book and produce code that is difficult to read and maintain. This would be especially disastrous during the early stages of your project’s design and implementation, when the code is fluid and subject to change. But remember: This book is not about choosing the most efficient statement sequence, regardless of the consequences; it is about understanding the cost of various HLL constructs so that, when you have a choice, you can make an educated decision concerning which sequence to use.

pages: 108 words: 28,348

Code Simplicity by Max Kanat-Alexander


don't repeat yourself, premature optimization, the scientific method

That’s something you should fix. Sometimes a user will report that there’s a bug, when actually it’s the program behaving exactly as you intended it to. In this case, it’s a matter of majority rules. If a significant number of users think that the behavior is a bug, it’s a bug. If only a tiny minority (like one or two) think it’s a bug, it’s not a bug. The most famous error in this area is what we call “premature optimization.” That is, some developers seem to like to make things go fast, but they spend time optimizing their code before they know that it’s slow! This is like a charity sending food to rich people and saying, “We just wanted to help people!” Illogical, isn’t it? They’re solving a problem that doesn’t exist. The only parts of your program where you should be concerned about speed are the exact parts that you can show are causing a real performance problem for your users.
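Kanat-Alexander's advice presupposes that you can show which parts are slow. One lightweight way to gather that evidence, sketched here in Python (the decorator name, attributes, and threshold are our invention, not the book's), is to time user-facing operations and flag only those that exceed a budget.

```python
import functools
import time

def timed(threshold_s=0.1):
    """Decorator that records wall-clock time per call and flags
    calls exceeding threshold_s. (Illustrative sketch, names are ours.)"""
    def deco(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            result = fn(*args, **kwargs)
            wrapper.last_elapsed = time.perf_counter() - start
            wrapper.flagged = wrapper.last_elapsed > threshold_s
            return result
        wrapper.last_elapsed = None
        wrapper.flagged = False
        return wrapper
    return deco
```

Only functions that actually trip the flag under real use are candidates for optimization; everything else stays simple.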

pages: 132 words: 31,976

Getting Real by Jason Fried, David Heinemeier Hansson, Matthew Linderman, 37 Signals


call centre, collaborative editing, David Heinemeier Hansson, iterative process, John Gruber, knowledge worker, Merlin Mann, Metcalfe's law, performance metric, premature optimization, Ruby on Rails, slashdot, Steve Jobs, web application

Most attempts at optimization — tying something down very explicitly — reduce the breadth and scope of interactions and relationships, which is the very source of emergence. In the flocking birds example, as with a well-designed system, it's the interactions and relationships that create the interesting behavior. The harder we tighten things down, the less room there is for a creative, emergent solution. Whether it's locking down requirements before they are well understood, prematurely optimizing code, or inventing complex navigation and workflow scenarios before letting end users play with the system, the result is the same: an overly complicated, stupid system instead of a clean, elegant system that harnesses emergence. Keep it small. Keep it simple. Let it happen. —Andrew Hunt, The Pragmatic Programmers

The Three Musketeers: Use a team of three for version 1.0. For the first version of your app, start with only three people.

pages: 450 words: 569

ANSI Common LISP by Paul Graham


Donald Knuth, general-purpose programming language, Paul Graham, premature optimization, Ralph Waldo Emerson, random walk

A profiler is a valuable tool—perhaps even a necessity—in producing the most efficient code. If your Lisp implementation provides one, use it to guide optimization. If not, you are reduced to guessing where the bottlenecks are, and you might be surprised how often such guesses turn out to be wrong. A corollary of the bottleneck rule is that one should not put too much effort into optimization early in a program's life. Knuth puts the point even more strongly: "Premature optimization is the root of all evil (or at least most of it) in programming." It's hard to see where the real bottlenecks will be when you've just started writing a program, so there's more chance you'll be wasting your time. Optimizations also tend to make a program harder to change, so trying to write a program and optimize it at the same time can be like trying to paint a picture with paint that dries too fast.
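Graham is writing about Lisp profilers, but the same profile-before-guessing workflow can be sketched with Python's standard-library profiler; the workload functions below are hypothetical stand-ins for a real program.

```python
import cProfile
import io
import pstats

def slow_concat(n):
    """Quadratic string building: a plausible, non-obvious bottleneck."""
    s = ""
    for i in range(n):
        s += str(i)
    return s

def fast_concat(n):
    """Linear alternative using str.join."""
    return "".join(str(i) for i in range(n))

def workload():
    slow_concat(3000)
    fast_concat(3000)

profiler = cProfile.Profile()
profiler.enable()
workload()
profiler.disable()

# Render the stats so we can see where the time actually went.
buf = io.StringIO()
pstats.Stats(profiler, stream=buf).sort_stats("cumulative").print_stats()
report = buf.getvalue()
```

The report names the expensive function directly, replacing a guess with a measurement.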


pages: 593 words: 118,995

Relevant Search: With Examples Using Elasticsearch and Solr by Doug Turnbull, John Berryman


commoditize, crowdsourcing, domain-specific language, finite state, fudge factor, full text search, information retrieval, natural language processing, premature optimization, recommendation engine, sentiment analysis

Violating the prime directive Star Trek was notorious for having rules that brash starship captains routinely violated. Well, by indexing TMDB directly, you violated some advice we gave earlier. You directly placed the source data model into Elasticsearch. Shouldn’t you have done some signal modeling? If you use this data directly to create a search index, won’t you end up with relevance problems? Well, yes, but that’s for a good reason. Search is a place ripe for premature optimization. You’re likely to reach the heat death of the universe before achieving a perfect search solution in every direction. You know there will be relevance problems, but you don’t quite know what those are until you experiment with user searches. There are few areas that emphasize “fail fast” as much as search relevance. Load your data, get something basic working, find where it’s broken, reconfigure, reindex if need be, requery, rinse, and repeat.


pages: 1,201 words: 233,519

Coders at Work by Peter Seibel


Ada Lovelace, bioinformatics, cloud computing, Conway's Game of Life, domain-specific language, don't repeat yourself, Donald Knuth, fault tolerance, Fermat's Last Theorem, Firefox, George Gilder, glass ceiling, Guido van Rossum, HyperCard, information retrieval, Larry Wall, loose coupling, Marc Andreessen, Menlo Park, Metcalfe's law, Perl 6, premature optimization, publish or perish, random walk, revision control, Richard Stallman, rolodex, Ruby on Rails, Saturday Night Live, side project, slashdot, speech recognition, the scientific method, Therac-25, Turing complete, Turing machine, Turing test, type inference, Valgrind, web application

So now I say, “When Alan Turing wrote the first programming manual for the Mark I, in 1950. …” Mathematical things: similarly I'll get people who miss it. So then I'll say, you know, I actually said it correctly, but I know I still have to change it and make it better. Seibel: When you publish a literate program, it's the final form of the program, typically. And you are often credited with saying, “Premature optimization is the root of all evil.” But by the time you get to the final form it's not premature—you may have optimized some parts to be very clever. But doesn't that make it hard to read? Knuth: No. A good literate program will show its history. A good literate program will say, “Here's the obvious way to do it and then why we don't follow that road?” When you put subtle stuff in your program, literate programming shines because you don't just have the code that does it but also your documentation.

And read version two or you'll never understand version three.” I write a whole variety of different kinds of programs. Sometimes I'll write a program where I couldn't care less about efficiency—I just want to get the answer. I'll use brute force, something that I'm guaranteed I won't have to think—there'll be no subtlety at all so I won't be outsmarting myself. There I'm not doing any premature optimization. Then I can change that into something else and see if I get something that agrees with my brute-force way. Then I can scale up the program and go to larger cases. Most programs stop at that stage because you're not going to execute the code a trillion times. When I'm doing an illustration for The Art of Computer Programming I may change that illustration several times and the people who translate my book might have to redo the program, but it doesn't matter that I drew the illustration by a very slow method because I've only got to generate that file once and then it goes off to the publisher and gets printed in a book.
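Knuth's brute-force-first workflow translates directly into code. The interviews are language-agnostic, so this is our own Python illustration (the maximum-subarray problem and both function names are our choice): write the obvious version with no subtlety, then cross-check the clever version against it before trusting it.

```python
import random

def max_subarray_brute(xs):
    """O(n^2) brute force: best sum over all contiguous slices.
    No subtlety, so we won't outsmart ourselves."""
    best = xs[0]
    for i in range(len(xs)):
        total = 0
        for j in range(i, len(xs)):
            total += xs[j]
            best = max(best, total)
    return best

def max_subarray_fast(xs):
    """Kadane's O(n) version: the 'optimized' program."""
    best = cur = xs[0]
    for x in xs[1:]:
        cur = max(x, cur + x)
        best = max(best, cur)
    return best

# Cross-check the clever version against brute force on random cases,
# exactly as Knuth describes doing before scaling up.
rng = random.Random(0)
for _ in range(200):
    xs = [rng.randint(-10, 10) for _ in range(rng.randint(1, 12))]
    assert max_subarray_brute(xs) == max_subarray_fast(xs)
```

Once the two agree on small inputs, the fast version can be scaled up with some confidence.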

Designing Data-Intensive Applications: The Big Ideas Behind Reliable, Scalable and Maintainable Systems by Martin Kleppmann (O’Reilly, 2017)

active measures, Amazon Web Services, bitcoin, blockchain, business intelligence, business process, cloud computing, collaborative editing, commoditize, conceptual framework, cryptocurrency, database schema, DevOps, distributed ledger, Donald Knuth, Edward Snowden, ethereum blockchain, fault tolerance, finite state, Flash crash, full text search, general-purpose programming language, informal economy, information retrieval, Internet of things, iterative process, John von Neumann, loose coupling, Marc Andreessen, natural language processing, Network effects, packet switching, peer-to-peer, performance metric, place-making, premature optimization, recommendation engine, Richard Feynman, self-driving car, semantic web, Shoshana Zuboff, social graph, social web, software as a service, software is eating the world, sorting algorithm, source of truth, SPARQL, speech recognition, statistical model, web application, WebSocket, wikimedia commons

This book breaks down the internals of various databases and data processing systems, and it’s great fun to explore the bright thinking that went into their design. Sometimes, when discussing scalable data systems, people make comments along the lines of, “You’re not Google or Amazon. Stop worrying about scale and just use a relational database.” There is truth in that statement: building for scale that you don’t need is wasted effort and may lock you into an inflexible design. In effect, it is a form of premature optimization. However, it’s also important to choose the right tool for the job, and different technologies each have their own strengths and weaknesses. As we shall see, relational databases are important but not the final word on dealing with data. Scope of This Book This book does not attempt to give detailed instructions on how to install or use specific software packages or APIs, since there is already plenty of documentation for those things.

A single integrated software product may also be able to achieve better and more predictable performance on the kinds of workloads for which it is designed, compared to a system consisting of several tools that you have composed with application code [23]. As I said in the Preface, building for scale that you don’t need is wasted effort and may lock you into an inflexible design. In effect, it is a form of premature optimization. The goal of unbundling is not to compete with individual databases on performance for particular workloads; the goal is to allow you to combine several different databases in order to achieve good performance for a much wider range of workloads than is possible with a single piece of software. It’s about breadth, not depth—in the same vein as the diversity of storage and processing models that we discussed in “Comparing Hadoop to Distributed Databases” on page 414.

pages: 757 words: 193,541

The Practice of Cloud System Administration: DevOps and SRE Practices for Web Services, Volume 2 by Thomas A. Limoncelli, Strata R. Chalup, Christina J. Hogan

active measures, Amazon Web Services, anti-pattern, barriers to entry, business process, cloud computing, commoditize, continuous integration, correlation coefficient, database schema, Debian, defense in depth, delayed gratification, DevOps, domain-specific language, fault tolerance, finite state, Firefox, Google Glasses, information asymmetry, Infrastructure as a Service, intermodal, Internet of things, job automation, job satisfaction, load shedding, loose coupling, Malcom McLean invented shipping containers, Marc Andreessen, place-making, platform as a service, premature optimization, recommendation engine, revision control, risk tolerance, side project, Silicon Valley, software as a service, sorting algorithm, statistical model, Steven Levy, supply-chain management, Toyota Production System, web application, Yogi Berra

This is where the design features that enable further scaling come into play. While every effort is made to foresee potential scaling issues, not all of them can receive engineering attention. The additional design and coding effort that will help deal with future potential scaling issues is lower priority than writing code to fix the immediate issues of the day. Spending too much time preventing scaling problems that may or may not happen is called premature optimization and should be avoided. 5.1.1 Identify Bottlenecks A bottleneck is a point in the system where congestion occurs. It is a point that is resource starved in a way that limits performance. Every system has a bottleneck. If a system is underperforming, the bottleneck can be fixed to permit the system to perform better. If the system is performing well, knowing the location of the bottleneck can be useful because it enables us to predict and prevent future problems.
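The bottleneck-hunting idea above can be sketched numerically: model each tier's capacity, and the lowest-capacity stage caps end-to-end throughput. The stage names and request rates below are illustrative assumptions, not figures from the book.

```python
def find_bottleneck(capacities):
    """Return the stage with the lowest capacity; it limits the whole system."""
    name = min(capacities, key=capacities.get)
    return name, capacities[name]

# Hypothetical per-stage capacities in requests/second for a three-tier service.
stages = {
    "load_balancer": 50_000,
    "app_server": 4_000,
    "database": 1_200,
}

name, limit = find_bottleneck(stages)
print(f"bottleneck: {name} (~{limit} req/s end-to-end)")
```

Fixing the database tier here would simply move the bottleneck to the app servers, which is why knowing its location helps predict the next scaling problem rather than eliminate scaling problems outright.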

., 11 Playbooks oncall, 297–298 process, 153 Pods, 137 Points of presence (POPs), 83–85 Pollers, 352 Post-crash recovery, 35 Postmortems, 152 communication, 302 DevOps, 184 oncall, 291, 300–302 purpose, 300–301 reports, 301–302 templates, 484–485 Power failures, 34, 133 Power of 2 mapping process, 110–111 Practical Approach to Large-Scale Agile Development: How HP Transformed HP LaserJet FutureSmart Firmware, 188 Practice of System and Network Administration, 132, 204 Pre-checks, 141 Pre-shift oncall responsibilities, 294 Pre-submit checks in build phase, 202–203 Pre-submit tests, 267 Pre-web era (1985-1994), 452–455 Prefork processing module, 114 Premature optimization, 96 Prescriptive failure domains, 127 Primary resources capacity planning, 372 defined, 366 Prioritizing automation, 257–258 feature requests, 46 for stability, 150 Privacy in platform selection, 63 Private cloud factor in platform selection, 62 Private sandbox environments, 197 Proactive scaling solutions, 97–98 Problems to solve in DevOps, 187 Process watchers, 128 Processes automation benefits, 253 containers, 60 instead of threads, 114 Proctors for Game Day, 318 Product Management (PM) monitoring, 336 Production candidates, 216 Production health in continuous deployment, 237 Project-focused days, 162–163 Project planning frequencies, 410 Project work, 161–162 Promotion step in deployment phase, 212 Propellerheads, 451 Proportional shedding, 230 Protocols collections, 351 network, 489 Prototyping, 258 Provider comparisons in service platform selection, 53 Provisional end-of-shift reports, 299 Provisioning in capacity planning, 384–385 in DevOps, 185–186 Proxies monitoring, 352 reverse proxy service, 80 Public cloud factor in platform selection, 62 Public Information Officers in Incident Command System, 325–326 Public key infrastructure (PKI), 40 Public safety arena in Incident Command System, 325 Publishers in message bus architectures, 85 Publishing postmortems, 302 PubSub2 system, 86 Pull 
monitoring, 350–351 Puppet systems configuration management, 261 deployment phase, 213 multitenant, 271 Push conflicts in continuous deployment, 238 Push monitoring, 350–351 “Pushing Millions of Lines of Code Five Days a Week” presentation, 233 PV (paravirtualization), 58–59 Python language libraries, 55 overview, 259–261 QPS (queries per second) defined, 10 limiting, 40–41 Quadratic scaling, 476 Quality Assurance monitoring, 335 Quality assurance (QA) engineers, 199 Quality measurements, 402 Queries in HTTP, 69 Queries of death, 130–131 Queries per second (QPS) defined, 10 limiting, 40–41 Queues, 113 benefits, 113 draining, 35–36 issue tracking systems, 263 messages, 86 variations, 113–114 Quick fixes vs. long-term, 295–296 RabbitMQ service, 86 Rachitsky, L., 302 Rack diversity, 136 Racks failures, 136 locality, 137 RAID systems, 132 RAM for caching, 104–106 failures, 123, 131–132 Random testing for disaster preparedness, 314–315 Rapid development, 231–232 Rate limits in design for operations, 40–41 Rate monitoring, 348 Rationale, documenting, 276 Re-assimilate tool, 255 Read-only replica support, 37 Real-time analysis, 353 Real user monitoring (RUM), 333 Reboots, 34 Recommendations in postmortem reports, 301 Recommended reading, 487–489 Recovery-Oriented Computing (ROC), 461 Recovery tool, 255 Redis storage system, 24, 106 Reduced risk factor in service delivery, 200 Reducing risk, 309–311 Reducing toil, automation for, 257 Redundancy design for operations, 37 file chunks, 20 for resiliency, 124–125 servers, 17 Reengineering components, 97 Refactoring, 97 Regional collectors, 352–353 Registering packages, 204, 206 Regression analysis, 375–376 Regression lines, 376 Regression tests for performance, 156, 215 Regular meetings in DevOps, 187 Regular oncall responsibilities, 294–295 Regular software crashes, 128 Regular Tasks (RT) assessments, 423–425 operational responsibility, 403 Regulating system integration, 250 Relationships in DevOps, 182 Release atomicity, 
240–241 Release candidates, 197 Release engineering practice in DevOps, 186 Release vehicle packaging in DevOps, 185 Releases defined, 196 DevOps, 185 Reliability automation for, 253 message bus architectures, 87 Reliability zones in service platform selection, 53–54 Remote hands, 163 Remote monitoring stations, 352 Remote Procedure Call (RPC) protocol, 41 Repair life cycle, 254–255 Repeatability automation for, 253 continuous delivery, 190 Repeatable level in CMM, 405 Replacement algorithms for caches, 107 Replicas, 124 in design for operations, 37–38 load balancers with, 12–13 three-tier web service, 76 updating, 18 Reports for postmortems, 301–302 Repositories in build phase, 197 Reproducibility in continuous deployment, 237 Requests in updating state, 18 “Resilience Engineering: Learning to Embrace Failure” article, 320 Resiliency, 119–120 capacity planning, 370–371 DevOps, 178 exercises, 143 failure domains, 126–128 human error, 141–142 malfunctions, 121–123 overload failures, 138–141 physical failures.

pages: 509 words: 92,141

The Pragmatic Programmer by Andrew Hunt, Dave Thomas


A Pattern Language, Broken windows theory, business process, buy low sell high, combinatorial explosion, continuous integration, database schema, domain-specific language, don't repeat yourself, Donald Knuth, general-purpose programming language, George Santayana, Grace Hopper, if you see hoof prints, think horses—not zebras, index card, loose coupling, Menlo Park, MVC pattern, premature optimization, Ralph Waldo Emerson, revision control, Schrödinger's Cat, slashdot, sorting algorithm, speech recognition, traveling salesman, urban decay, Y2K

Best Isn't Always Best You also need to be pragmatic about choosing appropriate algorithms—the fastest one is not always the best for the job. Given a small input set, a straightforward insertion sort will perform just as well as a quicksort, and will take you less time to write and debug. You also need to be careful if the algorithm you choose has a high setup cost. For small input sets, this setup may dwarf the running time and make the algorithm inappropriate. Also be wary of premature optimization. It's always a good idea to make sure an algorithm really is a bottleneck before investing your precious time trying to improve it. Related sections include: Estimating, page 64 Challenges Every developer should have a feel for how algorithms are designed and analyzed. Robert Sedgewick has written a series of accessible books on the subject ([Sed83, SF96, Sed92] and others).
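Hunt and Thomas's point about setup cost can be sketched as a hybrid sort: quicksort overall, but fall back to insertion sort once a partition is small enough that quicksort's overhead stops paying for itself. The cutoff value here is an illustrative assumption; in practice you would measure before tuning it.

```python
INSERTION_SORT_CUTOFF = 16  # illustrative threshold; measure before tuning

def insertion_sort(a, lo, hi):
    """Sort a[lo..hi] in place; cheap for small or nearly sorted ranges."""
    for i in range(lo + 1, hi + 1):
        key, j = a[i], i - 1
        while j >= lo and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key

def hybrid_quicksort(a, lo=0, hi=None):
    """Quicksort that hands small partitions to insertion sort."""
    if hi is None:
        hi = len(a) - 1
    if hi - lo < INSERTION_SORT_CUTOFF:
        insertion_sort(a, lo, hi)
        return
    # Hoare-style partition around the middle element.
    pivot = a[(lo + hi) // 2]
    i, j = lo, hi
    while i <= j:
        while a[i] < pivot:
            i += 1
        while a[j] > pivot:
            j -= 1
        if i <= j:
            a[i], a[j] = a[j], a[i]
            i, j = i + 1, j - 1
    hybrid_quicksort(a, lo, j)
    hybrid_quicksort(a, i, hi)

data = [5, 3, 8, 1, 9, 2, 7, 4, 6, 0] * 5
hybrid_quicksort(data)
print(data[:5])  # → [0, 0, 0, 0, 0]
```

Production sorts (CPython's Timsort, for example) bake in the same idea, which is the point: the optimization is applied because measurement showed it mattered, not speculatively.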

pages: 342 words: 88,736

The Big Ratchet: How Humanity Thrives in the Face of Natural Crisis by Ruth Defries


agricultural Revolution, Columbian Exchange, demographic transition, double helix, European colonialism, food miles, Francisco Pizarro, Haber-Bosch Process, Intergovernmental Panel on Climate Change (IPCC), Internet Archive, John Snow's cholera map, out of africa, planetary scale, premature optimization, profit motive, Ralph Waldo Emerson, Thomas Malthus, trade route, transatlantic slave trade

There’s no way to stop the process of natural selection. A pesticide might work for a decade or two. Beyond that, natural selection is likely to render the compound ineffective. Companies in the pesticide market need to continually synthesize new compounds to combat resistance. Many, many hundreds of different synthesized pesticides exist for this reason. It’s a costly endeavor with no endpoint. Resistance put a big dent in the premature optimism that DDT would once and for all make humanity the victor in the battle against pests. Pest resistance wrought by natural selection wasn’t the only problem with the DDT bonanza. The pesticide, when sprayed across fields and forests and inside homes, attacked all living organisms with which it came into contact. Again, this wasn’t new. Before DDT, strychnine to control rodents killed quail and songbirds, and arsenic to control tree diseases killed deer.

pages: 540 words: 103,101

Building Microservices by Sam Newman


airport security, Amazon Web Services, anti-pattern, business process, call centre, continuous integration, create, read, update, delete, defense in depth, don't repeat yourself, Edward Snowden, fault tolerance, index card, information retrieval, Infrastructure as a Service, inventory management, job automation, load shedding, loose coupling, platform as a service, premature optimization, pull request, recommendation engine, social graph, software as a service, source of truth, the built environment, web application, WebSocket, x509 certificate

One of the downsides is that this navigation of controls can be quite chatty, as the client needs to follow links to find the operation it wants to perform. Ultimately, this is a trade-off. I would suggest you start with having your clients navigate these controls first, then optimize later if necessary. Remember that we have a large amount of help out of the box by using HTTP, which we discussed earlier. The evils of premature optimization have been well documented before, so I don’t need to expand upon them here. Also note that a lot of these approaches were developed to create distributed hypertext systems, and not all of them fit! Sometimes you’ll find yourself just wanting good old-fashioned RPC. Personally, I am a fan of using links to allow consumers to navigate API endpoints. The benefits of progressive discovery of the API and reduced coupling can be significant.
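Newman's "navigate the controls first, optimize later" advice can be sketched with an in-memory stand-in for an HTTP API. The `_links` field name and the URIs below are assumptions for illustration (HAL-style APIs use a similar shape); the book does not prescribe a specific link format.

```python
# Canned responses standing in for a real service; only the entry point URI
# is known to the client up front.
RESPONSES = {
    "/customers/42": {
        "name": "Ada",
        "_links": {"orders": "/customers/42/orders"},
    },
    "/customers/42/orders": {
        "items": ["order-17", "order-23"],
        "_links": {},
    },
}

def get(uri):
    """Stand-in for an HTTP GET against the service."""
    return RESPONSES[uri]

def follow(resource, rel):
    """Navigate by link relation instead of a hardcoded URI."""
    return get(resource["_links"][rel])

customer = get("/customers/42")       # the one entry point the client knows
orders = follow(customer, "orders")   # discovered via the link, not hardcoded
print(orders["items"])
```

The chattiness the text mentions is visible here: reaching the orders takes two lookups instead of one, but the server is now free to move `/customers/42/orders` without breaking this client.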

pages: 1,025 words: 150,187

ZeroMQ by Pieter Hintjens


anti-pattern, carbon footprint, cloud computing, Debian, distributed revision control, domain-specific language, factory automation, fault tolerance, fear of failure, finite state, Internet of things, iterative process, premature optimization, profit motive, pull request, revision control, RFC: Request For Comment, Richard Stallman, Skype, smart transportation, software patent, Steve Jobs, Valgrind, WebSocket

The goal of MOPED is to define a process by which we can take a rough use case for a new distributed application, and go from “Hello World” to fully working prototype in any language in under a week. Using MOPED, you grow, more than build, a working ØMQ architecture from the ground up with minimal risk of failure. By focusing on the contracts rather than the implementations, you avoid the risk of premature optimization. By driving the design process through ultra-short test-based cycles, you can be more certain that what you have works before you add more. We can turn this into five real steps: Internalize the ØMQ semantics. Draw a rough architecture. Decide on the contracts. Make a minimal end-to-end solution. Solve one problem and repeat. Step 1: Internalize the Semantics You must learn and digest ØMQ’s “language,” that is, the socket patterns and how they work.

pages: 624 words: 127,987

The Personal MBA: A World-Class Business Education in a Single Volume by Josh Kaufman


Albert Einstein, Atul Gawande, Black Swan, business process, buy low sell high, capital asset pricing model, Checklist Manifesto, cognitive bias, correlation does not imply causation, Credit Default Swap, Daniel Kahneman / Amos Tversky, David Heinemeier Hansson, David Ricardo: comparative advantage, Dean Kamen, delayed gratification, discounted cash flows, Donald Knuth, double entry bookkeeping, Douglas Hofstadter, Frederick Winslow Taylor, George Santayana, Gödel, Escher, Bach, high net worth, hindsight bias, index card, inventory management, iterative process, job satisfaction, Johann Wolfgang von Goethe, Kevin Kelly, Lao Tzu, loose coupling, loss aversion, Marc Andreessen, market bubble, Network effects, Parkinson's law, Paul Buchheit, Paul Graham, place-making, premature optimization, Ralph Waldo Emerson, rent control, side project, statistical model, stealth mode startup, Steve Jobs, Steve Wozniak, subscription business, telemarketer, the scientific method, time value of money, Toyota Production System, tulip mania, Upton Sinclair, Vilfredo Pareto, Walter Mischel, Y Combinator, Yogi Berra

The purpose of understanding and analyzing systems is to improve them, which is often tricky—changing systems can often create unintended consequences. In this chapter, you’ll learn the secrets of Optimization, how to remove unnecessary Friction from critical processes, and how to build Systems that can handle Uncertainty and Change. SHARE THIS CONCEPT: Optimization Premature optimization is the root of all evil. —DONALD KNUTH, COMPUTER SCIENTIST AND FORMER PROFESSOR AT STANFORD UNIVERSITY Optimization is the process of maximizing the output of a System or minimizing a specific input the system requires to operate. Optimization typically revolves around the systems and processes behind your Key Performance Indicators, which measure the critical elements of the system as a whole.

pages: 923 words: 516,602

The C++ Programming Language by Bjarne Stroustrup


combinatorial explosion, conceptual framework, database schema, distributed generation, Donald Knuth, fault tolerance, general-purpose programming language, index card, iterative process, job-hopping, locality of reference, Menlo Park, Parkinson's law, premature optimization, sorting algorithm

6 Expressions and Statements Premature optimization is the root of all evil. – D. Knuth On the other hand, we cannot ignore efficiency. – Jon Bentley Desk calculator example — input — command line arguments — expression summary — logical and relational operators — increment and decrement — free store — explicit type conversion — statement summary — declarations — selection statements — declarations in conditions — iteration statements — the infamous goto — comments and indentation — advice — exercises. 6.1 A Desk Calculator [expr.calculator] Statements and expressions are introduced by presenting a desk calculator program that provides the four standard arithmetic operations as infix operators on floating-point numbers.

If a ‘‘maintenance crew’’ is left guessing about the architecture of the system or must deduce the purpose of system components from their implementation, the structure of a system can deteriorate rapidly under the impact of local patches. Documentation is typically much better at conveying details than in helping new people to understand key ideas and principles. 23.4.7 Efficiency [design.efficiency] Donald Knuth observed that ‘‘premature optimization is the root of all evil.’’ Some people have learned that lesson all too well and consider all concern for efficiency evil. On the contrary, efficiency must be kept in mind throughout the design and implementation effort. However, that does not mean the designer should be concerned with micro-efficiencies, but that first-order efficiency issues must be considered. The best strategy for efficiency is to produce a clean and simple design.

The Singularity Is Near: When Humans Transcend Biology by Ray Kurzweil


additive manufacturing, AI winter, Alan Turing: On Computable Numbers, with an Application to the Entscheidungsproblem, Albert Einstein, anthropic principle, Any sufficiently advanced technology is indistinguishable from magic, artificial general intelligence, Asilomar, augmented reality, autonomous vehicles, Benoit Mandelbrot, Bill Joy: nanobots, bioinformatics, brain emulation, Brewster Kahle, Brownian motion, business intelligence, call centre, carbon-based life, cellular automata, Claude Shannon: information theory, complexity theory, conceptual framework, Conway's Game of Life, cosmological constant, cosmological principle, cuban missile crisis, data acquisition, Dava Sobel, David Brooks, Dean Kamen, disintermediation, double helix, Douglas Hofstadter, epigenetics, factory automation, friendly AI, George Gilder, Gödel, Escher, Bach, informal economy, information retrieval, invention of the telephone, invention of the telescope, invention of writing, Isaac Newton, iterative process, Jaron Lanier, Jeff Bezos, job automation, job satisfaction, John von Neumann, Kevin Kelly, Law of Accelerating Returns, life extension, lifelogging, linked data, Loebner Prize, Louis Pasteur, mandelbrot fractal, Mikhail Gorbachev, mouse model, Murray Gell-Mann, mutually assured destruction, natural language processing, Network effects, new economy, Norbert Wiener, oil shale / tar sands, optical character recognition, pattern recognition, phenotype, premature optimization, randomized controlled trial, Ray Kurzweil, remote working, reversible computing, Richard Feynman, Robert Metcalfe, Rodney Brooks, Search for Extraterrestrial Intelligence, selection bias, semantic web, Silicon Valley, Singularitarianism, speech recognition, statistical model, stem cell, Stephen Hawking, Stewart Brand, strong AI, superintelligent machines, technological singularity, Ted Kaczynski, telepresence, The Coming Technological Singularity, Thomas Bayes, transaction costs, Turing machine, Turing test, Vernor Vinge, Y2K, Yogi Berra

We saw this in the railroad frenzy of the nineteenth century, which was followed by widespread bankruptcies. (I have some of these early unpaid railroad bonds in my collection of historical documents.) And we are still feeling the effects of the e-commerce and telecommunications busts of several years ago, which helped fuel a recession from which we are now recovering. AI experienced a similar premature optimism in the wake of programs such as the 1957 General Problem Solver created by Allen Newell, J. C. Shaw, and Herbert Simon, which was able to find proofs for theorems that had stumped mathematicians such as Bertrand Russell, and early programs from the MIT Artificial Intelligence Laboratory, which could answer SAT questions (such as analogies and story problems) at the level of college students.163 A rash of AI companies occurred in the 1970s, but when profits did not materialize there was an AI "bust" in the 1980s, which has become known as the "AI winter."

pages: 999 words: 194,942

Clojure Programming by Chas Emerick, Brian Carper, Christophe Grand


Amazon Web Services, Benoit Mandelbrot, cloud computing, continuous integration, database schema, domain-specific language, don't repeat yourself, failed state, finite state, Firefox, game design, general-purpose programming language, Guido van Rossum, Larry Wall, mandelbrot fractal, Paul Graham, platform as a service, premature optimization, random walk, Ruby on Rails, Schrödinger's Cat, semantic web, software as a service, sorting algorithm, Turing complete, type inference, web application

That’s an optimization and should only be taken on when absolutely necessary, especially given the costs associated with it: efficient field access ties code that uses it to a particular type, which often complicates the implementation of generic functionality and limits composability.[432] * * * [431] The canonical and up-to-date version of this flowchart is maintained at along with a number of translations, including Dutch, German, Japanese, Portuguese, and Spanish so far. [432] Recall that “premature optimization is the root of all evil.” Thank you, Professor Knuth. Chapter 19. Introducing Clojure into Your Workplace (or, Sneaking Clojure Past the Boss[433]) It is a sad fact that many programmers, if not the majority, use languages and tools every day that they begrudge. Either through historical accident, organizational inertia, or hard facts of the business, we often find ourselves stuck wishing we were using something, anything else to get our jobs done.

pages: 671 words: 228,348

Pro AngularJS by Adam Freeman


business process, create, read, update, delete, Google Chrome, information retrieval, inventory management, MVC pattern, place-making, premature optimization, revision control, Ruby on Rails, single page application, web application

Finally, I would have used the $animate service, which I describe in Chapter 23, to display short, focused animations to ease the transition from one view to another when the URL path changes. 186 Chapter 8 ■ SportsStore: Orders and Administration AVOIDING OPTIMIZATION PITFALLS You will notice that I say that I could consider reusing the category and pagination data, not that I would definitely do so. That’s because any kind of optimization should be carefully assessed to ensure it is sensible and that it avoids two main pitfalls that dog optimization efforts. The first pitfall is premature optimization, which is where a developer sees an opportunity to optimize an operation or task before the current implementation causes any problems or breaks a contract in the nonfunctional specification. This kind of optimization tends to make code more specific in its nature than it would otherwise be, and that can kill the easy movement of functionality from one component to another that is typical of AngularJS (and is one of the most enjoyable aspects of AngularJS development).

pages: 834 words: 180,700

The Architecture of Open Source Applications by Amy Brown, Greg Wilson


8-hour work day, anti-pattern, bioinformatics, cloud computing, collaborative editing, combinatorial explosion, computer vision, continuous integration, create, read, update, delete, David Heinemeier Hansson, Debian, domain-specific language, Donald Knuth, fault tolerance, finite state, Firefox, friendly fire, Guido van Rossum, linked data, load shedding, locality of reference, loose coupling, Mars Rover, MVC pattern, peer-to-peer, Perl 6, premature optimization, recommendation engine, revision control, Ruby on Rails, side project, Skype, slashdot, social web, speech recognition, the scientific method, The Wisdom of Crowds, web application, WebSocket

Design Reflections My experience in working on Graphite has reaffirmed a belief of mine that scalability has very little to do with low-level performance but instead is a product of overall design. I have run into many bottlenecks along the way but each time I look for improvements in design rather than speed-ups in performance. I have been asked many times why I wrote Graphite in Python rather than Java or C++, and my response is always that I have yet to come across a true need for the performance that another language could offer. In [Knu74], Donald Knuth famously said that premature optimization is the root of all evil. As long as we assume that our code will continue to evolve in non-trivial ways then all optimization is in some sense premature. One of Graphite's greatest strengths and greatest weaknesses is the fact that very little of it was actually "designed" in the traditional sense. By and large Graphite evolved gradually, hurdle by hurdle, as problems arose. Many times the hurdles were foreseeable and various pre-emptive solutions seemed natural.

pages: 669 words: 210,153

Tools of Titans: The Tactics, Routines, and Habits of Billionaires, Icons, and World-Class Performers by Timothy Ferriss


Airbnb, Alexander Shulgin, artificial general intelligence, asset allocation, Atul Gawande, augmented reality, back-to-the-land, Bernie Madoff, Bertrand Russell: In Praise of Idleness, Black Swan, blue-collar work, Buckminster Fuller, business process, Cal Newport, call centre, Checklist Manifesto, cognitive bias, cognitive dissonance, Colonization of Mars, Columbine, commoditize, correlation does not imply causation, David Brooks, David Graeber, diversification, diversified portfolio, Donald Trump, effective altruism, Elon Musk, fault tolerance, fear of failure, Firefox, follow your passion, future of work, Google X / Alphabet X, Howard Zinn, Hugh Fearnley-Whittingstall, Jeff Bezos, job satisfaction, Johann Wolfgang von Goethe, John Markoff, Kevin Kelly, Kickstarter, Lao Tzu, life extension, lifelogging, Mahatma Gandhi, Marc Andreessen, Mark Zuckerberg, Mason jar, Menlo Park, Mikhail Gorbachev, Nicholas Carr, optical character recognition, PageRank, passive income, pattern recognition, Paul Graham, peer-to-peer, Peter H. Diamandis: Planetary Resources, Peter Singer: altruism, Peter Thiel, phenotype, PIHKAL and TIHKAL, post scarcity, premature optimization, QWERTY keyboard, Ralph Waldo Emerson, Ray Kurzweil, recommendation engine, rent-seeking, Richard Feynman, risk tolerance, Ronald Reagan, selection bias, sharing economy, side project, Silicon Valley, skunkworks, Skype, Snapchat, social graph, software as a service, software is eating the world, stem cell, Stephen Hawking, Steve Jobs, Stewart Brand, superintelligent machines, Tesla Model S, The Wisdom of Crowds, Thomas L Friedman, Wall-E, Washington Consensus, Whole Earth Catalog, Y Combinator, zero-sum game

for each, brainstorming the ramifications. Can You Flip the Deferred-Life Plan and Make It Work? “Many, many people are working very hard, trying to save their money to retire so they can travel. Well, I decided to flip it around and travel when I was really young, when I had zero money. And I had experiences that, basically, even a billion dollars couldn’t have bought.” “You Don’t Want ‘Premature Optimization’” “I really recommend slack. ‘Productive’ is for your middle ages. When you’re young, you want to be prolific and make and do things, but you don’t want to measure them in terms of productivity. You want to measure them in terms of extreme performance, you want to measure them in extreme satisfaction.” The Ideas You Can’t Give Away or Kill . . . “I became a proponent of trying to give things away first.

pages: 1,085 words: 219,144

Solr in Action by Trey Grainger, Timothy Potter


business intelligence, cloud computing, commoditize, conceptual framework, crowdsourcing, data acquisition, failed state, fault tolerance, finite state, full text search, glass ceiling, information retrieval, natural language processing, performance metric, premature optimization, recommendation engine, web application

Technically an index-time boost is distributed (multiplied) into each term’s relevancy, which is somewhat different than using a function query against a popularity field, which is added to the overall score. Although it’s possible to construct your function queries in such a way as to mimic the index-time boost, in practice the additive boost will likely accomplish your desired outcome, so too much focus on this detail is likely a premature optimization until you discover a problem with this approach. Both the index-time document boost and the boosting of a document by a function on a popularity field are focused upon globally boosting a document’s relevancy versus all other documents. This might make sense for an e-commerce application in which certain products tend to sell better overall or for a news website where certain popular articles are trending.
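The multiplicative-versus-additive distinction above can change ranking order. A toy sketch (illustrative scores only, not Solr's actual scoring internals): a document with weak query relevancy but high popularity wins under an additive boost yet loses under a multiplicative one.

```python
# Hypothetical base relevancy and popularity values for two documents.
docs = {
    "niche-match": {"relevancy": 3.0, "popularity": 1.0},
    "popular-doc": {"relevancy": 0.5, "popularity": 4.0},
}

# Index-time style boost: popularity multiplied into the score.
multiplied = {d: s["relevancy"] * s["popularity"] for d, s in docs.items()}
# Function-query style boost: popularity added to the score.
added = {d: s["relevancy"] + s["popularity"] for d, s in docs.items()}

print(max(multiplied, key=multiplied.get))  # → niche-match (3.0 vs 2.0)
print(max(added, key=added.get))            # → popular-doc (4.5 vs 4.0)
```

As the text suggests, worrying about which scheme to use before seeing a real ranking problem is itself a premature optimization; the additive form is usually close enough.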