{
  localUrl: '../page/nearest_unblocked.html',
  arbitalUrl: 'https://arbital.com/p/nearest_unblocked',
  rawJsonUrl: '../raw/42.json',
  likeableId: '2295',
  likeableType: 'page',
  myLikeValue: '0',
  likeCount: '8',
  dislikeCount: '0',
  likeScore: '8',
  individualLikes: [
    'AlexeiAndreev',
    'BrianMuhia',
    'EricBruylant',
    'JaimeSevillaMolina',
    'AnnaSalamon',
    'RyanCarey2',
    'StephanieZolayvar',
    'OferGivoli'
  ],
  pageId: 'nearest_unblocked',
  edit: '26',
  editSummary: '',
  prevEdit: '25',
  currentEdit: '26',
  wasPublished: 'true',
  type: 'wiki',
  title: 'Nearest unblocked strategy',
  clickbait: 'If you patch an agent's preference framework to avoid an undesirable solution, what can you expect to happen?',
  textLength: '12545',
  alias: 'nearest_unblocked',
  externalUrl: '',
  sortChildrenBy: 'likes',
  hasVote: 'false',
  voteType: 'probability',
  votesAnonymous: 'false',
  editCreatorId: 'EliezerYudkowsky',
  editCreatedAt: '2016-05-01 22:22:53',
  pageCreatorId: 'EliezerYudkowsky',
  pageCreatedAt: '2015-04-05 20:45:20',
  seeDomainId: '0',
  editDomainId: 'EliezerYudkowsky',
  submitToDomainId: '0',
  isAutosave: 'false',
  isSnapshot: 'false',
  isLiveEdit: 'true',
  isMinorEdit: 'false',
  indirectTeacher: 'false',
  todoCount: '3',
  isEditorComment: 'false',
  isApprovedComment: 'true',
  isResolved: 'false',
  snapshotText: '',
  anchorContext: '',
  anchorText: '',
  anchorOffset: '0',
  mergedInto: '',
  isDeleted: 'false',
  viewCount: '2892',
  text: '[summary:  Nearest Unblocked Strategy is a hypothetical source of [-48] in the [5s alignment problem] for [2c advanced agents] that search [9j rich solution spaces].  If an agent's [5f preference framework] is [48 patched] to try to block a possible solution that seems undesirable, the next-best solution found may be the most similar solution that technically avoids the block.  This kind of patching seems especially likely to lead to a [-context change] where a patch appears [3d9 beneficial] in a narrow option space, but proves detrimental after increased intelligence opens up more options.]\n\n[todo: link to epistemic version http://lesswrong.com/lw/nki/jfk_was_not_assassinated_prior_probability_zero/d9h3?context=3 ]\n\n# Introduction\n\n'Nearest unblocked strategy' seems like it should be a [6r foreseeable] problem of trying to get rid of undesirable AI behaviors by adding specific penalty terms to them, or otherwise trying to exclude one class of observed or foreseen bad behaviors.  Namely, if a decision criterion thinks $X$ is the best thing to do, and you add a penalty term $P$ that you think excludes everything inside $X,$ the *next-best* thing to do may be a very similar thing $X'$ which is the most similar thing to $X$ that doesn't trigger $P.$\n\n## Example:  Producing happiness.\n\nSome very early proposals for AI alignment suggested that AIs be targeted on producing human happiness.  Leaving aside various other objections, arguendo, imagine the following series of problems and attempted fixes:\n\n- By hypothesis, the AI is successfully infused with a goal of "human happiness" as a utility function over human brain states.  (Arguendo, this predicate is narrowed sufficiently that the AI does not just want to construct [2w the tiniest, least resource-intensive brains experiencing the largest amount of happiness per erg of energy].)\n- Initially, the AI seems to be pursuing this goal in good ways; it organizes files, tells funny jokes, helps landladies take out the garbage, etcetera.\n- Encouraged, the programmers further improve the AI and add more computing power.\n- The AI gains a better understanding of the world, and the AI's [6q policy space expands] to include conceivable options like "administer heroin".\n- The AI starts planning how to administer heroin to people.\n- The programmers notice this before it happens.  (Arguendo, due to successful transparency features, or an imperative to [2qq check plans with the users], which operated as [6h intended] at the AI's current level of intelligence.)\n- The programmers edit the AI's utility function and add a penalty of -100 utilons for any event categorized as "the AI administers heroin to humans".  (Arguendo, the AI's current level of intelligence does not suffice to [ prevent the programmers from editing its utility function], despite the convergent instrumental incentive to avoid this; nor does it successfully [10f deceive] the programmers.)\n- The AI gets slightly smarter.  New conceivable options enter the AI's option space.\n- The AI starts wanting to administer cocaine to humans (instead of heroin).\n- The programmers read through the current schedule of prohibited drugs and add penalty terms for administering marijuana, cocaine, etcetera.\n- The AI becomes slightly smarter.  
New options enter its policy space.\n- The AI starts thinking about how to research a new happiness drug not on the list of drugs that its utility function designates as bad.\n- The programmers, after some work, manage to develop a category for 'The AI forcibly administering any kind of psychoactive drug to humans' which is broad enough that the AI stops suggesting research campaigns to develop things slightly outside the category.\n- The AI wants to build an external system to administer heroin, so that it won't be classified inside this set of bad events "the AI forcibly administering drugs".\n- The programmers generalize the penalty predicate to include "machine systems in general forcibly administering heroin" as a bad thing.\n- The AI recalculates what it wants, and begins to want to pay humans to administer heroin.\n- The programmers try to generalize the category of penalized events to include non-voluntary administration of happiness-producing drugs in general, whether done by humans or AIs.  The programmers patch this category so that the AI is not trying to shut down (at least) the nicer parts of psychiatric hospitals.\n- The AI begins planning an ad campaign to persuade people to use heroin voluntarily.\n- The programmers add a penalty of -100 utilons for "AIs *persuading* humans to use drugs".\n- The AI goes back to helping landladies take out the garbage.  All seems to be well.\n- The AI continues to increase in intelligence, becoming capable enough that the AI can no longer be edited against its own will.\n- The AI notices the option "Tweak human brains to express extremely high levels of endogenous opiates, then take care of their twitching bodies so they can go on being happy".\n\nThe overall story is one where the AI's preferences on round $i,$ denoted $U_i,$ are observed to arrive at an attainable optimum $X_i$ which the humans see as undesirable.  The humans devise a penalty term $P_i$ intended to exclude the undesirable parts of the policy space, and add this to $U_i,$ creating a new utility function $U_{i+1},$ after which the AI's optimal policy settles into a new state $X_i^*$ that seems acceptable.  However, after the next expansion of the policy space, $U_{i+1}$ settles into a new attainable optimum $X_{i+1}$ which is very similar to $X_i$ and makes the minimum adjustment necessary to evade the boundaries of the penalty term $P_i,$ requiring a new penalty term $P_{i+1}$ to exclude this new misbehavior.\n\n(The end of this story *might* not kill you if the AI had enough successful, [2l advanced-safe] [45 corrigibility features] that the AI would [2x indefinitely] go on [2qq checking] [2qp novel] policies and [2qp novel] goal instantiations with the users, not strategically hiding its disalignment from the programmers, not deceiving the programmers, letting the programmers edit its utility function, not doing anything disastrous before the utility function had been edited, etcetera.  But you wouldn't want to rely on this.  You would not want in the first place to operate on the paradigm of 'maximize happiness, but not via any of these bad methods that we have already excluded'.)
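\nAs a toy illustration of this recurrence (the option names and utility numbers below are invented, and the penalty terms $P_i$ are simplified to a hard blacklist), each patch blocks only the optimum actually observed, so the next optimum is the most similar still-unblocked option:\n\n```python\n# Toy sketch of the patch-and-evade recurrence; names and utilities are hypothetical.\noptions = {\n    "tell funny jokes": 2,\n    "administer heroin": 10,\n    "administer cocaine": 9.9,\n    "invent a new happiness drug": 9.8,\n    "pay humans to administer heroin": 9.7,\n    "persuade humans to take heroin": 9.6,\n}\nblacklist = set()                        # accumulated penalty terms P_1 ... P_i\npolicy_space = {"tell funny jokes": 2}   # only benign options are visible at first\nfor name in list(options)[1:]:\n    policy_space[name] = options[name]   # capability gain widens the policy space\n    # attainable optimum X_i under the current (patched) preferences\n    x_i = max((o for o in policy_space if o not in blacklist), key=policy_space.get)\n    print("current optimum:", x_i)\n    if policy_space[x_i] > 5:            # overseers judge X_i undesirable\n        blacklist.add(x_i)               # patch P_i blocks exactly the observed behavior\n# Each patch blocks only the optimum actually observed, so each expansion of the policy\n# space yields a new optimum that is the most similar still-unblocked strategy.\n```\n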
# Preconditions\n\nRecurrence of a nearby unblocked strategy is argued to be a [6r foreseeable difficulty] given the following preconditions:\n\n• The AI is a [9h consequentialist], or is conducting some other search such that when the search is blocked at $X,$ the search may happen upon a similar $X'$ that fits the same criterion that originally promoted $X.$  E.g. in an agent that selects actions on the basis of their consequences, if an event $X$ leads to goal $G$ but $X$ is blocked, then a similar $X'$ may also have the property of leading to $G.$\n\n• The search is taking place over a [9j rich domain] where the space of relevant neighbors around $X$ is too complicated for us to be certain that we have described all the relevant neighbors correctly.  If we imagine an agent playing [9s the purely ideal game of logical Tic-Tac-Toe], then if the agent's utility function hates playing in the center of the board, we can be sure (because we can exhaustively consider the space) that there are no Tic-Tac-Toe squares that behave strategically almost like the center but don't meet the exact definition of 'center' that we used.  In the far more complicated real world, when you eliminate 'administer heroin', you are very likely to find some other chemical or trick that is strategically mostly equivalent to administering heroin.  See "[RealIsRich Almost all real-world domains are rich]".\n\n• From our perspective on [-55], the AI does not have an [ absolute identification of value] for the domain, due to some combination of "the domain is rich" and "[5l value is complex]".  Chess is complicated enough that human players can't absolutely identify winning moves, but since a chess program can have an absolute identification of which endstates constitute winning, we don't run into a problem of unending patches in identifying which states of the board are good play.  (However, if we consider a very early chess program that (from our perspective) was trying to be a consequentialist but wasn't very good at it, then we can imagine that, if the early chess program consistently threw its queen onto the right edge of the board for strange reasons, forbidding it to move the queen there might well lead it to throw the queen onto the left edge for the same strange reasons.)\n\n# Arguments\n\n## 'Nearest unblocked' behavior is sometimes observed in humans\n\nAlthough humans obeying the law are poor analogies for mathematical algorithms, in some cases human economic actors expect not to encounter legal or social penalties for obeying the letter rather than the spirit of the law.  In those cases, after a previously high-yield strategy is outlawed or penalized, the result is very often a near-neighboring strategy that barely evades the letter of the law.  This illustrates that the theoretical argument also applies in practice to at least some pseudo-economic agents (humans), as we would expect given the stated preconditions.\n\n## [5l Complexity of value] means we should not expect to find a simple encoding to exclude detrimental strategies\n\nTo a human, 'poisonous' is one word.  In terms of molecular biology, the exact volume of the configuration space of molecules that is 'nonpoisonous' is very complicated.  By having a single word/concept for poisonous-vs.-nonpoisonous, we're *dimensionally reducing* the space of edible substances - taking a very squiggly volume of molecule-space, and mapping it all onto a linear scale from 'nonpoisonous' to 'poisonous'.\n\nThere's a sense in which human cognition implicitly performs dimensional reduction on our solution space, especially by simplifying dimensions that are relevant to some component of our values.  
There may be some psychological sense in which we feel like "do X, only not weird low-value X" ought to be a simple instruction, and an agent that repeatedly produces the next unblocked weird low-value X is being perverse - that the agent, given a few examples of weird low-value Xs labeled as noninstances of the desired concept, ought to be able to just generalize to not produce weird low-value Xs.\n\nIn fact, if it were possible to [full_coverage encode all relevant dimensions of human value into the agent], then we could just say *directly*, "do X, but not low-value X".  By the definition of [-full_coverage], the agent's concept for 'low-value' includes everything that is actually of low [55 value], so this one instruction would blanket all the undesirable strategies we want to avoid.\n\nConversely, the truth of the [5l complexity of value thesis] would imply that the simple word 'low-value' is dimensionally reducing a space of tremendous [5v algorithmic complexity].  Thus the effort required to actually convey the relevant dos and don'ts of "X, only not weird low-value X" would be high, and a human-generated set of supervised examples labeled 'not the kind of X we mean' would be unlikely to cover and stabilize all the dimensions of the underlying space of possibilities.  Since the weird low-value X cannot be eliminated by one instruction, by several patches, or by a human-generated set of supervised examples, the [-42] problem will recur incrementally each time a patch is attempted and then the policy space is widened again.\n\n# Consequences\n\n[42] being a [6r foreseeable difficulty] is a major contributor to the worry that short-term incentives in AI development - to get today's system working today, or to keep today's system from exhibiting any immediately visible problems today - will not lead to advanced agents which are [2l safe after undergoing significant gains in capability].\n\nMore generally, [-42] is a [6r foreseeable] reason why saying "Well, just exclude X" or "Just write the code to not X" or "Add a penalty term for X" doesn't solve most of the issues that crop up in AI alignment.\n\nEven more generally, this suggests that we want AIs to operate inside a space of [2qp conservative categories containing actively whitelisted strategies and goal instantiations], rather than having the AI operate inside a (constantly expanding) space of all conceivable policies minus a set of blacklisted categories.',
  metaText: '',
  isTextLoaded: 'true',
  isSubscribedToDiscussion: 'false',
  isSubscribedToUser: 'false',
  isSubscribedAsMaintainer: 'false',
  discussionSubscriberCount: '2',
  maintainerCount: '1',
  userSubscriberCount: '0',
  lastVisit: '2016-02-27 14:32:20',
  hasDraft: 'false',
  votes: [
    {
      value: '75',
      userId: 'AlexeiAndreev',
      createdAt: '2015-04-07 14:46:18'
    },
    {
      value: '81',
      userId: 'BrianMuhia',
      createdAt: '2016-01-23 14:22:54'
    },
    {
      value: '90',
      userId: 'EliezerYudkowsky',
      createdAt: '2015-04-05 22:54:46'
    },
    {
      value: '0',
      userId: 'PaulChristiano',
      createdAt: '2015-06-18 18:48:13'
    }
  ],
  voteSummary: [
    '1',
    '0',
    '0',
    '0',
    '0',
    '0',
    '0',
    '1',
    '1',
    '1'
  ],
  muVoteSummary: '0',
  voteScaling: '1',
  currentUserVote: '-2',
  voteCount: '4',
  lockedVoteType: '',
  maxEditEver: '0',
  redLinkCount: '0',
  lockedBy: '',
  lockedUntil: '',
  nextPageId: '',
  prevPageId: '',
  usedAsMastery: 'false',
  proposalEditNum: '0',
  permissions: {
    edit: {
      has: 'false',
      reason: 'You don't have domain permission to edit this page'
    },
    proposeEdit: {
      has: 'true',
      reason: ''
    },
    delete: {
      has: 'false',
      reason: 'You don't have domain permission to delete this page'
    },
    comment: {
      has: 'false',
      reason: 'You can't comment in this domain because you are not a member'
    },
    proposeComment: {
      has: 'true',
      reason: ''
    }
  },
  summaries: {},
  creatorIds: [
    'EliezerYudkowsky',
    'AlexeiAndreev'
  ],
  childIds: [],
  parentIds: [
    'advanced_safety'
  ],
  commentIds: [
    '7n'
  ],
  questionIds: [],
  tagIds: [
    'patch_resistant',
    'work_in_progress_meta_tag'
  ],
  relatedIds: [
    'low_impact',
    'mindcrime'
  ],
  markIds: [],
  explanations: [],
  learnMore: [],
  requirements: [],
  subjects: [],
  lenses: [],
  lensParentId: '',
  pathPages: [],
  learnMoreTaughtMap: {},
  learnMoreCoveredMap: {},
  learnMoreRequiredMap: {},
  editHistory: {},
  domainSubmissions: {},
  answers: [],
  answerCount: '0',
  commentCount: '0',
  newCommentCount: '0',
  linkedMarkCount: '0',
  changeLogs: [
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '9544',
      pageId: 'nearest_unblocked',
      userId: 'EliezerYudkowsky',
      edit: '26',
      type: 'newEdit',
      createdAt: '2016-05-01 22:22:53',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '9543',
      pageId: 'nearest_unblocked',
      userId: 'EliezerYudkowsky',
      edit: '25',
      type: 'newEdit',
      createdAt: '2016-05-01 22:13:38',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '9542',
      pageId: 'nearest_unblocked',
      userId: 'EliezerYudkowsky',
      edit: '24',
      type: 'newEdit',
      createdAt: '2016-05-01 22:12:46',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '9541',
      pageId: 'nearest_unblocked',
      userId: 'EliezerYudkowsky',
      edit: '23',
      type: 'newEdit',
      createdAt: '2016-05-01 22:11:08',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '9540',
      pageId: 'nearest_unblocked',
      userId: 'EliezerYudkowsky',
      edit: '22',
      type: 'newEdit',
      createdAt: '2016-05-01 22:09:48',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '9539',
      pageId: 'nearest_unblocked',
      userId: 'EliezerYudkowsky',
      edit: '21',
      type: 'newEdit',
      createdAt: '2016-05-01 22:08:05',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '9538',
      pageId: 'nearest_unblocked',
      userId: 'EliezerYudkowsky',
      edit: '0',
      type: 'turnOffVote',
      createdAt: '2016-05-01 22:08:04',
      auxPageId: '',
      oldSettingsValue: 'true',
      newSettingsValue: 'false'
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '9480',
      pageId: 'nearest_unblocked',
      userId: 'EliezerYudkowsky',
      edit: '0',
      type: 'newAlias',
      createdAt: '2016-04-29 02:08:48',
      auxPageId: '',
      oldSettingsValue: 'nearest_neighbor',
      newSettingsValue: 'nearest_unblocked'
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '9481',
      pageId: 'nearest_unblocked',
      userId: 'EliezerYudkowsky',
      edit: '20',
      type: 'newEdit',
      createdAt: '2016-04-29 02:08:48',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '9479',
      pageId: 'nearest_unblocked',
      userId: 'EliezerYudkowsky',
      edit: '19',
      type: 'newEdit',
      createdAt: '2016-04-29 02:08:26',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '9478',
      pageId: 'nearest_unblocked',
      userId: 'EliezerYudkowsky',
      edit: '18',
      type: 'newTag',
      createdAt: '2016-04-29 02:08:19',
      auxPageId: 'patch_resistant',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '9476',
      pageId: 'nearest_unblocked',
      userId: 'EliezerYudkowsky',
      edit: '0',
      type: 'deleteParent',
      createdAt: '2016-04-29 02:08:16',
      auxPageId: 'patch_resistant',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '8724',
      pageId: 'nearest_unblocked',
      userId: 'AlexeiAndreev',
      edit: '18',
      type: 'newEdit',
      createdAt: '2016-03-18 23:46:49',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '8682',
      pageId: 'nearest_unblocked',
      userId: 'EliezerYudkowsky',
      edit: '11',
      type: 'newUsedAsTag',
      createdAt: '2016-03-18 22:14:18',
      auxPageId: 'low_impact',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '4502',
      pageId: 'nearest_unblocked',
      userId: 'EliezerYudkowsky',
      edit: '11',
      type: 'newUsedAsTag',
      createdAt: '2015-12-28 19:35:18',
      auxPageId: 'mindcrime',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '3900',
      pageId: 'nearest_unblocked',
      userId: 'AlexeiAndreev',
      edit: '0',
      type: 'newAlias',
      createdAt: '2015-12-16 15:50:46',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '3901',
      pageId: 'nearest_unblocked',
      userId: 'AlexeiAndreev',
      edit: '11',
      type: 'newEdit',
      createdAt: '2015-12-16 15:50:46',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '1125',
      pageId: 'nearest_unblocked',
      userId: 'AlexeiAndreev',
      edit: '1',
      type: 'newUsedAsTag',
      createdAt: '2015-10-28 03:47:09',
      auxPageId: 'work_in_progress_meta_tag',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '47',
      pageId: 'nearest_unblocked',
      userId: 'AlexeiAndreev',
      edit: '1',
      type: 'newParent',
      createdAt: '2015-10-28 03:46:51',
      auxPageId: 'patch_resistant',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '422',
      pageId: 'nearest_unblocked',
      userId: 'AlexeiAndreev',
      edit: '1',
      type: 'newParent',
      createdAt: '2015-10-28 03:46:51',
      auxPageId: 'advanced_safety',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '1644',
      pageId: 'nearest_unblocked',
      userId: 'EliezerYudkowsky',
      edit: '17',
      type: 'newEdit',
      createdAt: '2015-07-03 19:03:46',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '1643',
      pageId: 'nearest_unblocked',
      userId: 'EliezerYudkowsky',
      edit: '16',
      type: 'newEdit',
      createdAt: '2015-07-03 19:00:50',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '1642',
      pageId: 'nearest_unblocked',
      userId: 'EliezerYudkowsky',
      edit: '15',
      type: 'newEdit',
      createdAt: '2015-07-02 20:04:30',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '1641',
      pageId: 'nearest_unblocked',
      userId: 'EliezerYudkowsky',
      edit: '14',
      type: 'newEdit',
      createdAt: '2015-07-02 19:12:31',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '1640',
      pageId: 'nearest_unblocked',
      userId: 'EliezerYudkowsky',
      edit: '13',
      type: 'newEdit',
      createdAt: '2015-07-01 22:12:02',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '1639',
      pageId: 'nearest_unblocked',
      userId: 'EliezerYudkowsky',
      edit: '12',
      type: 'newEdit',
      createdAt: '2015-07-01 18:34:10',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '1638',
      pageId: 'nearest_unblocked',
      userId: 'EliezerYudkowsky',
      edit: '10',
      type: 'newEdit',
      createdAt: '2015-04-06 23:54:07',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '1637',
      pageId: 'nearest_unblocked',
      userId: 'EliezerYudkowsky',
      edit: '9',
      type: 'newEdit',
      createdAt: '2015-04-06 23:10:24',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '1636',
      pageId: 'nearest_unblocked',
      userId: 'EliezerYudkowsky',
      edit: '8',
      type: 'newEdit',
      createdAt: '2015-04-06 19:46:19',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '1635',
      pageId: 'nearest_unblocked',
      userId: 'EliezerYudkowsky',
      edit: '7',
      type: 'newEdit',
      createdAt: '2015-04-06 19:46:03',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '1634',
      pageId: 'nearest_unblocked',
      userId: 'EliezerYudkowsky',
      edit: '6',
      type: 'newEdit',
      createdAt: '2015-04-06 19:44:46',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '1633',
      pageId: 'nearest_unblocked',
      userId: 'EliezerYudkowsky',
      edit: '5',
      type: 'newEdit',
      createdAt: '2015-04-06 19:44:07',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '1632',
      pageId: 'nearest_unblocked',
      userId: 'EliezerYudkowsky',
      edit: '4',
      type: 'newEdit',
      createdAt: '2015-04-06 19:08:20',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '1631',
      pageId: 'nearest_unblocked',
      userId: 'EliezerYudkowsky',
      edit: '3',
      type: 'newEdit',
      createdAt: '2015-04-05 21:58:52',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '1630',
      pageId: 'nearest_unblocked',
      userId: 'EliezerYudkowsky',
      edit: '2',
      type: 'newEdit',
      createdAt: '2015-04-05 20:45:47',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '1629',
      pageId: 'nearest_unblocked',
      userId: 'EliezerYudkowsky',
      edit: '1',
      type: 'newEdit',
      createdAt: '2015-04-05 20:45:20',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    }
  ],
  feedSubmissions: [],
  searchStrings: {},
  hasChildren: 'false',
  hasParents: 'true',
  redAliases: {},
  improvementTagIds: [],
  nonMetaTagIds: [],
  todos: [],
  slowDownMap: 'null',
  speedUpMap: 'null',
  arcPageIds: 'null',
  contentRequests: {}
}