Status: work-in-progress - 20% complete
Last modified: 2016-12-07

Negotiating Team Roles and Responsibilities

This is a model of self-organizing teams – specifically small teams of design engineers (“Designers”) – but it could be any type of team doing any type of work.

The goal of this model is to study the effects of Designer migration. Actors on a given team become socialized and acculturated to that team, including its institutions. When an actor moves to a new team with a different culture and institutions, conflicts can arise. Is this conflict positive or negative? How does it affect team versatility and creativity? Is it a temporary disturbance or a permanent change? If new members challenge or reconfigure the team’s institutions, are the effects positive, negative, or mixed?

The focus of the model is narrow: the framing process at the start of a design project, where Designers interact, discuss, and ultimately come to some agreement on what sort of project this is, what the goals are, who does what (roles and responsibilities), and something about how design work will proceed. Note that this is not a task of creating a shared mental model or shared perceptions (though that may happen). Instead, the design team only needs to “create equifinal meaning from which organized action can follow” refp:donnellon_communication_1986.

Model Description

During the course of every design project, each member builds perceptions of the project and its outcomes. All Designers can generate one or more “frames” (see below) that are consistent with, but not identical to, their perceptions. When a new team forms, each Designer brings their past experience and relevant “frames”.

Framing process

The framing process is modeled as a sequential, iterative bargaining game. Each round, Designers are randomly selected to propose a frame for the design project. The other Designers evaluate the proposed frame and either accept or reject it. For a frame to be accepted, all Designers must vote “accept”. If no frame is accepted in a given round, the game repeats until a frame is acceptable to all Designers or the iteration limit is reached.
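A minimal sketch of this loop, in the WebPPL style used later in this document (proposeFrame and accepts are hypothetical placeholders for the frame-generation and evaluation capabilities described below):

// Sketch: repeat proposal rounds until a frame is accepted unanimously
// or the iteration limit is reached. "designers" is an array of Designer
// states; proposeFrame(d) and accepts(d, frame) are hypothetical helpers.
var negotiateFrame = function(designers, proposeFrame, accepts, roundsLeft){
  if (roundsLeft <= 0){
    return {accepted: false, frame: null};   // iteration limit reached
  } else {
    // a randomly selected Designer proposes a frame this round
    var proposer = designers[sample(RandomInteger({n: designers.length}))];
    var frame = proposeFrame(proposer);
    // the frame is accepted only if every Designer votes "accept"
    var votes = map(function(d){return accepts(d, frame);}, designers);
    var unanimous = filter(function(v){return !v;}, votes).length === 0;
    return unanimous
        ? {accepted: true, frame: frame}
        : negotiateFrame(designers, proposeFrame, accepts, roundsLeft - 1);
  }
}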

“Frames” as Messages

In the interactional co-construction process of framing refp:dewulf_disentangling_2009, “frames are communicative devices that individuals and groups use to negotiate their interactions”. In our model, frames are structured messages that can be interpreted/decoded as a partial/provisional specification for a project (i.e. a “sketch”, reft:cross_design_2011, p 78, 120):

  1. What? – (required) boundary specifications: what is/is not included for consideration in the project
    • Scoping – elements of the frame that are included, e.g. purpose, team design
    • Naming – codifying the relevant phenomena
  2. Why? – (optional) purpose specifications
    • Problem – problem/conflict specification: what sort of problem is this? What aspects are problematic?
    • Solution – solution criteria and concepts: What defines a good or acceptable solution? What is the space of solutions?
    • Value – What are the goals and metrics? What good will be achieved and for whom?
  3. How? – (optional) team design specification in the FBS ontology:
    • Relevant Knowledge – (optional) first principles, evidence
    • Team Structure – (optional) declarative rules: project parameters, organization structure, roles, responsibilities
    • Team Behaviors – (optional) procedural rules: processes, methods, tools, design artifacts, norms, rules, contingencies, etc.
    • Team Functions – (optional) teleological rules: the functions that need to be performed in order to fulfill the goals, including performance metrics and dependencies/interrelations.
  4. Beliefs – (optional) value statements: assertions of anticipated benefits (or detriments) of the frame. For example, “low risk of exceeding budget” – including coherence and completeness implications.

Since most of these elements are optional, a frame-as-message may be more or less detailed – “sketchy”. Any elements that are omitted from a given frame can be filled in by individual Designers using whatever default or prior values they might draw on.
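As a rough illustration (using plain JavaScript-style objects as in the code later in this document; the field names are placeholders, not part of the binary encoding defined below), a sparse frame-as-message might carry only the required boundary element plus a belief:

// Sketch: a "sketchy" frame-as-message. Only the required What? element is
// fully specified; Why? and How? are omitted and left for each Designer
// to fill in from their own defaults. Field names are illustrative.
var sparseFrame = {
  what:    {scoping: ["problem definition", "solution concepts"],
            naming:  ["customer motivation"]},
  why:     null,        // purpose specification omitted
  how:     null,        // team design specification omitted
  beliefs: ["low risk of exceeding budget"]
};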

Interpretations and Implications of Frames

Frames are shared information, but the interpretations of each Designer are private and individualistic. The implications of interpretation … [ADD MORE HERE]

[ADD MORE HERE] and it serves two purposes for Designers individually: 1) to be compared and contrasted with other frames, and perhaps blended to produce new frames; and 2) to serve as a basis for evaluation, either by a) pattern recognition; b) analogy; or c) thought experiment.

All Designers have capabilities to a) generate their own frames, perhaps by building on other members’ frames; b) interact with frames (compare/contrast, blend); and c) evaluate frames.

All Designers also have a frame acceptance criterion, which is an individualistic convex combination of individual success and team success.
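For example, a minimal sketch of such a criterion (the weight, threshold, and the two utility functions are assumptions introduced here for illustration, not quantities defined elsewhere in this document):

// Sketch: a Designer accepts a frame when a convex combination of
// individual and team utility clears that Designer's threshold.
// indivUtility, teamUtility, alpha, and threshold are assumed,
// per-Designer quantities.
var acceptsFrame = function(designer, frame){
  var u = designer.alpha * indivUtility(designer, frame)
        + (1 - designer.alpha) * teamUtility(designer, frame);
  return u >= designer.threshold;
}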

Effect of Designer Experience and Expertise on Frame Evaluation

Designers differ in the types of experience they have accumulated and in their levels of expertise. To the extent that a proposed frame is a close match to their past experience, a Designer will have a rich capability to evaluate the frame, especially through pattern recognition. If the frame is an intermediate match, the Designer will need to evaluate by analogy and/or thought experiment. If the frame is a poor match with past experience, the Designer either a) reaches no evaluation (this round) and follows the lead of more experienced Designers in later rounds; or b) guesses at an evaluation, perhaps through a partial thought experiment.
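One way to sketch this choice of evaluation mode (the similarity function and the 0.7/0.3 cut-points are illustrative assumptions):

// Sketch: choose an evaluation mode from the similarity between the proposed
// frame and the Designer's past experience (e.g. both given as frame strings).
// similarity(a, b) is an assumed helper returning a value in [0,1].
var evaluationMode = function(designer, frame){
  var s = similarity(designer.experienceFrame, frame);
  return s > 0.7 ? "pattern-recognition"
       : s > 0.3 ? "analogy-or-thought-experiment"
       : flip(0.5) ? "follow-the-lead"     // no evaluation this round
                   : "guess";              // partial thought experiment
}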

Framing Process Leads to Priming and Situating

Through the frame evaluation process, each Designer is both primed and situated. “Primed” means that they have formed expectations and have brought forward relevant domain knowledge. “Situated” means that they have activated the cognitive schema appropriate to their initial tasks, their roles and responsibilities, and team norms and rules. The cognitive schema includes conceptual structures, mental models, and appraisals based on perceptions. Finally, being “situated” means activating selective attention.

Of course, all of this may be provisional and subject to revision and reflection as the project unfolds, especially when unforeseen problems, conflicts, or opportunities arise.

Frame Message Encoded as Binary String

For modeling purposes we will pre-specify the coding scheme for all possible frames (i.e. messages in a framing process) in the form of a binary string (reft:page_two_1996, reft:hong_problem_2001, reft:ethiraj_bounded_2004).

The frame-as-binary-string (“frame string”) will be encoded with ordered elements (“bits”) for the presence or absence of each frame specification alternative. The set of all possible frame binary strings of length $n$ is denoted by $\{0,1\}^n$. Each element in a string is referred to as a bit. The $i$-th bit of a string $s$ is denoted by $s_i$. Letting $1$ denote “yes” and $0$ denote “no”, a binary string can denote the set of potential projects to be undertaken refp:page_two_1996.

For a hypothetical example, the boundary specification might be encoded in five bits $s_1, \dots, s_5$:

$s_1$: Market acceptance (i.e. sales and market share)
$s_2$: Problem definitions
$s_3$: Solution concepts
$s_4$: Understanding customer behavior/motivations/needs
$s_5$: Team structure, functions, and behaviors

Using this template, the string $10001$ would encode the partial message: “In scope for this frame: 1) Market acceptance and 5) Team structure, functions and behaviors”.
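A small helper in this spirit (hypothetical; labels follow the five-bit template above) decodes such a string back into its in-scope items:

// Sketch: decode a boundary-specification string against a list of labels.
var boundaryLabels = ["Market acceptance", "Problem definitions",
                      "Solution concepts", "Customer behavior/motivations/needs",
                      "Team structure, functions, and behaviors"];
var inScope = function(frameString, labels){
  var bits = frameString.split("");
  return filter(function(x){return x !== null;},
                map2(function(b, label){return b === "1" ? label : null;},
                     bits, labels));
}
// inScope("10001", boundaryLabels)
//   => ["Market acceptance", "Team structure, functions, and behaviors"]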

The frame string $s$ is the concatenation of the four frame elements: $s = s^{(1)} \Vert s^{(2)} \Vert s^{(3)} \Vert s^{(4)}$.

If the bit length of element $j$ is $n_j$, with $n = \sum_{j} n_j$, then the full string is $s \in \{0,1\}^n$.

For the computational model below, frame strings consist of $n = 16$ bits $s_1, \dots, s_{16}$, defined as follows:

Boundary specification (What?) – bits $s_1$–$s_4$:

$s_1$: Problem specification
$s_2$: Solution specification
$s_3$: Design approach
$s_4$: Team structure and behavior

Purpose specification (Why?) – bits $s_5$–$s_8$:

$s_5$: Problem includes market acceptance
$s_6$: Problem includes customer behavior/motivations/needs
$s_7$: Solution based on ideal
$s_8$: Solution based on improvement

Team design specification (How?) – bits $s_9$–$s_{12}$:

$s_9$: Top-down design approach
$s_{10}$: Bottom-up design approach
$s_{11}$: Function: influence customer behavior/motivations/needs
$s_{12}$: Behavior: use formal design methods

Beliefs – bits $s_{13}$–$s_{16}$:

$s_{13}$: ↓ Schedule
$s_{14}$: ↑ Market acceptance
$s_{15}$: ↑ Customer behavior/motivations/needs
$s_{16}$: ↓ Intra-team conflict
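As a sketch, the full 16-bit frame string can be assembled by concatenating the four elements (bit positions follow the listing above; the example values are arbitrary):

// Sketch: concatenate the four 4-bit elements into one 16-bit frame string.
// Each element is given as an array of four 0/1 values.
var makeFrameString = function(boundary, purpose, teamDesign, beliefs){
  var bits = boundary.concat(purpose).concat(teamDesign).concat(beliefs);
  return bits.join("");
}
// e.g. makeFrameString([1,1,0,0], [1,0,0,1], [0,0,0,0], [0,1,0,0])
//   => "1100100100000100"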

Dependency Matrix

There is usually meaningful and significant dependence between elements in a frame, formalized as a dependency matrix refp:ethiraj_bounded_2004. The nature of the dependence may be functional, logical, conditional, or parametric, or it may invoke a default interpretation when a dependent element is missing. This dependency matrix is not part of the encoded message. Instead, it is generated by each Designer as part of the frame interpretation/evaluation process. The dependency matrix is used by Designers to construct meaning via a semiotic process which, for simplicity, will be modeled as a “black box”. For even more simplicity, the dependency matrix will be pre-specified and constant, and will be initialized under experimental control.
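A minimal sketch of such a pre-specified dependency matrix (the particular entries are illustrative assumptions only; in the experiments they would be set under experimental control):

// Sketch: a binary dependency matrix D over the 16 frame-string bits
// (0-based indices), where D[i][j] = 1 means bit j depends on bit i.
// Illustrative rule: the purpose bits (4-7) depend on the boundary bits (0-3).
var nBits = 16;
var D = mapN(function(i){
  return mapN(function(j){
    return (i < 4 && j >= 4 && j < 8) ? 1 : 0;
  }, nBits);
}, nBits);
// dependentsOf(i) returns the indices of bits that depend on bit i
var dependentsOf = function(i){
  return filter(function(j){return D[i][j] === 1;},
                mapN(function(j){return j;}, nBits));
}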

Designer-specific Interpretation and Implications

Following reft:hong_problem_2001, each Designer will include state variables for a perspective $M$ and a set of heuristics $A$, as follows.

A Designer’s “perspective” $M$ is the internal encoding of frames as binary strings, i.e., a mapping $M : \{0,1\}^n \to \{0,1\}^n$ from frame strings to the Designer’s internal representation.

A perspective is one–to–one and onto.

As a mapping, “perspective” is effectively a set of path-dependent representations (codification rules) that “chunk” the frame strings into semantically equivalent classes. A perspective may not be defined over all of $\{0,1\}^n$, so the Designer need not be able to represent all possible frame strings. A perspective also may be many-to-one, i.e. more than one frame string may be mapped to the same representation in the internal language.
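For illustration only (a hypothetical chunking rule, not a perspective actually used in the model): a perspective that ignores the four belief bits maps every frame string to a representative with those bits zeroed out, so strings that differ only in their beliefs fall into the same class:

// Sketch: a many-to-one perspective that "chunks" frame strings by
// ignoring the last four (belief) bits. Purely illustrative.
var ignoreBeliefsPerspective = function(frameString){
  return frameString.slice(0, 12) + "0000";
}
// ignoreBeliefsPerspective("1100100100001010") => "1100100100000000"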

A Designer’s heuristics (a.k.a. path-dependent abstraction rules that guide comparing/contrasting and blending of frames) define similarity neighborhoods between frame strings.

A heuristic $A$ is a finite collection of mappings, $A = \{a_1, \dots, a_m\}$, each a mapping from the set $\{0,1\}^n$ to itself, i.e., $a_k : \{0,1\}^n \to \{0,1\}^n$ for any $k \in \{1, \dots, m\}$.

For the binary string case, reft:hong_problem_2001 define a flipset heuristic, based on the elementary flipset: for a set of bit positions $\varphi \subseteq \{1, \dots, n\}$, the elementary flipset is a mapping $a_{\varphi} : \{0,1\}^n \to \{0,1\}^n$ with $a_{\varphi}(s) = s'$, where $s'$ is defined according to the following flipset rule:

$$ s'_i = \begin{cases} 1 - s_i & \text{if } i \in \varphi \\ s_i & \text{otherwise} \end{cases} $$

A flipset heuristic

We can think of the flipset heuristic as an exhaustive search of nearest neighbors, given the chunking representation of perspective $M$.

To this we add a dependent-flipset heuristic $a^{D}_{\varphi}$, which involves toggling all dependent elements of $\varphi$, where the dependents $D(\varphi) = \{\, j : D_{ij} = 1 \text{ for some } i \in \varphi \,\}$ are given by the dependency matrix $D$, using this dependent-flipset rule:

$$ s'_i = \begin{cases} 1 - s_i & \text{if } i \in \varphi \cup D(\varphi) \\ s_i & \text{otherwise} \end{cases} $$

A dependent-flipset heuristic

The dependent-flipset heuristic takes bigger leaps by skipping over “nonsensical” frame strings. I put this in quotes because there may be creative possibilities in those “nonsense” frame strings, but we won’t deal with them here.

Finally, we will also define an analogical-flipset heuristic $a^{D,\mu}_{\varphi}$, which is a masked version of the dependent-flipset heuristic $a^{D}_{\varphi}$. The mask $\mu$ is a string of length $n$ with elements $0$ and $1$: $\mu \in \{0,1\}^n$. Applying the mask rule with $\mu$ removes selected elements from the flipset:

$$ s'_i = \begin{cases} 1 - s_i & \text{if } i \in \varphi \cup D(\varphi) \text{ and } \mu_i = 1 \\ s_i & \text{otherwise} \end{cases} $$

An analogical-flipset heuristic
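A sketch of the three flipset variants, in the same WebPPL style as the code below (the string representation of frames, the dependentsOf helper from the dependency-matrix sketch above, and the mask argument are assumptions):

// Sketch: flipset heuristics over frame strings (0-based bit indices).
// flipBits toggles the bits of s at the given indices.
var flipBits = function(s, indices){
  return mapN(function(i){
    var bit = s.charAt(i);
    return indices.indexOf(i) > -1 ? (bit === "0" ? "1" : "0") : bit;
  }, s.length).join("");
}
// Elementary flipset: toggle exactly the bits in phi.
var flipset = function(s, phi){ return flipBits(s, phi); }
// Dependent flipset: also toggle every bit that depends on a bit in phi.
var dependentFlipset = function(s, phi){
  var deps = reduce(function(i, acc){return acc.concat(dependentsOf(i));}, [], phi);
  return flipBits(s, phi.concat(deps));
}
// Analogical flipset: a masked dependent flipset; positions where the mask
// (a string of 0s and 1s of the same length as s) is "0" are never flipped.
var analogicalFlipset = function(s, phi, mask){
  var deps = reduce(function(i, acc){return acc.concat(dependentsOf(i));}, [], phi);
  var allowed = filter(function(i){return mask.charAt(i) === "1";}, phi.concat(deps));
  return flipBits(s, allowed);
}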

Computational Model

 
// Wet Grass causal model

///fold:
// Helper function
var mapToString = function(map){
  var keys = Object.keys(map);
  var mapString = reduce(function(x,acc){
    var sep = acc.length == 0 ? "" : ", "
    return acc.concat(sep + x + " : " + map[x] )
  },"",keys );
  return "{" + mapString + "}";
}

// Generative model
var grassGetsWet = function(){
  var cloudy = flip(0.5);
  var rain = cloudy ? flip(0.8) : flip(0.2);
  var sprinkler = cloudy ? flip(0.1) : flip(0.5);
  var wetGrass = rain && sprinkler 
                   ? flip(.99)  
                   : (rain && !sprinkler) || (!rain && sprinkler) 
                        ? flip(0.9) 
                        : flip(0.0001); // what is the prob. of some 
                                        //   other cause, not in model?
                              // was flip(0.0); // impossibility
  return {wetGrass: wetGrass,
          rain: rain,
          sprinkler: sprinkler,
          cloudy: cloudy};
}

// Generalized inference function given any combination of evidence
var inference = function(evidence){
  var applyEvidence = Infer({ method: 'enumerate' }, function(){
  var trial = grassGetsWet();
   if ("sprinkler" in evidence){
      condition(trial.sprinkler === evidence.sprinkler);
   }
   if ("rain" in evidence){
      condition(trial.rain === evidence.rain);
   }
   if ("cloudy" in evidence){
      condition(trial.cloudy === evidence.cloudy);
   }
   if ("wetGrass" in evidence){
      condition(trial.wetGrass === evidence.wetGrass);
   }
   return {rain: trial.rain,
           cloudy: trial.cloudy,
           sprinkler: trial.sprinkler,
           wetGrass: trial.wetGrass,
          };
});
  
  return applyEvidence;
}
///

// ENTER EVIDENCE HERE in this form:
//      { cloudy : true,
//          rain : false }
// Leave out any entry where there is no evidence

var evidence = {cloudy : false,
               wetGrass: true};

var evidenceString = mapToString(evidence);

// Output
print("Given evidence = " + mapToString(evidence) + "...");
var allCombinations = inference(evidence);

viz.table(allCombinations);

// Extract marginal probabilities:
///fold:
var results = {
  rain: Math.exp(
                 Infer({method: 'enumerate'},
                   function(){
                      var trial = sample(allCombinations);
                      return trial.rain;
                   }
                 ).score(true) 
               ),
  cloudy: Math.exp(
                 Infer({method: 'enumerate'},
                   function(){
                      var trial = sample(allCombinations);
                      return trial.cloudy;
                   }
                 ).score(true) 
               ),
  wetGrass: Math.exp(
                 Infer({method: 'enumerate'},
                   function(){
                      var trial = sample(allCombinations);
                      return trial.wetGrass;
                   }
                 ).score(true) 
               ),
  sprinkler: Math.exp(
                 Infer({method: 'enumerate'},
                   function(){
                      var trial = sample(allCombinations);
                      return trial.sprinkler;
                   }
                 ).score(true) 
               )
}
///
  
print("... Pr(rain) = " + results.rain);
print("    Pr(cloudy) = " + results.cloudy);
print("    Pr(wetGrass) = " + results.wetGrass);
print("    Pr(sprinkler) = " + results.sprinkler);

Updating Model Parameters



// Wet Grass causal model 2

// In this setting, evidence arrives in a time sequence
// As evidence arrives, we want to adjust the model parameters.
// This is basically like estimating the bias in a coin after each toss.

// Step 1. Make an inference based on the current model + evidence
// Step 2. Update model parameters conditioned on the
//    cumulative evidence to that point in time

//************************
// UTILITY FUNCTIONS
//************************
//   count(item,arr); takeN(n, arr); shuffle(arr) 
///fold:
// count: count the number of "item" in "arr"
var count = function(item,arr){
    if (arr.length === 0 || item.length === 0){
       return 0;
    } else {
      return filter(function(x){return x == item;},arr).length;
    }
}

// takeN: return the first "n" elements of "arr"
var takeN = function(n, arr){
    return n <= 0 || arr.length === 0
        ? 0
        : remove(null, mapIndexed(function(i,x){
                                  return i < n ? x : null},arr));
}

var removeIndexed = function(i, arr){
    return remove(null,mapIndexed(function(j,x){
                                  return j === i ? null : x;}, arr));
}

// add random elements from array to an accumulator
//  This is a recursive function, with safety counter
var addRandomElement = function(arr,acc,count){
    if (count >= 0 && arr.length > 0){
        var x = sample(RandomInteger({n:arr.length}));
        var newAcc = acc.concat(arr[x]);
        var newArr = removeIndexed(x,arr);
        var newCount = count - 1;
        return addRandomElement(newArr,newAcc,newCount);
    } else {
        return acc;
    }
}

// shuffle: return an array in random order
var shuffle = function(arr){
    return addRandomElement(arr,[],arr.length);
}
///

//************************
// EXPERIMENT PARAMETERS
//************************

var K = 10; // total number of observations in sequence
var numH = 7; // number of Heads in all observations in the trial
var N = 10; // number of (randomly selected) observations in observedData

//************************
// OBSERVED DATA ("EVIDENCE")
//************************
var trial = mapN(function(x){return x < numH ? "H": "T";},K);
var observations = shuffle(trial);
var observedData = takeN(N,observations);
var obsH = count("H", observedData);

//************************
// PRIOR KNOWLEDGE
//************************
// Prior distribution over (0,1), 
//  with mean and "informativeness" parameter in (0,1)
///fold:
// Using a Beta distribution, we smoothly transition from a uniform distribution
//  (uninformative) to Gaussian-like (informative), with the middle ground
//  being "somewhat informative" Beta(2,2), assuming mean of 0.5
// Mean is adjusted by informativeness parameter if < 0.5

// First shape parameter "a" ranges from 1 to 3, 
//       where 1 = uniform distribution
// Second shape parameter is derived from "a" and "mean"
///
var priorPr = function(mean,informative){
  var a = 1 + informative * 2
  var adjMean = informative < 0.5 
      ? (1 - (informative * 2)) * 0.5 + (informative * 2) * mean
      : mean;
  var b = (a * ( 1 - adjMean ) ) / adjMean;
  return beta(a,b);
}


//************************
// MODEL and INFERENCE
//************************

// toss: function that returns "H" with probability r, otherwise "T"
var toss = function(r) {return flip(r) ? "H" : "T";}

// We'll use MCMC inference, since our variable of interest is continuous with
//  finite support (0,1), and without multiple modes or other complications.
var mcmcParms = {method: 'MCMC', kernel: 'MH', samples: 1000, burn: 200};
var posterior = function(prior) {
    return Infer(mcmcParms,
        function () {
            //  p is defined as the probability of "H" on a single toss, 
            //  in the range (0,1). p is our variable of interest 
            var p = prior();
            // data: N random draws from toss(p), given random draw of p
            var data = repeat(N,function(){return toss(p);});
            // Count the number of heads, since we don't care about the order
            var dataH = count("H",data);
            // Upweight likelihood when # of "H" in data = # of "H" in observed
    //        observe(Gaussian({mu: dataH, sigma: 0.2}), obsH);
            // ^^^^^ try commenting this out, and uncomment "factor(...)" below

    // "factor()" is a second method for weighting likelihood.
    //   This is a "softer" method because it downweights non-matching
    //   execution traces by an amount proportional to the number 
    //   of tosses, as opposed to downweight by -Infinity, 
    //   as in condition() and observe().
    //   The justification is that with few tosses, you have 
    //   less justification for modifying your prior beliefs
     factor (dataH == obsH ? 0 : -( N / 2.5));
    //  ^^^^^ try uncommenting this, 
    //          while also commenting out "observe(...)" above
            return {p: p};
        });
}


///fold:
// Helper function
var mapToString = function(map){
  var keys = Object.keys(map);
  var mapString = reduce(function(x,acc){
    var sep = acc.length == 0 ? "" : ", "
    return acc.concat(sep + x + " : " + map[x] )
  },"",keys );
  return "{" + mapString + "}";
}

///

// Generative model
var grassGetsWet = function(prC,prRC,prSC,prSnC,prGRS, prGRoS, prO){
  
  var cloudy = flip(prC); // 0.5
  var rain = cloudy ? flip(prRC) : flip(1 - prRC); //flip(0.8) : flip(0.2);
  var sprinkler = cloudy ? flip(prSC) : flip(prSnC); //flip(0.1) : flip(0.5);
  var wetGrass = rain && sprinkler 
                   ? flip(prGRS)    // 0.99
                   : (rain && !sprinkler) || (!rain && sprinkler) 
                        ? flip(prGRoS)    //  flip(0.9)
                        : flip(prO); // 0.0001 = prob. of some 
                                        //   other cause, not in model?
                              // was flip(0.0); // impossibility
  return {wetGrass: wetGrass,
          rain: rain,
          sprinkler: sprinkler,
          cloudy: cloudy};
}
var informed = 0.5;  // somewhat informed, midway between 0 and 1
var prC = listMean(repeat(500,function(){return priorPr(0.5,informed);}));
var prRC = listMean(repeat(500,function(){return priorPr(0.8,informed);}));
var prSC = listMean(repeat(500,function(){return priorPr(0.1,informed);}));
var prSnC = listMean(repeat(500,function(){return priorPr(0.5,informed);}));
var prGRS = listMean(repeat(500,function(){return priorPr(0.99,informed);}));
var prGRoS = listMean(repeat(500,function(){return priorPr(0.9,informed);}));
var prO = listMean(repeat(500,function(){return priorPr(0.0001,informed);}));

// Generalized inference function given any combination of evidence
var inference = function(evidence){
  var applyEvidence = Infer({ method: 'enumerate' }, function(){
  var trial = grassGetsWet(prC,prRC,prSC,prSnC,prGRS, prGRoS, prO);
   if ("sprinkler" in evidence){
      condition(trial.sprinkler === evidence.sprinkler);
   }
   if ("rain" in evidence){
      condition(trial.rain === evidence.rain);
   }
   if ("cloudy" in evidence){
      condition(trial.cloudy === evidence.cloudy);
   }
   if ("wetGrass" in evidence){
      condition(trial.wetGrass === evidence.wetGrass);
   }
   return {rain: trial.rain,
           cloudy: trial.cloudy,
           sprinkler: trial.sprinkler,
           wetGrass: trial.wetGrass,
          };
});
  
  return applyEvidence;
}


// ENTER EVIDENCE HERE in this form:
//      { cloudy : true,
//          rain : false }
// Leave out any entry where there is no evidence

var evidence = {cloudy : false,
               wetGrass: true};

var evidenceString = mapToString(evidence);

// Output
print("Model parameters:");
///fold:
print("Pr(cloudy) = " + prC);
print("Pr(rain|cloudy) = " + prRC);
print("Pr(sprinkler|cloudy) = " + prSC);
print("Pr(sprinkler| not cloudy) = " + prSnC);
print("Pr(wetGrass| rain AND sprinkler) = " + prGRS);
print("Pr(wetGrass| rain XOR sprinkler) = " + prGRoS);
print("Pr(wetGrass| other) = " + prO + "\n");
///
print("Given evidence = " + mapToString(evidence) + " =>");
var allCombinations = inference(evidence);

//viz.table(allCombinations);

// Extract marginal probabilities:
///fold:
var results = {
  rain: Math.exp(
                 Infer({method: 'enumerate'},
                   function(){
                      var trial = sample(allCombinations);
                      return trial.rain;
                   }
                 ).score(true) 
               ),
  cloudy: Math.exp(
                 Infer({method: 'enumerate'},
                   function(){
                      var trial = sample(allCombinations);
                      return trial.cloudy;
                   }
                 ).score(true) 
               ),
  wetGrass: Math.exp(
                 Infer({method: 'enumerate'},
                   function(){
                      var trial = sample(allCombinations);
                      return trial.wetGrass;
                   }
                 ).score(true) 
               ),
  sprinkler: Math.exp(
                 Infer({method: 'enumerate'},
                   function(){
                      var trial = sample(allCombinations);
                      return trial.sprinkler;
                   }
                 ).score(true) 
               )
}
///
  
print("=>  Pr(rain) = " + results.rain);
print("    Pr(cloudy) = " + results.cloudy);
print("    Pr(wetGrass) = " + results.wetGrass);
print("    Pr(sprinkler) = " + results.sprinkler);





RegEx Test

 

// regex tests


var data = ["000111","110011"];

var match = function(text,pattern){
  return text.search(pattern) > -1 ? true : false;
}
var elements = ["0","1","."];



var generateRandomPattern = function (len){
  // random string of "1", "0", ".", of length len
  var arr = repeat(len,function(){
      return elements[sample(RandomInteger({n:3}))];
      });
  var str = reduce(
    function(x,acc){return String.prototype.concat(acc,x );},"",arr);
  return str;
}

var samp = generateRandomPattern(6);
print(samp);


var str = 'For more information, see Chapter 3.4.5.1';
var re = /see (chapter \d)/i;
var found = str.search(re);

print("Match? " + match(str,re)); 

var pat = RegExp.prototype.constructor("see chapter","g");

print("Match? test: " + pat.exec(str) );
print("Match? " + match(str,pat));
print(pat.source + ",  " + pat.global + ", " + typeof pat);

 


 
var xs = ["0011", "0011001", "110", "1100010101", "110111","0011"];
var labels = [false, false, true, true, true,false];



var model = function(){
  var limitGaussian = function(mean,sd){
    var draw = sample(Gaussian({mu:mean,sigma:sd}));
    if (draw >= .8 || draw <= 0.2){
      return limitGaussian(mean,sd);
    } else {
      return draw;
    }
  }
  var p1 = beta(2,2);  //prefer Pr("0")=Pr("1")=0.5
  var p2 = beta(2,3); // prefer shorter/simpler
  var p3 = beta(1.5,1.5);
  var c0 = uniform(0,1);
  var c1 = uniform(0,1);
  var c2 = uniform(0,1);
  var c3 = uniform(0,1);
  var cSum = c0 + c1 + c2 + c3 ;
  var randInt4 = Categorical({ps: 
                 [c0 / cSum,
                  c1 / cSum,
                  c2 / cSum,
                  c3 / cSum]
                  , vs: [0,1,2,3]});
  var k0 = uniform(0,1);
  var k1 = uniform(0,1);
  var k2 = uniform(0,1);
  var kSum = k0 + k1 + k2;
  var randInt3 = Categorical({ps: 
                 [k0 / kSum,
                  k1 / kSum,
                  k2 / kSum]
                  , vs: [0,1,2]});
  var p4 = 0.5; //limitGaussian(0.3,0.05); //beta(2,3);  // prefer shorter/simpler
  var p5 = limitGaussian(0.3,0.05); //0.5; //beta(2,3);  // prefer shorter/simpler
  var p6 = limitGaussian(0.3,0.05); //0.5; //beta(2,3);  // prefer shorter/simpler
  var n0 = beta(5,2); // prefer re only
  var n1 = beta(1.5,1.5); //  then ^re$ 
  var n2 = beta(2,4);     //  then re$ 
  var n3 = beta(2,4);     //  or ^re 
  var nSum = n0 + n1 + n2 + n3;
  var randInt4a = Categorical({ps: 
                 [n0 / nSum,
                  n1 / nSum,
                  n2 / nSum,
                  n3 / nSum
                 ]
                  , vs: [0,1,2,3]});

// <char> ::= "0" | "1"  // BTW there are no meta chars, so no "\" escape
var char = function(){return flip(p1)  ? "0" : "1";}
// <range> ::= <char> "-" <char>
//var range = function(){return char() + "-" + char()};
// <set-item> ::= <range> | <char>
//var set_item = function(){return flip() ? range() : char();}
var set_item = function(){return char();}
// <set-items> ::= <set-item> | <set-item> <set-items>
var set_items = function(counter){
  if (counter <= 0){
    return set_item();
  } else {
    return flip(p2) ? set_item() : set_item() + set_items(counter - 1);
  }
}
// <negative-set> ::= "[^" <set-items> "]"
var negative_set = function(){return "[^" + set_items(3) + "]";}
// <positive-set> ::= "[" <set-items> "]"
var positive_set = function(){return "[" + set_items(3) + "]";}
// <set> ::= <positive-set> | <negative-set>
var set = function(){return flip(p3) ? positive_set() : negative_set() ;}
// <eos> ::= "$"
var eos = "$";
// <any> ::= "."
var any = ".";
var sos = "^";

// <elementary-re> ::= <group> | <any> | <char> | <set>
var elementary_re = function() {
  var draw = sample(randInt4);
  return draw == 0 ? group(1)
    : draw == 1 ? any
    : draw == 2 ? char()
    : draw == 3 ? set()
    : "";
}

// <plus> ::= <elementary-re> "+"
var plus = function(){return elementary_re() + "+";}
// <star> ::= <elementary-re> "*"
var star = function(){return elementary_re() + "*"};
// <basic-re> ::= <star> | <plus> | <elementary-re>
var basic_re = function(){
   var draw = sample(randInt3);
   return draw == 0 ? star()
        : draw == 1 ? plus()
        : draw == 2 ? elementary_re()
        : "";
}
// <concatenation> ::= <simple-re> <basic-re>
// <simple-re> ::= <concatenation> | <basic-re>
var simple_re = function(counter){
    if (counter <= 0){
      return basic_re();
  } else {
      return flip(p4) ?  basic_re()  : simple_re(counter - 1) + basic_re();
  }
}

// <union> ::= <re> "|" <simple-re>
// <re> ::= <union> | <simple-re>
var re = function(counter){
  if (counter <= 0){
    return simple_re(7);
  } else {
  return flip(p5)  
     ?  simple_re(7)  
     : flip(p6) ?  simple_re(7)  : re(counter - 1);
  }
}
// <group> ::= "(" <re> ")"
var group = function(counter){
  if (counter <= 0){
    return "";
  } else {
  return "(" + re(counter - 1) + ")";
  }
}

var regex = function(counter){
  var draw = sample(randInt4a);
  return draw == 0 ? re(counter)
    : draw == 1 ? sos + re(counter) + eos 
    : draw == 2 ? sos + re(counter)
    : draw == 3 ? re(counter) + eos : ""; 
}
  var pattern =  regex(3);
  var pat = RegExp.prototype.constructor(pattern,"g");
  
/*  
   map2(
    function(x, label) {
      factor(pat.test(x) == label ? - pattern.length : -Infinity);
    },
    xs,
    labels);
*/    
/*
  factor(pat.test(xs[0]) == labels[0]  ? - pattern.length / 3 : -1000);
  factor(pat.test(xs[1]) == labels[1]  ? - pattern.length / 3  : -1000);
  factor(pat.test(xs[2]) == labels[2]  ? - pattern.length / 3  : -1000);
  factor(pat.test(xs[3]) == labels[3]  ? - pattern.length / 3  : -1000);
  factor(pat.test(xs[4]) == labels[4]  ? - pattern.length / 3  : -1000);
  //factor(pat.test(xs[5]) == labels[5]  ? - pattern.length / 3  : -1000);
*/
return pattern;
}

//var result = Infer({method: 'MCMC', samples: 100, burn: 10}, model);

var run = model();
print(run);
var pat = RegExp.prototype.constructor(run,"g");
print(xs[0] + " match? " + pat.test(xs[0]));
print(pat.source + ",  " + pat.global + ", " + typeof pat);

var flags = "g"  ;
var flags = pat.ignoreCase ? flags + "i" : flags;
var flags = pat.multiline ? flags + "m" : flags;

var pat2 = RegExp.prototype.constructor("^1?.?..1$","g");

print(xs[0] + " <=> " + pat2.source + "  match? " + pat2.test(xs[0]));

//viz.auto(result);


### Full Model

 

// First design project
// initialize perceptions

// form teams

// execute design project

// form new teams after member migration

// execute framing protocol

// randomize Designer sequence

// Chosen Designer generates a frame to propose

// other Designers evaluate the frame

// each Designer votes: "accept", "reject", or "don't care"

// is the frame accepted?  (requires at least one "accept" and zero "reject")

// if not accepted, then next frame proposal

// if round complete, new round, unless limit reached

// if frame accepted, then start design project

// if no frame accepted, then disband the team, have members join new teams
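A hedged sketch of the framing-protocol portion of this skeleton, following the voting rule outlined above (team is an array of Designer states; generateFrame and vote are placeholders to be filled in as the model is built out; shuffle is the helper defined in the code earlier):

// Sketch: run the framing protocol for a newly formed team.
// vote(d, frame) is assumed to return "accept", "reject", or "don't care".
var framingProtocol = function(team, generateFrame, vote, roundsLeft){
  if (roundsLeft <= 0){
    return {status: "disband", frame: null};      // no frame accepted
  } else {
    var order = shuffle(team);                    // randomize Designer sequence
    var tryProposals = function(proposers){
      if (proposers.length === 0){
        return null;                              // round complete, no acceptance
      } else {
        var frame = generateFrame(proposers[0]);  // chosen Designer proposes
        var votes = map(function(d){return vote(d, frame);}, team);
        var anyReject = filter(function(v){return v === "reject";}, votes).length > 0;
        var anyAccept = filter(function(v){return v === "accept";}, votes).length > 0;
        return (anyAccept && !anyReject)          // is the frame accepted?
            ? frame
            : tryProposals(proposers.slice(1));   // if not, next frame proposal
      }
    };
    var accepted = tryProposals(order);
    return accepted === null
        ? framingProtocol(team, generateFrame, vote, roundsLeft - 1)  // new round
        : {status: "start design project", frame: accepted};
  }
}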

## Experiments

## Results

## Analysis

____

## Endnotes

1. End note 1

____

## References

cite:cross_design_2011

cite:dewulf_disentangling_2009

cite:donnellon_communication_1986

cite:ethiraj_bounded_2004

cite:hong_problem_2001

cite:page_two_1996

cite:sosa_computational_2005

To Do

  1. Add characteristics to each task that are interpreted as signs, symbols, and signals during task performance.
  2. Add to actors: differential skills in performing tasks (capabilities + routines)
  3. Add to actors: conception of their capabilities + routines, related to “getting the job done”
    • Maybe this could be in some mental frame construct
  4. Add memoization to performance landscape to save time on initialization and allow larger problems
  5. Add performance correlation between tasks according to the similarity of their characteristics.
____