diff --git a/dev/.documenter-siteinfo.json b/dev/.documenter-siteinfo.json
index 9d2d5286e..de61fb576 100644
--- a/dev/.documenter-siteinfo.json
+++ b/dev/.documenter-siteinfo.json
@@ -1 +1 @@
-{"documenter":{"julia_version":"1.8.5","generation_timestamp":"2024-08-02T17:42:46","documenter_version":"1.5.0"}}
+{"documenter":{"julia_version":"1.8.5","generation_timestamp":"2024-08-13T20:43:34","documenter_version":"1.5.0"}}
diff --git a/dev/GaussHermite/2e5ba326.svg b/dev/GaussHermite/0cf58bcc.svg (renamed SVG; smoothed plot of contraception use against centered age (yr))
diff --git a/dev/GaussHermite/dfabb0f3.svg b/dev/GaussHermite/481ec0b3.svg (renamed SVG; lollipop plot of quadrature weights)
diff --git a/dev/GaussHermite/48c35dea.svg b/dev/GaussHermite/63da8d03.svg (renamed SVG; quadrature weights on a log scale)
diff --git a/dev/GaussHermite/index.html b/dev/GaussHermite/index.html
index 43086dade..11fb8ce1f 100644
--- a/dev/GaussHermite/index.html
+++ b/dev/GaussHermite/index.html

function gausshermitenorm(k)
    ev = eigen(SymTridiagonal(zeros(k), sqrt.(1:k-1)))
    ev.values, abs2.(ev.vectors[1,:])
end;
gausshermitenorm (generic function with 1 method)

providing

gausshermitenorm(3)
([-1.7320508075688739, 1.1102230246251565e-15, 1.7320508075688774], [0.16666666666666743, 0.6666666666666657, 0.16666666666666677])
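As a quick check of the normalized rule (a sketch reusing the gausshermitenorm definition above): the weights sum to one, and a k-point rule integrates polynomials of degree up to 2k - 1 exactly against the standard normal density.

z, w = gausshermitenorm(3)
sum(w)              # ≈ 1.0  (normalized weights)
sum(w .* z)         # ≈ 0.0  (E[Z] for Z ~ N(0, 1))
sum(w .* abs2.(z))  # ≈ 1.0  (E[Z²])
sum(w .* z .^ 4)    # ≈ 3.0  (E[Z⁴]; exact because degree 4 ≤ 2k - 1 = 5)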

The weights and positions are often shown as a lollipop plot. For the 9th order rule these are

gh9=gausshermitenorm(9)
plot(x=gh9[1], y=gh9[2], Geom.hair, Geom.point, Guide.ylabel("Weight"), Guide.xlabel(""))
Example block output

Notice that the magnitudes of the weights drop quite dramatically away from zero, even on a logarithmic scale

plot(
     x=gh9[1], y=gh9[2], Geom.hair, Geom.point,
     Scale.y_log2, Guide.ylabel("Weight (log scale)"),
     Guide.xlabel(""),
)
Example block output

The definition of MixedModels.GHnorm is similar to the gausshermitenorm function with some extra provisions for ensuring symmetry of the abscissae and the weights and for caching values once they have been calculated.

MixedModels.GHnormFunction
GHnorm(k::Int)

Return the (unique) GaussHermiteNormalized{k} object.

The function values are stored (memoized) when first evaluated. Subsequent evaluations for the same k have very low overhead.

source
using MixedModels
 GHnorm(3)
MixedModels.GaussHermiteNormalized{3}([-1.7320508075688772, 0.0, 1.7320508075688772], [0.16666666666666666, 0.6666666666666666, 0.16666666666666666])

By the properties of the normal distribution, when $\mathcal{X}\sim\mathcal{N}(\mu, \sigma^2)$

\[\mathbb{E}[g(\mathcal{X})] \approx \sum_{i=1}^k g(\mu + \sigma z_i)\,w_i\]

For example, $\mathbb{E}[\mathcal{X}^2]$ where $\mathcal{X}\sim\mathcal{N}(2, 3^2)$ is

μ = 2; σ = 3; ghn3 = GHnorm(3);
 sum(@. ghn3.w * abs2(μ + σ * ghn3.z))  # should be μ² + σ² = 13
13.0

(In general a dot, '.', after the function name in a function call, as in abs2.(...), or before an operator creates a fused vectorized evaluation in Julia. The macro @. has the effect of vectorizing all operations in the subsequent expression.)
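Because the integrand need not be a polynomial, the accuracy of the approximation depends on the number of quadrature points. A small sketch comparing the quadrature estimate of $\mathbb{E}[e^{\mathcal{X}}]$ for $\mathcal{X}\sim\mathcal{N}(2, 3^2)$ with the closed-form log-normal mean $e^{\mu + \sigma^2/2}$; the estimate improves as the order k increases.

μ = 2; σ = 3
exact = exp(μ + abs2(σ) / 2)    # log-normal mean, ≈ 665.14
for k in (3, 9, 25)
    ghk = GHnorm(k)
    println((k = k, estimate = sum(@. ghk.w * exp(μ + σ * ghk.z)), exact = exact))
end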

Application to a model for contraception use

A binary response is a "Yes"/"No" type of answer. For example, in a 1989 fertility survey of women in Bangladesh (reported in Huq, N. M. and Cleland, J., 1990) one response of interest was whether the woman used artificial contraception. Several covariates were recorded including the woman's age (centered at the mean), the number of live children the woman has had (in 4 categories: 0, 1, 2, and 3 or more), whether she lived in an urban setting, and the district in which she lived. The version of the data used here is that used in review of multilevel modeling software conducted by the Center for Multilevel Modelling, currently at University of Bristol (http://www.bristol.ac.uk/cmm/learning/mmsoftware/data-rev.html). These data are available as the :contra dataset.

contra = DataFrame(MixedModels.dataset(:contra))
 describe(contra)
5×7 DataFrame
 Row │ variable  mean        min     median  max    nmissing  eltype
     │ Symbol    Union…      Any     Union…  Any    Int64     DataType
─────┼───────────────────────────────────────────────────────────────
   1 │ dist                  D01             D61           0  String
   2 │ urban                 N               Y             0  String
   3 │ livch                 0               3+            0  String
   4 │ age       0.00204757  -13.56  -1.56   19.44         0  Float64
   5 │ use                   N               Y             0  String

A smoothed scatterplot of contraception use versus age

plot(contra, x=:age, y=:use, Geom.smooth, Guide.xlabel("Centered age (yr)"),
    Guide.ylabel("Contraception use"))
Example block output

shows that the proportion of women using artificial contraception is approximately quadratic in age.

A model with fixed effects for age, age squared, number of live children, and urban location, and with random effects for district, is fit as

const form1 = @formula use ~ 1 + age + abs2(age) + livch + urban + (1|dist);
 m1 = fit(MixedModel, form1, contra, Bernoulli(), fast=true)
Generalized Linear Mixed Model fit by maximum likelihood (nAGQ = 1)
   use ~ 1 + age + :(abs2(age)) + livch + urban + (1 | dist)
   Distribution: Bernoulli{Float64}
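The header above shows that this fit used the Laplace approximation (nAGQ = 1). Because the model has a single, scalar random-effects term, the deviance can also be evaluated with a higher-order adaptive Gauss-Hermite rule by passing a larger nAGQ to fit; a minimal sketch:

m1_9 = fit(MixedModel, form1, contra, Bernoulli(); nAGQ=9, fast=true)
deviance(m1_9)   # compare with deviance(m1) from the nAGQ = 1 fit above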
[Figures omitted: plots of deviance contribution against u₁ and u₃; shifted deviance contribution and conditional density against the scaled and shifted u₁ and u₃; and kernel ratio against the scaled and shifted u₁ and u₃.]

  • 1 https://en.wikipedia.org/wiki/Gaussian_quadrature
diff --git a/dev/api/index.html b/dev/api/index.html
index 680eab709..c15c1c678 100644
--- a/dev/api/index.html
+++ b/dev/api/index.html

API · MixedModels

API

In addition to its own functionality, MixedModels.jl also implements extensive support for the StatsAPI.StatisticalModel and StatsAPI.RegressionModel API.

Types

MixedModels.BlockDescriptionType
BlockDescription

Description of blocks of A and L in a LinearMixedModel

Fields

  • blknms: Vector{String} of block names
  • blkrows: Vector{Int} of the number of rows in each block
  • ALtypes: Matrix{String} of datatypes for blocks in A and L.

When a block in L is the same type as the corresponding block in A, it is described with a single name, such as Dense. When the types differ the entry in ALtypes is of the form Diag/Dense, as determined by a shorttype method.

source
MixedModels.BlockedSparseType
BlockedSparse{Tv,S,P}

A SparseMatrixCSC whose nonzeros form blocks of rows or columns or both.

Members

  • cscmat: SparseMatrixCSC{Tv, Int32} representation for general calculations
  • nzasmat: nonzeros of cscmat as a dense matrix
  • colblkptr: pattern of blocks of columns

The only time these are created is as products of ReMats.

source
MixedModels.FeMatType
FeMat{T,S}

A matrix and a (possibly) weighted copy of itself.

Typically, an FeMat represents the fixed-effects model matrix with the response (y) concatenated as a final column.

Note

FeMat is not the same as FeTerm.

Fields

  • xy: original matrix, called xy b/c in practice this is hcat(fullrank(X), y)
  • wtxy: (possibly) weighted copy of xy (shares storage with xy until weights are applied)

Upon construction the xy and wtxy fields refer to the same matrix

source
MixedModels.FeTermType
FeTerm{T,S}

Term with an explicit, constant matrix representation

Typically, an FeTerm represents the model matrix for the fixed effects.

Note

FeTerm is not the same as FeMat!

Fields

  • x: full model matrix
  • piv: pivot Vector{Int} for moving linearly dependent columns to the right
  • rank: computational rank of x
  • cnames: vector of column names
source
MixedModels.GaussHermiteNormalizedType
GaussHermiteNormalized{K}

A struct with 2 SVector{K,Float64} members

  • z: abscissae for the K-point Gauss-Hermite quadrature rule on the Z scale
  • wt: Gauss-Hermite weights normalized to sum to unity
source
MixedModels.GeneralizedLinearMixedModelType
GeneralizedLinearMixedModel

Generalized linear mixed-effects model representation

Fields

  • LMM: a LinearMixedModel - the local approximation to the GLMM.
  • β: the pivoted and possibly truncated fixed-effects vector
  • β₀: similar to β. Used in the PIRLS algorithm if step-halving is needed.
  • θ: covariance parameter vector
  • b: similar to u, equivalent to broadcast!(*, b, LMM.Λ, u)
  • u: a vector of matrices of random effects
  • u₀: similar to u. Used in the PIRLS algorithm if step-halving is needed.
  • resp: a GlmResp object
  • η: the linear predictor
  • wt: vector of prior case weights, a value of T[] indicates equal weights.

The following fields are used in adaptive Gauss-Hermite quadrature, which applies only to models with a single random-effects term, in which case their lengths are the number of levels in the grouping factor for that term. Otherwise they are zero-length vectors.

  • devc: vector of deviance components
  • devc0: vector of deviance components at offset of zero
  • sd: approximate standard deviation of the conditional density
  • mult: multiplier

Properties

In addition to the fieldnames, the following names are also accessible through the . extractor

  • theta: synonym for θ
  • beta: synonym for β
  • σ or sigma: common scale parameter (value is NaN for distributions without a scale parameter)
  • lowerbd: vector of lower bounds on the combined elements of β and θ
  • formula, trms, A, L, and optsum: fields of the LMM field
  • X: fixed-effects model matrix
  • y: response vector
source
MixedModels.GroupingType
struct Grouping <: StatsModels.AbstractContrasts end

A placeholder type to indicate that a categorical variable is only used for grouping and not for contrasts. When creating a CategoricalTerm, this skips constructing the contrasts matrix which makes it robust to large numbers of levels, while still holding onto the vector of levels and constructing the level-to-index mapping (invindex field of the ContrastsMatrix.).

Note that calling modelcols on a CategoricalTerm{Grouping} is an error.

Examples

julia> schema((; grp = string.(1:100_000)))
 # out-of-memory error
 
julia> schema((; grp = string.(1:100_000)), Dict(:grp => Grouping()))
source
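In model fitting, the same effect is obtained by passing Grouping() through the contrasts keyword argument of fit; a sketch in which dat, y, and grp are hypothetical placeholders for a table with a high-cardinality grouping column:

using MixedModels
# `dat`, `y`, and `grp` are hypothetical; substitute your own table and columns
fit(MixedModel, @formula(y ~ 1 + (1 | grp)), dat;
    contrasts = Dict(:grp => Grouping()))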
MixedModels.LikelihoodRatioTestType
LikelihoodRatioTest

Results of MixedModels.likelihoodratiotest

Fields

  • formulas: Vector of model formulae
  • models: NamedTuple of the dof and deviance of the models
  • tests: NamedTuple of the sequential dofdiff, deviancediff, and resulting pvalues

Properties

  • deviance : note that this is actually -2 log likelihood for linear models (i.e. without subtracting the constant for a saturated model)
  • pvalues
source
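A sketch of producing such an object for two nested models fit to the sleepstudy dataset bundled with the package:

using MixedModels
sleepstudy = MixedModels.dataset(:sleepstudy)
fm0 = fit(MixedModel, @formula(reaction ~ 1 + days + (1 | subj)), sleepstudy)
fm1 = fit(MixedModel, @formula(reaction ~ 1 + days + (1 + days | subj)), sleepstudy)
lrt = MixedModels.likelihoodratiotest(fm0, fm1)
lrt.pvalues    # p-values for the sequential comparisons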
MixedModels.LinearMixedModelType
LinearMixedModel(y, Xs, form, wts=[], σ=nothing, amalgamate=true)

Private constructor for a LinearMixedModel.

To construct a model, you only need the response (y), already assembled model matrices (Xs), schematized formula (form) and weights (wts). Everything else in the structure can be derived from these quantities.

Note

This method is internal and experimental and so may change or disappear in a future release without being considered a breaking change.

source
MixedModels.LinearMixedModelType
LinearMixedModel

Linear mixed-effects model representation

Fields

  • formula: the formula for the model
  • reterms: a Vector{AbstractReMat{T}} of random-effects terms.
  • Xymat: horizontal concatenation of a full-rank fixed-effects model matrix X and response y as an FeMat{T}
  • feterm: the fixed-effects model matrix as an FeTerm{T}
  • sqrtwts: vector of square roots of the case weights. Can be empty.
  • parmap : Vector{NTuple{3,Int}} of (block, row, column) mapping of θ to λ
  • dims : NamedTuple{(:n, :p, :nretrms),NTuple{3,Int}} of dimensions. p is the rank of X, which may be smaller than size(X, 2).
  • A: a Vector{AbstractMatrix} containing the row-major packed lower triangle of hcat(Z,X,y)'hcat(Z,X,y)
  • L: the blocked lower Cholesky factor of Λ'AΛ+I in the same Vector representation as A
  • optsum: an OptSummary object

Properties

  • θ or theta: the covariance parameter vector used to form λ
  • β or beta: the fixed-effects coefficient vector
  • λ or lambda: a vector of lower triangular matrices repeated on the diagonal blocks of Λ
  • σ or sigma: current value of the standard deviation of the per-observation noise
  • b: random effects on the original scale, as a vector of matrices
  • u: random effects on the orthogonal scale, as a vector of matrices
  • lowerbd: lower bounds on the elements of θ
  • X: the fixed-effects model matrix
  • y: the response vector
source
MixedModels.LinearMixedModelMethod
LinearMixedModel(y, feterm, reterms, form, wts=[], σ=nothing; amalgamate=true)

Private constructor for a LinearMixedModel given already assembled fixed and random effects.

To construct a model, you only need a vector of FeMats (the fixed-effects model matrix and response), a vector of AbstractReMat (the random-effects model matrices), the formula and the weights. Everything else in the structure can be derived from these quantities.

Note

This method is internal and experimental and so may change or disappear in a future release without being considered a breaking change.

source
MixedModels.MixedModelType
MixedModel

Abstract type for mixed models. MixedModels.jl implements two subtypes: LinearMixedModel and GeneralizedLinearMixedModel. See the documentation for each for more details.

This type is primarily used for dispatch in fit. Without a distribution and link function specified, a LinearMixedModel will be fit. When a distribution/link function is provided, a GeneralizedLinearMixedModel is fit, unless that distribution is Normal and the link is IdentityLink, in which case the resulting GLMM would be equivalent to a LinearMixedModel anyway and so the simpler, equivalent LinearMixedModel will be fit instead.

source
MixedModels.MixedModelBootstrapType
MixedModelBootstrap{T<:AbstractFloat} <: MixedModelFitCollection{T}

Object returned by parametricbootstrap with fields

  • fits: the parameter estimates from the bootstrap replicates as a vector of named tuples.
  • λ: Vector{LowerTriangular{T,Matrix{T}}} containing copies of the λ field from ReMat model terms
  • inds: Vector{Vector{Int}} containing copies of the inds field from ReMat model terms
  • lowerbd: Vector{T} containing the vector of lower bounds (corresponds to the identically named field of OptSummary)
  • fcnames: NamedTuple whose keys are the grouping factor names and whose values are the column names

The schema of fits is, by default,

Tables.Schema:
  :objective  T
  :σ          T
  :β          NamedTuple{β_names}{NTuple{p,T}}
  :se         StaticArrays.SArray{Tuple{p},T,1,p}
  :θ          StaticArrays.SArray{Tuple{k},T,1,k}

where the sizes, p and k, of the β and θ elements are determined by the model.

Characteristics of the bootstrap replicates can be extracted as properties. The σs and σρs properties unravel the σ and θ estimates into estimates of the standard deviations and correlations of the random-effects terms.

source
MixedModels.MixedModelProfileType
 MixedModelProfile{T<:AbstractFloat}

Type representing a likelihood profile of a LinearMixedModel, including associated interpolation splines.

The function profile is used for computing profiles, while confint provides a useful method for constructing confidence intervals from a MixedModelProfile.

Note

The exact fields and their representation are considered implementation details and are not part of the public API.

source
MixedModels.OptSummaryType
OptSummary

Summary of an NLopt optimization

Fields

  • initial: a copy of the initial parameter values in the optimization
  • finitial: the initial value of the objective
  • lowerbd: lower bounds on the parameter values
  • ftol_rel: as in NLopt
  • ftol_abs: as in NLopt
  • xtol_rel: as in NLopt
  • xtol_abs: as in NLopt
  • initial_step: as in NLopt
  • maxfeval: as in NLopt (maxeval)
  • maxtime: as in NLopt
  • final: a copy of the final parameter values from the optimization
  • fmin: the final value of the objective
  • feval: the number of function evaluations
  • optimizer: the name of the optimizer used, as a Symbol
  • returnvalue: the return value, as a Symbol
  • xtol_zero_abs: the tolerance for a near zero parameter to be considered practically zero
  • ftol_zero_abs: the tolerance for change in the objective for setting a near zero parameter to zero
  • fitlog: A vector of tuples of parameter and objective values from steps in the optimization
  • nAGQ: number of adaptive Gauss-Hermite quadrature points in deviance evaluation for GLMMs
  • REML: use the REML criterion for LMM fits
  • sigma: a priori value for the residual standard deviation for LMM

The last three fields are MixedModels functionality and not related directly to the NLopt package or algorithms.

Note

The internal storage of the parameter values within fitlog may change in the future to use a different subtype of AbstractVector (e.g., StaticArrays.SVector) for each snapshot without being considered a breaking change.

source
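For example, the NLopt settings can be adjusted on an unfitted model's optsum before calling fit!; a sketch using the dyestuff example dataset:

using MixedModels
dyestuff = MixedModels.dataset(:dyestuff)
m = LinearMixedModel(@formula(yield ~ 1 + (1 | batch)), dyestuff)   # unfitted model
m.optsum.ftol_rel = 1e-12    # tighten the relative tolerance on the objective
m.optsum.maxfeval = 500      # cap the number of objective evaluations
fit!(m)
m.optsum.feval               # number of evaluations actually used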
MixedModels.PCAType
PCA{T<:AbstractFloat}

Principal Components Analysis

Fields

  • covcorr covariance or correlation matrix
  • sv singular value decomposition
  • rnames rownames of the original matrix
  • corr is this a correlation matrix?
source
MixedModels.RaggedArrayType
RaggedArray{T,I}

A "ragged" array structure consisting of values and indices

Fields

  • vals: a Vector{T} containing the values
  • inds: a Vector{I} containing the indices

For this application a RaggedArray is used only in its sum! method.

source
MixedModels.ReMatType
ReMat{T,S} <: AbstractMatrix{T}

A section of a model matrix generated by a random-effects term.

Fields

  • trm: the grouping factor as a StatsModels.CategoricalTerm
  • refs: indices into the levels of the grouping factor as a Vector{Int32}
  • levels: the levels of the grouping factor
  • cnames: the names of the columns of the model matrix generated by the left-hand side of the term
  • z: transpose of the model matrix generated by the left-hand side of the term
  • wtz: a weighted copy of z (z and wtz are the same object for unweighted cases)
  • λ: a LowerTriangular or Diagonal matrix of size S×S
  • inds: a Vector{Int} of linear indices of the potential nonzeros in λ
  • adjA: the adjoint of the matrix as a SparseMatrixCSC{T}
  • scratch: a Matrix{T}
source
MixedModels.TableColumnsType
TableColumns

A structure containing the column names for the numeric part of the profile table.

The struct also contains a Dict giving the column ranges for certain Symbols. Finally it contains a scratch vector used to accumulate the values in a row of the profile table.

Note

This is an internal structure used in MixedModelProfile. As such, it may change or disappear in a future release without being considered breaking.

source
MixedModels.VarCorrType
VarCorr

Information from the fitted random-effects variance-covariance matrices.

Members

  • σρ: a NamedTuple of NamedTuples as returned from σρs
  • s: the estimate of the per-observation dispersion parameter

The main purpose of defining this type is to isolate the logic in the show method.

source

Exported Functions

LinearAlgebra.condMethod
cond(m::MixedModel)

Return a vector of condition numbers of the λ matrices for the random-effects terms

source
LinearAlgebra.logdetMethod
logdet(m::LinearMixedModel)

Return the value of log(det(Λ'Z'ZΛ + I)) + m.optsum.REML * log(det(LX*LX')) evaluated in place.

Here LX is the diagonal term corresponding to the fixed-effects in the blocked lower Cholesky factor.

source
MixedModels.GHnormMethod
GHnorm(k::Int)

Return the (unique) GaussHermiteNormalized{k} object.

The function values are stored (memoized) when first evaluated. Subsequent evaluations for the same k have very low overhead.

source
MixedModels.coefpvaluesMethod
coefpvalues(bsamp::MixedModelFitCollection)

Return a rowtable with columns (:iter, :coefname, :β, :se, :z, :p)

source
MixedModels.condVarMethod
condVar(m::LinearMixedModel)

Return the conditional variance matrices of the random effects.

The random effects are returned by ranef as a vector of length k, where k is the number of random effects terms. The ith element is a matrix of size vᵢ × ℓᵢ where vᵢ is the size of the vector-valued random effects for each of the ℓᵢ levels of the grouping factor. Technically those values are the modes of the conditional distribution of the random effects given the observed data.

This function returns an array of k three dimensional arrays, where the ith array is of size vᵢ × vᵢ × ℓᵢ. These are the diagonal blocks from the conditional variance-covariance matrix,

s² Λ(Λ'Z'ZΛ + I)⁻¹Λ'
source
MixedModels.condVartablesMethod
condVartables(m::LinearMixedModel)

Return the conditional covariance matrices of the random effects as a NamedTuple of columntables

source
MixedModels.fitted!Method
fitted!(v::AbstractArray{T}, m::LinearMixedModel{T})

Overwrite v with the fitted values from m.

See also fitted.

source
MixedModels.fixefMethod
fixef(m::MixedModel)

Return the fixed-effects parameter vector estimate of m.

In the rank-deficient case the truncated parameter vector, of length rank(m), is returned. This is unlike coef, which always returns a vector whose length matches the number of columns in X.

source
MixedModels.fixefnamesMethod
fixefnames(m::MixedModel)

Return a (permuted and truncated in the rank-deficient case) vector of coefficient names.

source
MixedModels.fnamesMethod
fnames(m::MixedModel)

Return the names of the grouping factors for the random-effects terms.

source
MixedModels.fulldummyMethod
fulldummy(term::CategoricalTerm)

Assign "contrasts" that include all indicator columns (dummy variables) and an intercept column.

This will result in an under-determined set of contrasts, which is not a problem in the random effects because of the regularization, or "shrinkage", of the conditional modes.

The interaction of fulldummy with complex random effects is subtle and complex with numerous potential edge cases. As we discover these edge cases, we will document and determine their behavior. Until such time, please check the model summary to verify that the expansion is working as you expected. If it is not, please report a use case on GitHub.

source
MixedModels.issingularFunction
issingular(m::MixedModel, θ=m.θ)

Test whether the model m is singular if the parameter vector is θ.

Equality comparisons are used b/c small non-negative θ values are replaced by 0 in fit!.

Note

For GeneralizedLinearMixedModel, the entire parameter vector (including β in the case fast=false) must be specified if the default is not used.

source
MixedModels.issingularMethod
issingular(bsamp::MixedModelFitCollection)

Test each bootstrap sample for singularity of the corresponding fit.

Equality comparisons are used b/c small non-negative θ values are replaced by 0 in fit!.

See also issingular(::MixedModel).

source
MixedModels.lowerbdMethod
lowerbd{T}(A::ReMat{T})

Return the vector of lower bounds on the parameters, θ associated with A

These are the elements in the lower triangle of A.λ in column-major ordering. Diagonals have a lower bound of 0. Off-diagonals have a lower-bound of -Inf.

source
MixedModels.objective!Function
objective!(m::LinearMixedModel, θ)
objective!(m::LinearMixedModel)

Equivalent to objective(updateL!(setθ!(m, θ))).

When m has a single, scalar random-effects term, θ can be a scalar.

The one-argument method curries and returns a single-argument function of θ.

Note that these methods modify m. The calling function is responsible for restoring the optimal θ.

source
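A sketch of the curried form on a model with a single, scalar random-effects term (the dyestuff example), so that θ may be passed as a scalar:

using MixedModels
dyestuff = MixedModels.dataset(:dyestuff)
m = fit(MixedModel, @formula(yield ~ 1 + (1 | batch)), dyestuff)
θopt = copy(m.θ)                    # save the optimal value before mutating m
obj = objective!(m)                 # single-argument function of θ
[obj(θ) for θ in 0.5:0.25:1.25]     # objective along a grid of θ values
objective!(m, θopt)                 # restore the optimal θ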
MixedModels.parametricbootstrapMethod
parametricbootstrap([rng::AbstractRNG], nsamp::Integer, m::MixedModel{T}, ftype=T;
    β = fixef(m), σ = m.σ, θ = m.θ, progress=true, optsum_overrides=(;))

Perform nsamp parametric bootstrap replication fits of m, returning a MixedModelBootstrap.

The default random number generator is Random.GLOBAL_RNG.

ftype can be used to store the computed bootstrap values in a lower precision. ftype is not a named argument because named arguments are not used in method dispatch and thus specialization. In other words, having ftype as a positional argument has some potential performance benefits.

Keyword Arguments

  • β, σ, and θ are the values of m's parameters for simulating the responses.
  • σ is only valid for LinearMixedModel and GeneralizedLinearMixedModel for

families with a dispersion parameter.

  • progress controls whether the progress bar is shown. Note that the progress

bar is automatically disabled for non-interactive (i.e. logging) contexts.

  • optsum_overrides is used to override values of OptSummary in the models

fit during the bootstrapping process. For example, optsum_overrides=(;ftol_rel=1e-08) reduces the convergence criterion, which can greatly speed up the bootstrap fits. Taking advantage of this speed up to increase n can often lead to better estimates of coverage intervals.

Note

All coefficients are bootstrapped. In the rank-deficient case, the inestimable coefficients are treated as -0.0 in the simulations underlying the bootstrap, which will generally result in their estimates from the simulated data also being inestimable and thus set to -0.0. However, this behavior may change in future releases to explicitly drop the extraneous columns before simulation and thus not include their estimates in the bootstrap result.

source
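A sketch of a small bootstrap run (the replicate count and the optsum_overrides setting are illustrative only):

using MixedModels, Random
sleepstudy = MixedModels.dataset(:sleepstudy)
m = fit(MixedModel, @formula(reaction ~ 1 + days + (1 + days | subj)), sleepstudy)
boot = parametricbootstrap(MersenneTwister(42), 500, m;
                           optsum_overrides=(; ftol_rel=1e-8))
coefpvalues(boot)    # rowtable with (:iter, :coefname, :β, :se, :z, :p) per replicate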
MixedModels.pirls!Method
pirls!(m::GeneralizedLinearMixedModel)

Use Penalized Iteratively Reweighted Least Squares (PIRLS) to determine the conditional modes of the random effects.

When varyβ is true both u and β are optimized with PIRLS. Otherwise only u is optimized and β is held fixed.

Passing verbose = true provides verbose output of the iterations.

source
MixedModels.profileMethod
profile(m::LinearMixedModel; threshold = 4)

Return a MixedModelProfile for the objective of m with respect to the fixed-effects coefficients.

m is refit! if !isfitted(m).

Profiling starts at the parameter estimate and continues until reaching a parameter bound or the absolute value of ζ exceeds threshold.

source
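A sketch of profiling a fitted linear mixed model and then using confint (as noted under MixedModelProfile above) to obtain profile-based confidence intervals:

using MixedModels
sleepstudy = MixedModels.dataset(:sleepstudy)
m = fit(MixedModel, @formula(reaction ~ 1 + days + (1 + days | subj)), sleepstudy)
pr = profile(m)
confint(pr)    # profile-based confidence intervals for the model parameters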
MixedModels.profilevcMethod
 profilevc(m::LinearMixedModel{T}, val::T, rowj::AbstractVector{T}) where {T}

Profile an element of the variance components.

Note

This method is called by profile and currently considered internal. As such, it may change or disappear in a future release without being considered breaking.

source
MixedModels.profileσMethod
profileσ(m::LinearMixedModel, tc::TableColumns; threshold=4)

Return a Table of the profile of σ for model m. The profile extends to where the magnitude of ζ exceeds threshold.

Note

This method is called by profile and currently considered internal. As such, it may change or disappear in a future release without being considered breaking.

source
MixedModels.ranefMethod
ranef(m::LinearMixedModel; uscale=false)

Return, as a Vector{Matrix{T}}, the conditional modes of the random effects in model m.

If uscale is true the random effects are on the spherical (i.e. u) scale, otherwise on the original scale.

For a named variant, see raneftables.

source
MixedModels.raneftablesMethod
raneftables(m::MixedModel; uscale = false)

Return the conditional means of the random effects as a NamedTuple of Tables.jl-compliant tables.

Note

The API guarantee is only that the NamedTuple contains Tables.jl tables and not on the particular concrete type of each table.

source
MixedModels.refit!Method
refit!(m::GeneralizedLinearMixedModel[, y::Vector];
+ :θ          StaticArrays.SArray{Tuple{k},T,1,k}

where the sizes, p and k, of the β and θ elements are determined by the model.

Characteristics of the bootstrap replicates can be extracted as properties. The σs and σρs properties unravel the σ and θ estimates into estimates of the standard deviations and correlations of the random-effects terms.

source
MixedModels.MixedModelProfileType
 MixedModelProfile{T<:AbstractFloat}

Type representing a likelihood profile of a LinearMixedModel, including associated interpolation splines.

The function profile is used for computing profiles, while confint provides a useful method for constructing confidence intervals from a MixedModelProfile.

Note

The exact fields and their representation are considered implementation details and are not part of the public API.

source
MixedModels.OptSummaryType
OptSummary

Summary of an NLopt optimization

Fields

  • initial: a copy of the initial parameter values in the optimization
  • finitial: the initial value of the objective
  • lowerbd: lower bounds on the parameter values
  • ftol_rel: as in NLopt
  • ftol_abs: as in NLopt
  • xtol_rel: as in NLopt
  • xtol_abs: as in NLopt
  • initial_step: as in NLopt
  • maxfeval: as in NLopt (maxeval)
  • maxtime: as in NLopt
  • final: a copy of the final parameter values from the optimization
  • fmin: the final value of the objective
  • feval: the number of function evaluations
  • optimizer: the name of the optimizer used, as a Symbol
  • returnvalue: the return value, as a Symbol
  • xtol_zero_abs: the tolerance for a near zero parameter to be considered practically zero
  • ftol_zero_abs: the tolerance for change in the objective for setting a near zero parameter to zero
  • fitlog: A vector of tuples of parameter and objectives values from steps in the optimization
  • nAGQ: number of adaptive Gauss-Hermite quadrature points in deviance evaluation for GLMMs
  • REML: use the REML criterion for LMM fits
  • sigma: a priori value for the residual standard deviation for LMM

The last three fields are MixedModels functionality and not related directly to the NLopt package or algorithms.

Note

The internal storage of the parameter values within fitlog may change in the future to use a different subtype of AbstractVector (e.g., StaticArrays.SVector) for each snapshot without being considered a breaking change.

source
MixedModels.PCAType
PCA{T<:AbstractFloat}

Principal Components Analysis

Fields

  • covcorr covariance or correlation matrix
  • sv singular value decomposition
  • rnames rownames of the original matrix
  • corr is this a correlation matrix?
source
MixedModels.RaggedArrayType
RaggedArray{T,I}

A "ragged" array structure consisting of values and indices

Fields

  • vals: a Vector{T} containing the values
  • inds: a Vector{I} containing the indices

For this application a RaggedArray is used only in its sum! method.

source
MixedModels.ReMatType
ReMat{T,S} <: AbstractMatrix{T}

A section of a model matrix generated by a random-effects term.

Fields

  • trm: the grouping factor as a StatsModels.CategoricalTerm
  • refs: indices into the levels of the grouping factor as a Vector{Int32}
  • levels: the levels of the grouping factor
  • cnames: the names of the columns of the model matrix generated by the left-hand side of the term
  • z: transpose of the model matrix generated by the left-hand side of the term
  • wtz: a weighted copy of z (z and wtz are the same object for unweighted cases)
  • λ: a LowerTriangular or Diagonal matrix of size S×S
  • inds: a Vector{Int} of linear indices of the potential nonzeros in λ
  • adjA: the adjoint of the matrix as a SparseMatrixCSC{T}
  • scratch: a Matrix{T}
source
MixedModels.TableColumnsType
TableColumns

A structure containing the column names for the numeric part of the profile table.

The struct also contains a Dict giving the column ranges for Symbols like and . Finally it contains a scratch vector used to accumulate to values in a row of the profile table.

Note

This is an internal structure used in MixedModelProfile. As such, it may change or disappear in a future release without being considered breaking.

source
MixedModels.VarCorrType
VarCorr

Information from the fitted random-effects variance-covariance matrices.

Members

  • σρ: a NamedTuple of NamedTuples as returned from σρs
  • s: the estimate of the per-observation dispersion parameter

The main purpose of defining this type is to isolate the logic in the show method.

source

Exported Functions

LinearAlgebra.condMethod
cond(m::MixedModel)

Return a vector of condition numbers of the λ matrices for the random-effects terms

source
LinearAlgebra.logdetMethod
logdet(m::LinearMixedModel)

Return the value of log(det(Λ'Z'ZΛ + I)) + m.optsum.REML * log(det(LX*LX')) evaluated in place.

Here LX is the diagonal term corresponding to the fixed-effects in the blocked lower Cholesky factor.

source
MixedModels.GHnormMethod
GHnorm(k::Int)

Return the (unique) GaussHermiteNormalized{k} object.

The function values are stored (memoized) when first evaluated. Subsequent evaluations for the same k have very low overhead.

source
MixedModels.coefpvaluesMethod
coefpvalues(bsamp::MixedModelFitCollection)

Return a rowtable with columns (:iter, :coefname, :β, :se, :z, :p)

source
MixedModels.condVarMethod
condVar(m::LinearMixedModel)

Return the conditional variances matrices of the random effects.

The random effects are returned by ranef as a vector of length k, where k is the number of random effects terms. The ith element is a matrix of size vᵢ × ℓᵢ where vᵢ is the size of the vector-valued random effects for each of the ℓᵢ levels of the grouping factor. Technically those values are the modes of the conditional distribution of the random effects given the observed data.

This function returns an array of k three dimensional arrays, where the ith array is of size vᵢ × vᵢ × ℓᵢ. These are the diagonal blocks from the conditional variance-covariance matrix,

s² Λ(Λ'Z'ZΛ + I)⁻¹Λ'
source
MixedModels.condVartablesMethod
condVartables(m::LinearMixedModel)

Return the conditional covariance matrices of the random effects as a NamedTuple of columntables

source
MixedModels.fitted!Method
fitted!(v::AbstractArray{T}, m::LinearMixedModel{T})

Overwrite v with the fitted values from m.

See also fitted.

source
MixedModels.fixefMethod
fixef(m::MixedModel)

Return the fixed-effects parameter vector estimate of m.

In the rank-deficient case the truncated parameter vector, of length rank(m) is returned. This is unlike coef which always returns a vector whose length matches the number of columns in X.

source
MixedModels.fixefnamesMethod
fixefnames(m::MixedModel)

Return a (permuted and truncated in the rank-deficient case) vector of coefficient names.

source
MixedModels.fnamesMethod
fnames(m::MixedModel)

Return the names of the grouping factors for the random-effects terms.

source
MixedModels.fulldummyMethod
fulldummy(term::CategoricalTerm)

Assign "contrasts" that include all indicator columns (dummy variables) and an intercept column.

This will result in an under-determined set of contrasts, which is not a problem in the random effects because of the regularization, or "shrinkage", of the conditional modes.

The interaction of fulldummy with complex random effects is subtle and complex with numerous potential edge cases. As we discover these edge cases, we will document and determine their behavior. Until such time, please check the model summary to verify that the expansion is working as you expected. If it is not, please report a use case on GitHub.

source
MixedModels.issingularFunction
issingular(m::MixedModel, θ=m.θ)

Test whether the model m is singular if the parameter vector is θ.

Equality comparisons are used b/c small non-negative θ values are replaced by 0 in fit!.

Note

For GeneralizedLinearMixedModel, the entire parameter vector (including β in the case fast=false) must be specified if the default is not used.

source
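For example, a sketch using the dyestuff2 dataset bundled with the package, where the between-batch variance component is typically estimated as zero and the fit is therefore on the boundary:

julia> using MixedModels

julia> m = fit(MixedModel, @formula(yield ~ 1 + (1 | batch)), MixedModels.dataset(:dyestuff2));

julia> issingular(m)   # expected to be true for this boundary fit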
MixedModels.issingularMethod
issingular(bsamp::MixedModelFitCollection)

Test each bootstrap sample for singularity of the corresponding fit.

Equality comparisons are used because small non-negative θ values are replaced by 0 in fit!.

See also issingular(::MixedModel).

source
MixedModels.lowerbdMethod
lowerbd{T}(A::ReMat{T})

Return the vector of lower bounds on the parameters, θ associated with A

These are the elements in the lower triangle of A.λ in column-major ordering. Diagonals have a lower bound of 0. Off-diagonals have a lower-bound of -Inf.

source
MixedModels.objective!Function
objective!(m::LinearMixedModel, θ)
objective!(m::LinearMixedModel)

Equivalent to objective(updateL!(setθ!(m, θ))).

When m has a single, scalar random-effects term, θ can be a scalar.

The one-argument method curries and returns a single-argument function of θ.

Note that these methods modify m. The calling function is responsible for restoring the optimal θ.

source
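A brief sketch of the curried form, using the sleepstudy dataset bundled with the package; evaluating the returned function mutates m, so the optimal θ is saved first and can be used to restore the model afterwards:

julia> using MixedModels

julia> m = fit(MixedModel, @formula(reaction ~ 1 + days + (1 + days | subj)),
               MixedModels.dataset(:sleepstudy));

julia> θopt = copy(m.θ);         # keep the optimal parameter value

julia> f = objective!(m);        # one-argument method: returns a function of θ

julia> f(θopt) ≈ objective(m)    # evaluating at the optimum reproduces the fitted objective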
MixedModels.parametricbootstrapMethod
parametricbootstrap([rng::AbstractRNG], nsamp::Integer, m::MixedModel{T}, ftype=T;
    β = fixef(m), σ = m.σ, θ = m.θ, progress=true, optsum_overrides=(;))

Perform nsamp parametric bootstrap replication fits of m, returning a MixedModelBootstrap.

The default random number generator is Random.GLOBAL_RNG.

ftype can be used to store the computed bootstrap values in a lower precision. ftype is not a named argument because named arguments are not used in method dispatch and thus specialization. In other words, having ftype as a positional argument has some potential performance benefits.

Keyword Arguments

  • β, σ, and θ are the values of m's parameters for simulating the responses.
  • σ is only valid for LinearMixedModel and GeneralizedLinearMixedModel for

families with a dispersion parameter.

  • progress controls whether the progress bar is shown. Note that the progress

bar is automatically disabled for non-interactive (i.e. logging) contexts.

  • optsum_overrides is used to override values of OptSummary in the models

fit during the bootstrapping process. For example, optsum_overrides=(;ftol_rel=1e-08) reduces the convergence criterion, which can greatly speed up the bootstrap fits. Taking advantage of this speed up to increase n can often lead to better estimates of coverage intervals.

Note

All coefficients are bootstrapped. In the rank-deficient case, the inestimable coefficients are treated as -0.0 in the simulations underlying the bootstrap, which will generally result in their estimate from the simulated data also being inestimable and thus set to -0.0. However this behavior may change in future releases to explicitly drop the extraneous columns before simulation and thus not include their estimates in the bootstrap result.

source
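A short sketch using the dyestuff dataset bundled with the package; the rng argument makes the replicates reproducible and optsum_overrides relaxes the convergence criterion to speed up the replicate fits:

julia> using MixedModels, Random

julia> m = fit(MixedModel, @formula(yield ~ 1 + (1 | batch)), MixedModels.dataset(:dyestuff));

julia> boot = parametricbootstrap(MersenneTwister(42), 1000, m;
                                  optsum_overrides=(; ftol_rel=1e-8));

julia> first(coefpvalues(boot))   # one row per (replicate, coefficient): iter, coefname, β, se, z, p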
MixedModels.pirls!Method
pirls!(m::GeneralizedLinearMixedModel)

Use Penalized Iteratively Reweighted Least Squares (PIRLS) to determine the conditional modes of the random effects.

When varyβ is true both u and β are optimized with PIRLS. Otherwise only u is optimized and β is held fixed.

Passing verbose = true provides verbose output of the iterations.

source
MixedModels.profileMethod
profile(m::LinearMixedModel; threshold = 4)

Return a MixedModelProfile for the objective of m with respect to the fixed-effects coefficients.

m is refit! if !isfitted(m).

Profiling starts at the parameter estimate and continues until a parameter bound is reached or the absolute value of ζ exceeds threshold.

source
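A sketch of typical use with the sleepstudy dataset; the resulting MixedModelProfile can then be passed to confint for profile-based intervals:

julia> using MixedModels

julia> m = fit(MixedModel, @formula(reaction ~ 1 + days + (1 + days | subj)),
               MixedModels.dataset(:sleepstudy));

julia> pr = profile(m);

julia> confint(pr; level=0.95)   # Tables.jl-compatible table of profile confidence intervals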
MixedModels.profilevcMethod
 profilevc(m::LinearMixedModel{T}, val::T, rowj::AbstractVector{T}) where {T}

Profile an element of the variance components.

Note

This method is called by profile and currently considered internal. As such, it may change or disappear in a future release without being considered breaking.

source
MixedModels.profileσMethod
profileσ(m::LinearMixedModel, tc::TableColumns; threshold=4)

Return a Table of the profile of σ for model m. The profile extends to where the magnitude of ζ exceeds threshold.

Note

This method is called by profile and currently considered internal. As such, it may change or disappear in a future release without being considered breaking.

source
MixedModels.ranefMethod
ranef(m::LinearMixedModel; uscale=false)

Return, as a Vector{Matrix{T}}, the conditional modes of the random effects in model m.

If uscale is true the random effects are on the spherical (i.e. u) scale, otherwise on the original scale.

For a named variant, see raneftables.

source
MixedModels.raneftablesMethod
raneftables(m::MixedModel; uscale = false)

Return the conditional means of the random effects as a NamedTuple of Tables.jl-compliant tables.

Note

The API guarantee is only that the NamedTuple contains Tables.jl tables; there is no guarantee of the particular concrete type of each table.

source
MixedModels.refit!Method
refit!(m::GeneralizedLinearMixedModel[, y::Vector];
        fast::Bool = (length(m.θ) == length(m.optsum.final)),
        nAGQ::Integer = m.optsum.nAGQ,
       kwargs...)

Refit the model m after installing response y.

If y is omitted the current response vector is used.

If not specified, the fast and nAGQ options from the previous fit are used. kwargs are the same as for fit!.

source
MixedModels.refit!Method
refit!(m::LinearMixedModel[, y::Vector]; REML=m.optsum.REML, kwargs...)

Refit the model m after installing response y.

If y is omitted the current response vector is used. kwargs are the same as fit!.

source
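For example (a sketch using the dyestuff dataset bundled with the package), an ML fit can be re-estimated by REML, or a simulated response can be installed and refit, without rebuilding the model:

julia> using MixedModels

julia> m = fit(MixedModel, @formula(yield ~ 1 + (1 | batch)), MixedModels.dataset(:dyestuff));

julia> refit!(m; REML=true);                # same model and response, REML criterion

julia> refit!(m, simulate(m); REML=false);  # install a simulated response, back to ML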
MixedModels.replicateMethod
replicate(f::Function, n::Integer; progress=true)

Return a vector of the values of n calls to f(); used in simulations where the value of f is stochastic.

progress controls whether the progress bar is shown. Note that the progress bar is automatically disabled for non-interactive (i.e. logging) contexts.

source
MixedModels.restoreoptsum!Method
restoreoptsum!(m::LinearMixedModel, io::IO; atol::Real=0, rtol::Real=atol>0 ? 0 : √eps)
restoreoptsum!(m::LinearMixedModel, filename; atol::Real=0, rtol::Real=atol>0 ? 0 : √eps)

Read, check, and restore the optsum field from a JSON stream or filename.

source
MixedModels.restorereplicatesMethod
restorereplicates(f, m::MixedModel{T})
 restorereplicates(f, m::MixedModel{T}, ftype::Type{<:AbstractFloat})
restorereplicates(f, m::MixedModel{T}, ctype::Type{<:MixedModelFitCollection{S}})

Restore replicates from f, using m to create the desired subtype of MixedModelFitCollection.

f can be any entity supported by Arrow.Table. m does not have to be fitted, but it must have been constructed with the same structure as the source of the saved replicates.

The two-argument method constructs a MixedModelBootstrap with the same eltype as m. If an eltype is specified as the third argument, then a MixedModelBootstrap is returned. If a subtype of MixedModelFitCollection is specified as the third argument, then that is the return type.

See also savereplicates, restoreoptsum!.

source
MixedModels.saveoptsumMethod
saveoptsum(io::IO, m::LinearMixedModel)
saveoptsum(filename, m::LinearMixedModel)

Save m.optsum (without the lowerbd field) in JSON format to an IO stream or a file.

The lowerbd field is omitted because it often contains -Inf values, which are not allowed in JSON.

source
MixedModels.sdestMethod
sdest(m::LinearMixedModel)

Return the estimate of σ, the standard deviation of the per-observation noise.

source
MixedModels.sdestMethod
sdest(m::GeneralizedLinearMixedModel)

Return the estimate of the dispersion, i.e. the standard deviation of the per-observation noise.

For models with a dispersion parameter ϕ, this is simply ϕ. For models without a dispersion parameter, this value is missing. This differs from dispersion, which returns 1 for models without a dispersion parameter.

For Gaussian models, this parameter is often called σ.

source
MixedModels.setθ!Method
setθ!(bsamp::MixedModelFitCollection, θ::AbstractVector)
setθ!(bsamp::MixedModelFitCollection, i::Integer)

Install the i'th θ value of bsamp.fits in bsamp.λ.

source
MixedModels.shortestcovintMethod
shortestcovint(bsamp::MixedModelFitCollection, level = 0.95)

Return the shortest interval containing level proportion for each parameter from bsamp.allpars.

Warning

Currently, correlations that are systematically zero are included in the result. This may change in a future release without being considered a breaking change.

source
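Continuing the parametricbootstrap sketch above (boot is the MixedModelBootstrap produced there), the shortest coverage intervals could be extracted as:

julia> shortestcovint(boot)         # default 95% shortest coverage intervals

julia> shortestcovint(boot, 0.90)   # narrower 90% intervals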
MixedModels.simulate!Method
simulate!([rng::AbstractRNG,] y::AbstractVector, m::MixedModel{T}[, newdata];
                 β = coef(m), σ = m.σ, θ = T[], wts=m.wts)
 simulate([rng::AbstractRNG,] m::MixedModel{T}[, newdata];
                β = coef(m), σ = m.σ, θ = T[], wts=m.wts)

Simulate a new response vector, optionally overwriting a pre-allocated vector.

New data can be optionally provided in tabular format.

This simulation includes sampling new values for the random effects. Thus in contrast to predict, there is no distinction between "new" and "old" / previously observed random-effects levels.

Unlike predict, there is no type parameter for GeneralizedLinearMixedModel because the noise term in the model and simulation is always on the response scale.

The wts argument is currently ignored except for GeneralizedLinearMixedModel models with a Binomial distribution.

Note

Note that simulate! methods with a y::AbstractVector as the first argument (besides the RNG) and simulate methods return the simulated response. This is in contrast to simulate! methods with a m::MixedModel as the first argument, which modify the model's response and return the entire modified model.

source
MixedModels.simulate!Method
simulate!(rng::AbstractRNG, m::MixedModel{T}; β=fixef(m), σ=m.σ, θ=T[])
simulate!(m::MixedModel; β=fixef(m), σ=m.σ, θ=m.θ)

Overwrite the response (i.e. m.trms[end]) with a simulated response vector from model m.

This simulation includes sampling new values for the random effects.

β can be specified either as a pivoted, full rank coefficient vector (cf. fixef) or as an unpivoted full dimension coefficient vector (cf. coef), where the entries corresponding to redundant columns will be ignored.

Note

Note that simulate! methods with a y::AbstractVector as the first argument (besides the RNG) and simulate methods return the simulated response. This is in contrast to simulate! methods with a m::MixedModel as the first argument, which modify the model's response and return the entire modified model.

source
MixedModels.sparseLMethod
sparseL(m::LinearMixedModel; fname::Symbol=first(fnames(m)), full::Bool=false)

Return the lower Cholesky factor L as a SparseMatrix{T,Int32}.

full indicates whether the parts of L associated with the fixed-effects and response are to be included.

fname specifies the first grouping factor to include. Blocks to the left of the block corresponding to fname are dropped. The default is the first, i.e., leftmost block and hence all blocks.

source
MixedModels.stderror!Method
stderror!(v::AbstractVector, m::LinearMixedModel)

Overwrite v with the standard errors of the fixed-effects coefficients in m

The length of v should be the total number of coefficients (i.e. length(coef(m))). When the model matrix is rank-deficient the coefficients forced to -0.0 have an undefined (i.e. NaN) standard error.

source
MixedModels.updateL!Method
updateL!(m::LinearMixedModel)

Update the blocked lower Cholesky factor, m.L, from m.A and m.reterms (used for λ only)

This is the crucial step in evaluating the objective, given a new parameter value.

source
MixedModels.varestMethod
varest(m::LinearMixedModel)

Returns the estimate of σ², the variance of the conditional distribution of Y given B.

source
MixedModels.varestMethod
varest(m::GeneralizedLinearMixedModel)

Returns the estimate of ϕ², the variance of the conditional distribution of Y given B.

For models with a dispersion parameter ϕ, this is simply ϕ². For models without a dispersion parameter, this value is missing. This differs from dispersion, which returns 1 for models without a dispersion parameter.

For Gaussian models, this parameter is often called σ².

source
Statistics.stdMethod
std(m::MixedModel)

Return the estimated standard deviations of the random effects as a Vector{Vector{T}}.

FIXME: This uses an old convention of isfinite(sdest(m)). Probably drop in favor of m.σs

source
StatsAPI.confintMethod
confint(pr::MixedModelProfile; level::Real=0.95)

Compute profile confidence intervals for coefficients and variance components, with confidence level level (by default 95%).

Note

The API guarantee is for a Tables.jl compatible table. The exact return type is an implementation detail and may change in a future minor release without being considered breaking.

Note

The "row names" indicating the associated parameter name are guaranteed to be unambiguous, but their precise naming scheme is not yet stable and may change in a future release without being considered breaking.

source
StatsAPI.confintMethod
confint(pr::MixedModelBootstrap; level::Real=0.95, method=:shortest)

Compute bootstrap confidence intervals for coefficients and variance components, with confidence level level (by default 95%).

The keyword argument method determines whether the :shortest, i.e. highest density, interval is used or the :equaltail, i.e. quantile-based, interval is used. For historical reasons, the default is :shortest, but :equaltail gives the interval that is most comparable to the profile and Wald confidence intervals.

Note

The API guarantee is for a Tables.jl compatible table. The exact return type is an implementation detail and may change in a future minor release without being considered breaking.

Note

The "row names" indicating the associated parameter name are guaranteed to be unambiguous, but their precise naming scheme is not yet stable and may change in a future release without being considered breaking.

See also shortestcovint.

source
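For example, with the bootstrap object from the parametricbootstrap sketch above, the two interval types can be compared directly:

julia> confint(boot)                      # :shortest (highest-density) intervals, the default

julia> confint(boot; method=:equaltail)   # quantile-based intervals, comparable to Wald and profile intervals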
StatsAPI.confintMethod
confint(pr::MixedModelProfile; level::Real=0.95)

Compute profile confidence intervals for (fixed effects) coefficients, with confidence level level (by default 95%).

Note

The API guarantee is for a Tables.jl compatible table. The exact return type is an implementation detail and may change in a future minor release without being considered breaking.

source
StatsAPI.devianceMethod
deviance(m::GeneralizedLinearMixedModel{T}, nAGQ=1)::T where {T}

Return the deviance of m evaluated by the Laplace approximation (nAGQ=1) or nAGQ-point adaptive Gauss-Hermite quadrature.

If the distribution D does not have a scale parameter the Laplace approximation is the squared length of the conditional modes, $u$, plus the determinant of $Λ'Z'WZΛ + I$, plus the sum of the squared deviance residuals.

source
StatsAPI.dof_residualMethod
dof_residual(m::MixedModel)

Return the residual degrees of freedom of the model.

Note

The residual degrees of freedom for mixed-effects models is not clearly defined due to partial pooling. The classical nobs(m) - dof(m) fails to capture the extra freedom granted by the random effects, but nobs(m) - nranef(m) would overestimate the freedom granted by the random effects. nobs(m) - sum(leverage(m)) provides a nice balance based on the relative influence of each observation, but is computationally expensive for large models. This problem is also fundamentally related to long-standing debates about the appropriate treatment of the denominator degrees of freedom for $F$-tests. In the future, MixedModels.jl may provide additional methods allowing the user to choose the computation to use.

Warning

Currently, the residual degrees of freedom is computed as nobs(m) - dof(m), but this may change in the future without being considered a breaking change because there is no canonical definition of the residual degrees of freedom in a mixed-effects model.

source
StatsAPI.fit!Method
fit!(m::GeneralizedLinearMixedModel; fast=false, nAGQ=1,
                                      verbose=false, progress=true,
                                      thin::Int=1,
                                     init_from_lmm=Set())

Optimize the objective function for m.

When fast is true a potentially much faster but slightly less accurate algorithm, in which pirls! optimizes both the random effects and the fixed-effects parameters, is used.

If progress is true, the default, a ProgressMeter.ProgressUnknown counter is displayed during the iterations to minimize the deviance. There is a delay before this display is initialized and it may not be shown at all for models that are optimized quickly.

If verbose is true, then the intermediate results of both the nonlinear optimization and PIRLS are also displayed on standard output.

At every thinth iteration, optimization progress is saved in m.optsum.fitlog.

By default, the starting values for model fitting are taken from a (non-mixed, i.e. marginal) GLM fit. Experience with larger datasets (many thousands of observations and/or hundreds of levels of the grouping variables) has suggested that fitting a (Gaussian) linear mixed model on the untransformed data may provide better starting values and thus overall faster fits, even though an entire LMM must be fit before the GLMM can be fit. init_from_lmm can be used to specify which starting values from an LMM to use. Valid options are any collection (array, set, etc.) containing one or more of :β and :θ; the default is the empty set.

Note

Initializing from an LMM requires fitting the entire LMM first, so when progress=true, there will be two progress bars: first for the LMM, then for the GLMM.

Warning

The init_from_lmm functionality is experimental and may change or be removed entirely without being considered a breaking change.

source
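A hedged sketch of requesting LMM-based starting values. The table here is synthetic (a Bernoulli response y, one covariate x, and a grouping variable g), and it is assumed that fit forwards these keyword arguments to fit!:

julia> using MixedModels, Random

julia> rng = MersenneTwister(1);

julia> dat = (y = rand(rng, Bool, 600), x = randn(rng, 600), g = repeat(string.('A':'T'), inner=30));

julia> gm = fit(MixedModel, @formula(y ~ 1 + x + (1 | g)), dat, Bernoulli();
                init_from_lmm=[:β, :θ], progress=false);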
StatsAPI.fit!Method
fit!(m::LinearMixedModel; progress::Bool=true, REML::Bool=m.optsum.REML,
                           σ::Union{Real, Nothing}=m.optsum.sigma,
                          thin::Int=typemax(Int))

Optimize the objective of a LinearMixedModel. When progress is true a ProgressMeter.ProgressUnknown display is shown during the optimization of the objective, if the optimization takes more than one second or so.

At every thinth iteration, optimization progress is saved in m.optsum.fitlog.

source
StatsAPI.leverageMethod
leverage(::LinearMixedModel)

Return the diagonal of the hat matrix of the model.

For a linear model, the sum of the leverage values is the degrees of freedom for the model in the sense that this sum is the dimension of the span of columns of the model matrix. With a bit of hand waving a similar argument could be made for linear mixed-effects models. The hat matrix is of the form $[ZΛ X][L L']⁻¹[ZΛ X]'$.

source
StatsAPI.modelmatrixMethod
modelmatrix(m::MixedModel)

Returns the model matrix X for the fixed-effects parameters, as returned by coef.

This is always the full model matrix in the original column order and from a field in the model struct. It should be copied if it is to be modified.

source
StatsAPI.predictMethod
StatsAPI.predict(m::LinearMixedModel, newdata;
                 new_re_levels=:missing)
 StatsAPI.predict(m::GeneralizedLinearMixedModel, newdata;
                new_re_levels=:missing, type=:response)

Predict response for new data.

Note

Currently, no in-place methods are provided because these methods internally construct a new model and therefore allocate not just a response vector but also many other matrices.

Warning

newdata should contain a column for the response (dependent variable) initialized to some numerical value (not missing), because this is used to construct the new model used in computing the predictions. missing is not valid because missing data are dropped before constructing the model matrices.

Warning

These methods construct an entire MixedModel behind the scenes and as such may use a large amount of memory when newdata is large.

Warning

Rank-deficiency can lead to surprising but consistent behavior. For example, if there are two perfectly collinear predictors A and B (e.g. constant multiples of each other), then it is possible that A will be pivoted out in the fitted model and thus the associated coefficient is set to zero. If predictions are then generated on new data where B has been set to zero but A has not, then there will be no contribution from either A or B in the resulting predictions.

The keyword argument new_re_levels specifies how previously unobserved values of the grouping variable are handled. Possible values are:

  • :population: return population values for the relevant grouping variable. In other words, treat the associated random effect as 0. If all grouping variables have new levels, then this is equivalent to just the fixed effects.
  • :missing: return missing.
  • :error: error on this condition. The error type is an implementation detail: you should not rely on a particular type of error being thrown.

If you want simulated values for unobserved levels of the grouping variable, consider the simulate! and simulate methods.

Predictions based purely on the fixed effects can be obtained by specifying previously unobserved levels of the random effects and setting new_re_levels=:population. Similarly, the contribution of any grouping variable can be excluded by specifying previously unobserved levels, while including previously observed levels of the other grouping variables. In the future, it may be possible to specify a subset of the grouping variables or overall random-effects structure to use, but not at this time.

Note

new_re_levels impacts only the behavior for previously unobserved random effects levels, i.e. new RE levels. For previously observed random effects levels, predictions take both the fixed and random effects into account.

For GeneralizedLinearMixedModel, the type parameter specifies whether the predictions should be returned on the scale of the linear predictor (:linpred) or on the response scale (:response). If you don't know the difference between these terms, then you probably want type=:response.

Regression weights are not yet supported in prediction. Similarly, offsets are also not supported for GeneralizedLinearMixedModel.

source
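As a sketch with the sleepstudy dataset (the subject codes are assumed to follow the dataset's S-prefixed labels): "S308" is an observed subject, while "S999" is a made-up, previously unobserved level whose prediction uses only the fixed effects under new_re_levels=:population; note the numeric placeholder in the response column:

julia> using MixedModels

julia> m = fit(MixedModel, @formula(reaction ~ 1 + days + (1 + days | subj)),
               MixedModels.dataset(:sleepstudy));

julia> newdata = (subj = ["S308", "S999"], days = [3.0, 3.0], reaction = [0.0, 0.0]);

julia> predict(m, newdata; new_re_levels=:population)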
StatsAPI.responseMethod
response(m::MixedModel)

Return the response vector for the model.

For a linear mixed model this is a view of the last column of the Xymat field. For a generalized linear mixed model this is the m.resp.y field. In either case it should be copied if it is to be modified.

source
StatsAPI.vcovMethod
vcov(m::MixedModel; corr=false)

Returns the variance-covariance matrix of the fixed effects. If corr is true, the correlation of the fixed effects is returned instead.

source
Tables.columntableMethod
columntable(s::OptSummary, [stack::Bool=false])

Return s.fitlog as a Tables.columntable.

When stack is false (the default), there will be 3 columns in the result:

  • iter: the sample number
  • objective: the value of the objective at that sample
  • θ: the parameter vector at that sample

(The term sample here refers to the fact that when the thin argument to the fit or refit! call is greater than 1 only a subset of the iterations have results recorded.)

When stack is true, there will be 4 columns: iter, objective, par, and value where value is the stacked contents of the θ vectors (the equivalent of vcat(θ...)) and par is a vector of parameter numbers.

source
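A sketch of inspecting the fit log with the dyestuff dataset, assuming the thin keyword is forwarded from fit to fit!; thin=1 records every objective evaluation, so the resulting column table has one row per iteration:

julia> using MixedModels, Tables

julia> m = fit(MixedModel, @formula(yield ~ 1 + (1 | batch)), MixedModels.dataset(:dyestuff); thin=1);

julia> tbl = Tables.columntable(m.optsum);

julia> tbl.objective[end]   # the final (minimized) objective value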

Methods from StatsAPI.jl, StatsBase.jl, StatsModels.jl and GLM.jl

aic
aicc
bic
coef
⋮
simulate
simulate!
stderror!
varest

Non-Exported Functions

Note that unless discussed elsewhere in the online documentation, non-exported functions should be considered implementation details.

Base.copyMethod
Base.copy(ReMat{T,S})

Return a shallow copy of ReMat.

A shallow copy shares as much internal storage as possible with the original ReMat. Only the vector λ and the scratch matrix are copied.

source
Base.sizeMethod
size(m::MixedModel)

Returns the size of a mixed model as a tuple of length four: the number of observations, the number of (non-singular) fixed-effects parameters, the number of conditional modes (random effects), and the number of grouping variables.

source
GLM.wrkresp!Method
GLM.wrkresp!(v::AbstractVector{T}, resp::GLM.GlmResp{AbstractVector{T}})

A copy of a method from GLM that generalizes the types in the signature

source
MixedModels.LDMethod
LD(A::Diagonal)
 LD(A::HBlikDiag)
LD(A::DenseMatrix)

Return log(det(tril(A))) evaluated in place.

source
MixedModels.adjAMethod
adjA(refs::AbstractVector, z::AbstractMatrix{T})

Returns the adjoint of an ReMat as a SparseMatrixCSC{T,Int32}

source
MixedModels.allparsMethod
allpars(bsamp::MixedModelFitCollection)

Return a tidy (column)table with the parameter estimates spread into columns of iter, type, group, name and value.

Warning

Currently, correlations that are systematically zero are included in the result. This may change in a future release without being considered a breaking change.

source
MixedModels.amalgamateMethod
amalgamate(reterms::Vector{AbstractReMat})

Combine multiple ReMat with the same grouping variable into a single object.

source
MixedModels.blockMethod
block(i, j)

Return the linear index of the [i,j] position ("block") in the row-major packed lower triangle.

Use the row-major ordering in this case because the result depends only on i and j, not on the overall size of the array.

When i == j the value is the same as kp1choose2(i).

source
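A quick check of this ordering (these are internal helpers, so the calls are qualified; the values follow directly from the definition above):

julia> using MixedModels

julia> MixedModels.block(1, 1), MixedModels.block(2, 1), MixedModels.block(2, 2)   # (1, 2, 3)

julia> MixedModels.block(2, 2) == MixedModels.kp1choose2(2)                        # true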
MixedModels.cholUnblocked!Function
cholUnblocked!(A, Val{:L})

Overwrite the lower triangle of A with its lower Cholesky factor.

The name is borrowed from [https://github.com/andreasnoack/LinearAlgebra.jl] because these are part of the inner calculations in a blocked Cholesky factorization.

source
MixedModels.corrmatMethod
corrmat(A::ReMat)

Return the estimated correlation matrix for A. The diagonal elements are 1 and the off-diagonal elements are the correlations between those random effect terms

Example

Note that trailing digits may vary slightly depending on the local platform.

julia> using MixedModels
 
 julia> mod = fit(MixedModel,
                  @formula(rt_trunc ~ 1 + spkr + prec + load + (1 + spkr + prec | subj)),
                 MixedModels.dataset(:kb07));
⋮
julia> MixedModels.corrmat(mod.reterms[1])
 3×3 LinearAlgebra.Symmetric{Float64,Array{Float64,2}}:
   1.0        0.214816   -0.982948
   0.214816   1.0        -0.0315607
 -0.982948  -0.0315607   1.0
source
MixedModels.cpadMethod
cpad(s::AbstractString, n::Integer)

Return a string of length n containing s in the center (more-or-less).

source
MixedModels.densifyFunction
densify(S::SparseMatrix, threshold=0.1)

Convert sparse S to Diagonal if S is diagonal or to Array(S) if the proportion of nonzeros exceeds threshold.

source
MixedModels.deviance!Function
deviance!(m::GeneralizedLinearMixedModel, nAGQ=1)

Update m.η, m.μ, etc., install the working response and working weights in m.LMM, update m.LMM.A and m.LMM.R, then evaluate the deviance.

source
MixedModels.feLMethod
feL(m::LinearMixedModel)

Return the lower Cholesky factor for the fixed-effects parameters, as a LowerTriangular p × p matrix.

source
MixedModels.fixef!Method
fixef!(v::Vector{T}, m::MixedModel{T})

Overwrite v with the pivoted fixed-effects coefficients of model m

For full-rank models the length of v must be the rank of X. For rank-deficient models the length of v can be the rank of X or the number of columns of X. In the latter case the calculated coefficients are padded with -0.0 out to the number of columns.

source
MixedModels.getθ!Method
getθ!(v::AbstractVector{T}, A::ReMat{T}) where {T}

Overwrite v with the elements of the blocks in the lower triangle of A.Λ (column-major ordering)

source
MixedModels.isconstantMethod
isconstant(x::Array)
isconstant(x::Tuple)

Are all elements of the iterator the same? That is, is it constant?

source
MixedModels.isnestedMethod
isnested(A::ReMat, B::ReMat)

Is the grouping factor for A nested in the grouping factor for B?

That is, does each value of A occur with just one value of B?

source
MixedModels.kchoose2Method
kchoose2(k)

The binomial coefficient k choose 2 which is the number of elements in the packed form of the strict lower triangle of a matrix.

source
MixedModels.kp1choose2Method
kp1choose2(k)

The binomial coefficient k+1 choose 2 which is the number of elements in the packed form of the lower triangle of a matrix.

source
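For example (simple integer identities, with the internal functions qualified):

julia> using MixedModels

julia> MixedModels.kchoose2(4)     # 4*3/2 = 6 elements in the strict lower triangle

julia> MixedModels.kp1choose2(4)   # 5*4/2 = 10 elements in the lower triangle including the diagonal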
MixedModels.likelihoodratiotestMethod
likelihoodratiotest(m::MixedModel...)
 likelihoodratiotest(m0::LinearModel, m::MixedModel...)
 likelihoodratiotest(m0::GeneralizedLinearModel, m::MixedModel...)
 likelihoodratiotest(m0::TableRegressionModel{LinearModel}, m::MixedModel...)
likelihoodratiotest(m0::TableRegressionModel{GeneralizedLinearModel}, m::MixedModel...)

Likelihood ratio test applied to a set of nested models.

Note

The nesting of the models is not checked. It is incumbent on the user to check this. This differs from StatsModels.lrtest as nesting in mixed models, especially in the random effects specification, may be non-obvious.

Note

For comparisons between mixed and non-mixed models, the deviance for the non-mixed model is taken to be -2 log likelihood, i.e. omitting the additive constant for the fully saturated model. This is in line with the computation of the deviance for mixed models.

This functionality may be deprecated in the future in favor of StatsModels.lrtest.

source
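A sketch comparing two nested fits to the sleepstudy data (the user remains responsible for verifying the nesting); the call is qualified because the function is listed here among the non-exported names:

julia> using MixedModels

julia> dat = MixedModels.dataset(:sleepstudy);

julia> m0 = fit(MixedModel, @formula(reaction ~ 1 + days + (1 | subj)), dat);

julia> m1 = fit(MixedModel, @formula(reaction ~ 1 + days + (1 + days | subj)), dat);

julia> MixedModels.likelihoodratiotest(m0, m1)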
MixedModels.nranefMethod
nranef(A::ReMat)

Return the number of random effects represented by A. Zero unless A is an ReMat.

source
MixedModels.nθMethod
nθ(A::ReMat)

Return the number of free parameters in the relative covariance matrix λ

source
MixedModels.optsumjMethod
optsumj(os::OptSummary, j::Integer)

Return an OptSummary with the j'th component of the parameter omitted.

os.final with its j'th component omitted is used as the initial parameter.

source
MixedModels.parsejMethod
parsej(sym::Symbol)

Return the index from symbol names like :θ1, :θ01, etc.

Note

This method is internal.

source
MixedModels.pivotMethod
pivot(m::MixedModel)
pivot(A::FeTerm)

Return the pivot associated with the FeTerm.

source
MixedModels.profileσs!Method
 profileσs!(val::NamedTuple, tc::TableColumns{T}; nzlb=1.0e-8) where {T}

Profile the variance components.

Note

This method is called by profile and currently considered internal. As such, it may change or disappear in a future release without being considered breaking.

source
MixedModels.ranef!Method
ranef!(v::Vector{Matrix{T}}, m::MixedModel{T}, β, uscale::Bool) where {T}

Overwrite v with the conditional modes of the random effects for m.

If uscale is true the random effects are on the spherical (i.e. u) scale, otherwise on the original scale

β is the truncated, pivoted coefficient vector.

source
MixedModels.rankUpdate!Function
rankUpdate!(C, A)
 rankUpdate!(C, A, α)
rankUpdate!(C, A, α, β)

A rank-k update, C := αA'A + βC, of a Hermitian (Symmetric) matrix.

α and β both default to 1.0. When α is -1.0 this is a downdate operation. The name rankUpdate! is borrowed from [https://github.com/andreasnoack/LinearAlgebra.jl]

source
MixedModels.rePCAMethod
rePCA(m::LinearMixedModel; corr::Bool=true)

Return a named tuple of the normalized cumulative variance of a principal components analysis of the random effects covariance matrices or correlation matrices when corr is true.

The normalized cumulative variance is the proportion of the variance for the first principal component, the first two principal components, etc. The last element is always 1.0 representing the complete proportion of the variance.

source
MixedModels.reevaluateAend!Method
reevaluateAend!(m::LinearMixedModel)

Reevaluate the last column of m.A from m.Xymat. This function should be called after updating the response.

source
MixedModels.refitσ!Method
refitσ!(m::LinearMixedModel{T}, σ::T, tc::TableColumns{T}, obj::T, neg::Bool)

Refit the model m with the given value of σ and return a NamedTuple of information about the fit.

obj and neg allow for conversion of the objective to the ζ scale and tc is used to return a NamedTuple

Note

This method is internal and may change or disappear in a future release without being considered breaking.

source
MixedModels.schematizeFunction
schematize(f, tbl, contrasts::Dict{Symbol}, Mod=LinearMixedModel)

Find and apply the schema for f in a way that automatically uses Grouping() contrasts when appropriate.

Warn

This is an internal method.

source
MixedModels.sdcorrMethod
sdcorr(A::AbstractMatrix{T}) where {T}

Transform a square matrix A with positive diagonals into an NTuple{size(A,1), T} of standard deviations and a tuple of correlations.

A is assumed to be symmetric and only the lower triangle is used. The order of the correlations is row-major ordering of the lower triangle (or, equivalently, column-major in the upper triangle).

source
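The arithmetic involved, worked on a small made-up covariance matrix (this sketch does not call sdcorr itself):

using LinearAlgebra
Σ = [4.0  0.6
     0.6  0.25]                   # 2×2 covariance matrix with positive diagonal
σs = sqrt.(diag(Σ))               # standard deviations: [2.0, 0.5]
ρ21 = Σ[2, 1] / (σs[1] * σs[2])   # correlation of the (2,1) pair: 0.6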
MixedModels.setβθ!Method
setβθ!(m::GeneralizedLinearMixedModel, v)

Set the parameter vector, :βθ, of m to v.

βθ is the concatenation of the fixed-effects, β, and the covariance parameter, θ.

source
MixedModels.ssqdenomMethod
ssqdenom(m::LinearMixedModel)

Return the denominator for penalized sums-of-squares.

For MLE, this value is the number of observations. For REML, this value is the number of observations minus the rank of the fixed-effects matrix. The difference is analogous to the use of n or n-1 in the denominator when calculating the variance.

source
MixedModels.statsrankMethod
statsrank(x::Matrix{T}, ranktol::Real=1e-8) where {T<:AbstractFloat}

Return the numerical column rank and a pivot vector.

The rank is determined from the absolute values of the diagonal of R from a pivoted QR decomposition, relative to the first (and, hence, largest) element of this vector.

In the full-rank case the pivot vector is collect(axes(x, 2)).

source
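A sketch of the underlying idea using a pivoted QR factorization from LinearAlgebra (the matrix and tolerance here are illustrative):

using LinearAlgebra
x = [1.0 2.0 3.0
     2.0 4.0 6.0
     1.0 1.0 2.0]                     # third column = first + second, so rank 2
F = qr(x, ColumnNorm())               # pivoted QR
d = abs.(diag(F.R))                   # |R[i,i]|, largest first because of the pivoting
rnk = sum(d .>= 1e-8 * first(d))      # numerical column rank: 2
piv = F.p                             # pivot vector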
MixedModels.tidyβMethod
tidyβ(bsamp::MixedModelFitCollection)

Return a tidy (row)table with the parameter estimates spread into columns of iter, coefname and β

source
MixedModels.tidyσsMethod
tidyσs(bsamp::MixedModelFitCollection)

Return a tidy (row)table with the estimates of the variance components (on the standard deviation scale) spread into columns of iter, group, column and σ.

source
MixedModels.unscaledre!Function
unscaledre!(y::AbstractVector{T}, M::ReMat{T}) where {T}
-unscaledre!(rng::AbstractRNG, y::AbstractVector{T}, M::ReMat{T}) where {T}

Add unscaled random effects simulated from M to y.

These are unscaled random effects (i.e. they incorporate λ but not σ) because the scaling is done after the per-observation noise is added as a standard normal.

source
MixedModels.updateA!Method
updateA!(m::LinearMixedModel)

Update the cross-product array, m.A, from m.reterms and m.Xymat

This is usually done after a reweight! operation.

source
MixedModels.updateη!Method
updateη!(m::GeneralizedLinearMixedModel)

Update the linear predictor, m.η, from the offset and the B-scale random effects.

source
MixedModels.σvals!Method
σvals!(v::AbstractVector, A::ReMat, sc::Number)

Overwrite v with the standard deviations of the random effects associated with A

source
MixedModels.σρ!Method
σρ!(v, t, σ)

push! σ times the row lengths (σs) and the inner products of normalized rows (ρs) of t onto v

source
StatsModels.isnestedMethod
isnested(m1::MixedModel, m2::MixedModel; atol::Real=0.0)

Indicate whether model m1 is nested in model m2, i.e. whether m1 can be obtained by constraining some parameters in m2. Both models must have been fitted on the same data. This check is conservative for MixedModels and may reject nested models with different parameterizations as being non nested.

source
diff --git a/dev/benchmarks/index.html b/dev/benchmarks/index.html
@@ -17,4 +17,4 @@
 Load Avg: 1.4091796875 2.07080078125 1.63037109375
 WORD_SIZE: 64
 LIBM: libopenlibm
- LLVM: libLLVM-6.0.0 (ORCJIT, ivybridge)
+ LLVM: libLLVM-6.0.0 (ORCJIT, ivybridge)
diff --git a/dev/bootstrap/bf576c01.svg b/dev/bootstrap/04a294f1.svg
[renamed SVG figure: "Parametric bootstrap samples of correlation of random effects"]
diff --git a/dev/bootstrap/d17b7df1.svg b/dev/bootstrap/1f620d14.svg
[renamed SVG figure: "Parametric bootstrap estimates of σ₁"]
diff --git a/dev/bootstrap/2f7df49d.svg b/dev/bootstrap/2f350b57.svg
[renamed SVG figure: "Parametric bootstrap estimates of σ₁"]
diff --git a/dev/bootstrap/9b3e42d5.svg b/dev/bootstrap/34e2ad89.svg
[renamed SVG figure: "Parametric bootstrap estimates of β₁"]
diff --git a/dev/bootstrap/cd7266c3.svg b/dev/bootstrap/709de4de.svg
[renamed SVG figure: "Parametric bootstrap estimates of σ"]
diff --git a/dev/bootstrap/index.html b/dev/bootstrap/index.html
@@ -1,6 +1,6 @@
Parametric bootstrap for mixed-effects models · MixedModels

Parametric bootstrap for mixed-effects models

Julia is well-suited to implementing bootstrapping and other simulation-based methods for statistical models. The parametricbootstrap function in the MixedModels package provides an efficient parametric bootstrap for mixed-effects models.

MixedModels.parametricbootstrapFunction
parametricbootstrap([rng::AbstractRNG], nsamp::Integer, m::MixedModel{T}, ftype=T;
-    β = fixef(m), σ = m.σ, θ = m.θ, progress=true, optsum_overrides=(;))

Perform nsamp parametric bootstrap replication fits of m, returning a MixedModelBootstrap.

The default random number generator is Random.GLOBAL_RNG.

ftype can be used to store the computed bootstrap values in a lower precision. ftype is not a named argument because named arguments are not used in method dispatch and thus specialization. In other words, having ftype as a positional argument has some potential performance benefits.

Keyword Arguments

  • β, σ, and θ are the values of m's parameters for simulating the responses.
  • σ is only valid for LinearMixedModel and GeneralizedLinearMixedModel for

families with a dispersion parameter.

  • progress controls whether the progress bar is shown. Note that the progress

bar is automatically disabled for non-interactive (i.e. logging) contexts.

  • optsum_overrides is used to override values of OptSummary in the models

fit during the bootstrapping process. For example, optsum_overrides=(;ftol_rel=1e-08) reduces the convergence criterion, which can greatly speed up the bootstrap fits. Taking advantage of this speed up to increase n can often lead to better estimates of coverage intervals.

Note

All coefficients are bootstrapped. In the rank deficient case, the inestimable coefficients are treated as -0.0 in the simulations underlying the bootstrap, which will generally result in their estimate from the simulated data also being inestimable and thus set to -0.0. However, this behavior may change in future releases to explicitly drop the extraneous columns before simulation and thus not include their estimates in the bootstrap result.

source

The parametric bootstrap

Bootstrapping is a family of procedures for generating sample values of a statistic, allowing for visualization of the distribution of the statistic or for inference from this sample of values.

A parametric bootstrap is used with a parametric model, m, that has been fit to data. The procedure is to simulate n response vectors from m using the estimated parameter values and refit m to these responses in turn, accumulating the statistics of interest at each iteration.

The parameters of a LinearMixedModel object are the fixed-effects parameters, β, the standard deviation, σ, of the per-observation noise, and the covariance parameter, θ, that defines the variance-covariance matrices of the random effects.

For example, a simple linear mixed-effects model for the Dyestuff data in the lme4 package for R is fit by

using DataFrames
 using Gadfly          # plotting package
 using MixedModels
 using Random
dyestuff = MixedModels.dataset(:dyestuff)
@@ -27,9 +27,9 @@
  15 │ 336.186  1536.17  64.0205  15.243    0.238096
  16 │ 329.468  1526.42  58.6856  0.0       0.0
  17 │ 320.086  1517.67  43.218   35.9663   0.832207
- ⋮  │    ⋮        ⋮        ⋮        ⋮          ⋮
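
The table shown above is the result of a parametric bootstrap of this model; a sketch of the kind of call that produces it (the seed and the number of replicates below are placeholders, not the values used to build this page):

fm1 = fit(MixedModel, @formula(yield ~ 1 + (1 | batch)), dyestuff)
rng = MersenneTwister(1234321)                 # placeholder seed
samp = parametricbootstrap(rng, 10_000, fm1)   # placeholder replicate count
tbl = samp.tbl                                 # row table of per-replicate estimates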

A density plot of the estimates of σ, the residual standard deviation, can be created as

plot(x = tbl.σ, Geom.density, Guide.xlabel("Parametric bootstrap estimates of σ"))
Example block output

or, for the intercept parameter

plot(x = tbl.β1, Geom.density, Guide.xlabel("Parametric bootstrap estimates of β₁"))
Example block output

A density plot of the estimates of the standard deviation of the random effects is obtained as

plot(x = tbl.σ1, Geom.density,
-    Guide.xlabel("Parametric bootstrap estimates of σ₁"))
Example block output

Notice that this density plot has a spike, or mode, at zero. Although this mode appears to be diffuse, this is an artifact of the way that density plots are created. In fact, it is a pulse, as can be seen from a histogram.

plot(x = tbl.σ1, Geom.histogram,
-    Guide.xlabel("Parametric bootstrap estimates of σ₁"))
Example block output

The bootstrap sample can be used to generate intervals that cover a certain percentage of the bootstrapped values. We refer to these as "coverage intervals", similar to a confidence interval. The shortest such intervals, obtained with the shortestcovint extractor, correspond to a highest posterior density interval in Bayesian inference.

MixedModels.shortestcovintFunction
shortestcovint(v, level = 0.95)

Return the shortest interval containing level proportion of the values of v

source
shortestcovint(bsamp::MixedModelFitCollection, level = 0.95)

Return the shortest interval containing level proportion for each parameter from bsamp.allpars.

Warning

Currently, correlations that are systematically zero are included in the result. This may change in a future release without being considered a breaking change.

source
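The vector method can also be applied to a single column of the bootstrap table directly, e.g. (using the tbl object from the bootstrap above):

shortestcovint(tbl.σ)         # shortest 95% coverage interval for the residual σ
shortestcovint(tbl.σ, 0.9)    # a 90% interval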

We generate these directly from the original bootstrap object:

Table(shortestcovint(samp))
Table with 5 columns and 3 rows:
      type  group     names        lower    upper
    ┌──────────────────────────────────────────────
  1 │ β     missing   (Intercept)  1492.54  1561.34
@@ -67,10 +67,10 @@
  (type = "σ", group = "subj", names = "days", lower = 3.024936712829588, upper = 7.675241393755667)
  (type = "ρ", group = "subj", names = "(Intercept), days", lower = -0.40535765760216863, upper = 1.0)
  (type = "σ", group = "residual", names = missing, lower = 22.656763019290786, upper = 28.43122140376578)

A histogram of the estimated correlations from the bootstrap sample has a spike at +1.

plot(x = tbl2.ρ1, Geom.histogram,
-    Guide.xlabel("Parametric bootstrap samples of correlation of random effects"))
Example block output

or, as a count,

count(tbl2.ρ1 .≈ 1)
306

Close examination of the histogram shows a few values of -1.

count(tbl2.ρ1 .≈ -1)
2

Furthermore there are even a few cases where the estimate of the standard deviation of the random effect for the intercept is zero.

count(tbl2.σ1 .≈ 0)
5

There is a general condition to check for singularity of an estimated covariance matrix or matrices in a bootstrap sample. The parameter optimized in the estimation is θ, the relative covariance parameter. Some of the elements of this parameter vector must be non-negative and, when one of these components is approximately zero, one of the covariance matrices will be singular.

The issingular method for a MixedModel object tests whether a parameter vector θ corresponds to a boundary or singular fit.

This operation is encapsulated in a method for the issingular function.

count(issingular(samp2))
313

Reduced Precision Bootstrap

parametricbootstrap accepts an optional keyword argument optsum_overrides, which can be used to override the convergence criteria for bootstrap replicates. One possibility is setting ftol_rel=1e-8, i.e., considering the model converged when the relative change in the objective between optimizer iterations is smaller than 0.00000001. This threshold corresponds approximately to the precision from treating the value of the objective as a single precision (Float32) number, while not changing the precision of the intermediate computations. The resultant loss in precision will generally be smaller than the variation that the bootstrap captures, but can greatly speed up the fitting process for each replicate, especially for large models. More directly, relaxing the convergence criterion lowers the quality of each individual replicate, but this may be more than compensated for by the ability to fit a much larger number of replicates in the same time.

t = @timed parametricbootstrap(MersenneTwister(42), 1000, m2; progress=false)
-t.time
0.703007214
optsum_overrides = (; ftol_rel=1e-8)
 t = @timed parametricbootstrap(MersenneTwister(42), 1000, m2; optsum_overrides, progress=false)
-t.time
0.638842517

+t.time
0.634287233

Distributed Computing and the Bootstrap

Earlier versions of MixedModels.jl supported a multi-threaded bootstrap via the use_threads keyword argument. However, with improved BLAS multithreading, the Julia-level threads often wound up competing with the BLAS threads, leading to no improvement or even a worsening of performance when use_threads=true. Nonetheless, the bootstrap is a classic example of an embarrassingly parallel problem and so we provide a few convenience methods for combining results computed separately. In particular, there are vcat and an optimized reduce(::typeof(vcat)) methods for MixedModelBootstrap objects. For computers with many processors (as opposed to a single processor with several cores) or for computing clusters, these provide a convenient way to split the computation across nodes.

using Distributed
 # you already have 1 proc by default, so add the number of additional cores with `addprocs`
 # you need at least as many RNGs as cores you want to use in parallel
 # but you shouldn't use all of your cores because nested within this
@@ -112,4 +112,4 @@
  ρ1  │ -0.424371  1.0
  σ   │ 22.4485    28.2745
  σ1  │ 10.6217    32.5576
- σ2  │ 3.18136    7.74161
+ σ2 │ 3.18136 7.74161 diff --git a/dev/constructors/index.html b/dev/constructors/index.html index ad2d7a914..1cbc325f4 100644 --- a/dev/constructors/index.html +++ b/dev/constructors/index.html @@ -39,15 +39,15 @@ ────────────────────────────────────────────────

(If you are new to Julia you may find that this first fit takes an unexpectedly long time, due to Just-In-Time (JIT) compilation of the code. The subsequent calls to such functions are much faster.)

using BenchmarkTools
 dyestuff2 = MixedModels.dataset(:dyestuff2)
 @benchmark fit(MixedModel, $fm, $dyestuff2)
BenchmarkTools.Trial: 10000 samples with 1 evaluation.
- Range (minmax):  140.683 μs 55.155 ms   GC (min … max): 0.00% … 95.67%
- Time  (median):     150.672 μs                GC (median):    0.00%
- Time  (mean ± σ):   163.879 μs ± 743.266 μs   GC (mean ± σ):  6.14% ±  1.35%
+ Range (minmax):  138.839 μs 55.429 ms   GC (min … max): 0.00% … 94.19%
+ Time  (median):     147.350 μs                GC (median):    0.00%
+ Time  (mean ± σ):   166.288 μs ± 904.413 μs   GC (mean ± σ):  8.96% ±  1.65%
 
-      ▄█▇▄▁                                                      
-  ▁▂▃▇█████▆▅▅▅▅▆▆▆▆▇▆▆▅▅▄▃▃▃▂▂▂▂▂▂▂▂▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁ ▃
-  141 μs           Histogram: frequency by time          192 μs <
+      ▅█▇▄▁                                                      
+  ▁▁▃▇█████▆▅▄▄▄▄▅▅▅▆▅▅▅▅▅▄▄▃▃▃▃▂▂▂▂▂▂▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁ ▃
+  139 μs           Histogram: frequency by time          185 μs <
 
- Memory estimate: 53.17 KiB, allocs estimate: 946.

+ Memory estimate: 54.02 KiB, allocs estimate: 956.

By default, the model is fit by maximum likelihood. To use the REML criterion instead, add the optional named argument REML=true to the call to fit

fm1reml = fit(MixedModel, fm, dyestuff, REML=true)
Linear mixed model fit by REML
  yield ~ 1 + (1 | batch)
  REML criterion at convergence: 319.65427684225943
 
@@ -286,7 +286,7 @@
 days: 7       62.0988     10.0922    6.15    <1e-09
 days: 8       79.9777     13.2713    6.03    <1e-08
 days: 9       94.1994     13.1757    7.15    <1e-12
-───────────────────────────────────────────────────

(Notice that the variance component for days: 1 is estimated as zero, so the correlations for this component are undefined and expressed as NaN, not a number.)

An alternative is to force all the levels of days as indicators using fulldummy encoding.

MixedModels.fulldummyFunction
fulldummy(term::CategoricalTerm)

Assign "contrasts" that include all indicator columns (dummy variables) and an intercept column.

This will result in an under-determined set of contrasts, which is not a problem in the random effects because of the regularization, or "shrinkage", of the conditional modes.

The interaction of fulldummy with complex random effects is subtle and complex with numerous potential edge cases. As we discover these edge cases, we will document and determine their behavior. Until such time, please check the model summary to verify that the expansion is working as you expected. If it is not, please report a use case on GitHub.

source
fit(MixedModel, @formula(reaction ~ 1 + days + (1 + fulldummy(days)|subj)), sleepstudy,
     contrasts = Dict(:days => DummyCoding()))
Linear mixed model fit by maximum likelihood
  reaction ~ 1 + days + (1 + days | subj)
    logLik   -2 logLik     AIC       AICc        BIC    
@@ -458,24 +458,24 @@
 mode: want     0.706979     0.151006    4.68    <1e-05
 ──────────────────────────────────────────────────────

The canonical link, which is LogitLink for the Bernoulli distribution, is used if no explicit link is specified.

Note that, in keeping with convention in the GLM package, the distribution family for a binary (i.e. 0/1) response is the Bernoulli distribution. The Binomial distribution is only used when the response is the fraction of trials returning a positive, in which case the number of trials must be specified as the case weights.

Optional arguments to fit

An alternative approach is to create the GeneralizedLinearMixedModel object and then call fit! on it. The optional arguments fast and/or nAGQ can be passed to the optimization process via both fit and fit! (i.e. these optimization settings are neither used nor recognized when constructing the model).

As the name implies, fast=true provides a faster but somewhat less accurate fit. These fits may suffice for model comparisons.

gm1a = fit(MixedModel, verbaggform, verbagg, Bernoulli(), fast = true)
 deviance(gm1a) - deviance(gm1)
0.33800914130279125
@benchmark fit(MixedModel, $verbaggform, $verbagg, Bernoulli())
BenchmarkTools.Trial: 3 samples with 1 evaluation.
- Range (minmax):  2.086 s 2.096 s   GC (min … max): 0.00% … 0.00%
- Time  (median):     2.087 s              GC (median):    0.00%
- Time  (mean ± σ):   2.090 s ± 5.187 ms   GC (mean ± σ):  0.00% ± 0.00%
+ Range (minmax):  2.076 s 2.092 s   GC (min … max): 0.00% … 0.00%
+ Time  (median):     2.082 s              GC (median):    0.00%
+ Time  (mean ± σ):   2.083 s ± 7.908 ms   GC (mean ± σ):  0.00% ± 0.00%
 
-                                    █  
-  ▁▁▁▁▁▁▁█▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁█ ▁
-  2.09 s        Histogram: frequency by time         2.1 s <
+                               █  
+  ▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁█▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁█ ▁
+  2.08 s        Histogram: frequency by time        2.09 s <
 
- Memory estimate: 23.73 MiB, allocs estimate: 451843.
@benchmark fit(MixedModel, $verbaggform, $verbagg, Bernoulli(), fast = true)
BenchmarkTools.Trial: 28 samples with 1 evaluation.
- Range (minmax):  175.455 ms201.059 ms   GC (min … max): 0.00% … 0.00%
- Time  (median):     179.802 ms                GC (median):    0.00%
- Time  (mean ± σ):   180.738 ms ±   5.597 ms   GC (mean ± σ):  0.23% ± 1.17%
+ Memory estimate: 23.75 MiB, allocs estimate: 451979.
@benchmark fit(MixedModel, $verbaggform, $verbagg, Bernoulli(), fast = true)
BenchmarkTools.Trial: 28 samples with 1 evaluation.
+ Range (minmax):  174.951 ms190.212 ms   GC (min … max): 0.00% … 6.12%
+ Time  (median):     179.028 ms                GC (median):    0.00%
+ Time  (mean ± σ):   179.534 ms ±   4.116 ms   GC (mean ± σ):  0.23% ± 1.16%
 
-  ▄▄      ▁                                                   
-  ██▆▆▁▆▁▆█▆▁█▁▁▁▁▆▁▆▁▁▁▁▁▆▁▁▁▁▁▁▆▁▁▁▆▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▆ ▁
-  175 ms           Histogram: frequency by time          201 ms <
+  █ ▃          ▃    ▃                                          
+  █▁█▇▁▇▁▁▇▁▁▇▇█▁▇▇█▁▁▁▇▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▇▁▁▁▁▁▁▁▁▁▇▁▁▁▁▁▇▁▇ ▁
+  175 ms           Histogram: frequency by time          190 ms <
 
- Memory estimate: 9.91 MiB, allocs estimate: 88325.

+ Memory estimate: 9.93 MiB, allocs estimate: 88485.

The optional argument nAGQ=k causes evaluation of the deviance function to use a k-point adaptive Gauss-Hermite quadrature rule. This method only applies to models with a single, simple, scalar random-effects term, such as

contraception = MixedModels.dataset(:contra)
 contraform = @formula(use ~ 1 + age + abs2(age) + livch + urban + (1|dist));
 bernoulli = Bernoulli()
 deviances = Dict{Symbol,Float64}()
@@ -492,7 +492,7 @@
   :fast      => 2372.78
   :nAGQ      => 2372.46
   :nAGQ_fast => 2372.51

Extractor functions

LinearMixedModel and GeneralizedLinearMixedModel are subtypes of StatsAPI.RegressionModel which, in turn, is a subtype of StatsBase.StatisticalModel. Many of the generic extractors defined in the StatsBase package have methods for these models.

Model-fit statistics

The statistics describing the quality of the model fit include

StatsAPI.loglikelihoodFunction
loglikelihood(model::StatisticalModel)
-loglikelihood(model::StatisticalModel, observation)

Return the log-likelihood of the model.

With an observation argument, return the contribution of observation to the log-likelihood of model.

If observation is a Colon, return a vector of each observation's contribution to the log-likelihood of the model. In other words, this is the vector of the pointwise log-likelihood contributions.

In general, sum(loglikelihood(model, :)) == loglikelihood(model).

source
StatsAPI.aicFunction
aic(model::StatisticalModel)

Akaike's Information Criterion, defined as $-2 \log L + 2k$, with $L$ the likelihood of the model, and k its number of consumed degrees of freedom (as returned by dof).

source
StatsAPI.bicFunction
bic(model::StatisticalModel)

Bayesian Information Criterion, defined as $-2 \log L + k \log n$, with $L$ the likelihood of the model, $k$ its number of consumed degrees of freedom (as returned by dof), and $n$ the number of observations (as returned by nobs).

source
StatsAPI.dofFunction
dof(model::StatisticalModel)

Return the number of degrees of freedom consumed in the model, including when applicable the intercept and the distribution's dispersion parameter.

source
StatsAPI.nobsFunction
nobs(model::StatisticalModel)

Return the number of independent observations on which the model was fitted. Be careful when using this information, as the definition of an independent observation may vary depending on the model, on the format used to pass the data, on the sampling plan (if specified), etc.

source
loglikelihood(fm1)
-163.6635299405715
aic(fm1)
333.327059881143
bic(fm1)
337.5306520261295
dof(fm1)   # 1 fixed effect, 2 variances
3
nobs(fm1)  # 30 observations
30
loglikelihood(gm1)
-4067.916431282346
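
These definitions can be checked directly against the extractor values shown above, e.g. for fm1:

aic(fm1) ≈ -2 * loglikelihood(fm1) + 2 * dof(fm1)                # true
bic(fm1) ≈ -2 * loglikelihood(fm1) + dof(fm1) * log(nobs(fm1))   # true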

In general the deviance of a statistical model fit is negative twice the log-likelihood adjusting for the saturated model.

StatsAPI.devianceMethod
deviance(m::GeneralizedLinearMixedModel{T}, nAGQ=1)::T where {T}

Return the deviance of m evaluated by the Laplace approximation (nAGQ=1) or nAGQ-point adaptive Gauss-Hermite quadrature.

If the distribution D does not have a scale parameter the Laplace approximation is the squared length of the conditional modes, $u$, plus the determinant of $Λ'Z'WZΛ + I$, plus the sum of the squared deviance residuals.

source

Because it is not clear what the saturated model corresponding to a particular LinearMixedModel should be, negative twice the log-likelihood is called the objective.

This value is also accessible as the deviance but the user should bear in mind that this doesn't have all the properties of a deviance which is corrected for the saturated model. For example, it is not necessarily non-negative.

objective(fm1)
327.327059881143
deviance(fm1)
327.327059881143

The value optimized when fitting a GeneralizedLinearMixedModel is the Laplace approximation to the deviance or an adaptive Gauss-Hermite evaluation.

MixedModels.deviance!Function
deviance!(m::GeneralizedLinearMixedModel, nAGQ=1)

Update m.η, m.μ, etc., install the working response and working weights in m.LMM, update m.LMM.A and m.LMM.R, then evaluate the deviance.

source
MixedModels.deviance!(gm1)
8135.832862564683

Fixed-effects parameter estimates

The coef and fixef extractors both return the maximum likelihood estimates of the fixed-effects coefficients. They differ in their behavior in the rank-deficient case. The associated coefnames and fixefnames return the corresponding coefficient names.

StatsAPI.coefFunction
coef(model::StatisticalModel)

Return the coefficients of the model.

source
MixedModels.fixefFunction
fixef(m::MixedModel)

Return the fixed-effects parameter vector estimate of m.

In the rank-deficient case the truncated parameter vector, of length rank(m) is returned. This is unlike coef which always returns a vector whose length matches the number of columns in X.

source
MixedModels.fixefnamesFunction
fixefnames(m::MixedModel)

Return a (permuted and truncated in the rank-deficient case) vector of coefficient names.

source
coef(fm1)
 coefnames(fm1)
1-element Vector{String}:
  "(Intercept)"
fixef(fm1)
 fixefnames(fm1)
1-element Vector{String}:
@@ -538,9 +538,9 @@
 subj (Intercept)  1.793543 1.339232
 item (Intercept)  0.117147 0.342267
 
-

Individual components are returned by other extractors

MixedModels.varestFunction
varest(m::LinearMixedModel)

Returns the estimate of σ², the variance of the conditional distribution of Y given B.

source
varest(m::GeneralizedLinearMixedModel)

Returns the estimate of ϕ², the variance of the conditional distribution of Y given B.

For models with a dispersion parameter ϕ, this is simply ϕ². For models without a dispersion parameter, this value is missing. This differs from dispersion, which returns 1 for models without a dispersion parameter.

For Gaussian models, this parameter is often called σ².

source
MixedModels.sdestFunction
sdest(m::LinearMixedModel)

Return the estimate of σ, the standard deviation of the per-observation noise.

source
sdest(m::GeneralizedLinearMixedModel)

Return the estimate of the dispersion, i.e. the standard deviation of the per-observation noise.

For models with a dispersion parameter ϕ, this is simply ϕ. For models without a dispersion parameter, this value is missing. This differs from dispersion, which returns 1 for models without a dispersion parameter.

For Gaussian models, this parameter is often called σ.

source
varest(fm2)
654.9414514334794
sdest(fm2)
25.591823917678852
fm2.σ
25.591823917678852

Conditional modes of the random effects

The ranef extractor

MixedModels.ranefFunction
ranef(m::LinearMixedModel; uscale=false)

Return, as a Vector{Matrix{T}}, the conditional modes of the random effects in model m.

If uscale is true the random effects are on the spherical (i.e. u) scale, otherwise on the original scale.

For a named variant, see raneftables.

source
ranef(fm1)
1-element Vector{Matrix{Float64}}:
  [-16.628221011733622 0.36951602248394705 … 53.57982326003441 -42.49434258554293]
fm1.b
1-element Vector{Matrix{Float64}}:
- [-16.628221011733622 0.36951602248394705 … 53.57982326003441 -42.49434258554293]

returns the conditional modes of the random effects given the observed data. That is, these are the values that maximize the conditional density of the random effects given the observed data. For a LinearMixedModel these are also the conditional means.

These are sometimes called the best linear unbiased predictors or BLUPs but that name is not particularly meaningful.

At a superficial level these can be considered as the "estimates" of the random effects, with a bit of hand waving, but pursuing this analogy too far usually results in confusion.

To obtain tables associating the values of the conditional modes with the levels of the grouping factor, use

MixedModels.raneftablesFunction
raneftables(m::MixedModel; uscale = false)

Return the conditional means of the random effects as a NamedTuple of Tables.jl-compliant tables.

Note

The API guarantee is only that the NamedTuple contains Tables.jl tables and not on the particular concrete type of each table.

source

as in

DataFrame(only(raneftables(fm1)))
6×2 DataFrame
Rowbatch(Intercept)
StringFloat64
1A-16.6282
2B0.369516
3C26.9747
4D-21.8014
5E53.5798
6F-42.4943

The corresponding conditional variances are returned by

MixedModels.condVarFunction
condVar(m::LinearMixedModel)

Return the conditional variance matrices of the random effects.

The random effects are returned by ranef as a vector of length k, where k is the number of random effects terms. The ith element is a matrix of size vᵢ × ℓᵢ where vᵢ is the size of the vector-valued random effects for each of the ℓᵢ levels of the grouping factor. Technically those values are the modes of the conditional distribution of the random effects given the observed data.

This function returns an array of k three dimensional arrays, where the ith array is of size vᵢ × vᵢ × ℓᵢ. These are the diagonal blocks from the conditional variance-covariance matrix,

s² Λ(Λ'Z'ZΛ + I)⁻¹Λ'
source
condVar(fm1)
1-element Vector{Array{Float64, 3}}:
  [362.3104675622471;;; 362.3104675622471;;; 362.3104675622471;;; 362.3104675622471;;; 362.3104675622471;;; 362.3104675622471]

Case-wise diagnostics and residual degrees of freedom

The leverage values

StatsAPI.leverageFunction
leverage(model::RegressionModel)

Return the diagonal of the projection matrix of the model.

source
leverage(fm1)
30-element Vector{Float64}:
  0.15650534082766315
  0.15650534082766315
@@ -612,4 +612,4 @@
                Coef.  Std. Error      z  Pr(>|z|)
 ─────────────────────────────────────────────────
 (Intercept)  22.9722    0.808572  28.41    <1e-99
-─────────────────────────────────────────────────
sum(leverage(fm4r))
27.472361767063312
diff --git a/dev/index.html b/dev/index.html index 1ab28e52a..ee96bedcd 100644 --- a/dev/index.html +++ b/dev/index.html @@ -1,2 +1,2 @@ -MixedModels.jl Documentation · MixedModels
+MixedModels.jl Documentation · MixedModels
diff --git a/dev/mime/index.html b/dev/mime/index.html index b4503ee36..f76554d62 100644 --- a/dev/mime/index.html +++ b/dev/mime/index.html @@ -82,4 +82,4 @@ Residual & 712.4038 & & & & & \\ \end{tabular}

This output can also be written directly to file:

open("model.md", "w") do io
     show(io, MIME("text/markdown"), kbm)
-end
+end
diff --git a/dev/optimization/b1afde18.svg b/dev/optimization/f8a018e8.svg
[renamed SVG figure: objective vs. step for the NelderMead and BOBYQA algorithms]
diff --git a/dev/optimization/index.html b/dev/optimization/index.html
@@ -125,8 +125,8 @@
([0.8166315695343094, 0.011167254457244754, 0.28823768689703533], 1753.6956816568222)

A blocked Cholesky factor

A LinearMixedModel object contains two blocked matrices: a symmetric matrix A (only the lower triangle is stored) and a lower-triangular L, which is the lower Cholesky factor of the updated and inflated A. In versions 4.0.0 and later of MixedModels only the blocks in the lower triangle are stored in A and L, as a Vector{AbstractMatrix{T}}.

BlockDescription shows the structure of the blocks

BlockDescription(fm2)
rows:     subj         fixed     
   36:   BlkDiag    
    3:    Dense         Dense     
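
Because A and L are stored as plain vectors of matrix blocks, the concrete type of each block can be inspected directly. A sketch, assuming fm2 is the sleepstudy model fit earlier (the exact block types will depend on the model):

typeof.(fm2.A)    # lower-triangle blocks of the symmetric matrix A
typeof.(fm2.L)    # corresponding blocks of the lower Cholesky factor L
length(fm2.L)     # k(k+1)/2 blocks for k block-rows; here 3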

Another change in v4.0.0 and later is that the last row of blocks is constructed from m.Xymat, which contains the full-rank model matrix X with the response y concatenated on the right.

The operation of installing a new value of the variance parameters, θ, and updating L

MixedModels.setθ!Function
setθ!(m::LinearMixedModel, v)

Install v as the θ parameters in m.

source
setθ!(bsamp::MixedModelFitCollection, θ::AbstractVector)
setθ!(bsamp::MixedModelFitCollection, i::Integer)

Install the values of the i'th θ value of bsamp.fits in bsamp.λ

source
MixedModels.updateL!Function
updateL!(m::LinearMixedModel)

Update the blocked lower Cholesky factor, m.L, from m.A and m.reterms (used for λ only)

This is the crucial step in evaluating the objective, given a new parameter value.

source

is the central step in evaluating the objective (negative twice the log-likelihood).
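
The following sketch shows how a single objective evaluation could be carried out by hand; it is illustrative only and assumes fm2 is the sleepstudy model fit earlier (note that it overwrites the parameter values stored in fm2):

θtrial = [1.0, 0.0, 1.0]             # the default starting value for this model
MixedModels.setθ!(fm2, θtrial)       # install the candidate θ
MixedModels.updateL!(fm2)            # refactor the blocked lower Cholesky factor
objective(fm2)                       # negative twice the log-likelihood at θtrial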

Typically, the (1,1) block is the largest block in A and L and it has a special form, either Diagonal or UniformBlockDiagonal providing a compact representation and fast matrix multiplication or solutions of linear systems of equations.

Modifying the optimization process

The OptSummary object contains both input and output fields for the optimizer. To modify the optimization process the input fields can be changed after constructing the model but before fitting it.

Suppose, for example, that the user wishes to try a Nelder-Mead optimization method instead of the default BOBYQA (Bounded Optimization BY Quadratic Approximation) method.

fm2nm = LinearMixedModel(@formula(reaction ~ 1+days+(1+days|subj)), sleepstudy);
fm2nm.optsum.optimizer = :LN_NELDERMEAD;
fit!(fm2nm; thin=1)
fm2nm.optsum
Initial parameter vector: [1.0, 0.0, 1.0]
                            repeat(["BOBYQA"], length(bob))],
                    objective=[last.(nm); last.(bob)],
                    step=[1:length(nm); 1:length(bob)])
plot(convdf, x=:step, y=:objective, color=:algorithm, Geom.line)
[Figure: objective value at each optimization step for the Nelder-Mead and BOBYQA algorithms]

Run time can be constrained with maxfeval and maxtime.
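
For example, a sketch assuming these fields are set after constructing the model but before calling fit!, as with the optimizer field above (the particular limits are arbitrary):

fm2nm.optsum.maxfeval = 200    # stop after at most 200 objective evaluations
fm2nm.optsum.maxtime = 5.0     # or after roughly five seconds, whichever comes first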

See the documentation for the NLopt package for details about the various settings.

Convergence to singular covariance matrices

To ensure identifiability of $\Sigma_\theta=\sigma^2\Lambda_\theta \Lambda_\theta'$, the elements of $\theta$ corresponding to diagonal elements of $\Lambda_\theta$ are constrained to be non-negative. For example, in a trivial case of a single, simple, scalar, random-effects term as in fm1, the one-dimensional $\theta$ vector is the ratio of the standard deviation of the random effects to the standard deviation of the response. It happens that $-\theta$ produces the same log-likelihood but, by convention, we define the standard deviation to be the positive square root of the variance. Requiring the diagonal elements of $\Lambda_\theta$ to be non-negative is a generalization of using this positive square root.

If the optimization converges on the boundary of the feasible region, that is if one or more of the diagonal elements of $\Lambda_\theta$ is zero at convergence, the covariance matrix $\Sigma_\theta$ will be singular. This means that there will be linear combinations of random effects that are constant. Usually convergence to a singular covariance matrix is a sign of an over-specified model.

Singularity can be checked with the issingular predicate function.

MixedModels.issingularFunction
issingular(m::MixedModel, θ=m.θ)

Test whether the model m is singular if the parameter vector is θ.

Equality comparisons are used b/c small non-negative θ values are replaced by 0 in fit!.

Note

For GeneralizedLinearMixedModel, the entire parameter vector (including β in the case fast=false) must be specified if the default is not used.

source
issingular(bsamp::MixedModelFitCollection)

Test each bootstrap sample for singularity of the corresponding fit.

Equality comparisons are used b/c small non-negative θ values are replaced by 0 in fit!.

See also issingular(::MixedModel).

source
issingular(fm2)
false
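
As an illustration of a fit that does converge on the boundary, the dyestuff2 data are described elsewhere in this documentation as yielding a zero estimate of the between-batch standard deviation; under that assumption the following sketch should return true:

fm_dye2 = fit(MixedModel, @formula(yield ~ 1 + (1|batch)), MixedModels.dataset(:dyestuff2))
issingular(fm_dye2)    # expected to be true because the estimated θ, and hence Λ, is zero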

Generalized Linear Mixed-Effects Models

In a generalized linear model the responses are modelled as coming from a particular distribution, such as Bernoulli for binary responses or Poisson for responses that represent counts. The scalar distributions of individual responses differ only in their means, which are determined by a linear predictor expression $\eta=\bf X\beta$, where, as before, $\bf X$ is a model matrix derived from the values of covariates and $\beta$ is a vector of coefficients.

The unconstrained components of $\eta$ are mapped to the, possibly constrained, components of the mean response, $\mu$, via a scalar function, $g^{-1}$, applied to each component of $\eta$. For historical reasons, the inverse of this function, taking components of $\mu$ to the corresponding component of $\eta$ is called the link function and the more frequently used map from $\eta$ to $\mu$ is the inverse link.
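
For a Bernoulli response the canonical link is the logit and its inverse is the logistic function. A minimal sketch of the two maps in plain Julia (this is not the package's internal implementation):

logistic(η) = 1 / (1 + exp(-η))    # inverse link g⁻¹: linear predictor η ↦ mean μ ∈ (0, 1)
logit(μ) = log(μ / (1 - μ))        # link g: mean μ ↦ linear predictor η
μ = logistic(0.5)                  # ≈ 0.622
logit(μ)                           # recovers 0.5, up to floating-point error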

A generalized linear mixed-effects model (GLMM) is defined, for the purposes of this package, by

\[\begin{aligned}
(\mathcal{Y} | \mathcal{B}=\bf{b}) &\sim\mathcal{D}(\bf{g^{-1}(X\beta + Z b)},\phi)\\
\mathcal{B}&\sim\mathcal{N}(\bf{0},\Sigma_\theta) .
\end{aligned}\]

where $\mathcal{D}$ indicates the distribution family parameterized by the mean and, when needed, a common scale parameter, $\phi$. (There is no scale parameter for Bernoulli or for Poisson. Specifying the mean completely determines the distribution.)

Distributions.BernoulliType
Bernoulli(p)

A Bernoulli distribution is parameterized by a success rate p, which takes value 1 with probability p and 0 with probability 1-p.

\[P(X = k) = \begin{cases} 1 - p & \quad \text{for } k = 0, \\ p & \quad \text{for } k = 1. \end{cases}\]

params(d)        # Get the parameters, i.e. (p,)
succprob(d)      # Get the success rate, i.e. p
failprob(d)      # Get the failure rate, i.e. 1 - p

External links:

source
Distributions.PoissonType
Poisson(λ)

A Poisson distribution describes the number of independent events occurring within a unit time interval, given the average rate of occurrence λ.

\[P(X = k) = \frac{\lambda^k}{k!} e^{-\lambda}, \quad \text{ for } k = 0,1,2,\ldots.\]

Poisson()        # Poisson distribution with rate parameter 1
Poisson(lambda)       # Poisson distribution with rate parameter lambda

params(d)        # Get the parameters, i.e. (λ,)
mean(d)          # Get the mean arrival rate, i.e. λ

External links:

source

A GeneralizedLinearMixedModel object is generated from a formula, data frame and distribution family.

verbagg = MixedModels.dataset(:verbagg)
const vaform = @formula(r2 ~ 1 + anger + gender + btype + situ + (1|subj) + (1|item));
mdl = GeneralizedLinearMixedModel(vaform, verbagg, Bernoulli());
typeof(mdl)
GeneralizedLinearMixedModel{Float64, Bernoulli{Float64}}

A separate call to fit! can be used to fit the model. This involves optimizing an objective function, the Laplace approximation to the deviance, with respect to the parameters, which are $\beta$, the fixed-effects coefficients, and $\theta$, the covariance parameters. The starting estimate for $\beta$ is determined by fitting a GLM to the fixed-effects part of the formula

mdl.β
6-element Vector{Float64}:
btype: scold  -1.05872     0.256803   -4.12    <1e-04
btype: shout  -2.10528     0.258527   -8.14    <1e-15
situ: self    -1.05558     0.210301   -5.02    <1e-06
─────────────────────────────────────────────────────

This fit provided slightly better results (Laplace approximation to the deviance of 8151.400 versus 8151.583) but took 6 times as long. That is not terribly important when the times involved are a few seconds but can be important when the fit requires many hours or days of computing time.
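
A sketch of the comparison being described, assuming the fast keyword argument selects the quicker nested-optimization fit as in recent MixedModels releases (the exact deviances and timings will vary):

mdl_fast = fit(MixedModel, vaform, verbagg, Bernoulli(); fast=true)
mdl_full = fit(MixedModel, vaform, verbagg, Bernoulli(); fast=false)
deviance(mdl_fast), deviance(mdl_full)   # the slower fit should give the slightly smaller value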

───────────────────────────────────────────────────
(Intercept)  259.607    7.53747   34.44     <1e-99
days           9.46755  0.783538  12.08     <1e-32
───────────────────────────────────────────────────

For simulating from generalized linear mixed models, there is no type option because the observation-level variability always occurs at the level of the response, not of the linear predictor.

Warning

Simulating the model response in place may not yield the same result as simulating into a pre-allocated or new vector, depending on choice of pseudorandom number generator. Random number generation in Julia allows optimization based on type, and the internal storage type of the model response (currently a view into a matrix storing the concatenated fixed-effects model matrix and the response) may not match the type of a pre-allocated or new vector. See also discussion here.
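
A sketch of the distinction, assuming fm2 is the sleepstudy model and that simulate and simulate! accept a random-number generator as their first argument, as in recent MixedModels releases:

using Random
rng = MersenneTwister(42)
ynew = simulate(rng, fm2)    # draw a new response vector; fm2 is left unchanged
simulate!(rng, fm2)          # draw in place, overwriting the response stored in fm2 (refit before reusing fm2)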

Note

All the methods that take new data as a table construct an additional MixedModel behind the scenes, even when the new data is exactly the same as the data that the model was fitted to. For the simulation methods in particular, these thus form a convenience wrapper for constructing a new model and calling simulate without new data on that model with the parameters from the original model.

spkr: old & load: yes                      26.8642    21.7062   1.24    0.2159
prec: maintain & load: yes                -18.6514    21.7062  -0.86    0.3902
spkr: old & prec: maintain & load: yes     15.4985    21.7062   0.71    0.4752
──────────────────────────────────────────────────────────────────────────────

This may be useful when the PCA property suggests a random effects structure larger than only main effects but smaller than all interaction terms. This is also similar to the functionality provided by dummy in lme4, but as in the difference between zerocorr in Julia and || in R, there are subtle differences in how this expansion interacts with other terms in the random effects.
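
A sketch of such an intermediate specification, assuming the kb07 dataset (whose spkr, prec and load factors appear in the table above) with response rt_trunc and the fulldummy helper exported by MixedModels; the particular model is illustrative only:

kb07 = MixedModels.dataset(:kb07)
form = @formula(rt_trunc ~ 1 + spkr * prec * load +
                           (1 + fulldummy(prec) | subj) + (1 | item))
fm_fd = fit(MixedModel, form, kb07)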
