The collapse of adobe bricks under compressive forces and exposure to water lasts only several minutes, with merely minor displacements before and after the collapse. This raises a conceptual question: when does the collapse start and when does it end? The paper compares several mathematical models for describing the fracture process from displacement data and recommends the use of linear splines to identify the beginning and end of the collapse phase of adobe bricks.
Traditional adobe architecture, using unburnt clay bricks, is still common in many parts of the world [
Using adobe bricks for construction is ecologically and economically sustainable [
As to disadvantages, adobe walls must be protected from direct contact with water (
There is a rich literature on testing the compressive strength of dry adobe bricks. In most cases the purpose is the optimization of the material mix, e.g. [
Such experiments could provide only indirect information about how long an adobe wall might withstand contact with water. In order to approach this question directly, an experimental setup was designed that enables measuring, over a period of up to several days, the deformation of an adobe brick under constant pressure while its water content increases. This simulated an adobe brick in a wall exposed to moisture from a wet subsoil.
Multiple experiments [
treated adobe bricks proceeded in three phases: an initial resistance phase with minor displacements, lasting a day or more, a collapse within several minutes, and a final phase with only minor displacements of the crushed brick. Thus, the fracture process may be characterized by the points in time, t0 and t1, when the collapse begins and ends. The goal of the paper is to propose a computational method that automatically identifies t0 and t1 from the displacement data. To this end, the paper compares several mathematical models for describing the fracture process. (Appendix II summarizes the Mathematica code for these models.) Basically, the problem of this paper is a conceptual one: defining the collapse phase by a standardized method.
The experimental setting of this paper (
An adobe brick was placed on a saturated filter layer of fine, clean sand (height 1.5 cm). The increasing saturation of the brick resulted from capillary rise through this bedding; thus, the water level never reached the brick directly, and water entered only through contact with the bottom of the brick. The experiment used adobe bricks of size 24.5 × 12 × 6 cm, with a dry density of 2100 kg/m³, produced by an extrusion process at the brickyard Nicoloso (Pottenbrunn/Austria). Appendix I provides more information about the material characteristics. A load of F = 1.8 kN, comparable to the compressive force on the lowest brick of an adobe wall of 3 m height, was applied vertically to the surface of the adobe brick by a lever arm that was flexibly jointed to an aluminum plate (25 × 13 × 2.5 cm) on top of the brick, so as to obtain a uniform load distribution of initially 6.1 N/cm² over the brick surface (1.8 kN on 24.5 × 12 cm ≈ 294 cm²) and of 5.5 N/cm² (1.8 kN on 25 × 13 cm = 325 cm²) once the plate dipped into the crumbled material (
span for these displacement measurements was up to two days. Displacement data were recorded in a spreadsheet (Microsoft EXCEL 2016). As a first reduction of complexity, for each brick only the average displacements per minute (obtained from a pivot table) were further analyzed. Thus, for time t this paper denotes by d(t) the measured average displacement of the considered brick. The experiment started at time t = 0 and the data for
The authors tested only one type of adobe, characterized in Appendix I, as the purpose of the paper is a proof of principle for one type of material. The paper does not aim at the optimization of materials. Rather, given displacement data as in
A first attempt towards the description of the data used methods of pattern recognition, as illustrated by the coloring in
As to the method of pattern recognition, the paper used cluster analysis to identify three groups of data points (corresponding to the three phases) with small distances within each group. In order to improve the clustering, a modified two-dimensional Euclidean distance was used, which set the distance between data points to zero whenever their displacements differed by less than a threshold of 1.0 mm. The local optimization algorithm of [
This method was not fully automatic, as pattern recognition is essentially a learning technique. Thus, the distance function had to be modified, using a threshold that was adapted to the data by trial-and-error. The output also depended on the selected algorithm; e.g. hierarchical agglomerative clustering produced inferior results.
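For reference, the modified distance described above may be written as follows (the notation is ours), for data points u = (tu, du) and v = (tv, dv) with time t and displacement d: dist(u, v) = 0 if |du − dv| < 1.0 mm, and dist(u, v) = sqrt((tu − tv)² + (du − dv)²) otherwise.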
As indicated by
As to the method of data smoothing, the paper used a low-pass filter to eliminate high-frequency fluctuations. It passes signals with a frequency lower than a chosen cutoff frequency. The computation was done in Mathematica.
This method was not fully automatic, as both the cutoff frequency (here: 0.01) and the threshold for the quantile (here: 95%) were manually adjusted by trial-and-error. Further, the threshold for the quantile could not be defined without knowledge of the yet unknown duration of the collapse. For example, defining the collapse from the 95% quantile assumes that 5% of the data points describe this phase. For the present data this corresponded to a time span of 117 minutes (5% of the roughly 2350 one-minute averages recorded over tmax = 1.631 days), which overestimated the duration of the collapse.
The cutoff frequency, too, should be neither too small nor too large. It should not be too small, as the overall shape of the original data should be retained (a smaller cutoff frequency yields a lower estimate of t0 and a higher estimate of t1). It should not be too large either, as then the differences might exceed the given quantile in other regions as well. (Example: for a cutoff frequency of 0.1 the differences exceeded the 95% quantile between minutes 112 to 150, 270 to 283 and 1840 to 1905, whereby only the latter interval corresponded to the collapse.) This reasoning applies in particular to the unfiltered data (a cutoff frequency of 2π defines an all-pass filter).
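To illustrate this trial-and-error tuning, the following minimal sketch (assuming dispdata as defined in Appendix II; the cutoff values shown are examples only) lists, for several cutoff frequencies, the intervals of minutes in which the filtered differences exceed their 95% quantile:
(* For each trial cutoff frequency, find the minutes whose filtered displacement
difference exceeds the 95% quantile, and group them into contiguous intervals. *)
exceedances[cutoff_] := Module[{f, q},
  f = Differences[LowpassFilter[dispdata, cutoff]];
  q = Quantile[f, 0.95];
  Flatten[Table[If[f[[n]] > q, {n}, {}], {n, 1, Length[f]}]]];
Table[{cutoff, {Min[#], Max[#]} & /@ Split[exceedances[cutoff], #2 == #1 + 1 &]},
  {cutoff, {0.005, 0.01, 0.1}}]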
An elementary approach identified the three phases of fracture from an approximation by a linearized jump function (
As to the method, the following model was used to approximate the fracture process: no displacement until t0, maximal displacement after t1, and a linear increase in between. The parameters t0 and t1 of this model were determined by the method of least squares, finding the best fit of the jump model to the data.
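In formulas (our notation, with dmax the maximal measured displacement and tmax the duration of the measurement), the jump model reads: djump(t) = 0 for 0 ≤ t ≤ t0, djump(t) = dmax·(t − t0)/(t1 − t0) for t0 < t < t1, and djump(t) = dmax for t1 ≤ t ≤ tmax.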
However, this approach turned out to be unsatisfactory, as its fit to the data of the initial resistance phase was poor.
This approach improves on the simple jump model in that it describes the displacement by a linear spline function (
As to the method, the following model of the fracture process was used: again, three intervals i = 1, 2, 3 were considered, between t = 0 and t0, between t0 and t1, and between t1 and infinity, and on each interval the fracture process was approximated by a linear function di(t) of time, formula (1). These linear functions are not the regression lines but linear splines, because the model additionally assumes that the displacement is continuous, formula (2), which says d1(t0) = d2(t0) and d2(t1) = d3(t1). A further reduction of the number of parameters could be achieved by formula (3), stipulating that the linear spline starts at 0 and ends at the maximal measured displacement dmax; summarizing:
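With slopes ki and intercepts mi as parameters (cf. below), a reconstruction of the formulas from the description above reads (the notation of the original formulas may differ):
(1) di(t) = ki·t + mi for i = 1, 2, 3;
(2) d1(t0) = d2(t0) and d2(t1) = d3(t1), hence t0 = (m2 − m1)/(k1 − k2) and t1 = (m3 − m2)/(k2 − k3);
(3) d1(0) = 0 (i.e. m1 = 0) and d3(tmax) = dmax.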
The goal was to find the best fit of this model to the measured displacement data d(t), using the method of least squares. The parameters of the model were ki and mi, from which t0 and t1 were computed by formula (2).
The better fit of this model to the observed displacements made it superior to the jump model. For the present data this could also be verified in terms of the Akaike information criterion for model selection [
As is well known (references cited above), material characteristics, e.g. grain size distribution or the use of additives, strongly influence the stability of adobe bricks. A new experimental setting was designed to test for how long an adobe wall could withstand contact with water, by asking for the break point time of an adobe brick that absorbs water by capillarity under constant pressure. In order to optimize the material mix with respect to a combined resistance to pressure and water, large test series are needed, whence the identification of the collapse phase needs to be automated. Thus, in order to compare and optimize different materials, the break point time needs to be determined by a standardized method.
While it was easy to derive a rough estimate from a visual inspection of the plotted displacement data, a more accurate definition of the break point time turned out to be conceptually demanding, owing to the slow fracture process. Therefore, instead of characterizing the fracture process by one break point time, the paper proposes to determine the beginning and end of the collapse phase. Of the four models considered for this purpose, the linear spline approximation based on formulas (1) to (3) turned out to be the most viable approach, suitable for automation.
Further, the authors considered that the experimental setting and the subsequent data analysis (computational identification of the collapse phase) should be simple and inexpensive, since adobe bricks are mainly used in developing countries. There, the optimization of materials by means of locally available additives needs to be done at the village level, where engineers may not have access to sophisticated machines or software. The present experimental setting (
Rauchecker, M., Kühleitner, M., Brunner, N., Scheicher, K., Tintner, J., Roth, M. and Wriessnig, K. (2017) When Do Adobe Bricks Collapse under Compressive Forces: A Simulation Approach. Open Journal of Modelling and Simulation, 5, 1-12. http://dx.doi.org/10.4236/ojmsi.2017.51001
In order to allow a comparison of the present data with literature data, the grain size distribution of the brick was determined by a combination of wet sieving of the fraction > 40 μm and automatic sedimentation analysis with a SediGraph III (Micromeritics).
An air-dried sample of 50 g was treated with 200 ml of 10% H2O2 to oxidize organic components and ensure proper dispersion. After ca. 24 h reaction time and removal of the remaining H2O2, the sample was treated with ultrasound and sieved with a set of 2000 μm, 630 μm, 200 μm, 63 μm and 20 μm sieves. The coarse fractions were dried at 105°C, weighed, and expressed in mass percent. The < 20 μm portion was suspended in water; a representative portion was taken out, treated with 0.05% sodium polyphosphate and ultrasound, and analyzed in the SediGraph by X-ray, according to Stokes’ law. From the cumulative curve of the SediGraph and the sieving data, the grain size distribution of the entire sample was calculated (see
As some models used highly sophisticated tools, their code is summarized and annotated below, using the Mathematica 11 software of Wolfram Research, which provides these tools in the form of a “black box”.
The following commands relate to certain characteristics of the data. Thereby, “fulldata” was the original input, comprised of the pairs of time (in days) and displacement (in mm), while “dispdata” retained only the displacement information (per minute). Further, “tmax” was the duration tmax = 1.631 days of the measurements and “dismax” the maximal displacement (22.38 mm).
fulldata = {{0, 0.025}, {0.000694444, 0.124}, …};
dispdata = Last[Transpose[fulldata]];
{tmax = Max[First[Transpose[fulldata]]],
 dismax = Max[Last[Transpose[fulldata]]]}
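The per-minute averaging itself was done with an Excel pivot table, as described below. For completeness, a minimal sketch of an equivalent computation in Mathematica is given here; “rawdata” is a hypothetical list of {time in days, displacement in mm} pairs at the original recording rate, and “perminute” is our name for the result.
(* Equivalent of the Excel pivot table: group the raw pairs by the minute they
fall into and average the displacements within each group. *)
perminute = Mean /@ GatherBy[rawdata, Floor[First[#]*24*60] &][[All, All, 2]];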
The following key commands were used for the pattern recognition. The first line
defines the modified distance, the second line identifies the clusters, the third line plots them and the last line extracts the bounds t0 and t1 from the second cluster.
modifieddist[u_, v_] := If[Abs[Last[u] - Last[v]] < 1.0, 0, EuclideanDistance[u, v]];
cluster = FindClusters[fulldata, 3, DistanceFunction -> modifieddist, Method -> "Optimize"]
ListPlot[cluster, PlotStyle -> {Green, Black, Green}]
{Min[First[Transpose[cluster[[2]]]]], Max[First[Transpose[cluster[[2]]]]]}
The following commands were used for data smoothing. The speed of the displacement, “speeds”, was defined as the differences of successive displacement data. “LowpassFilter” with a cutoff frequency of 0.01 defined “filter”, comprised of the successive differences of the filtered data.
speeds = Differences[dispdata];
filter = Differences[LowpassFilter[dispdata, 0.01]];
Show[ListPlot[speeds, PlotStyle -> Green], ListPlot[filter, PlotStyle -> Black]]
minutes = Flatten[Table[If[filter[[n]] > Quantile[filter, 0.95], {n}, {}],
  {n, 1, Length[filter]}]]
The following code was used to fit the jump model to the data. It was defined from an interpolation object that was used like a function: “Interpolation” with interpolation order 1 defined a piecewise linear function (linear spline) with prescribed values; namely value 0 at t = 0 and at t = t0, and value dismax at t = t1 and at t = tmax. “NonlinearModelFit” fitted the parameters t0 and t1 to the data using the method of least squares. Thereby, “?NumberQ” was a reminder to the program that these parameters should be used as numbers and not as symbols.
Clear[model];
model[t0_?NumberQ, t1_?NumberQ] := (model[t0, t1] =
   Interpolation[{{0, 0}, {t0, 0}, {t1, dismax}, {tmax, dismax}},
    InterpolationOrder -> 1]);
splinemod = NonlinearModelFit[fulldata, model[t0, t1][t], {t0, t1}, t];
Show[Plot[splinemod[t], {t, 0, tmax}, PlotStyle -> Black],
 ListPlot[fulldata, PlotStyle -> Green]]
{splinemod["BestFitParameters"], splinemod["AIC"]}
This code was modified as follows for the fit of the linear splines. Here, v0 and v1 were additional parameters that were fitted to the data.
model[t0_?NumberQ, t1_?NumberQ, v0_?NumberQ, v1_?NumberQ] :=
  (model[t0, t1, v0, v1] =
    Interpolation[{{0, 0}, {t0, v0}, {t1, v1}, {tmax, dismax}}, InterpolationOrder -> 1]);
splinemod = NonlinearModelFit[fulldata, model[t0, t1, v0, v1][t], {t0, t1, v0, v1}, t];
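Depending on the data, the convergence of “NonlinearModelFit” can be aided by supplying starting values for the parameters; the numbers below are hypothetical placeholders (in days and mm), not values used in the paper.
(* Optional: starting values for the parameters; the numbers are hypothetical placeholders. *)
splinemod = NonlinearModelFit[fulldata, model[t0, t1, v0, v1][t],
  {{t0, 1.2}, {t1, 1.3}, {v0, 1}, {v1, 20}}, t];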
The raw data have been collected in an Excel file. Its sheet “adobebrick” presents them in the form time vs. displacement. The following excerpt shows the first rows (columns A to H) of the calculation sheet “adobebrick calculations”; row 2 displays the formulas, row 3 the computed values.
|   | A | B | C | D |
|---|---|---|---|---|
| 1 | time (day) | av. disp. (mm) | speed (mm/day) | d1 |
| 2 | 0 | 0.025 |  | =$M$2*A2 + $M$3 |
| 3 | 0.0006 | 0.124 | =(B3 − B2)/(A3 − A2) | 0.1599 |

|   | E | F | G | H |
|---|---|---|---|---|
| 1 | d2 | d3 | spline | squared residuals |
| 2 | =$O$2*A2 + $O$3 | =$Q$2*A2 + $Q$3 | =IF(A2 < $O$6; D2; IF(A2 > $O$7; F2; E2)) | =(B2 − G2)^2 |
| 3 | −3926.61 | 19.3965 | 0.15990 | 0.001289 |
The sheet “pivot” was automatically generated (pivot table in the insert tab). It computed, for each minute, the average displacements. The model calculations are in the sheet “adobebrick calculations” (see the excerpt above).
For the computation of the linear spline function, the sheet “pivot” was copied into columns A (time in days) and B (average displacement in mm) of the calculation sheet. Column C computes the speed of displacement, and the linear functions of formula (1) are computed in columns D to F. From these functions the spline function is pieced together in column G, whereby cells $O$6 and $O$7 hold the break points t0 and t1 obtained from formula (2). Column H assesses the fit of this function to the data by the squared residuals. Next, in order to apply the method of least squares, in cell I1 the sum of squared residuals is computed as =SUM(H:H). Using the data tab, the SOLVER add-in is called up and the following optimization model is defined: minimize cell I1, using as variables the cells $M$2; $M$3; $O$2; $O$3; $Q$2; $Q$3, which hold the parameters of formula (1), and use the nonlinear GRG solver for this task (EXCEL 2010 and later; in earlier versions just the nonlinear solver). The SOLVER then determines parameters in formula (1) that minimize the sum of squared residuals.
This EXCEL model simplified the linear spline model insofar as the condition of formula (3) was not used, because this condition did not have a significant effect on the estimates for t0 and t1. However, the SOLVER set-up can easily be modified to take formula (3) into account, too.