Search results for: David Steel

283 Modeling Engagement with Multimodal Multisensor Data: The Continuous Performance Test as an Objective Tool to Track Flow

Authors: Mohammad H. Taheri, David J. Brown, Nasser Sherkat

Abstract:

Engagement is one of the most important factors in determining successful outcomes and deep learning in students. Existing approaches to detect student engagement involve periodic human observations that are subject to inter-rater variability. Our solution uses real-time multimodal multisensor data labeled by objective performance outcomes to infer the engagement of students. The study involves four students with a combined diagnosis of cerebral palsy and a learning disability who took part in a 3-month trial over 59 sessions. Multimodal multisensor data were collected while they participated in a continuous performance test. Eye gaze, electroencephalogram, body pose, and interaction data were used to create a model of student engagement through objective labeling from the continuous performance test outcomes. In order to achieve this, a type of continuous performance test is introduced, the Seek-X type. Nine features were extracted, including high-level handpicked compound features. Using leave-one-out cross-validation, a series of different machine learning approaches were evaluated. Overall, the random forest classification approach achieved the best classification results. Using random forest, 93.3% classification accuracy for engagement and 42.9% accuracy for disengagement were achieved. We compared these results to outcomes from different models: AdaBoost, decision tree, k-Nearest Neighbor, naïve Bayes, neural network, and support vector machine. We showed that using a multisensor approach achieved higher accuracy than using features from any reduced set of sensors. We found that using high-level handpicked features can improve the classification accuracy in every sensor mode. Our approach is robust to both sensor fallout and occlusions. The single most important sensor feature for the classification of engagement and distraction was shown to be eye gaze. It has been shown that we can accurately predict the level of engagement of students with learning disabilities in a real-time approach that is not subject to inter-rater variability, does not depend on human observation, and is not reliant on a single mode of sensor input. This will help teachers design interventions for a heterogeneous group of students, where teachers cannot possibly attend to each of their individual needs. Our approach can be used to identify those with the greatest learning challenges so that all students are supported to reach their full potential.
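
To make the evaluation scheme concrete, below is a minimal sketch of random-forest classification under leave-one-out cross-validation, as described above. The feature matrix and labels are hypothetical placeholders (the nine extracted features and CPT-derived labels are not reproduced here), and scikit-learn is an assumed tool, not necessarily the authors'.

```python
# Minimal sketch: random forest evaluated with leave-one-out cross-validation.
# X stands in for the nine extracted features per session; y for the objective
# engagement label derived from the continuous performance test (hypothetical).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
X = rng.random((59, 9))        # hypothetical: 59 sessions x 9 features
y = rng.integers(0, 2, 59)     # hypothetical: 1 = engaged, 0 = disengaged

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=LeaveOneOut())  # one held-out session per fold
print(f"LOOCV accuracy: {scores.mean():.3f}")
```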

Keywords: affective computing in education, affect detection, continuous performance test, engagement, flow, HCI, interaction, learning disabilities, machine learning, multimodal, multisensor, physiological sensors, student engagement

Procedia PDF Downloads 72
282 Neural Synchronization - The Brain’s Transfer of Sensory Data

Authors: David Edgar

Abstract:

To understand how the brain’s subconscious and conscious processes function, we must conquer the physics of Unity, which leads to duality’s algorithm. The subconscious (bottom-up) and conscious (top-down) processes function together to produce and consume intelligence. We use terms like ‘time is relative,’ but do we really understand the meaning? In the brain, there are different processes and, therefore, different observers. These different processes experience time at different rates. A sensory system such as the eyes cycles its measurements roughly every 33 milliseconds, the conscious process of the frontal lobe cycles at 300 milliseconds, and the subconscious process of the thalamus cycles at 5 milliseconds. Three different observers experience time differently. To bridge observers, the thalamus, which is the fastest of the processes, maintains a synchronous state and entangles the different components of the brain’s physical process. The entanglements form a synchronous cohesion between the brain components, allowing them to share the same state and execute in the same measurement cycle. The thalamus uses the shared state to control the firing sequence of the brain’s linear subconscious process. Sharing state also allows the brain to cheat on the amount of sensory data that must be exchanged between components. Only unpredictable motion is transferred through the synchronous state because predictable motion already exists in the shared framework. The brain’s synchronous subconscious process is entirely based on energy conservation, where prediction regulates energy usage. So, every 33 milliseconds, the eyes dump their sensory data into the thalamus. The thalamus then performs a motion measurement to identify the unpredictable motion in the sensory data. Here is the trick. The thalamus conducts its measurement based on the original observation time of the sensory system (33 ms), not its own process time (5 ms). This creates a data payload of synchronous motion that preserves the original sensory observation. Basically, a frozen moment in time (Flat 4D). The single moment in time can then be processed through the single state maintained by the synchronous process. Other processes, such as consciousness (300 ms), can interface with the synchronous state to generate awareness of that moment. Now, synchronous data traveling through a separate, faster synchronous process creates a theoretical time tunnel, where observation time is tunneled through the synchronous process and is reproduced on the other side in the original time-relativity. The synchronous process eliminates time dilation by simply removing itself from the equation so that its own process time does not alter the experience. To the original observer, the measurement appears to be instantaneous, but in the thalamus, a linear subconscious process generating sensory perception and thought production is being executed. It is all just occurring in the time available, because other observation times are slower than thalamic measurement time. For life to exist in the physical universe requires a linear measurement process; it just hides by operating at a faster time relativity. What is interesting is that time dilation is not the problem; it is the solution. Einstein said there was no universal time.

Keywords: neural synchronization, natural intelligence, 99.95% IoT data transmission savings, artificial subconscious intelligence (ASI)

Procedia PDF Downloads 104
281 Optimization of Operational Water Quality Parameters in a Drinking Water Distribution System Using Response Surface Methodology

Authors: Sina Moradi, Christopher W. K. Chow, John Van Leeuwen, David Cook, Mary Drikas, Patrick Hayde, Rose Amal

Abstract:

Chloramine is commonly used as a disinfectant in drinking water distribution systems (DWDSs), particularly in Australia and the USA. Maintaining a chloramine residual throughout the DWDS is important in ensuring microbiologically safe water is supplied at the customer’s tap. In order to simulate how chloramine behaves when it moves through the distribution system, a water quality network model (WQNM) can be applied. In this work, the WQNM was based on mono-chloramine decomposition reactions, which enabled prediction of the mono-chloramine residual at different locations through a DWDS in Australia, using the Bentley commercial hydraulic package (WaterGEMS). The accuracy of WQNM predictions is influenced by a number of water quality parameters. Optimizing these parameters so that predictions closely match measured data from a real DWDS would reduce costs as well as the consumption of valuable resources such as energy and materials. In this work, the optimum operating conditions of water quality parameters (i.e. temperature, pH, and initial mono-chloramine concentration) to maximize the accuracy of mono-chloramine residual predictions for two water supply scenarios in an entire network were determined using response surface methodology (RSM). To obtain feasible and economical water quality parameters for the highest model predictability, Design Expert 8.0 software (Stat-Ease, Inc.) was applied to conduct the optimization of the three independent water quality parameters. High and low levels of the water quality parameters were imposed as explicit constraints in order to avoid extrapolation. The independent variables were pH, temperature and initial mono-chloramine concentration. The lower and upper limits of each variable for the two water supply scenarios were defined, and the experimental levels for each variable were selected based on the actual conditions in the studied DWDS. It was found that at a pH of 7.75, a temperature of 34.16 ºC, and an initial mono-chloramine concentration of 3.89 mg/L during peak water supply patterns, the root mean square error (RMSE) of the WQNM for the whole network would be minimized to 0.189, and the optimum conditions for averaged water supply occurred at a pH of 7.71, a temperature of 18.12 ºC, and an initial mono-chloramine concentration of 4.60 mg/L. The proposed methodology to predict the mono-chloramine residual has great potential to help water treatment plant operators accurately estimate the mono-chloramine residual through a water distribution network. Additional studies from other water distribution systems are warranted to confirm the applicability of the proposed methodology for other water samples.
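
As an illustration of the RSM step, the sketch below fits a reduced second-order response surface RMSE ≈ f(pH, T, C0) to hypothetical design points and locates its minimum within the experimental bounds, mirroring the avoid-extrapolation constraint above. The design points, RMSE values, and use of numpy/scipy are assumptions (the paper used Design Expert 8.0), and interaction terms are omitted for brevity.

```python
# Hedged RSM sketch: fit a quadratic surface to design-point RMSEs, then
# minimize it inside the experimental bounds (no extrapolation).
import numpy as np
from scipy.optimize import minimize

def quad_features(x):
    pH, T, C0 = x
    return np.array([1.0, pH, T, C0, pH**2, T**2, C0**2])

# Hypothetical design points (pH, T in C, C0 in mg/L) and observed RMSEs.
X = np.array([[7.2, 15, 3], [8.2, 15, 3], [7.2, 35, 3], [8.2, 35, 3],
              [7.2, 15, 5], [8.2, 15, 5], [7.2, 35, 5], [8.2, 35, 5],
              [7.7, 25, 4]])
rmse = np.array([0.30, 0.28, 0.24, 0.26, 0.27, 0.25, 0.22, 0.23, 0.20])

A = np.vstack([quad_features(x) for x in X])
beta, *_ = np.linalg.lstsq(A, rmse, rcond=None)   # surface coefficients

res = minimize(lambda x: quad_features(x) @ beta, x0=[7.7, 25.0, 4.0],
               bounds=[(7.2, 8.2), (15.0, 35.0), (3.0, 5.0)])
print("optimum (pH, T, C0):", res.x, "predicted RMSE:", res.fun)
```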

Keywords: chloramine decay, modelling, response surface methodology, water quality parameters

Procedia PDF Downloads 199
280 Evolutionary Advantages of Loneliness with an Agent-Based Model

Authors: David Gottlieb, Jason Yoder

Abstract:

The feeling of loneliness is not uncommon in modern society, and yet, there is a fundamental lack of understanding of its origins and purpose in nature. One interpretation of loneliness is that it is a subjective experience that punishes a lack of social behavior, and thus its emergence in human evolution is seemingly tied to the survival of early human tribes. Still, a common counterintuitive response to loneliness is a state of hypervigilance, resulting in social withdrawal, which may appear maladaptive to modern society. So far, no computational model of the effect of loneliness during evolution exists; however, agent-based models (ABM) can be used to investigate social behavior, and applying evolution to agents’ behaviors can demonstrate selective advantages for particular behaviors. We propose an ABM where each agent contains four social behaviors and one goal-seeking behavior, letting evolution select the best behavioral patterns for resource allocation. In our paper, we use an algorithm similar to the boid model to guide the behavior of agents, but expand the set of rules that govern their behavior. While we use cohesion, separation, and alignment for simple social movement, our expanded model adds goal-oriented behavior, inspired by particle swarm optimization, such that agents move relative to their personal best position. Since agents are given the ability to form connections by interacting with each other, our final behavior guides agent movement toward its social connections. Finally, we introduce a mechanism to represent a state of loneliness, which engages when an agent's perceived social involvement does not meet its expected social involvement. This enables us to investigate a minimal model of loneliness, and using evolution, we attempt to elucidate its value in human survival. Agents are placed in an environment in which they must acquire resources, as their fitness is based on the total resources collected. With these rules in place, we are able to run evolution under various conditions, including resource-rich environments and when disease is present. Our simulations indicate that there is strong selection pressure for social behavior under circumstances where there is a clear discrepancy between initial resource locations, and against social behavior when disease is present, mirroring hypervigilance. This not only provides an explanation for the emergence of loneliness but also reflects the diversity of responses to loneliness in the real world. In addition, there is evidence of a richness of social behavior when loneliness is present. By introducing just two resource locations, we observed a divergence in social motivation after agents became lonely, where one agent learned to move to the other, who was in a better resource position. The results and ongoing work from this project show that it is possible to glean insight into the evolutionary advantages of even simple mechanisms of loneliness. The model we developed has produced unexpected results and has led to more questions, such as the impact loneliness would have at a larger scale, or the effect of creating a set of rules governing interaction beyond adjacency.
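
A minimal sketch of one agent update in the style described above: a weighted sum of cohesion, separation, alignment, PSO-style goal seeking, and movement toward social connections, plus the loneliness trigger. The weights, vector forms, and function signatures are illustrative assumptions, not the paper's implementation.

```python
# Sketch of one agent update: five behavior vectors combined with evolved
# weights w (the genome evolution selects over). All values illustrative.
import numpy as np

def step(pos, vel, best_pos, neighbors_pos, neighbors_vel, friends_pos, w):
    cohesion   = neighbors_pos.mean(axis=0) - pos      # move toward flock center
    separation = (pos - neighbors_pos).sum(axis=0)     # avoid crowding
    alignment  = neighbors_vel.mean(axis=0) - vel      # match neighbor heading
    goal       = best_pos - pos                        # PSO-style personal best
    social     = friends_pos.mean(axis=0) - pos        # move toward connections
    vel = vel + (w[0]*cohesion + w[1]*separation + w[2]*alignment
                 + w[3]*goal + w[4]*social)
    return pos + vel, vel

def lonely(perceived_involvement, expected_involvement):
    # The minimal loneliness mechanism: engages when perceived social
    # involvement falls short of the agent's expectation.
    return perceived_involvement < expected_involvement
```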

Keywords: agent-based, behavior, evolution, loneliness, social

Procedia PDF Downloads 72
279 Verification of a Simple Model for Rolling Isolation System Response

Authors: Aarthi Sridhar, Henri Gavin, Karah Kelly

Abstract:

Rolling Isolation Systems (RISs) are simple and effective means to mitigate earthquake hazards to equipment in critical and precious facilities, such as hospitals, network collocation facilities, supercomputer centers, and museums. The RIS works by isolating components from floor acceleration, reducing the inertial forces felt by the subsystem. The RIS consists of two platforms with counter-facing concave surfaces (dishes) in each corner. Steel balls lie inside the dishes and allow relative motion between the top and bottom platforms. Formerly, a mathematical model for the dynamics of RISs was developed using Lagrange’s equations (LE) and experimentally validated. A new mathematical model was developed using Gauss’s Principle of Least Constraint (GPLC) and verified by comparing impulse response trajectories of the GPLC model and the LE model in terms of the peak displacements and accelerations of the top platform. Mathematical models for the RIS are tedious to derive because of the non-holonomic rolling constraints imposed on the system. However, using Gauss’s Principle of Least Constraint to find the equations of motion removes some of the obscurity and yields a system that can be easily extended. Though the GPLC model requires more state variables, the equations of motion are far simpler. The non-holonomic constraint is enforced in terms of accelerations and therefore requires additional constraint stabilization methods to prevent numerical integration from driving the system unstable. The GPLC model allows the incorporation of more physical aspects related to the RIS, such as the contribution of the vertical velocity of the platform to the kinetic energy and the mass of the balls. This mathematical model for the RIS is a tool to predict the motion of the isolation platform. The ability to statistically quantify the expected responses of the RIS is critical in the implementation of earthquake hazard mitigation.
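
For reference, the textbook statement of Gauss’s Principle of Least Constraint reads as follows (a standard form, not necessarily the paper’s exact notation): the true accelerations minimize the Gaussian

```latex
Z(\ddot{q}) \;=\; \tfrac{1}{2}\,\bigl(\ddot{q} - M^{-1} f\bigr)^{\top} M \,\bigl(\ddot{q} - M^{-1} f\bigr)
\quad \text{subject to} \quad
A(q,\dot{q})\,\ddot{q} \;=\; b(q,\dot{q}),
```

where M is the mass matrix, f the applied forces, and A, b encode the (here non-holonomic rolling) constraints at acceleration level, which is precisely why the constraint stabilization mentioned above is needed during numerical integration.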

Keywords: earthquake hazard mitigation, earthquake isolation, Gauss’s Principle of Least Constraint, nonlinear dynamics, rolling isolation system

Procedia PDF Downloads 225
278 Automated Computer-Vision Analysis Pipeline of Calcium Imaging Neuronal Network Activity Data

Authors: David Oluigbo, Erik Hemberg, Nathan Shwatal, Wenqi Ding, Yin Yuan, Susanna Mierau

Abstract:

Introduction: Calcium imaging is an established technique in neuroscience research for detecting activity in neural networks. Bursts of action potentials in neurons lead to transient increases in intracellular calcium visualized with fluorescent indicators. Manual identification of cell bodies and their contours by experts typically takes 10-20 minutes per calcium imaging recording. Our aim, therefore, was to design an automated pipeline to facilitate and optimize calcium imaging data analysis. Our pipeline aims to accelerate cell body and contour identification and the production of graphical representations reflecting changes in neuronal calcium-based fluorescence. Methods: We created a Python-based pipeline that uses OpenCV (a computer vision Python package) to accurately (1) detect neuron contours, (2) extract the mean fluorescence within the contour, and (3) identify transient changes in the fluorescence due to neuronal activity. The pipeline consisted of three Python scripts, each easily accessed through a Jupyter notebook. In total, we tested this pipeline on ten separate calcium imaging datasets from murine dissociated cortical cultures. We next compared our automated pipeline outputs with the outputs of manually labeled data for neuronal cell location and the corresponding fluorescence time series generated by an expert neuroscientist. Results: Our results show that our automated pipeline efficiently pinpoints neuronal cell body location and neuronal contours and provides a graphical representation of neural network metrics accurately reflecting changes in neuronal calcium-based fluorescence. The pipeline detected the shape, area, and location of most neuronal cell body contours by using binary thresholding and grayscale image conversion to allow computer vision to better distinguish between cells and non-cells. Its results were also comparable to manually analyzed results but with significantly reduced result acquisition times of 2-5 minutes per recording versus 10-20 minutes per recording. Based on these findings, our next step is to precisely measure the specificity and sensitivity of the automated pipeline’s cell body and contour detection to extract more robust neural network metrics and dynamics. Conclusion: Our Python-based pipeline performed automated computer vision-based analysis of calcium image recordings from neuronal cell bodies in neuronal cell cultures. Our new goal is to improve cell body and contour detection to produce more robust, accurate neural network metrics and dynamic graphs.
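
A condensed sketch of the three stages named in the Methods (binary thresholding of a grayscale frame, contour detection, mean-fluorescence extraction, transient identification) using OpenCV. The synthetic frames, Otsu thresholding, and the 2-standard-deviation transient rule are illustrative assumptions, not the paper's exact settings.

```python
import cv2
import numpy as np

# Synthetic 20-frame stack stands in for a real recording (illustrative only).
rng = np.random.default_rng(5)
stack = [(rng.random((256, 256)) * 255).astype(np.uint8) for _ in range(20)]

# (1) Contours from a grayscale frame via binary (Otsu) thresholding.
_, binary = cv2.threshold(stack[0], 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# (2) Mean fluorescence inside each contour, per frame.
traces = []
for c in contours:
    mask = np.zeros_like(stack[0])
    cv2.drawContours(mask, [c], -1, 255, thickness=cv2.FILLED)
    traces.append(np.array([cv2.mean(img, mask=mask)[0] for img in stack]))

# (3) Transients: frames where the trace exceeds baseline + 2 SD (assumed rule).
for t in traces:
    events = np.where(t > np.median(t) + 2 * t.std())[0]
```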

Keywords: calcium imaging, computer vision, neural activity, neural networks

Procedia PDF Downloads 60
277 Improvement in Blast Furnace Performance Using Softening - Melting Zone Profile Prediction Model at G Blast Furnace, Tata Steel Jamshedpur

Authors: Shoumodip Roy, Ankit Singhania, K. R. K. Rao, Ravi Shankar, M. K. Agarwal, R. V. Ramna, Uttam Singh

Abstract:

The productivity of a blast furnace and the quality of the hot metal produced are significantly dependent on the smoothness and stability of furnace operation. The permeability of the furnace bed, as well as the gas flow pattern, influences the steady control of process parameters. The softening-melting zone that is formed inside the furnace contributes largely to the distribution of the gas flow and the bed permeability. A better shape of the softening-melting zone enhances the performance of the blast furnace, thereby reducing fuel rates and improving furnace life. Therefore, a predictive model of the softening-melting zone profile can be utilized to control and improve the furnace operation. The shape of the softening-melting zone depends upon the physical and chemical properties of the agglomerates and iron ore charged in the furnace. Variations in the agglomerate proportion of the burden at G Blast Furnace disturbed the furnace stability. During such circumstances, analysis showed that a W-shaped softening-melting zone profile had formed inside the furnace. The formation of the W-shaped zone resulted in poor bed permeability and non-uniform gas flow. There was a significant increase in heat loss at the lower zone of the furnace. Fuel demand increased, and a huge production loss was incurred. Therefore, visibility of the softening-melting zone profile was necessary in order to proactively optimize the process parameters and thereby operate the furnace smoothly. Using stave temperatures, a model was developed that predicted the shape of the softening-melting zone inside the furnace. It was observed that the furnace operated smoothly when the zone had an inverse V-shape, and poorly when it had a W-shape. This model helped to control the heat loss, optimize the burden distribution and lower the fuel rate at G Blast Furnace, TSL Jamshedpur. As a result of furnace stabilization, productivity increased by 10% and the fuel rate was reduced by 80 kg/thm. Details of the process have been discussed in this paper.

Keywords: agglomerate, blast furnace, permeability, softening-melting

Procedia PDF Downloads 224
276 Violent, Psychological, Sexual and Abuse-Related Emergency Department Usage amongst Pediatric Victims of Physical Assault and Gun Violence: A Case-Control Study

Authors: Mary Elizabeth Bernardin, Margie Batek, Joseph Moen, David Schnadower

Abstract:

Background: Injuries due to interpersonal violence are a common reason for emergency department (ED) visits amongst the American pediatric population. Gun violence, in particular, is associated with high morbidity and mortality as well as financial costs. Patterns of pediatric ED usage may be an indicator of risk for future violence, but very little data on the topic exist. Objective: The aims of this study were to assess the frequencies of ED usage for previous interpersonal violence, mental/behavioral issues, sexual/reproductive issues and concerns for abuse in youths presenting to EDs due to physical assault injuries (PAIs) compared to firearm injuries (FIs). Methods: In this retrospective case-control study, ED charts of children ages 8-19 years who presented with injuries due to interpersonal violent encounters from 2014-2017 were reviewed. Data were collected regarding all previous ED visits for injuries due to interpersonal violence (including physical assaults and firearm injuries), mental/behavioral health visits (including depression, suicidal ideation, suicide attempt, homicidal ideation and violent behavior), sexual/reproductive health visits (including sexually transmitted infections and pregnancy-related issues), and concerns for abuse (including physical abuse or domestic violence, neglect, sexual abuse, sexual assault, and intimate partner violence). Logistic regression was used to identify predictors of gun violence based on previous ED visits amongst physical assault-injured versus firearm-injured youths. Results: A total of 407 patients presenting to the ED for an interpersonal violent encounter were analyzed, 251 (62%) of which were due to physical assault injuries (PAIs) and 156 (38%) due to firearm injuries (FIs). The majority of both PAI and FI patients had no previous history of ED visits for violence, mental/behavioral health, sexual/reproductive health or concern for abuse (60.8% PAI, 76.3% FI). 19.2% of PAI and 13.5% of FI youths had previous ED visits for physical assault injuries (OR 0.68, P=0.24, 95% CI 0.36 to 1.29). 1.6% of PAI and 3.2% of FI youths had a history of ED visits for previous firearm injuries (OR 3.6, P=0.34, 95% CI 0.04 to 2.95). 10% of PAI and 3.8% of FI youths had previous ED visits for mental/behavioral health issues (OR 0.91, P=0.80, 95% CI 0.43 to 1.93). 10% of PAI and 2.6% of FI youths had previous ED visits due to concerns for abuse (OR 0.76, P=0.55, 95% CI 0.31 to 1.86). Conclusions: There are no statistically significant differences between physical assault-injured and firearm-injured youths in terms of ED usage for previous violent injuries, mental/behavioral health visits, sexual/reproductive health visits or concerns for abuse. However, violently injured youths in this study had more than twice the number of previous ED visits for physical assaults and mental health issues than previous literature indicates. Data comparing ED usage of victims of interpersonal violence to nonviolent ED patients are needed, but this study supports the notion that EDs may be a useful place for identification of and enrollment in interventions for youths most at risk for future violence.
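
For readers unfamiliar with the analysis step named above, here is a hedged sketch of logistic regression producing odds ratios with 95% CIs. The data are simulated and the predictor names are hypothetical; statsmodels is an assumed tool, not necessarily the authors'.

```python
# Sketch: logistic regression of injury type on prior ED-visit history,
# reported as odds ratios with 95% confidence intervals. Data simulated.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 407  # matches the cohort size above; the records themselves are simulated
df = pd.DataFrame({
    "firearm_injury": rng.integers(0, 2, n),       # 1 = FI, 0 = PAI
    "prior_violence": rng.integers(0, 2, n),
    "prior_mental_health": rng.integers(0, 2, n),
    "prior_abuse_concern": rng.integers(0, 2, n),
})

X = sm.add_constant(df[["prior_violence", "prior_mental_health",
                        "prior_abuse_concern"]])
fit = sm.Logit(df["firearm_injury"], X).fit(disp=0)

odds_ratios = np.exp(fit.params)
ci = np.exp(fit.conf_int())        # 95% CI on the odds-ratio scale
print(pd.concat([odds_ratios.rename("OR"), ci], axis=1))
```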

Keywords: child abuse, emergency department usage, pediatric gun violence, pediatric interpersonal violence, pediatric mental health, pediatric reproductive health

Procedia PDF Downloads 209
275 Cost-Effective Materials for Hydrocarbons Recovery from Produced Water

Authors: Fahd I. Alghunaimi, Hind S. Dossary, Norah W. Aljuryyed, Tawfik A. Saleh

Abstract:

Produced water (PW) is one of the largest by-volume waste streams and one of the most challenging effluents in the oil and gas industry. This is due to the variation of contaminants that make up PW. Several materials have been developed, studied, and implemented to remove hydrocarbons from PW. Adsorption is one of the most effective ways of removing oil from PW. In this work, three new and cost-effective hydrophobic adsorbent materials based on 9-octadecenoic acid grafted graphene (POG) were synthesized for oil/water separation. Graphene derived from graphite was modified with 9-octadecenoic acid to yield 9-octadecenoic acid grafted graphene (OG). The newly synthesized materials, called POG25, POG50, and POG75, were characterized by using N₂-physisorption (BET) and Fourier transform infrared (FTIR) spectroscopy. The BET surface area of POG75 was the highest at 288 m²/g, whereas POG50 was 225 m²/g and POG25 was the lowest at 79 m²/g. These three materials were also evaluated for their oil-water separation efficiency using a model mixture, which demonstrated that POG75 has the highest oil removal efficiency and the fastest adsorption rate (Figure 1). POG75 was regenerated, and its performance was verified again with a slightly reduced adsorption rate compared to the fresh material. The mixtures used in the performance test were prepared by mixing nonpolar organic liquids such as heptane, dodecane, or hexadecane into colored water. In general, the new materials showed fast uptake of a certain quantity of the oil due to the highly hydrophobic nature of the materials, which repel water as confirmed by a contact angle of approximately 150˚. Besides that, a novel superhydrophobic material was also synthesized by introducing hydrophobic laurate branches on the surface of a stainless steel mesh (SSM). This novel mesh could help to hold the novel adsorbent materials in a column to remove oil from PW. Both POG75 and the novel mesh have the potential to remove oil contaminants from produced water, which will help to provide an opportunity to recover useful components, in addition to reducing the environmental impact and allowing reuse of produced water in several applications such as fracturing.

Keywords: graphite to graphene, oleophilic, produced water, separation

Procedia PDF Downloads 103
274 Random Vertical Seismic Vibrations of the Long Span Cantilever Beams

Authors: Sergo Esadze

Abstract:

Seismic resistance norms require the calculation of cantilevers for the vertical components of the base seismic acceleration. Long span cantilevers, as a rule, must be calculated as a separate construction element. According to the architectural-planning solution, functional purpose and environmental conditions of the building/structure being designed, long span cantilever constructions may be of very different types: both by the main bearing element (beam, truss, slab) and by material (reinforced concrete, steel). The choice among these is always linked with the bearing construction system of the building. Research on the vertical seismic vibration of these constructions requires an individual approach for each (which is not specified in the norms) in correlation with the model of the seismic load. The latter may be given either as a deterministic load or as a random process. A loading model based on a random process is more adequate to this problem. In the presented paper, two types of long span (from 6 m up to 12 m) reinforced concrete cantilever beams have been considered: a) cantilevers whose bearing elements, i.e., the elements in which they are fixed, have cross-sections with large sizes, and the cantilevers are made with a haunch; b) cantilever beams with a load-bearing rod element. Calculation models are suggested, separately for types a) and b). They are presented as systems with a finite number of degrees of freedom (concentrated masses). The conditions for fixing the ends correspond to these types. Vertical acceleration and the vertical component of the angular acceleration act on the masses. The model is based on the assumption of translational-rotational motion of the building in the vertical plane, caused by vertical seismic acceleration. Seismic accelerations are considered as random processes and represented as the product of a deterministic envelope function and a stationary random process. The problem is solved within the framework of the correlation theory of random processes. Solved numerical examples are given. The method is effective for solving such specific problems.
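
A small sketch of the loading model described above: vertical seismic acceleration represented as a deterministic envelope function multiplying a stationary random process. The envelope shape, the crude low-pass filter, and all parameter values are illustrative assumptions.

```python
# Sketch: nonstationary vertical acceleration a(t) = envelope(t) * X(t),
# with X(t) a stationary (here band-limited Gaussian) random process.
import numpy as np

dt, n = 0.01, 2000                             # 20 s record at 100 Hz
t = np.arange(n) * dt

envelope = (t / 2.0) * np.exp(1.0 - t / 2.0)   # smooth rise/decay, peak at 2 s
envelope = np.clip(envelope, 0.0, 1.0)

rng = np.random.default_rng(1)
white = rng.standard_normal(n)
kernel = np.ones(5) / 5.0                      # crude low-pass smoothing
stationary = np.convolve(white, kernel, mode="same")

a_vertical = envelope * stationary             # one sample of the input process
```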

Keywords: cantilever, random process, seismic load, vertical acceleration

Procedia PDF Downloads 163
273 Structural Design of a Relief Valve Considering Strength

Authors: Nam-Hee Kim, Jang-Hoon Ko, Kwon-Hee Lee

Abstract:

A relief valve is a mechanical element that maintains safety by controlling high pressure. Usually, the high pressure is relieved by using the spring force and letting the fluid flow out of the system through another path. When its normal pressure is reached, the relief valve can return to its initial state. The relief valve in this study has been applied to pressure vessels, evaporators, piping lines, etc. The relief valve should be designed for smooth operation and should satisfy the structural safety requirements under operating conditions. In general, the structural analysis is performed following the fluid flow analysis. In this process, FSI (Fluid-Structure Interaction) is required to input the force obtained from the output of the flow analysis. Firstly, this study predicts the velocity profile and the pressure distribution in the given system. In this study, the assumptions for the flow analysis are as follows: • The flow is steady-state and three-dimensional. • The fluid is Newtonian and incompressible. • The walls of the pipe and valve are smooth. The flow characteristics of this relief valve do not induce any problems. The commercial software ANSYS/CFX is utilized for the flow analysis. On the contrary, very high pressure may cause structural problems due to severe stress. The relief valve consists of a body, bonnet, guide, piston and nozzle, and its material is stainless steel. To investigate its structural safety, the worst-case loading is considered as a pressure of 700 bar. The load is applied to the inside of the valve, which is greater than the load obtained from the FSI. The maximum stress is calculated as 378 MPa by performing the finite element analysis. However, this value is greater than the allowable value. Thus, an alternative design is suggested to improve the structural performance through a case study. We found that the design variable most sensitive to the strength is the shape of the nozzle. The case study varies the size of the nozzle. Finally, it can be seen that the suggested design satisfies the structural design requirement. The FE analysis is performed by using the commercial software ANSYS/Workbench.

Keywords: relief valve, structural analysis, structural design, strength, safety factor

Procedia PDF Downloads 273
272 Metacognitive Processing in Early Readers: The Role of Metacognition in Monitoring Linguistic and Non-Linguistic Performance and Regulating Students' Learning

Authors: Ioanna Taouki, Marie Lallier, David Soto

Abstract:

Metacognition refers to the capacity to reflect upon our own cognitive processes. Although there is an ongoing discussion in the literature on the role of metacognition in learning and academic achievement, little is known about its neurodevelopmental trajectories in early childhood, when children begin to receive formal education in reading. Here, we evaluate the metacognitive ability, estimated under a recently developed Signal Detection Theory model, of a cohort of children aged between 6 and 7 (N=60), who performed three two-alternative-forced-choice tasks (two linguistic: a lexical decision task and a visual attention span task; one non-linguistic: an emotion recognition task) including trial-by-trial confidence judgements. Our study has three aims. First, we investigated how metacognitive ability (i.e., how well confidence ratings track accuracy in the task) relates to performance in general standardized tasks related to students' reading and general cognitive abilities, using Spearman's and Bayesian correlation analysis. Second, we assessed whether young children recruit common mechanisms supporting metacognition across the different task domains or whether there is evidence for domain-specific metacognition at this early stage of development. This was done by examining correlations in metacognitive measures across different task domains and evaluating cross-task covariance by applying a hierarchical Bayesian model. Third, using robust linear regression and Bayesian regression models, we assessed whether metacognitive ability at this early stage is related to the longitudinal learning of children in a linguistic and a non-linguistic task. Notably, we did not observe any association between students’ reading skills and metacognitive processing in this early stage of reading acquisition. Some evidence consistent with domain-general metacognition was found, with significant positive correlations between metacognitive efficiency in the lexical and emotion recognition tasks and substantial covariance indicated by the Bayesian model. However, no reliable correlations were found between metacognitive performance in the visual attention span task and the remaining tasks. Remarkably, metacognitive ability significantly predicted children's learning in linguistic and non-linguistic domains a year later. These results suggest that metacognitive skill may be dissociated to some extent from general (i.e., language and attention) abilities and further stress the importance of creating educational programs that foster students’ metacognitive ability as a tool for long term learning. More research is crucial to understand whether these programs can enhance metacognitive ability as a transferable skill across distinct domains or whether unique domains should be targeted separately.
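
The hierarchical SDT model itself is not reproduced here; as a simpler illustration of "how well confidence ratings track accuracy," the sketch below computes the type-2 ROC area, a common summary of metacognitive sensitivity, on simulated trial-by-trial data.

```python
# Type-2 AUROC: how well confidence discriminates correct from incorrect
# trials. 0.5 = no metacognitive insight, 1.0 = perfect. Data simulated.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
accuracy = rng.integers(0, 2, 200)              # 1 = correct response
confidence = accuracy * 0.2 + rng.random(200)   # confidence loosely tracks accuracy

auroc2 = roc_auc_score(accuracy, confidence)
print(f"type-2 AUROC: {auroc2:.2f}")
```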

Keywords: confidence ratings, development, metacognitive efficiency, reading acquisition

Procedia PDF Downloads 122
271 Honneth, Feenberg, and the Redemption of Critical Theory of Technology

Authors: David Schafer

Abstract:

Critical Theory is in sore need of a workable account of technology. It had one in the writings of Herbert Marcuse, or so it seemed until Jürgen Habermas mounted a critique in 'Technology and Science as Ideology' (Habermas, 1970) that decisively put it away. Ever since, Marcuse’s work has been regarded as outdated – a 'philosophy of consciousness' no longer seriously tenable. But with Marcuse’s view has gone the important insight that technology is no norm-free system (as Habermas portrays it) but can be laden with social bias. Andrew Feenberg is among a few serious scholars who have perceived this problem in post-Habermasian critical theory and has sought to revive a basically Marcusean account of technology. On his view, while the so-called ‘technical elements’ that physically make up technologies are neutral with regard to social interests, there is a sense in which we may speak of a normative grammar or ‘technical code’ built into technology that can be socially biased in favor of certain groups over others (Feenberg, 2002). According to Feenberg, perspectives on technology are reified when they consider technology only in terms of its technical elements, to the neglect of its technical codes. Nevertheless, Feenberg’s account fails to explain what is normatively problematic with such reified views of technology. His plausible claim that they represent false perspectives on technology does not by itself explain how such views may be oppressive, even though Feenberg surely wants to be doing that stronger level of normative theorizing. Perceiving this deficit in his own account of reification, he tries to adopt Habermas’s version of systems-theory to ground his own critical theory of technology (Feenberg, 1999). But this is a curious move in light of Feenberg’s own legitimate critiques of Habermas’s portrayals of technology as reified or ‘norm-free.’ This paper argues that a better foundation may be found in Axel Honneth’s recent text, Freedom’s Right (Honneth, 2014). Though Honneth there says little explicitly about technology, he offers an implicit account of reification formulated in opposition to Habermas’s systems-theoretic approach. On this ‘normative functionalist’ account of reification, social spheres are reified when participants prioritize individualist ideals of freedom (moral and legal freedom) to the neglect of an intersubjective form of freedom-through-recognition that Honneth calls ‘social freedom.’ Such misprioritization is ultimately problematic because it is unsustainable: individual freedom is philosophically and institutionally dependent upon social freedom. The main difficulty in adopting Honneth’s social theory for the purposes of a theory of technology, however, is that the notion of social freedom is predicable only of social institutions, whereas it appears difficult to conceive of technology as an institution. Nevertheless, in light of Feenberg’s work, the idea that technology includes within itself a normative grammar (technical code) takes on much plausibility. To the extent that this normative grammar may be understood through the category of social freedom, Honneth’s dialectical account of the relationship between individual and social forms of freedom provides a more solid basis from which to ground the normative claims of Feenberg’s sociological account of technology than Habermas’s systems theory.

Keywords: Habermas, Honneth, technology, Feenberg

Procedia PDF Downloads 168
270 Development of an Automatic Computational Machine Learning Pipeline to Process Confocal Fluorescence Images for Virtual Cell Generation

Authors: Miguel Contreras, David Long, Will Bachman

Abstract:

Background: Microscopy plays a central role in cell and developmental biology. In particular, fluorescence microscopy can be used to visualize specific cellular components and subsequently quantify their morphology through the development of virtual-cell models for studying the effects of mechanical forces on cells. However, there are challenges with these imaging experiments, which can make it difficult to quantify cell morphology: inconsistent results, time-consuming and potentially costly protocols, and limitations on the number of labels due to spectral overlap. To address these challenges, the objective of this project is to develop an automatic computational machine learning pipeline to predict cellular component morphology for virtual-cell generation based on fluorescence cell membrane confocal z-stacks. Methods: Registered confocal z-stacks of nuclei and cell membranes of endothelial cells, consisting of 20 images each, were obtained from fluorescence confocal microscopy and normalized through a software pipeline so that each image has a mean pixel intensity value of 0.5. An open source machine learning algorithm, originally developed to predict fluorescence labels on unlabeled transmitted light microscopy cell images, was trained using this set of normalized z-stacks on a single CPU machine. Through transfer learning, the algorithm used knowledge acquired from its previous training sessions to learn the new task. Once trained, the algorithm was used to predict the morphology of nuclei using normalized cell membrane fluorescence images as input. Predictions were compared to the ground truth fluorescence nuclei images. Results: After one week of training, using one cell membrane z-stack (20 images) and the corresponding nuclei labels, results showed qualitatively good predictions on the training set. The algorithm was able to accurately predict nuclei locations as well as shapes when fed only fluorescence membrane images. Similar training sessions with improved membrane image quality, with clear outlines and shapes of the membranes showing the boundaries of each cell, proportionally improved nuclei predictions, reducing errors relative to ground truth. Discussion: These results show the potential of pre-trained machine learning algorithms to predict cell morphology using relatively small amounts of data and training time, eliminating the need for multiple labels in immunofluorescence experiments. With further training, the algorithm is expected to predict different labels (e.g., focal-adhesion sites, cytoskeleton), which can be added to the automatic machine learning pipeline for direct input into Principal Component Analysis (PCA) for the generation of virtual-cell mechanical models.
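
A minimal sketch of the normalization step described in the Methods: rescaling each slice of a registered z-stack to a mean pixel intensity of 0.5. The stack here is random placeholder data; file handling and the training step are omitted.

```python
# Normalize each z-slice so its mean pixel intensity is 0.5 before training.
import numpy as np

def normalize_stack(stack):
    """Rescale each z-slice to a mean intensity of 0.5."""
    out = []
    for img in stack:
        img = img.astype(np.float64)
        mean = img.mean()
        out.append(img * (0.5 / mean) if mean > 0 else img + 0.5)
    return np.stack(out)

z_stack = np.random.default_rng(3).random((20, 512, 512))  # hypothetical stack
normalized = normalize_stack(z_stack)
assert np.allclose(normalized.mean(axis=(1, 2)), 0.5)
```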

Keywords: cell morphology prediction, computational machine learning, fluorescence microscopy, virtual-cell models

Procedia PDF Downloads 179
269 The Confluence between Autism Spectrum Disorder and the Schizoid Personality

Authors: Murray David Schane

Abstract:

Through years of clinical encounters with patients with autism spectrum disorders and those with a schizoid personality, the many defining diagnostic features shared between these conditions have been explored, current neurobiological differences have been reviewed, and critical, distinct treatment strategies for each have been devised. The paper compares and contrasts the two conditions: the apparent similarities between autism spectrum disorders and the schizoid personality are found in these DSM descriptive categories: restricted range of social-emotional reciprocity; poor non-verbal communicative behavior in social interactions; difficulty developing and maintaining relationships; detachment from social relationships; lack of the desire for or enjoyment of close relationships; and preference for solitary activities. In this paper, autism, fundamentally a communicative disorder, is revealed to present clinically as a pervasive aversive response to efforts to engage with or be engaged by others. Autists with the Asperger presentation typically have language but have difficulty understanding humor, irony, sarcasm, metaphoric speech, and even narratives about social relationships. They also tend to seek sameness, possibly to avoid problems of social interpretation. Repetitive behaviors engage many autists as a screen against ambient noise, social activity, and challenging interactions. Also in this paper, the schizoid personality is revealed as a pattern of social avoidance, self-sufficiency, and apparent indifference to others as a complex psychological defense against a deep, long-abiding fear of appropriation and perverse manipulation. Neither genetic nor MRI studies have yet located the explanatory data that identify the cause or the neurobiology of autism. Similarly, studies of the schizoid have yet to group that condition with those found in schizophrenia. Through presentations of clinical examples, the treatment of autists of the Asperger type is shown to address the autist’s extreme social aversion, which also precludes the experience of empathy. Autists will be revealed as forming social attachments but without the capacity to interact with mutual concern. Empathy will be shown to be teachable; as social avoidance relents, autists can learn to recognize and acknowledge the meaning and signs of empathic needs. Treatment of schizoids will be shown to revolve around joining empathically with the schizoid’s apprehensions about interpersonal, interactive proximity. Models of both autism and schizoid personality traits have yet to be replicated in animals, thereby eliminating the role of translational research in providing the kind of clues to behavioral patterns that can be related to genetic, epigenetic and neurobiological measures. But as these clinical examples will attest, treatment strategies have significant impact.

Keywords: autism spectrum, schizoid personality traits, neurobiological implications, critical diagnostic distinctions

Procedia PDF Downloads 91
268 Assessment of Surface Water Quality near Landfill Sites Using a Water Pollution Index

Authors: Alejandro Cittadino, David Allende

Abstract:

Landfilling of municipal solid waste is a common waste management practice in Argentina, as in many parts of the world. There is extensive scientific literature on the potential negative effects of landfill leachates on the environment, so it is necessary to be rigorous with the control and monitoring systems. Due to the specific municipal solid waste composition in Argentina, local landfill leachates contain large amounts of organic matter (biodegradable, but also refractory to biodegradation), as well as ammonia-nitrogen, small traces of some heavy metals, and inorganic salts. In order to investigate the surface water quality in the Reconquista river adjacent to the Norte III landfill, water samples both upstream and downstream of the dumpsite are collected quarterly and analyzed for 43 parameters including organic matter, heavy metals, and inorganic salts, as required by the local standards. The objective of this study is to apply a water quality index that considers the leachate characteristics in order to determine the quality status of the watercourse along the landfill. The water pollution index method has been widely used in water quality assessments, particularly of rivers, and it has played an increasingly important role in water resource management, since it provides a number simple enough for the public to understand that states the overall water quality at a certain location and time. The chosen water quality index (ICA) is based on the values of six parameters: dissolved oxygen (in mg/l and percent saturation), temperature, biochemical oxygen demand (BOD5), ammonia-nitrogen and chloride (Cl-) concentration. The ICA index was determined both upstream and downstream in the Reconquista river, with a rating scale between 0 (very poor water quality) and 10 (excellent water quality). The monitoring results indicated that the water quality was unaffected by possible leachate runoff, since the index scores upstream and downstream were ranked in the same category, although in general, most of the samples were classified as having poor water quality according to the index’s scale. The annual averaged ICA index scores (computed quarterly) were 4.9, 3.9, 4.4 and 5.0 upstream and 3.9, 5.0, 5.1 and 5.0 downstream in the river during the study period between 2014 and 2017. Additionally, the water quality seemed to exhibit distinct seasonal variations, probably due to annual precipitation patterns in the study area. The ICA water quality index appears to be appropriate for evaluating landfill impacts, since it accounts mainly for organic pollution and inorganic salts, consistent with the absence of heavy metals in the local leachate composition; however, the inclusion of other parameters could be more decisive in discerning the stream reaches affected by landfill activities. Future work may consider adding other parameters to the index, such as total organic carbon (TOC) and total suspended solids (TSS), since they are present in the leachate in high concentrations.
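
The ICA's exact rating curves and aggregation are not given in the abstract; the sketch below only illustrates the general shape of such an index, mapping each of the six parameters to a 0-10 subindex and combining them with weights. All curves, limits, and weights are hypothetical placeholders, not the ICA's.

```python
# Illustrative 0-10 water quality index over the six parameters listed above.
import numpy as np

def subindex(value, best, worst):
    """Linear 0-10 rating: 10 at `best`, 0 at `worst` (hypothetical curve)."""
    score = 10.0 * (value - worst) / (best - worst)
    return float(np.clip(score, 0.0, 10.0))

def index_score(do_mg_l, do_sat_pct, temp_c, bod5, nh3_n, chloride, weights=None):
    subs = [subindex(do_mg_l, 9.0, 0.0),       # dissolved oxygen, mg/l
            subindex(do_sat_pct, 100.0, 0.0),  # dissolved oxygen, % saturation
            subindex(temp_c, 15.0, 35.0),      # temperature
            subindex(bod5, 0.0, 30.0),         # biochemical oxygen demand
            subindex(nh3_n, 0.0, 5.0),         # ammonia-nitrogen
            subindex(chloride, 0.0, 600.0)]    # chloride
    w = weights or [1 / 6] * 6                 # assumed equal weights
    return sum(wi * si for wi, si in zip(w, subs))

print(f"index = {index_score(4.1, 52.0, 22.0, 12.0, 1.8, 210.0):.1f}")
```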

Keywords: landfill, leachate, surface water, water quality index

Procedia PDF Downloads 125
267 The Impact of Animal-Assisted Learning on Emotional Wellbeing and Engagement with Reading

Authors: Jill Steel

Abstract:

Introduction: Animal-assisted learning (AAL) interventions are increasing exponentially, yet a paucity of quality research in the field exists. The aim of this study was to evaluate how the promotion of emotional wellbeing, through AAL, in this case, a dog, may support children’s engagement with reading in a Primary 1 classroom. Research indicates that dogs can provide emotional support to children; by forming a trusting attachment with a non-critical ‘friend’ who confers unconditional positive regard on the child, confidence may be boosted and anxiety reduced. By promoting emotional wellbeing through interactions with the dog, it is hoped that children begin to associate reading with feelings of wellbeing, which then results in increased engagement with reading. Methodology: A review of the literature was conducted. The relationship between emotional wellbeing and learning was explored, followed by an examination of the literature relating to Animal-Assisted Therapy and AAL. Scottish educational policy and legislation were analysed to establish the extent to which AAL might be suitable for the Scottish pedagogical context. An empirical study was conducted in a mainstream Primary 1 classroom over a four-week period. An inclusive approach was adopted whereby all children that wanted to interact with the dog were given the opportunity to do so, and all 25 children subsequently chose to participate. Children were not withdrawn from the classroom. Primary methods included interviews, observations, and questionnaires. Three focus children were selected for closer study. Main Results: Results were remarkably close to previous research and literature. Children’s emotional wellbeing was boosted, and engagement in reading improved. Principal Conclusions and Implications for Field: It was concluded that AAL could support emotional wellbeing and, in turn, promote children’s engagement with reading. The main limitation of the study was its short-term nature, and a longer randomised controlled trial with a larger sample, currently being undertaken by the author, would provide a fuller answer to the research question. Barriers to AAL include health and safety concerns and steps to ensure the welfare of the dog.

Keywords: animal-assisted learning, emotional wellbeing, reading, reading to dogs

Procedia PDF Downloads 107
266 Energy Intensity: A Case of Indian Manufacturing Industries

Authors: Archana Soni, Arvind Mittal, Manmohan Kapshe

Abstract:

Energy has been recognized as one of the key inputs for the economic growth and social development of a country. High economic growth naturally means a high level of energy consumption. However, in the present energy scenario, where there is a wide gap between energy generation and energy consumption, it is extremely difficult to match demand with supply. As India is one of the largest and most rapidly growing developing countries, there is an impending energy crisis that requires immediate measures to be adopted. In this situation, the concept of Energy Intensity comes under special focus to ensure energy security in an environmentally sustainable way. Energy Intensity is defined as the energy consumed per unit output in the context of industrial energy practices. It is a key determinant of the projections of future energy demands, which assists in policy making. Energy Intensity is inversely related to energy efficiency; the less energy required to produce a unit of output or service, the greater the energy efficiency. The Energy Intensity of Indian manufacturing industries is among the highest in the world and reflects enormous energy consumption. Hence, reducing the Energy Intensity of Indian manufacturing industries is one of the best strategies to achieve a low level of energy consumption and conserve energy. This study attempts to analyse the factors which influence the Energy Intensity of Indian manufacturing firms and how they can be used to reduce the Energy Intensity. The paper considers six of the largest energy-consuming manufacturing industries in India, viz. the Aluminium, Cement, Iron & Steel, Textile, Fertilizer and Paper industries, and conducts a detailed Energy Intensity analysis using data from the PROWESS database of the Centre for Monitoring Indian Economy (CMIE). A total of twelve independent explanatory variables based on various factors such as raw material, labour, machinery, repair and maintenance, production technology, outsourcing, research and development, number of employees, wages paid, profit margin and capital invested have been taken into consideration for the analysis.
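
In symbols, the definition and the inverse relation stated above are simply (a restatement, not a formula from the paper):

```latex
\text{EI} \;=\; \frac{E}{Y},
\qquad
\eta \;=\; \frac{Y}{E} \;=\; \frac{1}{\text{EI}},
```

where E is the energy consumed, Y the output produced, and η the energy efficiency; any measure that lowers E for the same Y lowers EI and raises η.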

Keywords: energy intensity, explanatory variables, manufacturing industries, PROWESS database

Procedia PDF Downloads 311
265 Controlled Growth of Au Hierarchically Ordered Crystals Architectures for Electrochemical Detection of Traces of Molecules

Authors: P. Bauer, K. Mougin, V. Vignal, A. Buch, P. Ponthiaux, D. Faye

Abstract:

Nowadays, noble metallic nanostructures with unique morphologies are widely used as new sensors due to their fascinating optical, electronic and catalytic properties. Among various shapes, dendritic nanostructures have attracted much attention because of their large surface-to-volume ratio, high sensitivity and special texture with sharp tips and nanoscale junctions. Several methods have been developed to fabricate those specific structures, such as electrodeposition, photochemical routes, seed-mediated growth or wet chemical methods. The present study deals with a novel approach for the controlled, pattern-directed growth and organisation of Au flower-like crystals (NFs) deposited onto stainless steel plates to achieve large-scale functional surfaces. This technique consists in the deposition of a soft nanoporous template on which Au NFs are grown by electroplating and a seed-mediated method. Size, morphology, and interstructure distance have been controlled by a site-selective nucleation process. Dendritic Au nanostructures have appeared as excellent Raman-active candidates due to the presence of the very sharp tips of multi-branched Au nanoparticles, which lead to a large local field enhancement and good SERS sensitivity. In addition, these structures have also been used as electrochemical sensors to detect traces of molecules present in a solution. Correlating the number of active sites on the surface with the current charge, by both a colorimetric method and cyclic voltammetry of the gold structures, has allowed a calibration of the system. This device represents a first step toward the fabrication of a MEMS platform that could ultimately be integrated into a lab-on-chip system. It also opens pathways to the large-scale fabrication of several technological nanomaterials, such as hierarchically ordered crystal architectures for sensor applications.

Keywords: dendritic, electroplating, gold, template

Procedia PDF Downloads 165
264 Development of Interaction Diagram for Eccentrically Loaded Reinforced Concrete Sandwich Walls with Different Design Parameters

Authors: May Haggag, Ezzat Fahmy, Mohamed Abdel-Mooty, Sherif Safar

Abstract:

Sandwich sections have a very complex nature due to the variable behavior of the different materials within the section. The cracking, crushing and yielding capacities of the constituent materials add to the complexity of the section. Furthermore, slippage between the different layers adds to the section’s complex behavior. Conventional methods implemented in current industrial guidelines do not account for the above complexities. Thus, a thorough study is needed to understand the true behavior of sandwich panels and thus increase the ability to use them effectively and efficiently. The purpose of this paper is to conduct a numerical investigation, using the ANSYS software, of the structural behavior of a sandwich wall section under eccentric loading. The sandwich walls studied herein are composed of two RC faces, a foam core and linking shear connectors. The faces are modeled using solid elements, and the reinforcement together with the connectors is modeled using link elements. The analysis conducted herein is a nonlinear static analysis incorporating material nonlinearity, cracking and crushing of concrete, and yielding of steel. The model is validated by comparing it to test results in the literature. After validation, the model is used to establish an extensive parametric analysis to investigate the effect of three key parameters on the axial force-bending moment interaction diagram of the walls. These parameters are the concrete compressive strength, face thickness and number of shear connectors. Furthermore, the results of the parametric study are used to predict a coefficient, α, that links the interaction diagram of a solid wall to that of a sandwich wall. The equation for α is obtained from the parametric study data using regression analysis. The predicted α was used to construct the interaction diagram of the investigated wall, and the results were compared with the ANSYS results and showed good agreement.

Keywords: sandwich walls, interaction diagrams, numerical modeling, eccentricity, reinforced concrete

Procedia PDF Downloads 384
263 Techniques for Seismic Strengthening of Historical Monuments from Diagnosis to Implementation

Authors: Mircan Kaya

Abstract:

A multi-disciplinary approach is required in any intervention project for historical monuments. Heritage structures are peculiar owing to the complexity of their geometry and the variable, unpredictable characteristics of the original materials used in their creation. Their histories are often complex, and correct diagnoses are required to decide on the techniques of intervention. This approach should combine not only technical aspects but also historical research, which may help discover phenomena involving structural issues and build knowledge of the structure's concept, method of construction, previous interventions, damage process, and current state. In addition to traditional techniques like bed joint reinforcement, the repair, strengthening, and restoration of historical buildings may require several modern, innovative techniques, such as pre-stressing and post-tensioning, shape memory alloy devices and shock transmission units, shoring, drilling, and the use of stainless steel or titanium. Regardless of whether the method incorporated in the strengthening process is traditional or innovative, it is crucial to recognize that structural strengthening is the process of upgrading the structural system of an existing building to improve its performance under existing and additional loads, such as seismic loads. This process is much more complex than dealing with new construction because several unknown factors are associated with the existing structural system; material properties, load paths, previous interventions, and existing reinforcement are especially important considerations. There are several examples of seismic strengthening with traditional and innovative techniques around the world, which are discussed in detail in this paper, including their pros and cons. Ultimately, however, the most appropriate strengthening techniques for a historic monument should be chosen through a proper assessment of the specific needs of the building.

Keywords: bed joint reinforcement, historical monuments, post-tensioning, pre-stressing, seismic strengthening, shape memory alloy devices, shock transmitters, tie rods

Procedia PDF Downloads 236
262 Characterization of Thin Woven Composites Used in Printed Circuit Boards by Combining Numerical and Experimental Approaches

Authors: Gautier Girard, Marion Martiny, Sebastien Mercier, Mohamad Jrad, Mohamed-Slim Bahi, Laurent Bodin, Francois Lechleiter, David Nevo, Sophie Dareys

Abstract:

Reliability of electronic devices has always been of the highest interest for Aero-MIL and space applications. In any electronic device, the Printed Circuit Board (PCB), which provides the interconnection between components, is key to reliability. During the last decades, PCB technologies have evolved to fulfill the increasing requirements and specifications of original equipment manufacturers (OEMs): higher densities and better performance, faster time to market and longer lifetimes, and newer materials and mixed build-ups. From the very beginning of the PCB industry until recently, qualification, experiments, and trial and error were the most popular methods for assessing system (PCB) reliability. Nowadays, OEMs, PCB manufacturers, and scientists work closely together to develop predictive models for PCB reliability and lifetime. To achieve that goal, it is fundamental to characterize the base materials precisely (laminates, electrolytic copper, …) in order to understand failure mechanisms and simulate PCB aging under environmental constraints, by means of the finite element method for example. The laminates are woven composites and thus exhibit orthotropic behaviour. The in-plane properties can be measured by combining classical uniaxial testing and digital image correlation. Nevertheless, the out-of-plane properties cannot be evaluated directly because of the thickness of the laminate (a few hundred microns). It has to be noted that knowledge of the out-of-plane properties is fundamental to investigating the lifetime of high-density printed circuit boards. A homogenization method combining analytical and numerical approaches has therefore been developed to obtain the complete elastic orthotropic behaviour of a woven composite from its precise 3D internal structure and its experimentally measured in-plane elastic properties. Since the mechanical properties of the resin surrounding the fibres are unknown, an inverse method is proposed to estimate them. The methodology has been applied to a laminate used in microwave space applications in order to obtain its elastic orthotropic behaviour at different temperatures in the range [-55°C; +125°C]. Next, numerical simulations of a plated through hole in a double-sided PCB were performed. The results show the major influence of the out-of-plane properties, and of their temperature dependence, on the lifetime of a printed circuit board. Acknowledgements—The support of the French ANR agency through the Labcom program ANR-14-LAB7-0003-01, and of CNES, Thales Alenia Space, and Cimulec, is acknowledged.
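
To illustrate the inverse-identification idea, the sketch below searches for the resin modulus that makes a homogenized modulus match a measured laminate value. The inverse (series) rule of mixtures used as the forward model is only a stand-in for the authors' analytical/numerical homogenization, and all numbers are hypothetical, not taken from the study.

    # Sketch of the inverse method: find the resin modulus that makes a
    # homogenized modulus match the measured laminate value. The Reuss rule
    # of mixtures below is a placeholder for the authors' homogenization.
    from scipy.optimize import brentq

    E_FIBRE = 70.0e9      # Pa, typical E-glass fibre modulus (assumed)
    V_FIBRE = 0.45        # fibre volume fraction (assumed)
    E_MEASURED = 8.0e9    # Pa, measured laminate modulus (hypothetical)

    def homogenize(e_resin: float) -> float:
        """Placeholder homogenization: inverse (series) rule of mixtures."""
        return 1.0 / (V_FIBRE / E_FIBRE + (1.0 - V_FIBRE) / e_resin)

    # Solve homogenize(E_resin) = E_MEASURED for the unknown resin modulus
    e_resin = brentq(lambda e: homogenize(e) - E_MEASURED, 1.0e8, 2.0e10)
    print(f"Identified resin modulus: {e_resin / 1e9:.2f} GPa")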

Keywords: homogenization, orthotropic behaviour, printed circuit board, woven composites

Procedia PDF Downloads 174
261 Best Combination of Design Parameters for Buildings with Buckling-Restrained Braces

Authors: Ángel de J. López-Pérez, Sonia E. Ruiz, Vanessa A. Segovia

Abstract:

The vulnerability of buildings to seismic activity has been studied intensively since the middle of the last century. As a solution to the structural and non-structural damage caused by intense ground motions, several seismic energy dissipating devices, such as buckling-restrained braces (BRB), have been proposed. BRBs have been shown to be effective in concentrating a large portion of the energy transmitted to the structure by the seismic ground motion. A design approach for buildings with BRB elements, based on a seismic displacement-based formulation, has recently been proposed by the coauthors of this paper. It is a practical and easy design method that simplifies the work of structural engineers, and it is used here for the design of the structure-BRB damper system. The objective of the present study is to extend and apply a methodology to find the best combination of design parameters for multiple-degree-of-freedom (MDOF) structural frame-BRB systems, taking into account simultaneously: 1) initial costs and 2) an adequate engineering demand parameter. The design parameters considered here are the stiffness ratio (α = Kframe/Ktotal) and the strength ratio (γ = Vdamper/Vtotal), where K represents structural stiffness, V represents structural strength, and the subscripts "frame", "damper" and "total" denote the structure without dampers, the BRB dampers, and the total frame-damper system, respectively. The selection of the best combination of design parameters α and γ is based on an initial-cost analysis and on the structural dynamic response of the frame-damper system. The methodology is applied to a 12-story, 5-bay steel building with BRBs located on the intermediate soil of Mexico City, and the best combination of design parameters α and γ is found for this building.
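
A minimal sketch of such a parameter scan is given below: candidate (α, γ) pairs are scored by the sum of a normalized initial cost and an engineering demand parameter, and the lowest-scoring pair is retained. The cost and demand functions are toy placeholders standing in for the study's cost analysis and dynamic-response results, not the authors' models.

    # Sketch of the design-parameter scan over (alpha, gamma) combinations.
    # Cost and demand functions below are hypothetical placeholders.
    import itertools

    def initial_cost(alpha: float, gamma: float) -> float:
        # Hypothetical: cost grows with frame stiffness and damper strength shares
        return 0.6 * alpha + 0.4 * gamma

    def demand(alpha: float, gamma: float) -> float:
        # Hypothetical: demand drops as damper stiffness (1 - alpha) and gamma grow
        return 1.0 / (0.2 + 0.8 * (1.0 - alpha) + 0.6 * gamma)

    grid = [round(0.1 * i, 1) for i in range(2, 9)]  # alpha, gamma in [0.2, 0.8]
    best = min(itertools.product(grid, grid),
               key=lambda p: initial_cost(*p) + demand(*p))
    print("best (alpha, gamma) =", best)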

Keywords: best combination of design parameters, BRB, buildings with energy dissipating devices, buckling-restrained braces, initial costs

Procedia PDF Downloads 236
260 Investigating the Effect of Using Amorphous Silica Ash Obtained from Rice Husk as a Partial Replacement of Ordinary Portland Cement on the Mechanical and Microstructure Properties of Cement Paste and Mortar

Authors: Aliyu Usman, Muhaammed Bello Ibrahim, Yusuf D. Amartey, Jibrin M. Kaura

Abstract:

This research investigates the effect of using amorphous silica ash (ASA), obtained from rice husk, as a partial replacement of ordinary Portland cement (OPC) on the mechanical and microstructural properties of cement paste and mortar. ASA replaced ordinary Portland cement at 3, 5, 8, and 10 percent, and these partial replacements were used to produce Cement-ASA paste and Cement-ASA mortar. ASA was found to contain all the major oxides found in cement except alumina: SiO2 (91.5%), CaO (2.84%), and Fe2O3 (1.96%), with a loss on ignition (LOI) of 9.18%; it also contains the other minor oxides found in cement. The consistency of the Cement-ASA paste was found to increase with increasing ASA replacement, and the setting time and soundness of the paste likewise increase with ASA replacement. The tests on hardened mortar were destructive: flexural strength tests on prismatic beams (40 mm x 40 mm x 160 mm) and compressive strength tests on cubes (40 mm x 40 mm, using the auxiliary steel platens), each at 2, 7, 14, and 28 days of curing. The flexural and compressive strengths of the Cement-ASA mortar were found to increase with curing time and decrease with the level of cement replacement by ASA. Replacement of 5 percent of the cement with ASA attained the highest strength at all curing ages, and all the percentage replacements attained the targeted compressive strength of 6 N/mm2 at 28 days. The drying shrinkage of the Cement-ASA mortar increases with curing time; at all curing ages it was greater than that of the control specimen, and all values exceeded the code recommendation of less than 0.03%. Scanning electron microscopy (SEM) was used to study the microstructure of the Cement-ASA mortar and to examine the hydration products and morphology.

Keywords: amorphous silica ash, cement mortar, cement paste, scanning electron microscope

Procedia PDF Downloads 407
259 Finite Element Analysis of the Anaconda Device: Efficiently Predicting the Location and Shape of a Deployed Stent

Authors: Faidon Kyriakou, William Dempster, David Nash

Abstract:

Abdominal Aortic Aneurysm (AAA) is a major life-threatening pathology for which modern approaches reduce the need for open surgery through the use of stenting. However, the success of stenting is sometimes jeopardized by the final position of the stent graft inside the human artery, which may result in migration, endoleaks, or blood flow occlusion. Herein, a finite element (FE) model of the commercial medical device AnacondaTM (Vascutek, Terumo) has been developed and validated in order to create a numerical tool able to provide useful clinical insight before the surgical procedure takes place. The AnacondaTM device consists of a series of NiTi rings sewn onto woven polyester fabric, a structure that, despite its column stiffness, is flexible enough to be used in very tortuous geometries. For the purposes of this study, an FE model of the device was built in Abaqus® (version 6.13-2) using a combination of beam, shell, and surface elements; these building blocks were chosen to keep the computational cost to a minimum. The numerical model was validated by comparing the deployed position of a full stent graft device inside a constructed AAA with a duplicate set-up in Abaqus®. Specifically, an AAA geometry containing regions of both high and low tortuosity was built in CAD software and 3D printed into a transparent aneurysm, and a stent was deployed in the lab following the steps of the clinical procedure. Images of the frontal and sagittal planes of the experiment allowed comparison with the results of the numerical model. By overlapping the experimental and computational images, the mean and maximum distances between the rings of the two models were measured in the longitudinal and transverse directions, with a 5 mm upper bound set as the limit commonly used by clinicians when working with simulations. The two models showed very good agreement in their spatial positioning, especially in the less tortuous regions. As a result, and despite the inherent uncertainties of a surgical procedure, the FE model gives confidence that the final position of the stent graft, when deployed in vivo, can be predicted with significant accuracy. Moreover, the numerical model runs in just a few hours, an encouraging result for applications in the clinical routine. In conclusion, the efficient modelling of a complicated structure combining thin scaffolding and fabric has been demonstrated to be feasible, and the ability to predict the location of each stent ring, as well as the global shape of the graft, has been shown. This can allow surgeons to plan their procedures better and medical device manufacturers to optimize their designs. The current model can further be used as a starting point for patient-specific CFD analysis.
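
The validation metric described above can be illustrated with a short sketch: given matched ring positions from the experiment and the simulation, compute the mean and maximum distances in the longitudinal and transverse directions and check them against the 5 mm bound. The coordinates below are hypothetical and assumed already registered in the same frame; they are not data from the study.

    # Sketch of the validation metric: per-ring distances between the
    # experimental and simulated deployments, checked against the 5 mm
    # bound cited in the abstract. Coordinates (mm) are hypothetical
    # (longitudinal, transverse) pairs, one row per stent ring.
    import numpy as np

    experiment = np.array([[0.0, 0.0], [15.2, 1.1], [30.4, 2.6], [45.9, 3.0]])
    simulation = np.array([[0.3, 0.2], [15.8, 0.7], [31.5, 2.1], [46.6, 3.8]])

    diff = np.abs(simulation - experiment)  # per-ring |d_long|, |d_trans|
    mean_long, mean_trans = diff.mean(axis=0)
    max_long, max_trans = diff.max(axis=0)

    print(f"mean: long {mean_long:.2f} mm, trans {mean_trans:.2f} mm")
    print(f"max:  long {max_long:.2f} mm, trans {max_trans:.2f} mm")
    print("within 5 mm bound:", bool((diff <= 5.0).all()))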

Keywords: AAA, efficiency, finite element analysis, stent deployment

Procedia PDF Downloads 170
258 Producing Carbon Nanoparticles from Agricultural and Municipal Wastes

Authors: Kanik Sharma

Abstract:

In 2011, global production of carbon nano-materials (CNMs) was around 3,500 tons, and it is projected to expand at a compound annual growth rate of 30.6%. Expanding markets for applications of CNMs, such as carbon nano-tubes (CNTs) and carbon nano-fibers (CNFs), place ever-increasing demands on lowering their production costs. Current technologies for CNM generation consume premium feedstocks intensively, employ costly catalysts, and require input of external energy. Industrial-scale CNM production is conventionally achieved through chemical vapor deposition (CVD) methods, which consume a variety of expensive premium chemical feedstocks such as ethylene, carbon monoxide (CO), and hydrogen (H2), or by flame synthesis techniques, which also consume premium feedstock fuels; additionally, CVD methods are energy-intensive. Renewable and replenishable feedstocks, such as those found in municipal, industrial, and agricultural recycling streams, are a more judicious choice in light of current emerging needs for sustainability. When agricultural sugarcane bagasse and corn residues, scrap tire chips, and post-consumer polyethylene (PE) and polyethylene terephthalate (PET) bottle shreddings are thermally treated, either by pyrolysis alone or by sequential pyrolysis and partial oxidation, they form gaseous carbon-bearing effluents; when channeled into a heated reactor, these effluents produce CNMs, including carbon nano-tubes, catalytically synthesized therein on stainless steel meshes. The structure of the synthesized nano-material depends on the type of feedstock available for pyrolysis and can be determined by analysing the feedstock. Such feedstocks could supersede the costly and often toxic or highly flammable chemicals, such as hydrocarbon gases, carbon monoxide, and hydrogen, that are commonly used as feedstocks in current nano-manufacturing processes for CNMs.

Keywords: nanomaterials, waste plastics, sugarcane bagasse, pyrolysis

Procedia PDF Downloads 210
257 Ecological and Historical Components of the Cultural Code of the City of Florence as Part of the Edutainment Project Velonotte International

Authors: Natalia Zhabo, Sergey Nikitin, Marina Avdonina, Mariya Nikitina

Abstract:

This paper analyses one event of the international educational and entertainment project Velonotte: an evening bicycle tour with children around Florence. The aim of the project is to develop methods and techniques for increasing the sensitivity of the cycling participants and the listeners of the radio broadcasts to the treasures of the national heritage, in this case to the historical layers of the city and the ecology of the Renaissance epoch. The block of educational tasks is considered, and the issues of preserving the identity of the city are discussed. Methods: The Florentine event took more than a year to prepare. First, the creative team selected events from the city's history that seemed important for revealing the city's specific character and spirit, from antiquity to the present day, drawing also on Internet forums reflecting broad public opinion. A seven-kilometer route was then developed and proposed to the authorities and organizations of the city. Speakers were selected according to several criteria: they should be authors of books, well-known scholars, or connoisseurs of a particular field (toponymy, the history of urban gardens, art history), capable and willing to talk with participants directly at the stops, so that a dialogue could develop and performances could be organized with their participation. Music was chosen for each part of the itinerary to prepare the audience emotionally, coloring cards illustrating the main content of each stop were created for children, and a website was created to inform participants and to archive photos, videos, and audio recordings of the speakers' talks. Results: Held in April 2017, the event was dedicated to the 640th anniversary of the Florentine architect Filippo Brunelleschi and to the 190th anniversary of the publication of Stendhal's guide to Florence. It was supported by the City of Florence and the Florence Bike Festival. The ride explored traditional elements of the city's culture, some unfairly forgotten, from ancient times through Brunelleschi and Michelangelo to Tchaikovsky and David Bowie, with lectures by university professors, and memorable art boards were installed in public spaces. Elements of the cultural code are deeply internalized in the minds of townspeople; the perception of the city in everyday life and in human communication is comparable to such fundamental concepts of townspeople's self-awareness as mental comfort and the level of happiness. The format of a fun, playful ride with ICT support offers new opportunities to enrich each citizen's cultural code of the city with new components, associations, and connotations.

Keywords: edutainment, cultural code, cycling, sensitization, Florence

Procedia PDF Downloads 192
256 Performance of Reinforced Concrete Beams under Different Fire Durations

Authors: Arifuzzaman Nayeem, Tafannum Torsha, Tanvir Manzur, Shaurav Alam

Abstract:

Performance evaluation of reinforced concrete (RC) beams subjected to accidental fire is significant for post-fire capacity measurement. The mechanical properties of an RC beam degrade on heating, since the strength and modulus of concrete and reinforcement suffer considerable reduction at elevated temperatures. Moreover, fire-induced thermal dilation and shrinkage cause internal stresses within the concrete and eventually result in cracking, spalling, and loss of stiffness, which ultimately leads to a lower service life. However, conducting comprehensive full-scale experimental investigations of RC beams exposed to fire is difficult and cost-intensive, whereas finite element (FE) based numerical studies can provide an economical alternative for evaluating the post-fire capacity of RC beams. In this study, an attempt has been made to study the fire behavior of RC beams under different durations of fire using the FE software package ABAQUS. The concrete damaged plasticity model in ABAQUS was used to simulate the behavior of the RC beams, and the effect of temperature on the strength and modulus of concrete and steel was modeled following the relevant Eurocodes. Initially, the FE models were validated against several experimental results from the literature. The response of the developed FE models matched the experimental outcomes quite well for unheated beams. The FE analysis of beams subjected to fire showed some deviation from the experimental results, particularly in terms of stiffness degradation; however, the ultimate strength and deflection of the FE models were similar to the experimental values. The developed FE models thus showed good potential for predicting the fire behavior of RC beams. Once validated, the FE models were used in an extensive analysis of RC beams with different concrete strengths (ranging between 20 MPa and 50 MPa) exposed to the standard fire curve (ASTM E119) for different durations. The post-fire performance of the RC beams was investigated in terms of load-deflection behavior, flexural strength, and deflection characteristics.
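
As a sketch of the temperature-dependent material degradation such FE models rely on, the snippet below linearly interpolates the concrete compressive-strength reduction factor k_c(T). The table follows the values commonly cited from EN 1992-1-2 for siliceous aggregates; they should be verified against the code itself, and the example state is hypothetical.

    # Sketch: Eurocode-style reduction of concrete compressive strength with
    # temperature, via linear interpolation of k_c(T). Table values follow
    # those commonly cited from EN 1992-1-2 for siliceous aggregates.
    import numpy as np

    T_C = np.array([20, 100, 200, 300, 400, 500, 600,
                    700, 800, 900, 1000, 1100, 1200], dtype=float)
    K_C = np.array([1.00, 1.00, 0.95, 0.85, 0.75, 0.60, 0.45,
                    0.30, 0.15, 0.08, 0.04, 0.01, 0.00])

    def fc_at_temperature(fc_20: float, temp_c: float) -> float:
        """Compressive strength (MPa) at temperature temp_c (deg C)."""
        return fc_20 * float(np.interp(temp_c, T_C, K_C))

    # Example: a 30 MPa concrete heated to 550 deg C (hypothetical state)
    print(f"{fc_at_temperature(30.0, 550.0):.1f} MPa")  # ~15.8 MPa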

Keywords: fire durations, flexural strength, post-fire capacity, reinforced concrete beam, standard fire

Procedia PDF Downloads 118
255 Separating Landform from Noise in High-Resolution Digital Elevation Models through Scale-Adaptive Window-Based Regression

Authors: Anne M. Denton, Rahul Gomes, David W. Franzen

Abstract:

High-resolution elevation data are becoming increasingly available, but typical approaches for computing topographic features, like slope and curvature, still assume small sliding windows, for example of size 3x3. That means the digital elevation model (DEM) has to be resampled to the scale of the landform features of interest, and any higher resolution is lost in this resampling. When the topographic features are instead computed through regression performed at the resolution of the original data, the accuracy can be much higher, and the reported result can be adjusted to the length scale that is relevant locally. Slope and variance are calculated for overlapping windows by performing regression on the points in the window surrounding each raster point, meaning that one regression result is computed per raster point: the number of window centers per area is the same for the output as for the original DEM. Such an approach is computationally feasible because of the additive nature of regression parameters and variance: any doubling of the window size in each direction takes only a single pass over the data, corresponding to a logarithmic scaling of the resulting algorithm as a function of window size. Slope and variance are stored for each aggregation step, allowing the reported slope to be selected so as to minimize variance; the approach thereby adjusts the effective window size to the landform features characteristic of the area within the DEM. Starting with a window size of 2x2, each iteration aggregates 2x2 non-overlapping windows from the previous iteration. Regression results are stored for each iteration, and the slope at minimal variance is reported in the final result. As such, the reported slope is adjusted to the length scale characteristic of the landform locally. The length scale itself and the variance at that length scale are also visualized to aid in interpreting the slope results; the relevant length scale is taken to be half the window size at which the minimum variance was achieved. The resulting process was evaluated on 1-meter DEM data and on artificial data constructed to have defined length scales and added noise. A comparison with ESRI ArcMap showed the potential of the proposed algorithm: the resolution of the resulting output is much higher, the slope and aspect are much less affected by noise, and the algorithm adjusts to the scale of interest within each region of the image. These benefits are gained without additional computational cost compared with resampling the DEM and computing the slope over 3x3 windows in ESRI ArcMap for each resolution. In summary, the proposed approach extracts the slope and aspect of DEMs at the length scales that are characteristic locally, with results of higher resolution and less affected by noise than existing techniques.
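
A minimal sketch of the additive multi-scale regression follows: per-pixel regression moments are summed over 2x2 blocks at each iteration (one pass per doubling of window size), a plane is fitted from the aggregated moments, and every original raster point keeps the slope from the scale with minimum residual variance. For brevity this sketch uses non-overlapping windows and grid dimensions that are powers of two, whereas the paper's method evaluates overlapping windows, one result per raster point; the demo data are synthetic.

    # Sketch (simplifying assumptions noted above): scale-adaptive slope
    # from additively aggregated regression moments of z ~ a*x + b*y + c.
    import numpy as np

    def block2(a):
        """Sum 2x2 non-overlapping blocks of a 2D array."""
        h, w = a.shape
        return a.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

    def multiscale_slope(dem, cell=1.0, levels=5):
        h, w = dem.shape
        assert h % 2**levels == 0 and w % 2**levels == 0  # simplifying assumption
        y, x = np.mgrid[0:h, 0:w] * cell
        z = dem.astype(float)
        # additive moments of the plane regression, one set per pixel to start
        m = {'n': np.ones((h, w)), 'sx': x, 'sy': y, 'sz': z,
             'sxx': x * x, 'syy': y * y, 'sxy': x * y,
             'sxz': x * z, 'syz': y * z, 'szz': z * z}
        best_var = np.full((h, w), np.inf)
        best_slope = np.zeros((h, w))
        for level in range(1, levels + 1):
            m = {k: block2(v) for k, v in m.items()}   # one pass per doubling
            n = m['n']
            Sxx = m['sxx'] - m['sx']**2 / n            # centred moments
            Syy = m['syy'] - m['sy']**2 / n
            Sxy = m['sxy'] - m['sx'] * m['sy'] / n
            Sxz = m['sxz'] - m['sx'] * m['sz'] / n
            Syz = m['syz'] - m['sy'] * m['sz'] / n
            Szz = m['szz'] - m['sz']**2 / n
            det = Sxx * Syy - Sxy**2
            a = (Sxz * Syy - Syz * Sxy) / det          # fitted x-slope
            b = (Syz * Sxx - Sxz * Sxy) / det          # fitted y-slope
            var = (Szz - a * Sxz - b * Syz) / (n - 3)  # residual variance
            slope = np.hypot(a, b)
            k = 2**level                               # window edge length
            grow = lambda v: np.repeat(np.repeat(v, k, 0), k, 1)
            better = grow(var) < best_var
            best_var = np.where(better, grow(var), best_var)
            best_slope = np.where(better, grow(slope), best_slope)
        return best_slope, best_var

    # Demo: tilted plane (true slope 0.05) plus noise on a 64x64 grid
    rng = np.random.default_rng(0)
    dem = 0.05 * np.arange(64.0)[:, None] + rng.normal(0, 0.02, (64, 64))
    slope, var = multiscale_slope(dem)
    print(f"mean estimated slope: {slope.mean():.3f}")  # roughly the true 0.05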

Keywords: high resolution digital elevation models, multi-scale analysis, slope calculation, window-based regression

Procedia PDF Downloads 104
254 Effect of Crystallographic Characteristics on Toughness of Coarse Grain Heat Affected Zone for Different Heat Inputs

Authors: Trishita Ray, Ashok Perka, Arnab Karani, M. Shome, Saurabh Kundu

Abstract:

Line pipe steels are used for the long-distance transportation of crude oil and gas under extreme environmental conditions, and welding is necessary to lay large-scale pipelines. The Coarse Grain Heat Affected Zone (CGHAZ) of a welded joint exhibits the worst toughness because of excessive grain growth and brittle microstructures like bainite and martensite, leading to early failure. It is therefore necessary to investigate the microstructures and properties of the CGHAZ for different welding heat inputs. In the present study, CGHAZs for two heat inputs, 10 kJ/cm and 50 kJ/cm, were simulated in a Gleeble 3800, and the microstructures were investigated in detail by means of Scanning Electron Microscopy (SEM) and Electron Backscattered Diffraction (EBSD). Charpy impact tests were also performed to evaluate the impact properties. The high heat input condition was characterized by very low toughness and massive prior austenite grains. Using the crystallographic information from EBSD, the area of a single prior austenite grain was traced out for both welding conditions. Analysis of the prior austenite grains showed the formation of high-angle boundaries between the crystallographic packets, and the effect of these packet boundaries on secondary cleavage crack propagation is discussed. In the low heat input condition, the formation of finer packets with a criss-cross morphology inside the prior austenite grains was effective in arresting cracks, whereas in the high heat input condition, the formation of larger packets with a higher proportion of low-angle boundaries failed to resist crack propagation, resulting in brittle fracture. Thus, the characteristics of the crystallographic packets and the impact properties are related, and the former should be controlled to obtain optimum properties.

Keywords: coarse grain heat affected zone, crystallographic packet, toughness, line pipe steel

Procedia PDF Downloads 226