Search results for: David Steel
304 The Relationship between Spindle Sound and Tool Performance in Turning
Authors: N. Seemuang, T. McLeay, T. Slatter
Abstract:
Worn tools have a direct effect on surface finish and part accuracy. Tool condition monitoring systems have been developed over a long period and are used to avoid the loss of productivity that results from using a worn tool. However, the majority of tool monitoring research has applied expensive sensing systems not suitable for production. In this work, the cutting sound in a turning machine was studied using a microphone. Machining trials using seven cutting conditions were conducted until the observable flank wear width (FWW) on the main cutting edge exceeded 0.4 mm. The cutting inserts were removed from the tool holder, and the flank wear width was measured optically. A microphone with a built-in preamplifier was used to record the machining sound of EN24 steel being face turned by a CNC lathe in a wet cutting condition using constant surface speed control. The sound was sampled at 50 kS/s, and all sound signals recorded from the microphone were transformed into the frequency domain by FFT in order to establish the frequency content in the audio signature that could then be used for tool condition monitoring. The extracted feature from the audio signal was compared to the flank wear progression on the cutting inserts. The spectrogram reveals a promising feature, termed 'spindle noise', which is emitted by the main spindle motor of the turning machine. The spindle noise frequency was detected at 5.86 kHz regardless of the cutting conditions used on this particular CNC lathe. Varying the cutting speed and feed rate influences the magnitude of the power spectrum of the spindle noise. The magnitude of the spindle noise frequency alters in conjunction with the tool wear progression, increasing significantly in the transition state between steady-state wear and severe wear. This could be used as a warning signal to prepare for tool replacement or to adapt cutting parameters to extend tool life.
Keywords: tool wear, flank wear, condition monitoring, spindle noise
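As a rough illustration of the processing chain described in this abstract, the sketch below tracks the magnitude of a ~5.86 kHz spectral component over time; only the 50 kS/s sampling rate and the 5.86 kHz spindle-noise frequency come from the abstract, while the file name, FFT window sizes, and warning threshold are hypothetical:

```python
import numpy as np
from scipy.signal import spectrogram

fs = 50_000                           # 50 kS/s sampling rate, as in the study
x = np.load("machining_audio.npy")    # hypothetical recorded microphone signal

# Short-time FFT to obtain the frequency content over the cut
f, t, Sxx = spectrogram(x, fs=fs, nperseg=4096, noverlap=2048)

# Magnitude of the power spectrum at the spindle-noise frequency (~5.86 kHz)
idx = np.argmin(np.abs(f - 5860.0))
spindle = Sxx[idx, :]

# A sustained rise above an early-cut baseline could serve as the
# tool-replacement warning the authors describe (factor of 3 is hypothetical)
baseline = spindle[: len(spindle) // 4].mean()
warning = spindle > 3.0 * baseline
print(t[warning][:5] if warning.any() else "no warning")
```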
Procedia PDF Downloads 338
303 Strengths and Weaknesses of Tally, an LCA Tool for Comparative Analysis
Authors: Jacob Seddlemeyer, Tahar Messadi, Hongmei Gu, Mahboobeh Hemmati
Abstract:
The main purpose of this first tier of the study is to quantify and compare the embodied environmental impacts associated with alternative materials applied to Adohi Hall, a residence building on the University of Arkansas campus, Fayetteville, AR. This 200,000-square-foot building has 5 stories built with mass timber and is compared to another scenario where the same edifice is built with a steel frame. Based on the defined goal and scope of the project, the materials respective to the two building options are compared in terms of Global Warming Potential (GWP), from cradle to the construction site, which includes the material manufacturing stage (raw material extraction, processing, supply, transport, and manufacture) plus transportation to the site (modules A1-A4, based on the standard EN 15804 definition). The consumed fossil fuels and emitted CO2 associated with buildings are the major reason for their climate change impacts. In this study, GWP is primarily assessed to the exclusion of other environmental factors. The second tier of this work is to evaluate Tally's performance in the decision-making process through the design phases, as well as to determine its strengths and weaknesses. Tally is a Life Cycle Assessment (LCA) tool capable of conducting a cradle-to-grave analysis. As opposed to other software applications, Tally is specifically targeted at building LCA. As a peripheral application, this software tool runs directly within the core modeling platform Revit. This unique functionality makes Tally stand out from other similar tools for LCA analysis in the building sector. The results of this study also provide insights for making more environmentally efficient decisions in the built environment and help the move toward reducing greenhouse gas (GHG) emissions and mitigating GWP.
Keywords: comparison, GWP, LCA, materials, tally
Procedia PDF Downloads 226
302 Effect of Impact Angle on Erosive Abrasive Wear of Ductile and Brittle Materials
Authors: Ergin Kosa, Ali Göksenli
Abstract:
Erosion and abrasion are wear mechanisms that reduce the lifetime of machine elements like valves, pumps, and pipe systems. Both wear mechanisms act at the same time, causing a 'synergy' effect, which leads to rapid damage of the surface. Several parameters affect the erosive-abrasive wear rate. In this study, the effect of particle impact angle on the wear rate and wear mechanism of ductile and brittle materials was investigated. A new slurry pot was designed for the experimental investigation. Silica sand was used as the abrasive particle, with particle sizes ranging between 200-500 µm. All tests were carried out in a sand-water mixture of 20% concentration for four hours. The impact velocity of the particles was 4.76 m/s. Steel St 37 with a Brinell Hardness Number (BHN) of 245 was used as the ductile material, and quenched St 37 with 510 BHN as the brittle material. After the wear tests, the morphology of the eroded surfaces was investigated by optical microscopy and Scanning Electron Microscopy for a better understanding of the wear mechanisms acting at different impact angles. The results indicated that the wear rate of the ductile material was higher than that of the brittle material. Maximum wear was observed for the ductile material at a particle impact angle of 30°. On the contrary, the wear rate of the brittle material increased with an increase in impact angle and reached its maximum value at 45°. A high number of craters was detected on the ductile material surface; plastic deformation zones were also detected, which are typical failure modes for ductile materials. Craters formed by the particles were deeper than those on the worn surface of the brittle material, where the number of craters decreased. Microcracks around craters were detected, which are typical failure modes of brittle materials, and deformation wear was the dominant wear mechanism on the brittle material. It is concluded that the wear rate cannot be directly related to the impact angle of the hard particle, due to the different responses of ductile and brittle materials.
Keywords: erosive wear, particle impact angle, silica sand, wear rate, ductile-brittle material
Procedia PDF Downloads 401
301 Comparison of the Yumul Faces Anxiety Scale to the Categorization Scale, the Numerical Verbal Rating Scale, and the State-Trait Anxiety Inventory for Preoperative Anxiety Evaluation
Authors: Ofelia Loani Elvir Lazo, Roya Yumul, David Chernobylsky, Omar Durra
Abstract:
Background: It is crucial to detect patients' existing anxiety in the perioperative setting, which is often caused by fear associated with surgical and anesthetic complications. However, the current gold standard for assessing patient anxiety, the STAI, is problematic to use in the preoperative setting, given the duration and concentration required to complete the 40-item questionnaire. Our primary aim in the study is to investigate the correlation of the Yumul Visual Facial Anxiety Scale (VFAS) and the Numerical Verbal Rating Scale (NVRS) to the State-Trait Anxiety Inventory (STAI) to determine the optimal anxiety scale to use in the perioperative setting. Methods: A clinical study of patients undergoing various surgeries was conducted utilizing each of the preoperative anxiety scales. Inclusion criteria included patients undergoing elective surgeries, while exclusion criteria included patients with anesthesia contraindications, inability to comprehend instructions, impaired judgement, substance abuse history, and those pregnant or lactating. 293 patients were analyzed in terms of demographics, anxiety scale survey results, and anesthesia data, with Spearman coefficients, chi-squared analysis, and Fisher's exact test utilized for comparative analysis. Results: Statistical analysis showed that VFAS had a higher correlation to STAI than NVRS (rs=0.66, p<0.0001 vs. rs=0.64, p<0.0001). The combined VFAS-Categorization Scores showed the highest correlation with the gold standard (rs=0.72, p<0.0001). Subgroup analysis showed similar results. STAI evaluation time (247.7 ± 54.81 sec) far exceeds that of VFAS (7.29 ± 1.61 sec), NVRS (7.23 ± 1.60 sec), and the Categorization scale (7.29 ± 1.99 sec). Patients preferred VFAS (54.4%), Categorization (11.6%), and NVRS (8.8%). Anesthesiologists preferred VFAS (63.9%), NVRS (22.1%), and the Categorization scale (14.0%). Of note, the top five causes of preoperative anxiety were determined to be waiting (56.5%), pain (42.5%), family concerns (40.5%), no information about surgery (40.1%), and anesthesia (31.6%). Conclusions: Both the VFAS and Categorization tests take significantly less time than the STAI, which is critical in the preoperative setting. The combined VFAS-Categorization Score (VCS) demonstrates the highest correlation to the gold standard, the STAI. Among both patients and anesthesiologists, VFAS was the most preferred scale. This forms the basis of the Yumul Faces Anxiety Scale, designed for quick quantification and assessment in the preoperative setting while maintaining a high correlation to the gold standard. Additional studies using the formulated Yumul Faces Anxiety Scale are merited.
Keywords: numerical verbal anxiety scale, preoperative anxiety, state-trait anxiety inventory, visual facial anxiety scale
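For readers unfamiliar with the statistics used here, a minimal sketch of the Spearman correlation step (with synthetic stand-in scores; the study's actual data are not reproduced) could look like this:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Synthetic stand-ins for n = 293 paired scores (not the study's data)
stai = rng.integers(20, 81, size=293)                        # STAI total, 20-80
vfas = np.clip(stai / 8.0 + rng.normal(0, 1.5, 293), 0, 10)  # 0-10 faces scale

rho, p = spearmanr(vfas, stai)
print(f"Spearman rs = {rho:.2f}, p = {p:.2g}")
```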
Procedia PDF Downloads 117
300 Determination of Mechanical Properties of Adhesives via Digital Image Correlation (DIC) Method
Authors: Murat Demir Aydin, Elanur Celebi
Abstract:
Adhesively bonded joints are used as an alternative to traditional joining methods due to the important advantages they provide. The most important consideration in the use of adhesively bonded joints is that they meet the safety requirements for their application. In order to ensure this condition, damage analysis of adhesively bonded joints should be performed by determining the mechanical properties of the adhesives. When the literature is investigated, it is generally seen that the mechanical properties of adhesives are determined by traditional measurement methods. In this study, the Digital Image Correlation (DIC) method, which can be an alternative to traditional measurement methods, was used to determine the mechanical properties of adhesives. The DIC method is a new optical measurement method used to determine displacement and strain parameters appropriately and accurately. In this study, tensile tests were performed on Thick Adherend Shear Test (TAST) samples formed using DP410 liquid structural adhesive and steel adherends, and on bulk tensile specimens formed using DP410 liquid structural adhesive. The displacement and strain values of the samples were determined by the DIC method, and the shear stress-strain curves of the adhesive for the TAST specimens and the tensile stress-strain curves of the bulk adhesive specimens were obtained. Additional methods, such as numerical methods, are usually required because conventional measurement methods (strain gauges, mechanical extensometers, etc.) are not sufficient for determining the strain and displacement values of a very thin adhesive layer such as that in TAST samples. The DIC method removes these requirements and achieves displacement measurements with sufficient accuracy.
Keywords: structural adhesive, adhesively bonded joints, digital image correlation, thick adherend shear test (TAST)
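As a simplified illustration of the DIC principle (subset tracking by normalized cross-correlation), the sketch below matches one reference-image subset in a deformed image; file names, subset location, and size are hypothetical, and the sub-pixel refinement used in full DIC software is omitted:

```python
import cv2

ref = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)  # undeformed speckle image
cur = cv2.imread("deformed.png", cv2.IMREAD_GRAYSCALE)   # deformed speckle image

# Subset (template) centred on a point of interest in the reference image
y0, x0, half = 200, 300, 15
subset = ref[y0 - half:y0 + half + 1, x0 - half:x0 + half + 1]

# Normalized cross-correlation locates the subset in the deformed image
res = cv2.matchTemplate(cur, subset, cv2.TM_CCOEFF_NORMED)
_, _, _, max_loc = cv2.minMaxLoc(res)

# Integer-pixel displacement of the subset centre
u = max_loc[0] + half - x0   # horizontal displacement (px)
v = max_loc[1] + half - y0   # vertical displacement (px)
print(f"u = {u} px, v = {v} px")
```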
Procedia PDF Downloads 321
299 Removal of Cr (VI) from Water through Adsorption Process Using GO/PVA as Nanosorbent
Authors: Syed Hadi Hasan, Devendra Kumar Singh, Viyaj Kumar
Abstract:
Cr (VI) is a known toxic heavy metal and has been considered a priority pollutant in water. The effluents of various industries, including electroplating, anodizing baths, leather tanning, steel industries, and chromium-based catalyst production, are the major sources of Cr (VI) contamination in the aquatic environment. Cr (VI) shows high mobility in the environment and can easily penetrate the cell membranes of living tissues to exert noxious effects. Cr (VI) contamination in drinking water causes various hazardous effects to human health, such as cancer, skin and stomach irritation or ulceration, dermatitis, and damage to the liver, kidneys, circulation, and nerve tissue. Herein, an attempt has been made to develop an efficient adsorbent for the removal of Cr (VI) from water. For this purpose, a nanosorbent composed of polyvinyl alcohol-functionalized graphene oxide (GO/PVA) was prepared and characterized through FTIR, XRD, SEM, and Raman spectroscopy. The as-prepared GO/PVA nanosorbent was utilized for the removal of Cr (VI) in batch-mode experiments. The process variables, such as contact time, initial Cr (VI) concentration, pH, and temperature, were optimized. A maximum of 99.8% removal of Cr (VI) was achieved at an initial Cr (VI) concentration of 60 mg/L, pH 2, and a temperature of 35 °C, with equilibrium achieved within 50 min. The two widely used isotherm models, Langmuir and Freundlich, were analyzed using the linear correlation coefficient (R2), and it was found that the Langmuir model gives the best fit, with a high value of R2, for the data of the present adsorption system, indicating monolayer adsorption of Cr (VI) on the GO/PVA. Kinetic studies were also conducted using pseudo-first-order and pseudo-second-order models, and it was observed that the chemisorptive pseudo-second-order model better described the kinetics of the current adsorption system, with a high value of the correlation coefficient. Thermodynamic studies were also conducted, and the results showed that the adsorption was spontaneous and endothermic in nature.
Keywords: adsorption, GO/PVA, isotherm, kinetics, nanosorbent, thermodynamics
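The isotherm fitting described here is a pair of linear regressions; a minimal sketch with hypothetical equilibrium data (Ce in mg/L, qe in mg/g; not the study's measurements) is:

```python
import numpy as np

Ce = np.array([5.0, 10.0, 20.0, 40.0, 60.0])    # hypothetical equilibrium conc.
qe = np.array([12.0, 20.0, 30.0, 38.0, 42.0])   # hypothetical uptake

# Linearized Langmuir: Ce/qe = Ce/qmax + 1/(KL*qmax)
slope, intercept = np.polyfit(Ce, Ce / qe, 1)
qmax, KL = 1.0 / slope, slope / intercept
r2_langmuir = np.corrcoef(Ce, Ce / qe)[0, 1] ** 2

# Linearized Freundlich: ln(qe) = ln(KF) + (1/n) * ln(Ce)
fslope, fintercept = np.polyfit(np.log(Ce), np.log(qe), 1)
KF, n = np.exp(fintercept), 1.0 / fslope
r2_freundlich = np.corrcoef(np.log(Ce), np.log(qe))[0, 1] ** 2

print(f"Langmuir: qmax={qmax:.1f} mg/g, KL={KL:.3f} L/mg, R2={r2_langmuir:.3f}")
print(f"Freundlich: KF={KF:.2f}, n={n:.2f}, R2={r2_freundlich:.3f}")
```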
Procedia PDF Downloads 389
298 Life Cycle Assessment of Mass Timber Structure, Construction Process as System Boundary
Authors: Mahboobeh Hemmati, Tahar Messadi, Hongmei Gu
Abstract:
Today, life cycle assessment (LCA) is a leading method for mitigating the environmental impacts emerging from the building sector. In this paper, LCA is used to quantify the greenhouse gas (GHG) emissions during the construction phase of the largest mass timber residential structure in the United States, Adohi Hall, a 200,000 square foot, 708-bed complex located on the campus of the University of Arkansas. The energy used for building operation is the most dominant source of emissions in the building industry. Lately, however, efforts have been successful at increasing the efficiency of building operation in terms of emissions. As a result, attention has now shifted to embodied carbon, which is more noticeable in the building life cycle. Most studies have, however, focused on the manufacturing stage, and only a few have addressed the construction process to date. Specifically, little data is available about the environmental impacts associated with the construction of mass timber. This study presents, therefore, an assessment of the environmental impact of the construction processes based on the real, newly built mass timber building mentioned above. The system boundary of this study covers modules A4 and A5 of the building LCA standard EN 15978. Module A4 includes material and equipment transportation; module A5 covers the construction and installation process. This research evolves through two stages: first, quantifying the materials and equipment deployed in the building, and second, determining the embodied carbon associated with transporting construction materials and equipment to the site where the edifice is built and with running the equipment that installs them there. The Global Warming Potential (GWP) of the building is the primary metric considered in this research. The outcomes of this study bring to the fore a better understanding of emission hotspots during the construction process. Moreover, a comparative analysis of the mass timber construction process with that of a theoretically similar steel building will enable an effective assessment of the environmental efficiency of mass timber.
Keywords: construction process, GWP, LCA, mass timber
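The module A4 accounting reduces to simple arithmetic of the form mass x distance x mode emission factor; all numbers in the sketch below are hypothetical placeholders, not values from the study:

```python
# Module A4 (transport to site): GWP = shipped mass * haul distance * emission factor
mass_t = 2_400.0       # tonnes of mass timber elements (hypothetical)
distance_km = 850.0    # plant-to-site haul distance (hypothetical)
ef_truck = 0.062       # kg CO2e per tonne-km for trucking (hypothetical factor)

gwp_a4_kg = mass_t * distance_km * ef_truck
print(f"A4 transport GWP = {gwp_a4_kg / 1000:.1f} t CO2e")
```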
Procedia PDF Downloads 165
297 Design of a Low-Cost, Portable, Sensor Device for Longitudinal, At-Home Analysis of Gait and Balance
Authors: Claudia Norambuena, Myissa Weiss, Maria Ruiz Maya, Matthew Straley, Elijah Hammond, Benjamin Chesebrough, David Grow
Abstract:
The purpose of this project is to develop a low-cost, portable sensor device that can be used at home for the long-term analysis of gait and balance abnormalities. One area of particular concern involves the asymmetries in movement and balance that can accompany certain types of injuries and/or the associated devices used in the repair and rehabilitation process (e.g., the use of splints and casts), which can often increase the chances of falls and additional injuries. This device has the capacity to monitor a patient during the rehabilitation process after injury or operation, increasing the patient's access to healthcare while decreasing the number of visits to the patient's clinician. The sensor device may thereby improve the quality of the patient's care, particularly in rural areas where access to the clinician could be limited, while simultaneously decreasing the overall cost associated with the patient's care. The device consists of nine interconnected accelerometer/gyroscope/compass chips (9-DOF IMU, Adafruit, New York, NY). The sensors attach to and are used to determine the orientation and acceleration of the patient's lower abdomen, C7 vertebra (lower neck), L1 vertebra (middle back), the anterior side of each thigh and tibia, and the dorsal side of each foot. In addition, pressure sensors are embedded in shoe inserts, with one sensor (ESS301, Tekscan, Boston, MA) beneath the heel and three sensors (Interlink 402, Interlink Electronics, Westlake Village, CA) beneath the metatarsal bones of each foot. These sensors measure the distribution of the weight applied to each foot as well as stride duration. A small microcontroller (Arduino Mega, Arduino, Ivrea, Italy) is used to collect data from these sensors in a CSV file. MATLAB is then used to analyze the data and output the hip, knee, ankle, and trunk angles projected on the sagittal plane. The open-source program Processing is then used to generate an animation of the patient's gait. The accuracy of the sensors was validated through comparison to goniometric measurements (±2° error). The sensor device was also shown to have sufficient sensitivity to observe various gait abnormalities. Several patients used the sensor device, and the data collected from each represented the patient's movements. Further, the sensors were found to be able to observe gait abnormalities caused by the addition of a small amount of weight (4.5-9.1 kg) to one side of the patient. The user-friendly interface and portability of the sensor device will help to construct a bridge between patients and their clinicians with fewer necessary inpatient visits.
Keywords: biomedical sensing, gait analysis, outpatient, rehabilitation
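The abstract's MATLAB step projects segment orientations onto the sagittal plane; a Python stand-in for the knee-angle computation (a simplification of the full 9-DOF sensor fusion, with hypothetical pitch values) might be:

```python
import numpy as np

def knee_angle_sagittal(thigh_pitch_deg, shank_pitch_deg):
    """Sagittal-plane knee flexion approximated as the difference between
    the thigh and tibia IMU pitch angles (degrees)."""
    return np.asarray(thigh_pitch_deg) - np.asarray(shank_pitch_deg)

# Hypothetical pitch time series parsed from the microcontroller's CSV log
thigh = [10.0, 25.0, 40.0, 30.0]
shank = [-5.0, -20.0, -15.0, 0.0]
print(knee_angle_sagittal(thigh, shank))   # [15. 45. 55. 30.]
```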
Procedia PDF Downloads 288
296 The Effect of Composite Hybridization on the Back Face Deformation of Armor Plates
Authors: Attef Kouadria, Yehya Bouteghrine, Amar Manaa, Tarek Mouats, Djalel Eddine Tria, Hamid Abdelhafid Ghouti
Abstract:
Personal protection systems have been used in several forms for centuries. Light-weight composite structures have been in great demand due to their high ratios of mechanical properties to weight in comparison to heavy and cumbersome steel plates. In this regard, lighter ceramic plates with a backing plate made of high-strength polymeric fibers, mostly aramids, are widely used for protection against ballistic threats. This study aims to improve the ballistic performance of ceramic/composite plates subjected to ballistic impact by reducing the back face deformation (BFD) measured after each test. A new hybridization technique was developed in this investigation to increase the energy absorption capabilities of the backing plates. The hybridization consists of combining different types of aramid fabrics, with different linear densities of aramid fibers (dtex) and different areal densities, with an epoxy resin to form the backing plate. Several composite structure architectures were therefore prepared and tested. For a better understanding of the effect of the hybridization, a series of tensile, compression, and shear tests was conducted to determine the mechanical properties of the homogeneous composite materials prepared from the different fabrics. It was found that the hybridization allows the backing plate to combine the mechanical properties of the fabrics used. Aramid fabrics with higher dtex were found to increase the mechanical strength of the backing plate, while those with lower dtex were found to enhance the lateral wave dispersion ratio due to their lower areal density. Therefore, the back face deformation was significantly reduced in comparison to a homogeneous composite plate.
Keywords: aramid fabric, ballistic impact, back face deformation, body armor, composite, mechanical testing
Procedia PDF Downloads 151
295 Improvement of Fixed Offshore Structures' Boat Landing Performance Using Practicable Design Criteria
Authors: A. Hamadelnil, Z. Razak, E. Matsoom
Abstract:
Boat landings on fixed offshore structures are designed to absorb the impact energy from boats approaching the platform for crew transfer. As the size and speed of the operating boats vary, the design and maintenance of the boat landings become more challenging. Different oil and gas operators adopt different design criteria for boat landing design in the South East Asia region. A rubber strip is used to increase the capacity of the boat landing to absorb bigger impact energy. Recently, it has been reported that all the rubber strips peel off the boat landing frame within one to two years, and replacement is required to avoid puncturing of the boat's hull by the exposed sharp edges and the bolts used to secure the rubber strip. The capacity of the boat landing to absorb impact energy is reduced after the failure of the rubber strip, resulting in failure of the steel members. The replacement of the rubber strip is costly, as it requires a diving spread. The objective of this study is to propose the most practicable criteria to be adopted by oil and gas operators in the design of boat landings in the South East Asia region, in order to improve the performance of the boat landing and assure safe operation and cheaper maintenance. This study explores the current design and maintenance challenges of boat landings and compares the criteria adopted by different operators. In addition, it explains the reasons behind the denting of many boat landings and evaluates the effect of grout and rubber strips on the capacity of the boat landing and the jacket legs. Boat landing modeling and analysis using USFOS and SACS software are carried out and presented in this study, considering different design criteria. This study proposes the most practicable criteria to be used in designing boat landings in the South East Asia region to achieve better performance, safe operation, and lower cost and maintenance.
Keywords: boat landing, grout, plastic hinge, rubber strip
Procedia PDF Downloads 299
294 Modeling Engagement with Multimodal Multisensor Data: The Continuous Performance Test as an Objective Tool to Track Flow
Authors: Mohammad H. Taheri, David J. Brown, Nasser Sherkat
Abstract:
Engagement is one of the most important factors in determining successful outcomes and deep learning in students. Existing approaches to detecting student engagement involve periodic human observations that are subject to inter-rater reliability. Our solution uses real-time multimodal multisensor data, labeled by objective performance outcomes, to infer the engagement of students. The study involved four students with a combined diagnosis of cerebral palsy and a learning disability, who took part in a 3-month trial over 59 sessions. Multimodal multisensor data were collected while they participated in a continuous performance test. Eye gaze, electroencephalogram, body pose, and interaction data were used to create a model of student engagement through objective labeling from the continuous performance test outcomes. In order to achieve this, a new type of continuous performance test, the Seek-X type, is introduced. Nine features were extracted, including high-level handpicked compound features. Using leave-one-out cross-validation, a series of different machine learning approaches was evaluated. Overall, the random forest classification approach achieved the best classification results: 93.3% accuracy for engagement and 42.9% accuracy for disengagement. We compared these results to outcomes from different models: AdaBoost, decision tree, k-nearest neighbor, naïve Bayes, neural network, and support vector machine. We showed that the multisensor approach achieved higher accuracy than using features from any reduced set of sensors, and we found that using high-level handpicked features can improve the classification accuracy in every sensor mode. Our approach is robust to both sensor fallout and occlusions. The single most important sensor feature for the classification of engagement and distraction was shown to be eye gaze. It has been shown that we can accurately predict the level of engagement of students with learning disabilities in real time, in a way that is not subject to inter-rater reliability and does not rely on human observation or a single mode of sensor input. This will help teachers design interventions for a heterogeneous group of students, where teachers cannot possibly attend to each of their individual needs. Our approach can be used to identify those with the greatest learning challenges so that all students are supported to reach their full potential.
Keywords: affective computing in education, affect detection, continuous performance test, engagement, flow, HCI, interaction, learning disabilities, machine learning, multimodal, multisensor, physiological sensors, student engagement
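A minimal sketch of the classification setup described here (random forest with leave-one-out cross-validation over nine features; random stand-in data, not the study's recordings):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(59, 9))        # 59 sessions x 9 extracted features (stand-in)
y = rng.integers(0, 2, size=59)     # engagement label from the CPT outcome

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=LeaveOneOut())
print(f"LOO accuracy: {scores.mean():.3f}")

# Per-feature importances indicate the dominant sensor mode
# (eye gaze, in the study's results)
clf.fit(X, y)
print(np.round(clf.feature_importances_, 3))
```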
Procedia PDF Downloads 94
293 Neural Synchronization - The Brain’s Transfer of Sensory Data
Authors: David Edgar
Abstract:
To understand how the brain's subconscious and conscious functions work, we must conquer the physics of Unity, which leads to duality's algorithm, where the subconscious (bottom-up) and conscious (top-down) processes function together to produce and consume intelligence. We use terms like 'time is relative,' but do we really understand the meaning? In the brain, there are different processes and, therefore, different observers, and these different processes experience time at different rates. A sensory system such as the eyes cycles its measurements around every 33 milliseconds, the conscious process of the frontal lobe cycles at 300 milliseconds, and the subconscious process of the thalamus cycles at 5 milliseconds: three different observers experiencing time differently. To bridge observers, the thalamus, the fastest of the processes, maintains a synchronous state and entangles the different components of the brain's physical process. The entanglements form a synchronous cohesion between the brain components, allowing them to share the same state and execute in the same measurement cycle. The thalamus uses the shared state to control the firing sequence of the brain's linear subconscious process. Sharing state also allows the brain to 'cheat' on the amount of sensory data that must be exchanged between components: only unpredictable motion is transferred through the synchronous state, because predictable motion already exists in the shared framework. The brain's synchronous subconscious process is entirely based on energy conservation, where prediction regulates energy usage. So, every 33 milliseconds, the eyes dump their sensory data into the thalamus. The thalamus then performs a motion measurement to identify the unpredictable motion in the sensory data. Here is the trick: the thalamus conducts its measurement based on the original observation time of the sensory system (33 ms), not its own process time (5 ms). This creates a data payload of synchronous motion that preserves the original sensory observation, basically a frozen moment in time (flat 4D). The single moment in time can then be processed through the single state maintained by the synchronous process. Other processes, such as consciousness (300 ms), can interface with the synchronous state to generate awareness of that moment. Synchronous data traveling through a separate, faster synchronous process creates a theoretical time tunnel, where observation time is tunneled through the synchronous process and reproduced on the other side in the original time-relativity. The synchronous process eliminates time dilation by simply removing itself from the equation, so that its own process time does not alter the experience. To the original observer, the measurement appears to be instantaneous, but in the thalamus, a linear subconscious process generating sensory perception and thought production is being executed. It all occurs in the time available, because the other observation times are slower than the thalamic measurement time. For life to exist in the physical universe requires a linear measurement process; it just hides by operating at a faster time relativity. What is interesting is that time dilation is not the problem; it is the solution. Einstein said there was no universal time.
Keywords: neural synchronization, natural intelligence, 99.95% IoT data transmission savings, artificial subconscious intelligence (ASI)
Procedia PDF Downloads 126
292 Modern Seismic Design Approach for Buildings with Hysteretic Dampers
Authors: Vanessa A. Segovia, Sonia E. Ruiz
Abstract:
The use of energy dissipation systems for seismic applications has increased worldwide; thus, it is necessary to develop practical and modern criteria for their optimal design. Here, a direct displacement-based seismic design approach for frame buildings with hysteretic energy dissipation systems (HEDS) is applied. The building is constituted by two individual structural systems: 1) a main elastic structural frame designed for service loads, and 2) a secondary system, corresponding to the HEDS, that controls the effects of lateral loads. The procedure involves controlling two design parameters: a) the stiffness ratio (α = K_frame / K_total system), and b) the strength ratio (γ = V_damper / V_total system). The proposed damage-controlled approach contributes to the design of a more sustainable and resilient building because the structural damage is concentrated in the HEDS. The reduction of the design displacement spectrum is done by means of a recently published damping factor for elastic structural systems with HEDS located in Mexico City. Two limit states are verified: serviceability and near collapse. Instead of the traditional trial-and-error approach, a procedure that allows the designer to establish the preliminary sizes of the structural elements of both systems is proposed. The design methodology is applied to an 8-story steel building with buckling-restrained braces, located in the soft soil of Mexico City. With the aim of choosing the optimal design parameters, a parametric study is developed considering different values of α and γ. The simplified methodology is for the preliminary sizing, design, and evaluation of the effectiveness of HEDS, and it constitutes a modern and practical tool that enables the structural designer to select the best design parameters.
Keywords: damage-controlled buildings, direct displacement-based seismic design, optimal hysteretic energy dissipation systems, hysteretic dampers
Procedia PDF Downloads 483
291 Optimization of Operational Water Quality Parameters in a Drinking Water Distribution System Using Response Surface Methodology
Authors: Sina Moradi, Christopher W. K. Chow, John Van Leeuwen, David Cook, Mary Drikas, Patrick Hayde, Rose Amal
Abstract:
Chloramine is commonly used as a disinfectant in drinking water distribution systems (DWDSs), particularly in Australia and the USA. Maintaining a chloramine residual throughout the DWDS is important in ensuring that microbiologically safe water is supplied at the customer's tap. In order to simulate how chloramine behaves as it moves through the distribution system, a water quality network model (WQNM) can be applied. In this work, the WQNM was based on mono-chloramine decomposition reactions, which enabled prediction of the mono-chloramine residual at different locations through a DWDS in Australia, using the Bentley commercial hydraulic package WaterGEMS. The accuracy of WQNM predictions is influenced by a number of water quality parameters. Optimizing these parameters to obtain the closest results to actual measured data in a real DWDS would yield cost reductions as well as reduced consumption of valuable resources such as energy and materials. In this work, the optimum operating conditions of the water quality parameters (temperature, pH, and initial mono-chloramine concentration) that maximize the accuracy of mono-chloramine residual predictions for two water supply scenarios over an entire network were determined using response surface methodology (RSM). To obtain feasible and economical water quality parameters with the highest model predictability, Design Expert 8.0 software (Stat-Ease, Inc.) was applied to optimize the three independent water quality parameters. High and low levels of the parameters were treated as explicit constraints in order to avoid extrapolation. The lower and upper limits of each variable for the two water supply scenarios were defined, and the experimental levels for each variable were selected based on the actual conditions in the studied DWDS. It was found that at a pH of 7.75, a temperature of 34.16 °C, and an initial mono-chloramine concentration of 3.89 mg/L during peak water supply patterns, the root mean square error (RMSE) of the WQNM for the whole network was minimized to 0.189, while the optimum conditions for averaged water supply occurred at a pH of 7.71, a temperature of 18.12 °C, and an initial mono-chloramine concentration of 4.60 mg/L. The proposed methodology for predicting the mono-chloramine residual has great potential for water treatment plant operators in accurately estimating the mono-chloramine residual through a water distribution network. Additional studies from other water distribution systems are warranted to confirm the applicability of the proposed methodology for other water samples.
Keywords: chloramine decay, modelling, response surface methodology, water quality parameters
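Response surface methodology fits a second-order polynomial to the design points and optimizes it within the experimental ranges; a compact sketch with hypothetical design data (the study used Design Expert 8.0 rather than Python) is:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical face-centered CCD: [pH, temperature (deg C), NH2Cl (mg/L)] -> RMSE
X = np.array([[7.0, 15, 3], [8.0, 15, 3], [7.0, 35, 3], [8.0, 35, 3],
              [7.0, 15, 5], [8.0, 15, 5], [7.0, 35, 5], [8.0, 35, 5],
              [7.0, 25, 4], [8.0, 25, 4], [7.5, 15, 4], [7.5, 35, 4],
              [7.5, 25, 3], [7.5, 25, 5], [7.5, 25, 4], [7.5, 25, 4]], float)
rmse = np.array([0.31, 0.27, 0.25, 0.26, 0.29, 0.26, 0.23, 0.24,
                 0.26, 0.24, 0.27, 0.22, 0.25, 0.23, 0.20, 0.21])

def design_matrix(P):
    p, t, c = P.T
    return np.column_stack([np.ones(len(P)), p, t, c, p*t, p*c, t*c,
                            p**2, t**2, c**2])

beta, *_ = np.linalg.lstsq(design_matrix(X), rmse, rcond=None)

def predicted_rmse(x):
    return design_matrix(np.atleast_2d(x))[0] @ beta

# Minimize predicted RMSE inside the experimental ranges (no extrapolation)
res = minimize(predicted_rmse, x0=[7.5, 25.0, 4.0],
               bounds=[(7.0, 8.0), (15.0, 35.0), (3.0, 5.0)])
print(res.x, res.fun)
```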
Procedia PDF Downloads 224
290 Shape Management Method of Large Structure Based on Octree Space Partitioning
Authors: Gichun Cha, Changgil Lee, Seunghee Park
Abstract:
The objective of this study is to construct a shape management method contributing to the safety of large structures. In Korea, research on shape management is scarce because the technology has only recently been attempted. Terrestrial Laser Scanning (TLS) is used for measurements of large structures. TLS provides an efficient way to actively acquire accurate point clouds of object surfaces or environments. The point clouds provide a basis for rapid modeling in the industrial automation, architecture, construction, or maintenance of civil infrastructure. TLS produces a huge number of points, and the registration, extraction, and visualization of the data require the processing of a massive amount of scan data. The octree can be applied to the shape management of large structures because the scan data is reduced in size while the data attributes are maintained. Octree space partitioning generates voxels of 3D space, and each voxel is recursively subdivided into eight sub-voxels. The point cloud of the scan data was converted to voxels and sampled. The experimental site is located at Sungkyunkwan University. The scanned structure is a steel-frame bridge, and the TLS used was a Leica ScanStation C10/C5. The scan data was condensed by 92%, and the octree model was constructed with a 2-millimeter resolution. This study presents octree space partitioning for handling point clouds, creating a basis for the shape management of large structures such as double-deck tunnels, buildings, and bridges. The research is expected to improve the efficiency of structural health monitoring and maintenance. This work is financially supported by the 'U-City Master and Doctor Course Grant Program' and the National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIP) (NRF-2015R1D1A1A01059291).
Keywords: 3D scan data, octree space partitioning, shape management, structural health monitoring, terrestrial laser scanning
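A minimal sketch of the voxel-downsampling idea (recursive subdivision of a cube into eight octants, keeping one representative point per occupied leaf; the point cloud here is random stand-in data, and 8 m / 2^12 gives roughly the 2 mm leaf size mentioned above):

```python
import numpy as np

def octree_downsample(points, lo, size, depth, max_depth):
    """Recursively split a cubic voxel into 8 children; each occupied leaf
    voxel is replaced by the centroid of the points it contains."""
    if len(points) == 0:
        return []
    if depth == max_depth or len(points) == 1:
        return [points.mean(axis=0)]
    half = size / 2.0
    out = []
    for i in range(8):   # the 8 child octants of the current voxel
        offset = np.array([(i >> 2) & 1, (i >> 1) & 1, i & 1]) * half
        child_lo = lo + offset
        mask = np.all((points >= child_lo) & (points < child_lo + half), axis=1)
        out += octree_downsample(points[mask], child_lo, half, depth + 1, max_depth)
    return out

pts = np.random.rand(20_000, 3) * 8.0                     # stand-in TLS cloud (m)
leaves = octree_downsample(pts, np.zeros(3), 8.0, 0, 12)  # ~2 mm leaf voxels
print(f"{len(pts)} points -> {len(leaves)} leaf centroids")
```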
Procedia PDF Downloads 297
289 Failure Analysis of Recoiler Mandrel Shaft Used for Coiling of Rolled Steel Sheet
Authors: Sachin Pawar, Suman Patra, Goutam Mukhopadhyay
Abstract:
The primary function of a shaft is to transfer power. A shaft can be cast or forged and then machined to its final shape. Manufacturing a shaft of ~5 m length and 0.6 m diameter is very critical, and it is even more difficult to maintain its straightness during heat treatment and machining operations, which involve thermal and mechanical loads, respectively. During the machining operation of such a forged mandrel shaft, a deflection of 3-4 mm was observed. To remove this deflection, the shaft was pressed at both ends, which led to the development of cracks in it. To investigate the root cause of the deflection and cracking, a sample was cut from the failed shaft. Possible causes were identified with the help of a cause-and-effect diagram. Chemical composition analysis, microstructural analysis, and hardness measurements were done to confirm whether the shaft met the required specifications. Chemical composition analysis confirmed that the material grade was 42CrMo4. Microstructural analysis revealed the presence of untempered martensite, indicating improper heat treatment. Due to this, the ductility and impact toughness values were considerably lower than the specification for the mentioned grade. Residual stress measurement of one more bent shaft manufactured by a similar route was done by the portable X-ray diffraction (XRD) technique. For better understanding, measurements were done at twelve different locations along the length of the shaft. A high amount of undesirable tensile residual stress, close to the ultimate tensile strength (UTS) of the material, was observed. The untempered martensitic structure, lower ductility, lower impact strength, and presence of high residual stresses all confirmed the improper tempering heat treatment of the shaft. Tempering relieves residual stresses. Based on the findings of this study, a stress-relieving heat treatment was done, successfully removing the residual stresses and the deflection in the shaft.
Keywords: residual stress, mandrel shaft, untempered martensite, portable XRD
Procedia PDF Downloads 112
288 Evolutionary Advantages of Loneliness with an Agent-Based Model
Authors: David Gottlieb, Jason Yoder
Abstract:
The feeling of loneliness is not uncommon in modern society, and yet there is a fundamental lack of understanding of its origins and purpose in nature. One interpretation of loneliness is that it is a subjective experience that punishes a lack of social behavior, and thus its emergence in human evolution is seemingly tied to the survival of early human tribes. Still, a common counterintuitive response to loneliness is a state of hypervigilance resulting in social withdrawal, which may appear maladaptive in modern society. So far, no computational model of the effect of loneliness during evolution exists; however, agent-based models (ABMs) can be used to investigate social behavior, and applying evolution to agents' behaviors can demonstrate selective advantages for particular behaviors. We propose an ABM where each agent contains four social behaviors and one goal-seeking behavior, letting evolution select the best behavioral patterns for resource allocation. In our paper, we use an algorithm similar to the boid model to guide the behavior of agents, but expand the set of rules that govern their behavior. While we use cohesion, separation, and alignment for simple social movement, our expanded model adds goal-oriented behavior, inspired by particle swarm optimization, such that agents move relative to their personal best position. Since agents are given the ability to form connections by interacting with each other, our final behavior guides agent movement toward their social connections. Finally, we introduce a mechanism to represent a state of loneliness, which engages when an agent's perceived social involvement does not meet its expected social involvement. This enables us to investigate a minimal model of loneliness, and using evolution we attempt to elucidate its value in human survival. Agents are placed in an environment in which they must acquire resources, as their fitness is based on the total resources collected. With these rules in place, we are able to run evolution under various conditions, including resource-rich environments and the presence of disease. Our simulations indicate that there is strong selection pressure for social behavior under circumstances where there is a clear discrepancy between initial resource locations, and against social behavior when disease is present, mirroring hypervigilance. This not only provides an explanation for the emergence of loneliness but also reflects the diversity of responses to loneliness in the real world. In addition, there is evidence of a richness of social behavior when loneliness is present. By introducing just two resource locations, we observed a divergence in social motivation after agents became lonely, where one agent learned to move to another that was in a better resource position. The results and ongoing work from this project show that it is possible to glean insight into the evolutionary advantages of even simple mechanisms of loneliness. The model we developed has produced unexpected results and has led to more questions, such as the impact loneliness would have at a larger scale, or the effect of creating a set of rules governing interaction beyond adjacency.
Keywords: agent-based, behavior, evolution, loneliness, social
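A compact sketch of the agent update rules described above (boid-style cohesion, separation, and alignment, plus PSO-style goal seeking and a loneliness trigger); all weights, radii, and the expected-involvement threshold are hypothetical, and the evolutionary layer is omitted:

```python
import numpy as np

N, STEPS = 30, 500
rng = np.random.default_rng(1)
pos = rng.uniform(0, 50, (N, 2))
vel = rng.normal(0, 0.1, (N, 2))
best = pos.copy()                     # personal best positions (PSO-style memory)
goal = np.array([25.0, 25.0])         # resource location (hypothetical)
EXPECTED = 3                          # expected social involvement (neighbours)

def limit(v, vmax=0.5):
    n = np.linalg.norm(v)
    return v if n < vmax else v * vmax / n

for _ in range(STEPS):
    for i in range(N):
        d = pos - pos[i]
        dist = np.linalg.norm(d, axis=1)
        nb = (dist > 0) & (dist < 5.0)
        steer = np.zeros(2)
        if nb.any():
            steer += 0.05 * d[nb].mean(axis=0)               # cohesion
            steer += 0.05 * (vel[nb].mean(axis=0) - vel[i])  # alignment
        close = (dist > 0) & (dist < 1.5)
        if close.any():
            steer -= 0.10 * d[close].mean(axis=0)            # separation
        steer += 0.02 * (best[i] - pos[i])                   # goal seeking
        if nb.sum() < EXPECTED:   # lonely: perceived < expected involvement
            nearest = dist.argsort()[1]                      # closest other agent
            steer += 0.03 * (pos[nearest] - pos[i])          # seek a connection
        vel[i] = limit(vel[i] + steer)
    pos += vel
    better = np.linalg.norm(pos - goal, axis=1) < np.linalg.norm(best - goal, axis=1)
    best[better] = pos[better]
```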
Procedia PDF Downloads 96
287 Industrial Prototype for Hydrogen Separation and Purification: Graphene Based-Materials Application
Authors: Juan Alfredo Guevara Carrio, Swamy Toolahalli Thipperudra, Riddhi Naik Dharmeshbhai, Sergio Graniero Echeverrigaray, Jose Vitorio Emiliano, Antonio Helio Castro
Abstract:
In order to advance the hydrogen economy, several industrial sectors can potentially benefit from the trillions in post-coronavirus stimulus spending. Blending hydrogen into natural gas pipeline networks has been proposed as a means of delivering it during the early market development phase, using separation and purification technologies downstream to extract the pure H₂ close to the point of end use. This first step has been mentioned around the world as an opportunity to use existing infrastructure for immediate decarbonisation pathways. Among the current technologies used to extract hydrogen from mixtures in pipelines or liquid carriers, membrane separation can achieve the highest selectivity. The most efficient approach to the separation of H₂ from other substances by membranes comes from research on 2D layered materials, owing to their exceptional physical and chemical properties. Graphene-based membranes, with their distribution of pore sizes in the nanometer and angstrom range, have shown fundamental and economic advantages over other materials. Their combination with the structure of ceramic and geopolymeric materials has enabled the synthesis of nanocomposites and the fabrication of membranes with long-term stability and robustness over a relevant range of physical and chemical conditions. Versatile separation modules have been developed for hydrogen separation, whose adaptability allows their integration into industrial prototypes for applications in heavy transport, steel, and cement production, as well as in small installations at end-user stations of pipeline networks. The developed membranes and prototypes are a practical contribution to the technological challenge of supplying pure H₂ for the mentioned industries as well as for hydrogen-energy-based fuel cells.
Keywords: graphene nano-composite membranes, hydrogen separation and purification, separation modules, industrial prototype
Procedia PDF Downloads 159
286 Automated Computer-Vision Analysis Pipeline of Calcium Imaging Neuronal Network Activity Data
Authors: David Oluigbo, Erik Hemberg, Nathan Shwatal, Wenqi Ding, Yin Yuan, Susanna Mierau
Abstract:
Introduction: Calcium imaging is an established technique in neuroscience research for detecting activity in neural networks. Bursts of action potentials in neurons lead to transient increases in intracellular calcium, visualized with fluorescent indicators. Manual identification of cell bodies and their contours by experts typically takes 10-20 minutes per calcium imaging recording. Our aim, therefore, was to design an automated pipeline to facilitate and optimize calcium imaging data analysis. Our pipeline aims to accelerate cell body and contour identification and the production of graphical representations reflecting changes in neuronal calcium-based fluorescence. Methods: We created a Python-based pipeline that uses OpenCV (a computer vision Python package) to accurately (1) detect neuron contours, (2) extract the mean fluorescence within each contour, and (3) identify transient changes in the fluorescence due to neuronal activity. The pipeline consisted of three Python scripts that could all be easily accessed through a Python Jupyter notebook. In total, we tested this pipeline on ten separate calcium imaging datasets from murine dissociated cortical cultures. We then compared our automated pipeline outputs with the outputs of manually labeled data for neuronal cell location and the corresponding fluorescence time series generated by an expert neuroscientist. Results: Our results show that our automated pipeline efficiently pinpoints neuronal cell body locations and contours and provides a graphical representation of neural network metrics accurately reflecting changes in neuronal calcium-based fluorescence. The pipeline detected the shape, area, and location of most neuronal cell body contours by using binary thresholding and grayscale image conversion, allowing computer vision to better distinguish between cells and non-cells. Its results were also comparable to manually analyzed results, but with significantly reduced acquisition times of 2-5 minutes per recording versus 10-20 minutes per recording. Based on these findings, our next step is to precisely measure the specificity and sensitivity of the automated pipeline's cell body and contour detection in order to extract more robust neural network metrics and dynamics. Conclusion: Our Python-based pipeline performed automated computer-vision-based analysis of calcium imaging recordings of neuronal cell bodies in neuronal cell cultures. Our next goal is to improve cell body and contour detection to produce more robust, accurate neural network metrics and dynamic graphs.
Keywords: calcium imaging, computer vision, neural activity, neural networks
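In the spirit of the pipeline described above, a minimal OpenCV sketch (hypothetical input file; the thresholding choice and minimum contour area are assumptions, and this is not the authors' actual code):

```python
import cv2
import numpy as np

stack = np.load("recording.npy")          # (frames, H, W) calcium movie, hypothetical
mean_img = stack.mean(axis=0)

# Grayscale conversion + Otsu binary thresholding to separate cells from background
img8 = cv2.normalize(mean_img, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
_, binary = cv2.threshold(img8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Neuron contours; small specks are dropped as non-cells
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contours = [c for c in contours if cv2.contourArea(c) > 20]

# Mean fluorescence time series within each detected contour
traces = []
for c in contours:
    mask = np.zeros(binary.shape, np.uint8)
    cv2.drawContours(mask, [c], -1, 255, thickness=-1)   # filled contour mask
    traces.append(stack[:, mask > 0].mean(axis=1))
traces = np.array(traces)                 # shape: (n_cells, n_frames)
print(traces.shape)
```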
Procedia PDF Downloads 82
285 Violent, Psychological, Sexual and Abuse-Related Emergency Department Usage amongst Pediatric Victims of Physical Assault and Gun Violence: A Case-Control Study
Authors: Mary Elizabeth Bernardin, Margie Batek, Joseph Moen, David Schnadower
Abstract:
Background: Injuries due to interpersonal violence are a common reason for emergency department (ED) visits among the American pediatric population. Gun violence, in particular, is associated with high morbidity and mortality as well as financial costs. Patterns of pediatric ED usage may be an indicator of risk for future violence, but very little data on the topic exists. Objective: The aims of this study were to assess the frequencies of ED usage for previous interpersonal violence, mental/behavioral issues, sexual/reproductive issues, and concerns for abuse in youths presenting to EDs due to physical assault injuries (PAIs) compared to firearm injuries (FIs). Methods: In this retrospective case-control study, ED charts of children ages 8-19 years who presented with injuries due to interpersonal violent encounters from 2014-2017 were reviewed. Data were collected regarding all previous ED visits for injuries due to interpersonal violence (including physical assaults and firearm injuries), mental/behavioral health visits (including depression, suicidal ideation, suicide attempt, homicidal ideation, and violent behavior), sexual/reproductive health visits (including sexually transmitted infections and pregnancy-related issues), and concerns for abuse (including physical abuse or domestic violence, neglect, sexual abuse, sexual assault, and intimate partner violence). Logistic regression was used to identify predictors of gun violence based on previous ED visits among physical-assault-injured versus firearm-injured youths. Results: A total of 407 patients presenting to the ED for an interpersonal violent encounter were analyzed, 251 (62%) due to physical assault injuries (PAIs) and 156 (38%) due to firearm injuries (FIs). The majority of both PAI and FI patients had no previous history of ED visits for violence, mental/behavioral health, sexual/reproductive health, or concern for abuse (60.8% PAI, 76.3% FI). 19.2% of PAI and 13.5% of FI youths had previous ED visits for physical assault injuries (OR 0.68, P=0.24, 95% CI 0.36 to 1.29). 1.6% of PAI and 3.2% of FI youths had a history of ED visits for previous firearm injuries (OR 3.6, P=0.34, 95% CI 0.04 to 2.95). 10% of PAI and 3.8% of FI youths had previous ED visits for mental/behavioral health issues (OR 0.91, P=0.80, 95% CI 0.43 to 1.93). 10% of PAI and 2.6% of FI youths had previous ED visits due to concerns for abuse (OR 0.76, P=0.55, 95% CI 0.31 to 1.86). Conclusions: There are no statistically significant differences between physical-assault-injured and firearm-injured youths in terms of ED usage for previous violent injuries, mental/behavioral health visits, sexual/reproductive health visits, or concerns for abuse. However, violently injured youths in this study had more than twice the rate of previous ED usage for physical assaults and mental health visits than previous literature indicates. Data comparing the ED usage of victims of interpersonal violence to that of nonviolent ED patients are needed, but this study supports the notion that EDs may be a useful place for identification of, and enrollment in, interventions for the youths most at risk of future violence.
Keywords: child abuse, emergency department usage, pediatric gun violence, pediatric interpersonal violence, pediatric mental health, pediatric reproductive health
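As a worked example of the comparative statistics, the 2x2 table below uses counts inferred from the reported percentages (19.2% of 251 PAI vs. 13.5% of 156 FI patients with prior assault-related visits); the sample odds ratio comes out near the reported OR of 0.68:

```python
from scipy.stats import fisher_exact

# Rows: FI, PAI; columns: prior assault visit, no prior visit
# (counts approximated from the abstract's percentages, not the raw data)
table = [[21, 135],
         [48, 203]]

odds_ratio, p = fisher_exact(table)
print(f"OR = {odds_ratio:.2f}, p = {p:.2f}")   # approx. OR 0.66
```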
Procedia PDF Downloads 235
284 Verification of a Simple Model for Rolling Isolation System Response
Authors: Aarthi Sridhar, Henri Gavin, Karah Kelly
Abstract:
Rolling Isolation Systems (RISs) are simple and effective means to mitigate earthquake hazards to equipment in critical and precious facilities, such as hospitals, network colocation facilities, supercomputer centers, and museums. The RIS works by isolating components from floor accelerations, reducing the inertial forces felt by the subsystem. The RIS consists of two platforms with counter-facing concave surfaces (dishes) in each corner. Steel balls lie inside the dishes and allow relative motion between the top and bottom platforms. Formerly, a mathematical model for the dynamics of RISs was developed using Lagrange's equations (LE) and experimentally validated. A new mathematical model was developed using Gauss's Principle of Least Constraint (GPLC) and verified by comparing the impulse response trajectories of the GPLC model and the LE model in terms of the peak displacements and accelerations of the top platform. Mathematical models for the RIS are tedious to derive because of the non-holonomic rolling constraints imposed on the system. However, using Gauss's Principle of Least Constraint to find the equations of motion removes some of the obscurity and yields a system that can be easily extended. Though the GPLC model requires more state variables, the equations of motion are far simpler. The non-holonomic constraint is enforced in terms of accelerations and therefore requires additional constraint stabilization methods to prevent numerical integration from driving the system unstable. The GPLC model allows the incorporation of more physical aspects of the RIS, such as the contribution of the vertical velocity of the platform to the kinetic energy and the mass of the balls. This mathematical model of the RIS is a tool to predict the motion of the isolation platform. The ability to statistically quantify the expected responses of the RIS is critical to the implementation of earthquake hazard mitigation.
Keywords: earthquake hazard mitigation, earthquake isolation, Gauss's Principle of Least Constraint, nonlinear dynamics, rolling isolation system
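For reference, one standard statement of the principle (this formulation, including the Udwadia-Kalaba closed form, is textbook material rather than anything taken from the abstract): among all accelerations a consistent with the constraints, the actual acceleration minimizes the Gaussian constraint deviation

```latex
G(a) = \tfrac{1}{2}\left(a - M^{-1} f\right)^{\top} M \left(a - M^{-1} f\right),
\quad \text{subject to} \quad A(q,\dot{q})\, a = b(q,\dot{q}),
```

where M is the mass matrix, f the applied forces, and A a = b the rolling constraints written at acceleration level. The constrained minimizer has the closed form (with + denoting the Moore-Penrose pseudoinverse)

```latex
a = M^{-1} f + M^{-1/2}\left(A M^{-1/2}\right)^{+}\left(b - A M^{-1} f\right).
```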
Procedia PDF Downloads 250
283 Metacognitive Processing in Early Readers: The Role of Metacognition in Monitoring Linguistic and Non-Linguistic Performance and Regulating Students' Learning
Authors: Ioanna Taouki, Marie Lallier, David Soto
Abstract:
Metacognition refers to the capacity to reflect upon our own cognitive processes. Although there is an ongoing discussion in the literature on the role of metacognition in learning and academic achievement, little is known about its neurodevelopmental trajectories in early childhood, when children begin to receive formal education in reading. Here, we evaluated the metacognitive ability, estimated under a recently developed Signal Detection Theory model, of a cohort of children aged between 6 and 7 (N=60), who performed three two-alternative forced-choice tasks (two linguistic: a lexical decision task and a visual attention span task; one non-linguistic: an emotion recognition task), including trial-by-trial confidence judgements. Our study had three aims. First, we investigated how metacognitive ability (i.e., how well confidence ratings track accuracy in the task) relates to performance on general standardized tasks related to students' reading and general cognitive abilities, using Spearman's and Bayesian correlation analyses. Second, we assessed whether young children recruit common mechanisms supporting metacognition across the different task domains or whether there is evidence for domain-specific metacognition at this early stage of development. This was done by examining correlations in metacognitive measures across the different task domains and evaluating cross-task covariance by applying a hierarchical Bayesian model. Third, using robust linear regression and Bayesian regression models, we assessed whether metacognitive ability at this early stage is related to the longitudinal learning of children in a linguistic and a non-linguistic task. Notably, we did not observe any association between students' reading skills and metacognitive processing at this early stage of reading acquisition. Some evidence consistent with domain-general metacognition was found, with significant positive correlations in metacognitive efficiency between the lexical and emotion recognition tasks and substantial covariance indicated by the Bayesian model. However, no reliable correlations were found between metacognitive performance on the visual attention span task and the remaining tasks. Remarkably, metacognitive ability significantly predicted children's learning in the linguistic and non-linguistic domains a year later. These results suggest that metacognitive skill may be dissociated to some extent from general (i.e., language and attention) abilities, and they further stress the importance of creating educational programs that foster students' metacognitive ability as a tool for long-term learning. More research is crucial to understand whether these programs can enhance metacognitive ability as a transferable skill across distinct domains or whether unique domains should be targeted separately.
Keywords: confidence ratings, development, metacognitive efficiency, reading acquisition
Procedia PDF Downloads 150
282 Improvement in Blast Furnace Performance Using Softening-Melting Zone Profile Prediction Model at G Blast Furnace, Tata Steel Jamshedpur
Authors: Shoumodip Roy, Ankit Singhania, K. R. K. Rao, Ravi Shankar, M. K. Agarwal, R. V. Ramna, Uttam Singh
Abstract:
The productivity of a blast furnace and the quality of the hot metal produced are significantly dependent on the smoothness and stability of furnace operation. The permeability of the furnace bed, as well as the gas flow pattern, influences the steady control of process parameters. The softening-melting zone that forms inside the furnace contributes largely to the distribution of the gas flow and the bed permeability. A better shape of the softening-melting zone enhances the performance of the blast furnace, thereby reducing fuel rates and improving furnace life. A predictive model of the softening-melting zone profile can therefore be utilized to control and improve furnace operation. The shape of the softening-melting zone depends upon the physical and chemical properties of the agglomerates and iron ore charged into the furnace. Variations in the agglomerate proportion of the burden at G Blast Furnace disturbed the furnace stability. Under those circumstances, analysis showed that a W-shaped softening-melting zone profile had formed inside the furnace. The W-shaped zone resulted in poor bed permeability and non-uniform gas flow. There was a significant increase in heat loss at the lower zone of the furnace, fuel demand increased, and a huge production loss was incurred. Therefore, visibility of the softening-melting zone profile was necessary in order to proactively optimize the process parameters and thereby operate the furnace smoothly. Using stave temperatures, a model was developed that predicted the shape of the softening-melting zone inside the furnace. It was observed that the furnace operated smoothly when the zone had an inverse-V shape and poorly when it had a W shape. This model helped to control the heat loss, optimize the burden distribution, and lower the fuel rate at G Blast Furnace, TSL Jamshedpur. As a result of furnace stabilization, productivity increased by 10% and the fuel rate was reduced by 80 kg/thm. Details of the process are discussed in this paper. Keywords: agglomerate, blast furnace, permeability, softening-melting
Procedia PDF Downloads 252
281 Honneth, Feenberg, and the Redemption of Critical Theory of Technology
Authors: David Schafer
Abstract:
Critical Theory is in sore need of a workable account of technology. It had one in the writings of Herbert Marcuse, or so it seemed until Jürgen Habermas mounted a critique in 'Technology and Science as Ideology' (Habermas, 1970) that decisively put it to rest. Ever since, Marcuse’s work has been regarded as outdated – a 'philosophy of consciousness' no longer seriously tenable. But with Marcuse’s view has gone the important insight that technology is no norm-free system (as Habermas portrays it) but can be laden with social bias. Andrew Feenberg is among the few serious scholars who have perceived this problem in post-Habermasian critical theory and has sought to revive a basically Marcusean account of technology. On his view, while the so-called ‘technical elements’ that physically make up technologies are neutral with regard to social interests, there is a sense in which we may speak of a normative grammar or ‘technical code’ built into technology that can be socially biased in favor of certain groups over others (Feenberg, 2002). According to Feenberg, perspectives on technology are reified when they consider technology only in terms of its technical elements, to the neglect of its technical code. Nevertheless, Feenberg’s account fails to explain what is normatively problematic about such reified views of technology. His plausible claim that they represent false perspectives on technology does not by itself explain how such views may be oppressive, even though Feenberg surely wants to be doing that stronger level of normative theorizing. Perceiving this deficit in his own account of reification, he tries to adopt Habermas’s version of systems theory to ground his own critical theory of technology (Feenberg, 1999). But this is a curious move in light of Feenberg’s own legitimate critiques of Habermas’s portrayals of technology as reified or ‘norm-free.’ This paper argues that a better foundation may be found in Axel Honneth’s recent text, Freedom’s Right (Honneth, 2014). Though Honneth there says little explicitly about technology, he offers an implicit account of reification formulated in opposition to Habermas’s systems-theoretic approach. On this ‘normative functionalist’ account of reification, social spheres are reified when participants prioritize individualist ideals of freedom (moral and legal freedom) to the neglect of an intersubjective form of freedom-through-recognition that Honneth calls ‘social freedom.’ Such misprioritization is ultimately problematic because it is unsustainable: individual freedom is philosophically and institutionally dependent upon social freedom. The main difficulty in adopting Honneth’s social theory for the purposes of a theory of technology, however, is that the notion of social freedom is predicable only of social institutions, whereas it appears difficult to conceive of technology as an institution. Nevertheless, in light of Feenberg’s work, the idea that technology includes within itself a normative grammar (technical code) takes on much plausibility. To the extent that this normative grammar may be understood through the category of social freedom, Honneth’s dialectical account of the relationship between individual and social forms of freedom provides a more solid basis from which to ground the normative claims of Feenberg’s sociological account of technology than Habermas’s systems theory. Keywords: Habermas, Honneth, technology, Feenberg
Procedia PDF Downloads 197
280 Development of an Automatic Computational Machine Learning Pipeline to Process Confocal Fluorescence Images for Virtual Cell Generation
Authors: Miguel Contreras, David Long, Will Bachman
Abstract:
Background: Microscopy plays a central role in cell and developmental biology. In particular, fluorescence microscopy can be used to visualize specific cellular components and subsequently quantify their morphology through the development of virtual-cell models for studying the effects of mechanical forces on cells. However, these imaging experiments pose challenges that can make it difficult to quantify cell morphology: inconsistent results, time-consuming and potentially costly protocols, and limitations on the number of labels due to spectral overlap. To address these challenges, the objective of this project is to develop an automatic computational machine learning pipeline to predict cellular component morphology for virtual-cell generation based on fluorescence cell membrane confocal z-stacks. Methods: Registered confocal z-stacks of nuclei and cell membranes of endothelial cells, consisting of 20 images each, were obtained from fluorescence confocal microscopy and normalized through a software pipeline so that each image has a mean pixel intensity of 0.5. An open-source machine learning algorithm, originally developed to predict fluorescence labels on unlabeled transmitted light microscopy cell images, was trained using this set of normalized z-stacks on a single-CPU machine. Through transfer learning, the algorithm used knowledge acquired from its previous training sessions to learn the new task. Once trained, the algorithm was used to predict the morphology of nuclei using normalized cell membrane fluorescence images as input. Predictions were compared to the ground truth fluorescence nuclei images. Results: After one week of training, using one cell membrane z-stack (20 images) and the corresponding nuclei label, results showed qualitatively good predictions on the training set. The algorithm was able to accurately predict nuclei locations as well as shape when fed only fluorescence membrane images. Similar training sessions with improved membrane image quality – clear outlines and shapes showing the boundaries of each cell – proportionally improved the nuclei predictions, reducing errors relative to ground truth. Discussion: These results show the potential of pre-trained machine learning algorithms to predict cell morphology using relatively small amounts of data and training time, eliminating the need to use multiple labels in immunofluorescence experiments. With further training, the algorithm is expected to predict different labels (e.g., focal-adhesion sites, cytoskeleton), which can be added to the automatic machine learning pipeline for direct input into Principal Component Analysis (PCA) for the generation of virtual-cell mechanical models. Keywords: cell morphology prediction, computational machine learning, fluorescence microscopy, virtual-cell models
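The exact preprocessing code is not given in the abstract, but the normalization step described (each image rescaled to a mean pixel intensity of 0.5) might look like the following Python sketch; the clipping choice and array shapes are assumptions.

import numpy as np

def normalize_zstack(stack):
    # Rescale each z-slice so its mean pixel intensity is 0.5,
    # then clip to [0, 1] to keep intensities in a valid range.
    stack = np.asarray(stack, dtype=np.float64)
    out = np.empty_like(stack)
    for i, img in enumerate(stack):
        mean = img.mean()
        out[i] = img * (0.5 / mean) if mean > 0 else 0.5  # guard empty slices
    return np.clip(out, 0.0, 1.0)

# A 20-slice stack, matching the z-stacks described in the study
stack = np.random.rand(20, 512, 512)
normalized = normalize_zstack(stack)
print(normalized.mean(axis=(1, 2)))  # each slice mean is ~0.5 (up to clipping)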
Procedia PDF Downloads 205
279 Cost-Effective Materials for Hydrocarbons Recovery from Produced Water
Authors: Fahd I. Alghunaimi, Hind S. Dossary, Norah W. Aljuryyed, Tawfik A. Saleh
Abstract:
Produced water (PW) is one of the largest by-volume waste streams and one of the most challenging effluents in the oil and gas industry. This is due to the variety of contaminants that make up PW. Several materials have been developed, studied, and implemented to remove hydrocarbons from PW. Adsorption is one of the most effective ways of removing oil from PW. In this work, three new and cost-effective hydrophobic adsorbent materials based on 9-octadecenoic acid grafted graphene (POG) were synthesized for oil/water separation. Graphene derived from graphite was modified with 9-octadecenoic acid to yield 9-octadecenoic acid grafted graphene (OG). The newly synthesized materials, called POG25, POG50, and POG75, were characterized using N₂ physisorption (BET) and Fourier transform infrared (FTIR) spectroscopy. The BET surface area of POG75 was the highest at 288 m²/g, followed by POG50 at 225 m²/g and POG25, the lowest, at 79 m²/g. These three materials were also evaluated for their oil-water separation efficiency using a model mixture, which demonstrated that POG75 has the highest oil removal efficiency and the fastest adsorption rate (Figure 1). POG75 was regenerated, and its performance was verified again, with a slightly reduced adsorption rate compared to the fresh material. The mixtures used in the performance tests were prepared by mixing nonpolar organic liquids such as heptane, dodecane, or hexadecane into colored water. In general, the new materials showed fast uptake of a given quantity of oil due to the highly hydrophobic nature of the materials, which repel water, as confirmed by a contact angle of approximately 150˚. In addition, a novel superhydrophobic material was synthesized by introducing hydrophobic laurate branches onto the surface of a stainless steel mesh (SSM). This novel mesh could help to hold the adsorbent materials in a column to remove oil from PW. Both POG75 and the novel mesh have the potential to remove oil contaminants from produced water, which will help provide an opportunity to recover useful components, reduce the environmental impact, and reuse produced water in several applications such as fracturing. Keywords: graphite to graphene, oleophilic, produced water, separation
Procedia PDF Downloads 122
278 Random Vertical Seismic Vibrations of the Long Span Cantilever Beams
Authors: Sergo Esadze
Abstract:
Seismic resistance norms require the calculation of cantilevers for the vertical component of the base seismic acceleration. Long span cantilevers, as a rule, must be calculated as a separate construction element. Depending on the architectural-planning solution, functional purpose, and environmental conditions of the building/structure being designed, long span cantilever construction may be of very different types: both by main bearing element (beam, truss, slab) and by material (reinforced concrete, steel). The choice among these is always linked with the bearing construction system of the building. Research on the vertical seismic vibration of these constructions requires an individual approach for each (which is not specified in the norms), in correlation with the model of the seismic load. The latter may be given either as a deterministic load or as a random process. A loading model given as a random process is more adequate for this problem. In the presented paper, two types of long span (from 6 m up to 12 m) reinforced concrete cantilever beams have been considered: a) cantilevers whose bearing elements, i.e., the elements in which they are fixed, have large cross-sections, with the cantilevers made with haunches; b) cantilever beams with a load-bearing rod element. Calculation models are suggested separately for types a) and b). They are presented as systems with a finite number of degrees of freedom (concentrated masses). The conditions for fixing the ends correspond to the respective types. The vertical acceleration and the vertical component of the angular acceleration act on the masses. The model is based on the assumption of translational-rotational motion of the building in the vertical plane, caused by the vertical seismic acceleration. The seismic accelerations are considered as random processes and are represented as the product of a deterministic envelope function and a stationary random process. The problem is solved within the framework of the correlation theory of random processes. Solved numerical examples are given. The method is effective for solving such specific problems. Keywords: cantilever, random process, seismic load, vertical acceleration
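In symbols, the loading model described above can be written as follows (a sketch consistent with the abstract; the particular envelope and spectrum used are not specified):

\[
\ddot{u}_g(t) = e(t)\,X(t),
\]

where \(e(t)\) is the deterministic envelope function and \(X(t)\) is a zero-mean stationary random process with autocorrelation \(R_X(\tau)\). Within correlation theory, the non-stationary correlation function of the input follows directly,

\[
R_{\ddot{u}_g}(t_1, t_2) = e(t_1)\,e(t_2)\,R_X(t_2 - t_1),
\]

from which the response statistics of each concentrated mass can be computed.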
Procedia PDF Downloads 188
277 Structural Design of a Relief Valve Considering Strength
Authors: Nam-Hee Kim, Jang-Hoon Ko, Kwon-Hee Lee
Abstract:
A relief valve is a mechanical element that maintains safety by controlling high pressure. Usually, the high pressure is relieved by using the spring force and letting the fluid flow out of the system through another path. When the normal pressure is restored, the relief valve can return to its initial state. The relief valve in this study is applied to pressure vessels, evaporators, piping lines, etc. The relief valve should be designed for smooth operation and should satisfy the structural safety requirements under operating conditions. In general, the structural analysis is performed following a fluid flow analysis. In this process, FSI (Fluid-Structure Interaction) is required in order to input the force obtained from the output of the flow analysis. Firstly, this study predicts the velocity profile and the pressure distribution in the given system. The assumptions for the flow analysis are as follows: • The flow is steady-state and three-dimensional. • The fluid is Newtonian and incompressible. • The walls of the pipe and valve are smooth. The flow characteristics in this relief valve do not induce any problem. The commercial software ANSYS/CFX is utilized for the flow analysis. On the contrary, very high pressure may cause structural problems due to severe stress. The relief valve consists of a body, bonnet, guide, piston, and nozzle, and its material is stainless steel. To investigate its structural safety, the worst-case loading is taken as a pressure of 700 bar. This load, which is greater than the load obtained from FSI, is applied to the inside of the valve. The maximum stress is calculated as 378 MPa by performing the finite element analysis. However, this value is greater than the allowable value. Thus, an alternative design is suggested to improve the structural performance through a case study. We found that the design variable most sensitive to strength is the shape of the nozzle. The case study varies the size of the nozzle. Finally, it can be seen that the suggested design satisfies the structural design requirement. The FE analysis is performed using the commercial software ANSYS/Workbench. Keywords: relief valve, structural analysis, structural design, strength, safety factor
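The strength check behind this conclusion is the usual safety-factor comparison; since the abstract does not state the allowable stress, it is left symbolic here:

\[
n = \frac{\sigma_{\text{allow}}}{\sigma_{\max}} = \frac{\sigma_{\text{allow}}}{378\ \text{MPa}} < 1,
\]

so the initial design fails the requirement, and the nozzle redesign must reduce \(\sigma_{\max}\) until \(n \geq 1\) (or whatever margin the applicable design code prescribes).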
Procedia PDF Downloads 303
276 The Confluence between Autism Spectrum Disorder and the Schizoid Personality
Authors: Murray David Schane
Abstract:
Through years of clinical encounters with patients with autism spectrum disorders and those with a schizoid personality, the many defining diagnostic features shared between these conditions have been explored, current neurobiological differences have been reviewed, and critically different treatment strategies for each have been devised. The paper compares and contrasts the two conditions; their apparent similarities are found in these DSM descriptive categories: restricted range of social-emotional reciprocity; poor non-verbal communicative behavior in social interactions; difficulty developing and maintaining relationships; detachment from social relationships; lack of desire for or enjoyment of close relationships; and preference for solitary activities. In this paper, autism, fundamentally a communicative disorder, is revealed to present clinically as a pervasive aversive response to efforts to engage with or be engaged by others. Autists with the Asperger presentation typically have language but have difficulty understanding humor, irony, sarcasm, metaphoric speech, and even narratives about social relationships. They also tend to seek sameness, possibly to avoid problems of social interpretation. Repetitive behaviors engage many autists as a screen against ambient noise, social activity, and challenging interactions. Also in this paper, the schizoid personality is revealed as a pattern of social avoidance, self-sufficiency, and apparent indifference to others that serves as a complex psychological defense against a deep, long-abiding fear of appropriation and perverse manipulation. Neither genetic nor MRI studies have yet located the explanatory data that identify the cause or the neurobiology of autism. Similarly, studies of the schizoid have yet to group that condition with those found in schizophrenia. Through presentations of clinical examples, the treatment of autists of the Asperger type is revealed to address the autist’s extreme social aversion, which also precludes the experience of empathy. Autists will be revealed as forming social attachments but without the capacity to interact with mutual concern. Empathy will be shown to be teachable and, as social avoidance relents, autists can come to recognize and acknowledge the meaning and signs of empathic needs. Treatment of schizoids will be shown to revolve around joining empathically with the schizoid’s apprehensions about interpersonal, interactive proximity. Models of both autism and schizoid personality traits have yet to be replicated in animals, thereby eliminating the role of translational research in providing the kinds of clues to behavioral patterns that can be related to genetic, epigenetic, and neurobiological measures. But as these clinical examples attest, treatment strategies have significant impact. Keywords: autism spectrum, schizoid personality traits, neurobiological implications, critical diagnostic distinctions
Procedia PDF Downloads 114
275 Assessment of Surface Water Quality near Landfill Sites Using a Water Pollution Index
Authors: Alejandro Cittadino, David Allende
Abstract:
Landfilling of municipal solid waste is a common waste management practice in Argentina, as in many parts of the world. There is extensive scientific literature on the potential negative effects of landfill leachates on the environment, so it is necessary to be rigorous with the control and monitoring systems. Due to the specific municipal solid waste composition in Argentina, local landfill leachates contain large amounts of organic matter (biodegradable, but also refractory to biodegradation), as well as ammonia-nitrogen, small traces of some heavy metals, and inorganic salts. In order to investigate the surface water quality in the Reconquista river adjacent to the Norte III landfill, water samples both upstream and downstream of the site are collected quarterly and analyzed for 43 parameters, including organic matter, heavy metals, and inorganic salts, as required by local standards. The objective of this study is to apply a water quality index that considers the leachate characteristics in order to determine the quality status of the watercourse as it passes the landfill. The water pollution index method has been widely used in water quality assessments, particularly of rivers, and it has played an increasingly important role in water resource management, since it provides a single number, simple enough for the public to understand, that states the overall water quality at a certain location and time. The chosen water quality index (ICA) is based on the values of six parameters: dissolved oxygen (in mg/l and percent saturation), temperature, biochemical oxygen demand (BOD5), ammonia-nitrogen, and chloride (Cl-) concentration. The ICA index was determined both upstream and downstream in the Reconquista river, on a rating scale between 0 (very poor water quality) and 10 (excellent water quality). The monitoring results indicated that the water quality was unaffected by possible leachate runoff, since the index scores upstream and downstream were ranked in the same category, although in general most of the samples were classified as having poor water quality according to the index’s scale. The annual averaged ICA index scores (computed quarterly) were 4.9, 3.9, 4.4, and 5.0 upstream and 3.9, 5.0, 5.1, and 5.0 downstream during the study period between 2014 and 2017. Additionally, the water quality seemed to exhibit distinct seasonal variations, probably due to annual precipitation patterns in the study area. The ICA water quality index appears to be appropriate for evaluating landfill impacts, since it accounts mainly for organic pollution and inorganic salts, consistent with the absence of heavy metals in the local leachate composition; however, the inclusion of other parameters could be more decisive in discerning the stream reaches affected by landfill activities. Future work may consider adding other parameters to the index, such as total organic carbon (TOC) and total suspended solids (TSS), since they are present in the leachate in high concentrations. Keywords: landfill, leachate, surface water, water quality index
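The ICA formula itself is not reproduced in the abstract, but indices of this kind typically map each measured parameter to a 0-10 subindex via a rating curve and then aggregate the subindices. The Python sketch below illustrates that generic pattern only; the subindex values and equal weights are placeholders, not the real ICA definition.

def wqi_score(subindices, weights):
    # Generic weighted-average water quality index on a 0-10 scale.
    # `subindices` maps parameter name -> 0-10 rating from its rating curve.
    total_w = sum(weights[p] for p in subindices)
    return sum(subindices[p] * weights[p] for p in subindices) / total_w

# Placeholder ratings for the six ICA parameters (0 = very poor, 10 = excellent)
subindices = {
    "DO_mg_l": 6.0, "DO_pct_sat": 5.5, "temperature": 7.0,
    "BOD5": 3.0, "NH3_N": 4.0, "chloride": 5.0,
}
weights = {p: 1.0 for p in subindices}  # equal weights, for illustration only
print(round(wqi_score(subindices, weights), 1))  # 5.1 -> a "poor" quality band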
Procedia PDF Downloads 150