Search results for: multiplicative capability
250 Therapeutic Role of T Cell Subpopulations (CD4, CD8 and Treg (CD25 and FOXP3+) Cells) of UC MSC Isolated from Three Different Methods in Various Diseases
Authors: Kumari Rekha, Mathur K Dhananjay, Maheshwari Deepanshu, Nautiyal Nidhi, Shubham Smriti, Laal Deepika, Sinha Swati, Kumar Anupam, Biswas Subhrajit, Shiv Kumar Sarin
Abstract:
Background: Mesenchymal stem cells (MSCs) are multipotent stem cells derived from the mesoderm and are used for therapeutic purposes because of their self-renewal, homing capacity, immunomodulatory capability, low immunogenicity and mitochondrial transfer signaling. MSCs have the ability to regulate both innate and adaptive immune responses through the modulation of cellular responses and the secretion of inflammatory mediators. Sources of MSC include UC MSC, BM MSC, dental pulp, and adipose MSC. The most frequently used source is umbilical cord tissue, because it is easily available and free of the limitations of collection procedures at the respective hospitals. The immunosuppressive role of MSCs is particularly interesting for clinical use since it confers resistance to rejection by the host immune response. Methodology: In this study, T helper cells (CD4), cytotoxic T cells (CD8) and immunoregulatory cells (CD25+ FOXP3+) were compared for MSC isolated by three different methods: UC Dissociation Kit (Miltenyi), explant culture and Collagenase Type IV. To check the immunomodulatory property, these MSCs were seeded with PBMC (coculture) in CD3-coated 24-well plates. CD28 antibody was added to the coculture for six days. The coculture was analyzed on a FACSVerse flow cytometer. Results: Flow cytometry analysis of the coculture of activated T cells and mesenchymal stem cells showed that the overall T helper cell (CD4+) number increased significantly in the All Enzymes MSC group (p<0.0264), whereas the change was not significant for explant MSC (p>0.0895) or collagenase MSC (p>0.7889). Similarly, Treg (CD25+, FOXP3+) expression increased significantly in the All Enzymes group (p<0.0234) and decreased in the explant and collagenase groups. Experiments have shown that MSCs can also directly prevent the cytotoxic activity of CD8 lymphocytes, mainly by blocking their proliferation rather than by inhibiting the cytotoxic effect, while promoting Treg cells, which help mediate the immune response in various diseases. Conclusion: MSC suppress cytotoxic CD8 T cells and enhance immunoregulatory Treg (CD4+, CD25+, FOXP3+) cell expression. Thus, MSC maintain a proper balance (ratio) between CD4 T cells and cytotoxic CD8 T cells.
Keywords: MSC, disease, T cell, T regulatory
Procedia PDF Downloads 114
249 Conversational Assistive Technology of Visually Impaired Person for Social Interaction
Authors: Komal Ghafoor, Tauqir Ahmad, Murtaza Hanif, Hira Zaheer
Abstract:
Assistive technology has been developed to support visually impaired people in their social interactions. Conversational assistive technology is designed to enhance communication skills, facilitate social interaction, and improve the quality of life of visually impaired individuals. This technology includes speech recognition, text-to-speech features, and other communication devices that enable users to communicate with others in real time. The technology uses natural language processing and machine learning algorithms to analyze spoken language and provide appropriate responses. It also includes features such as voice commands and audio feedback to provide users with a more immersive experience. These technologies have been shown to increase the confidence and independence of visually impaired individuals in social situations and have the potential to improve their social skills and relationships with others. Overall, conversational assistive technology is a promising tool for empowering visually impaired people and improving their social interactions. One of its key benefits is that it allows visually impaired individuals to overcome communication barriers that they may face in social situations. It can help them to communicate more effectively with friends, family, and colleagues, as well as strangers in public spaces. By providing a more seamless and natural way to communicate, this technology can help to reduce feelings of isolation and improve overall quality of life. The main objective of this research is to give blind users the capability to move around in unfamiliar environments through a user-friendly device with face, object, and activity recognition. The model is evaluated on the accuracy of its activity recognition. The device captures the front view of the blind user, detects objects, recognizes activities, and answers the user's queries; it is implemented using a front-facing camera. A local dataset was collected that includes different first-person human activities. The results show that the activities the VGG-16 model was trained on, such as hugging, shaking hands, talking, walking, and waving, are correctly identified.
Keywords: dataset, visually impaired person, natural language processing, human activity recognition
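As a rough illustration of the activity-recognition component described above, the sketch below fine-tunes a pre-trained VGG-16 backbone for a handful of first-person activity classes. The class names, folder layout, image size, and training settings are assumptions for illustration only; the authors' actual dataset, preprocessing, and hyperparameters are not specified in the abstract.

```python
# Minimal sketch: fine-tuning VGG-16 for first-person activity recognition.
# Class names, folder layout, and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models, transforms, datasets
from torch.utils.data import DataLoader

CLASSES = ["hugging", "shaking_hands", "talking", "walking", "waving"]  # assumed labels

transform = transforms.Compose([
    transforms.Resize((224, 224)),          # VGG-16 expects 224x224 inputs
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# Hypothetical folder layout: activity_frames/<class_name>/<image>.jpg
dataset = datasets.ImageFolder("activity_frames", transform=transform)
loader = DataLoader(dataset, batch_size=16, shuffle=True)

model = models.vgg16(weights=models.VGG16_Weights.DEFAULT)
model.classifier[6] = nn.Linear(4096, len(CLASSES))   # replace the final layer

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.classifier[6].parameters(), lr=1e-4)

model.train()
for epoch in range(5):                      # small number of epochs, for illustration
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```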
Procedia PDF Downloads 58
248 Identifying Temporary Housing Main Vertexes through Assessing Post-Disaster Recovery Programs
Authors: S. M. Amin Hosseini, Oriol Pons, Carmen Mendoza Arroyo, Albert de la Fuente
Abstract:
In the aftermath of a natural disaster, the major challenge most cities and societies face, regardless of their diverse level of prosperity, is to provide temporary housing (TH) for the displaced population (DP). However, the features of TH applied in previous recovery programs have varied greatly from case to case. This situation demonstrates that providing temporary accommodation for DP in a short period of time, and usually in great numbers, is complicated in terms of satisfying all the beneficiaries' needs, regardless of the societies' welfare levels. Furthermore, when previously used strategies are applied to different areas, the chosen strategies are most likely destined to fail, unless the strategies are context- and culturally based. Therefore, as the population of disaster-prone cities is increasing, decision-makers need a platform to help determine all the factors which caused the outcomes of prior programs. To this end, this paper aims to assess the problems, requirements, limitations, potential responses, chosen strategies, and their outcomes, in order to determine the main elements that have influenced the TH process. In this regard, and in order to determine a customizable strategy, this study analyses the TH programs of five different cases: the Marmara earthquake, 1999; the Bam earthquake, 2003; the Aceh earthquake and tsunami, 2004; Hurricane Katrina, 2005; and the L'Aquila earthquake, 2009. The research results demonstrate that the main vertexes of TH are: (1) local characteristics, including local potential and affected population features, (2) TH properties, which need to be considered in four phases: planning, provision/construction, operation, and second life, and (3) natural hazard impacts, which embrace intensity and type. Accordingly, this study offers decision-makers the opportunity to discover the main vertexes, their subsets, interactions, and the relation between strategies and outcomes based on the local conditions of each case. Consequently, authorities may acquire the capability to design a customizable method in the face of complicated post-disaster housing in the wake of future natural disasters.
Keywords: post-disaster temporary accommodation, urban resilience, natural disaster, local characteristic
Procedia PDF Downloads 243
247 The Mechanism Study on the Difference between High and Low Voltage Performance of Li3V2(PO4)3
Authors: Enhui Wang, Qingzhu Ou, Yan Tang, Xiaodong Guo
Abstract:
As one of the most popular polyanionic compounds among lithium-ion cathode materials, Li3V2(PO4)3 has always suffered from low rate capability, especially over 3~4.8 V, which is considered to be related to the ion diffusion resistance and the structural transformation during Li+ de/intercalation. Here, as the cut-off voltage, cycle number and current density change, the formation, growth, destruction and repair of the SEI interfacial film on the surface of the cathode, the structural transformation during charge and discharge, and the de/intercalation kinetics reflected by the electrochemical impedance and the diffusion coefficient have been investigated in detail. The impacts of current density, cycle number and cut-off voltage on the interfacial film and structure were studied specifically. Firstly, the matching between electrolyte and material was investigated; it turned out that batteries with the high-voltage electrolyte showed the best electrochemical performance, so the high-voltage electrolyte was chosen. Secondly, AC impedance spectroscopy was used to study the changes in interface impedance and lithium-ion diffusion coefficient; the results showed that current density, cycle number and cut-off voltage influence the interfacial film together, and the factor that changes the interfacial properties most is the key factor. Scanning electron microscopy (SEM) analysis confirmed that the attenuation of discharge specific capacity was associated with the destruction and repair process of the SEI film. Thirdly, X-ray diffraction was used to study the structural changes, which are also impacted by current density, cycle number and cut-off voltage. The results indicated that the cell volume of Li3V2(PO4)3 increased as the current density increased; cycle number merely influenced the structure of the material; the cell volume decreased first and moved back gradually after two Li-ions had been deintercalated as the charging cut-off voltage increased, and increased as the number of intercalated Li-ions increased during the discharging process. The results on the changes in interface impedance and lithium-ion diffusion coefficient further showed that both increased when the cut-off voltage passed the voltage plateaus and decreased when the cut-off voltage lay between plateaus. Finally, a three-electrode system was adopted for the first time to measure the activation energy of the system; the results indicated that the activation energy of the three-electrode system (22.385 kJ/mol) was much smaller than that of the two-electrode system (40.064 kJ/mol).
Keywords: cut-off voltage, de/intercalation kinetics, solid electrolyte interphase film, structural transformation
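Activation energies of the kind quoted above are typically extracted from an Arrhenius fit of temperature-dependent exchange currents (or inverse charge-transfer resistances). The short sketch below shows the general form of that calculation; the temperature points and measured values are placeholder numbers, not data from this study.

```python
# Illustrative Arrhenius fit: Ea from ln(i0) vs 1/T (placeholder data, not from this study).
import numpy as np

R = 8.314                                   # gas constant, J/(mol*K)
T = np.array([288.0, 298.0, 308.0, 318.0])  # assumed test temperatures, K
i0 = np.array([0.8, 1.4, 2.3, 3.6])         # assumed exchange currents, mA

slope, intercept = np.polyfit(1.0 / T, np.log(i0), 1)
Ea = -slope * R / 1000.0                    # activation energy in kJ/mol
print(f"Estimated activation energy: {Ea:.1f} kJ/mol")
```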
Procedia PDF Downloads 296
246 Using The Flight Heritage From >150 Electric Propulsion Systems To Design The Next Generation Field Emission Electric Propulsion Thrusters
Authors: David Krejci, Tony Schönherr, Quirin Koch, Valentin Hugonnaud, Lou Grimaud, Alexander Reissner, Bernhard Seifert
Abstract:
In 2018 the NANO thruster became the first Field Emission Electric Propulsion (FEEP) system ever to be verified in space, in an In-Orbit Demonstration mission conducted together with Fotec. Since then, 160 additional ENPULSION NANO propulsion systems have been deployed in orbit on 73 different spacecraft across multiple customers and missions. These missions included a variety of satellite bus sizes, ranging from 3U CubeSats to >100 kg buses, and different orbits in low Earth orbit and geostationary Earth orbit, providing an abundance of on-orbit data for statistical analysis. This large-scale industrialization and flight heritage allows for a holistic way of gathering data from testing, integration and operational phases, deriving lessons learnt over a variety of different mission types, operator approaches, use cases and environments. Based on these lessons learnt, a new generation of propulsion systems is being developed, addressing key findings from the large NANO heritage and adding new capabilities, including increased resilience, thrust vector steering and increased power and thrust level. Some of these successor products have already been validated in orbit, including the MICRO R3 and the NANO AR3. While the MICRO R3 features increased power and thrust level, the NANO AR3 is a successor of the heritage NANO thruster with added thrust vectoring capability. Five NANO AR3 units have been launched to date on two different spacecraft. This work presents flight telemetry data of ENPULSION NANO systems and on-orbit statistical data of the ENPULSION NANO, as well as lessons learnt during on-orbit operations, customer assembly, integration and testing support, and ground test campaigns conducted at different facilities. We discuss how the transfer of lessons learnt and operational improvements has been accomplished across independent missions and customers. Building on these learnings and exhaustive heritage, we present the design of the new generation of propulsion systems that increase the power and thrust level of FEEP systems to address larger spacecraft buses.
Keywords: FEEP, field emission electric propulsion, electric propulsion, flight heritage
Procedia PDF Downloads 93
245 A Multi-Role Oriented Collaboration Platform for Distributed Disaster Reduction in China
Authors: Linyao Qiu, Zhiqiang Du
Abstract:
With rapid urbanization, economic development, and steady population growth in China, the widespread devastation, economic damage, and loss of human lives caused by numerous forms of natural disasters are becoming increasingly serious every year. Disaster management requires available and effective cooperation of different roles and organizations in the whole process, including mitigation, preparedness, response and recovery. Due to the imbalance of regional development in China, the disaster management capabilities of national and provincial disaster reduction centers are uneven. When an undeveloped area suffers from a disaster, the local reduction department can neither obtain first-hand information, such as high-resolution remote sensing images from satellites and aircraft, independently, nor is a sharing mechanism provided for the department to directly access data resources deployed elsewhere. Most existing disaster management systems operate in a typical passive data-centric mode and serve a single department, so resources cannot be fully shared. This impediment blocks local departments and groups from quick emergency response and decision-making. In this paper, we introduce a collaborative platform for distributed disaster reduction. To address the imbalance in sharing data sources and technology in the process of disaster reduction, we propose a multi-role oriented collaboration business mechanism, capable of scheduling and allocating multiple resources for optimum utilization, to link various roles for collaborative reduction business in different places. The platform fully considers the differences in equipment conditions among provinces and provides several service modes to satisfy technology needs in disaster reduction. An integrated collaboration system based on a focusing services mechanism is designed and implemented for resource scheduling, functional integration, data processing, task management, collaborative mapping, and visualization. Actual applications illustrate that the platform supports data sharing and business collaboration between national and provincial departments well. It could significantly improve the capability of disaster reduction in China.
Keywords: business collaboration, data sharing, distributed disaster reduction, focusing service
Procedia PDF Downloads 295
244 A Unified Approach for Digital Forensics Analysis
Authors: Ali Alshumrani, Nathan Clarke, Bogdan Ghite, Stavros Shiaeles
Abstract:
Digital forensics has become an essential tool in the investigation of cyber and computer-assisted crime. Arguably, given the prevalence of technology and the digital footprints that result, it could have a significant role across almost all crimes. However, the variety of technology platforms (such as computers, mobiles, Closed-Circuit Television (CCTV), Internet of Things (IoT), databases, drones, and cloud computing services), the heterogeneity and volume of data, forensic tool capability, and the investigative cost make investigations both technically challenging and prohibitively expensive. Forensic tools also tend to be siloed into specific technologies, e.g., File System Forensic Analysis Tools (FS-FAT) and Network Forensic Analysis Tools (N-FAT), and many data sources have little to no specialist forensic tooling. Increasingly, it also becomes essential to compare and correlate evidence across data sources, and to do so in an efficient and effective manner, enabling an investigator to answer high-level questions of the data in a timely manner without having to trawl through the data and perform the correlation manually. This paper proposes a Unified Forensic Analysis Tool (U-FAT), which aims to establish a common language for electronic information and permit multi-source forensic analysis. Core to this approach is the identification and development of forensic analyses that automate complex data correlations, enabling investigators to investigate cases more efficiently. The paper presents a systematic analysis of major crime categories and identifies which forensic analyses could be used. For example, in a child abduction, an investigation team might have evidence from a range of sources including computing devices (mobile phone, PC), CCTV (potentially a large number of cameras), ISP records, and mobile network cell tower data, in addition to third-party databases such as the National Sex Offender registry and tax records, with the desire to auto-correlate across sources and visualize the results in a cognitively effective manner. U-FAT provides a holistic, flexible, and extensible approach to digital forensics in a technology-, application-, and data-agnostic manner, providing powerful and automated forensic analysis.
Keywords: digital forensics, evidence correlation, heterogeneous data, forensics tool
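A core ingredient of the automated correlation described above is normalising records from heterogeneous sources into a common schema and joining them on shared attributes such as time and identity. The sketch below illustrates that idea with pandas; the source names, field names, and the five-minute matching window are assumptions for illustration, not the U-FAT design.

```python
# Sketch: normalising two evidence sources and correlating them by identifier and time window.
# Source names, columns, and the 5-minute window are illustrative assumptions.
import pandas as pd

phone = pd.DataFrame({
    "timestamp": pd.to_datetime(["2023-05-01 10:02", "2023-05-01 10:40"]),
    "identifier": ["+441234567890", "+441234567890"],
    "event": ["call_out", "sms_out"],
})
cell_towers = pd.DataFrame({
    "timestamp": pd.to_datetime(["2023-05-01 10:03", "2023-05-01 11:15"]),
    "identifier": ["+441234567890", "+441234567890"],
    "tower_id": ["T-118", "T-204"],
})

# Common schema: every record becomes (timestamp, identifier, source, detail).
events = pd.concat([
    phone.assign(source="phone", detail=phone["event"]),
    cell_towers.assign(source="cell", detail=cell_towers["tower_id"]),
])[["timestamp", "identifier", "source", "detail"]].sort_values("timestamp")

# Correlate each phone event with the nearest cell-tower record within 5 minutes.
correlated = pd.merge_asof(
    phone.sort_values("timestamp"), cell_towers.sort_values("timestamp"),
    on="timestamp", by="identifier",
    tolerance=pd.Timedelta("5min"), direction="nearest",
)
print(correlated)
```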
Procedia PDF Downloads 196
243 The Accuracy of an 8-Minute Running Field Test to Estimate Lactate Threshold
Authors: Timothy Quinn, Ronald Croce, Aliaksandr Leuchanka, Justin Walker
Abstract:
Many endurance athletes train at or just below an intensity associated with their lactate threshold (LT), and often the heart rate (HR) these athletes use for their LT is above their true LT-HR measured in a laboratory. Training above their true LT-HR may lead to overtraining and injury. Few athletes have the capability of measuring their LT in a laboratory and instead rely on perception to guide them, as accurate field tests to determine LT are limited. Therefore, the purpose of this study was to determine whether an 8-minute field test could accurately define the HR associated with LT as measured in the laboratory. On Day 1, fifteen male runners (mean±SD; age, 27.8±4.1 years; height, 177.9±7.1 cm; body mass, 72.3±6.2 kg; body fat, 8.3±3.1%) performed a discontinuous treadmill LT/maximal oxygen consumption (LT/VO2max) test using a portable metabolic gas analyzer (Cosmed K4b2) and a lactate analyzer (Analox GL5). The LT (and associated HR) was determined using the 1/+1 method, where blood lactate increased by 1 mmol·L⁻¹ over baseline followed by an additional 1 mmol·L⁻¹ increase. Days 2 and 3 were randomized, and the athletes performed an 8-minute run either on the treadmill (TM) or on a 160-m indoor track (TR), in an effort to cover as much distance as possible while maintaining a high intensity throughout the entire 8 minutes. VO2, HR, ventilation (VE), and respiratory exchange ratio (RER) were measured using the Cosmed system, and rating of perceived exertion (RPE; 6-20 scale) was recorded every minute. All variables were averaged over the 8 minutes. The total distance covered over the 8 minutes was measured in both conditions. At the completion of the 8-minute runs, blood lactate was measured. Paired-sample t-tests and pairwise Pearson correlations were computed to determine the relationship between variables measured in the field tests and those obtained in the laboratory at LT. An alpha level of <0.05 was required for statistical significance. The HR (mean±SD) during the TM (167±9 bpm) and TR (172±9 bpm) tests was strongly correlated with the HR measured during the laboratory LT test (169±11 bpm) (r=0.68; p<0.03 and r=0.88; p<0.001, respectively). Blood lactate values during the TM and TR tests were not different from each other but were strongly correlated with the laboratory LT values (r=0.73; p<0.04 and r=0.66; p<0.05, respectively). VE was significantly greater during the TR (134.8±11.4 L·min⁻¹) than during the TM (123.3±16.2 L·min⁻¹), with moderately strong correlations to the laboratory threshold values (r=0.38; p=0.27 and r=0.58; p=0.06, respectively). VO2 was higher during the TR (51.4 ml·kg⁻¹·min⁻¹) than during the TM (47.4 ml·kg⁻¹·min⁻¹), with correlations of 0.33 (p=0.35) and 0.48 (p=0.13), respectively, to the threshold values. Total distance run was significantly greater during the TR (2331.6±180.9 m) than during the TM (2177.0±232.6 m), but the two were strongly correlated with each other (r=0.82; p<0.002). These results suggest that an 8-minute running field test can accurately predict the HR associated with the LT and may be a simple test that athletes and coaches could implement to aid training.
Keywords: blood lactate, heart rate, running, training
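The comparison between field-test and laboratory values described above can be reproduced with standard paired tests and correlations. The snippet below shows the form of that analysis with SciPy; the arrays are placeholder values, not the study's data.

```python
# Sketch of the statistical comparison: paired t-test and Pearson correlation (placeholder data).
import numpy as np
from scipy import stats

lab_lt_hr = np.array([168, 172, 160, 175, 169])    # assumed laboratory LT heart rates, bpm
track_hr = np.array([171, 176, 163, 177, 173])     # assumed 8-minute track-test heart rates, bpm

t_stat, p_paired = stats.ttest_rel(track_hr, lab_lt_hr)
r, p_corr = stats.pearsonr(track_hr, lab_lt_hr)

print(f"Paired t-test: t = {t_stat:.2f}, p = {p_paired:.3f}")
print(f"Pearson correlation: r = {r:.2f}, p = {p_corr:.3f}")
```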
Procedia PDF Downloads 252
242 Optimum Method to Reduce the Natural Frequency for Steel Cantilever Beam
Authors: Eqqab Maree, Habil Jurgen Bast, Zana K. Shakir
Abstract:
Passive damping, once properly characterized and incorporated into the structural design, is an autonomous mechanism. Passive damping can be achieved by applying layers of a polymeric material, called viscoelastic material (VEM) layers, to the base structure. This type of configuration is known as free or unconstrained layer damping treatment. A shear or constrained damping treatment uses the idea of adding a constraining layer, typically a metal, on top of the polymeric layer. Constrained treatment is a more efficient form of damping than the unconstrained damping treatment. In a constrained damping treatment, a sandwich is formed with the viscoelastic layer as the core. When the two outer layers experience bending, as they would if the structure were oscillating, they shear the viscoelastic layer and energy is dissipated in the form of heat. This form of energy dissipation allows the structural oscillations to attenuate much faster. The purpose of this study is to predict damping effects using two methods of passive viscoelastic constrained layer damping. The first method is Euler-Bernoulli beam theory, which is commonly used for predicting the vibratory response of beams. The second method uses the finite element package ANSYS 14 with two-dimensional solid structural elements, specifically the eight-noded SOLID183 element; its outputs are the damped natural frequency values and mode shapes for the first five modes. This type of passive damping treatment is widely used for structural applications in many industries, such as aerospace and automotive. In this paper, a steel cantilever sandwich beam with a viscoelastic core of type 3M-468 is analyzed using both methods of passive viscoelastic constrained layer damping. It is also shown that the percentage reduction in modal frequency between the undamped and damped 8 mm thick steel sandwich cantilever beam is very high for each mode; this is due to the effect of the viscoelastic layer on the damped beams. Finally, this type of damped steel sandwich cantilever beam with a viscoelastic core of type 3M-468 is very appropriate for use in the automotive industry and in many mechanical applications, because it has a very high capability to reduce the modal vibration of structures.
Keywords: steel cantilever, sandwich beam, viscoelastic materials core type (3M468), ANSYS14, Euler-Bernoulli beam theory
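For context, the undamped natural frequencies that the viscoelastic treatment is meant to attenuate can be estimated from Euler-Bernoulli theory for a uniform cantilever, f_n = (β_nL)² / (2πL²) · √(EI/ρA). The sketch below evaluates the first three modes for an assumed plain steel beam; the dimensions are illustrative and are not those of the tested sandwich beam.

```python
# Euler-Bernoulli natural frequencies of a plain steel cantilever (illustrative geometry only).
import numpy as np

E, rho = 210e9, 7850.0            # steel: Young's modulus (Pa), density (kg/m^3)
L, b, h = 0.30, 0.03, 0.008       # assumed length, width, thickness (m)
A, I = b * h, b * h**3 / 12.0     # cross-section area and second moment of area

beta_L = np.array([1.8751, 4.6941, 7.8548])       # first three cantilever eigenvalues
f_n = (beta_L**2 / (2 * np.pi * L**2)) * np.sqrt(E * I / (rho * A))

for i, f in enumerate(f_n, start=1):
    print(f"Mode {i}: {f:.1f} Hz")
```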
Procedia PDF Downloads 318
241 Optimization Approach to Integrated Production-Inventory-Routing Problem for Oxygen Supply Chains
Authors: Yena Lee, Vassilis M. Charitopoulos, Karthik Thyagarajan, Ian Morris, Jose M. Pinto, Lazaros G. Papageorgiou
Abstract:
With globalisation, the need for better coordination of production and distribution decisions has become increasingly important for industrial gas companies in order to remain competitive in the marketplace. In this work, we investigate a problem that integrates production, inventory, and routing decisions in a liquid oxygen supply chain. The oxygen supply chain consists of production facilities, external third-party suppliers, and multiple customers, including hospitals and industrial customers. The product produced by the plants or sourced from the competitors, i.e., third-party suppliers, is distributed by a fleet of heterogeneous vehicles to satisfy customer demands. The objective is to minimise the total operating cost involving production, third-party, and transportation costs. The key production decisions include production and inventory levels and the product amount from third-party suppliers. In contrast, the distribution decisions involve customer allocation, delivery timing, delivery amount, and vehicle routing. The optimisation of the coordinated production, inventory, and routing decisions is a challenging problem, especially when dealing with large-size instances. Thus, we present a two-stage procedure to solve the integrated problem efficiently. First, the problem is formulated as a mixed-integer linear programming (MILP) model by simplifying the routing component. The solution from the first-stage MILP model yields the optimal customer allocation, production and inventory levels, and delivery timing and amount. Then, we fix the previous decisions and solve a detailed routing problem. In the second stage, we propose a column generation scheme to address the computational complexity of the resulting detailed routing problem. A case study considering a real-life oxygen supply chain in the UK is presented to illustrate the capability of the proposed models and solution method. Furthermore, a comparison of the solutions from the proposed approach with the corresponding solutions provided by existing metaheuristic techniques (e.g., guided local search and tabu search algorithms) is presented to evaluate the efficiency.
Keywords: production planning, inventory routing, column generation, mixed-integer linear programming
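To make the two-stage structure concrete, the sketch below sets up a heavily simplified first-stage MILP, covering only production, third-party sourcing, and inventory balance over a few periods, using PuLP. The demands, capacities, and costs are invented for illustration; the actual model also covers customer allocation, delivery timing, and the routing component handled by column generation.

```python
# Simplified first-stage production/sourcing/inventory MILP (illustrative data only).
import pulp

periods = range(4)
demand = [120, 150, 90, 130]           # assumed liquid oxygen demand per period
cap = 140                              # assumed plant capacity per period
prod_cost, third_cost, hold_cost, fixed_cost = 5.0, 9.0, 1.0, 200.0

m = pulp.LpProblem("oxygen_first_stage", pulp.LpMinimize)
produce = pulp.LpVariable.dicts("produce", periods, lowBound=0)
buy = pulp.LpVariable.dicts("buy", periods, lowBound=0)        # third-party sourcing
inv = pulp.LpVariable.dicts("inv", periods, lowBound=0)
run = pulp.LpVariable.dicts("run", periods, cat="Binary")      # plant on/off

m += pulp.lpSum(prod_cost * produce[t] + third_cost * buy[t]
                + hold_cost * inv[t] + fixed_cost * run[t] for t in periods)

for t in periods:
    m += produce[t] <= cap * run[t]                            # capacity only if the plant runs
    prev = inv[t - 1] if t > 0 else 0                          # zero starting inventory assumed
    m += prev + produce[t] + buy[t] - demand[t] == inv[t]      # inventory balance

m.solve(pulp.PULP_CBC_CMD(msg=False))
for t in periods:
    print(t, produce[t].value(), buy[t].value(), inv[t].value())
```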
Procedia PDF Downloads 112
240 Flora of Seaweeds and the Preliminary Screening of the Fungal Endophytes
Authors: Nur Farah Ain Zainee, Ahmad Ismail, Nazlina Ibrahim, Asmida Ismail
Abstract:
Seaweeds are economically important because of their potential for utilization, the capabilities and opportunities for further expansion, and the availability of other species for future development. Hence, research on the diversity and distribution of seaweeds has to be expanded, as seaweeds are one of Malaysia's valuable marine heritages. The study on the distribution of seaweeds at Pengerang, Johor was carried out between February and November 2015 at Kampung Jawa Darat and Kampung Sungai Buntu. The study sites are located at the south-southeast of Peninsular Malaysia, where the Petronas Refinery and Petrochemicals Integrated Development (RAPID) project is in progress. In the future, the richness of seaweeds in Pengerang will vanish due to the loss of habitat caused by the RAPID project. The research was carried out to study the diversity of seaweeds and to determine the presence of fungal endophytes isolated from the seaweeds. Sampling was carried out using quadrats along a 25-meter line transect with three replications for each site. The specimens were preserved, identified, processed in the laboratory and kept as herbarium specimens in the Algae Herbarium, Universiti Kebangsaan Malaysia. Complete thallus specimens for fungal endophyte screening were chosen meticulously, transferred into sterile zip-lock plastic bags and kept in the freezer for further processing. A total of 29 species has been identified, including 12 species of Chlorophyta, 2 species of Phaeophyta and 14 species of Rhodophyta. From February to November 2015, the number of species varied highly and there was a significant change in the community structure of seaweeds. Kampung Sungai Buntu showed the highest diversity throughout the study compared to Kampung Jawa Darat. This can be related to its higher habitat variety, with rocky and sandy shores as well as a lagoon and bay, which can enhance the existence of the seaweed community. Eighteen seaweed species were selected and screened for the presence of fungal endophytes; Sargassum polycystum had the highest number of fungal endophytes compared to the other species. This evidence shows that seaweeds can accommodate many species of fungal endophytes, and these findings indicate that further research should be pursued.
Keywords: diversity, fungal endophyte, macroalgae, screening, seaweed
Procedia PDF Downloads 229
239 Comparing Two Unmanned Aerial Systems in Determining Elevation at the Field Scale
Authors: Brock Buckingham, Zhe Lin, Wenxuan Guo
Abstract:
Accurate elevation data is critical in deriving topographic attributes for the precision management of crop inputs, especially water and nutrients. Traditional ground-based elevation data acquisition is time-consuming, labor-intensive, and often inconvenient at the field scale. Various unmanned aerial systems (UAS) provide the capability of generating digital elevation data from high-resolution images. The objective of this study was to compare the performance of two UAS with different global positioning system (GPS) receivers in determining elevation at the field scale. A DJI Phantom 4 Pro and a DJI Phantom 4 RTK (real-time kinematic) were used to acquire images at three heights: 40 m, 80 m, and 120 m above ground. Forty ground control panels were placed in the field, and their geographic coordinates were determined using an RTK GPS survey unit. For each image acquisition using a UAS at a particular height, two elevation datasets were generated using the Pix4D stitching software: a calibrated dataset using the surveyed coordinates of the ground control panels and an uncalibrated dataset without using the surveyed coordinates of the ground control panels. Elevation values for each panel derived from the elevation model of each dataset were compared to the corresponding coordinates of the ground control panels. The coefficient of determination (R²) and the root mean squared error (RMSE) were used as evaluation metrics to assess the performance of each image acquisition scenario. RMSE values for the uncalibrated elevation datasets were 26.613 m, 31.141 m, and 25.135 m for images acquired at 120 m, 80 m, and 40 m, respectively, using the Phantom 4 Pro UAS. With calibration for the same UAS, the accuracies were significantly improved, with RMSE values of 0.161 m, 0.165 m, and 0.030 m, respectively. The best results showed an RMSE of 0.032 m and an R² of 0.998 for the calibrated dataset generated using the Phantom 4 RTK UAS at 40 m height. The accuracy of elevation determination decreased as the flight height increased for both UAS, with RMSE values greater than 0.160 m for the datasets acquired at 80 m and 120 m. The results of this study show that calibration with ground control panels improves the accuracy of elevation determination, especially for the UAS with a regular GPS receiver. The Phantom 4 Pro provides accurate elevation data for the 40 m dataset when substantial surveyed ground control panels are used. The Phantom 4 RTK UAS provides accurate elevation at 40 m without calibration for practical precision agriculture applications. This study provides valuable information on selecting appropriate UAS and flight heights in determining elevation for precision agriculture applications.
Keywords: unmanned aerial system, elevation, precision agriculture, real-time kinematic (RTK)
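The evaluation metrics used above are straightforward to compute once the UAS-derived elevation at each control panel is paired with its surveyed elevation. A minimal sketch, with made-up elevation values rather than the study's data, is shown below.

```python
# RMSE and R-squared between UAS-derived and surveyed control-panel elevations (placeholder data).
import numpy as np

surveyed = np.array([812.41, 812.02, 811.76, 812.88, 813.10])   # assumed RTK-surveyed elevations, m
uas = np.array([812.45, 811.98, 811.80, 812.93, 813.05])        # assumed UAS-derived elevations, m

rmse = np.sqrt(np.mean((uas - surveyed) ** 2))
ss_res = np.sum((surveyed - uas) ** 2)
ss_tot = np.sum((surveyed - surveyed.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot

print(f"RMSE = {rmse:.3f} m, R^2 = {r2:.3f}")
```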
Procedia PDF Downloads 164
238 Differences in Guilt, Shame, Self-Anger, and Suicide Cognitions Based on Recent Suicide Ideation and Lifetime Suicide Attempt History
Authors: E. H. Szeto, E. Ammendola, J. V. Tabares, A. Starkey, J. Hay, J. G. McClung, C. J. Bryan
Abstract:
Introduction: Suicide is a leading cause of death globally, accounting for more deaths annually than war, acquired immunodeficiency syndrome, homicides, and car accidents, and an estimated 140 million individuals worldwide experience significant suicide ideation (SI) each year. Typical risk factors such as hopelessness, depression, and psychiatric disorders can predict suicide ideation but cannot distinguish those who ideate from those who attempt suicide (SA). The Fluid Vulnerability Theory of suicide posits that a person's activation of the suicidal mode is predicated on one's predisposition, triggers, baseline/acute risk, and protective factors. The current study compares self-conscious cognitive-affective states (including guilt, shame, anger towards the self, and suicidal beliefs) among patients based on the endorsement of recent SI (i.e., past two weeks; acute risk) and lifetime SA (i.e., baseline risk). Method: A total of 2,722 individuals in an outpatient primary care setting were included in this cross-sectional, observational study; data for 2,584 were valid and retained for analysis. The Differential Emotions Scale measuring guilt, shame, and self-anger and the Suicide Cognitions Scale measuring suicide cognitions were administered. Results: A total of 2,222 individuals reported no recent SI or lifetime SA (Group 1), 161 reported recent SI only (Group 2), 145 reported lifetime SA only (Group 3), and 56 reported both recent SI and lifetime SA (Group 4). The Kruskal-Wallis test showed that guilt, shame, self-anger, and suicide cognitions were highest for Group 4 (both recent SI and lifetime SA), followed by Group 2 (recent SI only), then Group 3 (lifetime SA only), and lastly Group 1 (no recent SI or lifetime SA). Conclusion: The results on recent SI only versus lifetime SA only contribute to the literature on the Fluid Vulnerability Theory of suicide by capturing SI and SA in two different time periods, which signify the acute risks and chronic baseline risks of the suicidal mode, respectively. It is also shown that: (a) people with a lifetime SA reported more severe symptoms than those without, (b) people with recent SI reported more severe symptoms than those without, and (c) people with both recent SI and lifetime SA were the most severely distressed. Future studies may replicate the findings here with other pertinent risk factors such as thwarted belongingness, perceived burdensomeness, and acquired capability, the last of which is consistently linked to attempting among ideators.
Keywords: suicide, guilt, shame, self-anger, suicide cognitions, suicide ideation, suicide attempt
Procedia PDF Downloads 162
237 Monolithic Integrated GaN Resonant Tunneling Diode Pair with Picosecond Switching Time for High-speed Multiple-valued Logic System
Authors: Fang Liu, JiaJia Yao, GuanLin Wu, ZuMaoLi, XueYan Yang, HePeng Zhang, ZhiPeng Sun, JunShuai Xue
Abstract:
The explosively increasing needs of data processing and information storage strongly drive the advancement from binary logic systems to multiple-valued logic systems. The inherent negative differential resistance characteristic, ultra-high-speed switching time, and robust anti-irradiation capability make the III-nitride resonant tunneling diode one of the most promising candidates for multi-valued logic devices. Here we report the monolithic integration of GaN resonant tunneling diodes in series to realize multiple negative differential resistance regions, obtaining at least three stable operating states. A multiply-by-three circuit is achieved by this combination, increasing the frequency of the input triangular wave from f0 to 3f0. The resonant tunneling diodes are grown by plasma-assisted molecular beam epitaxy on free-standing c-plane GaN substrates, comprising double barriers and a single quantum well, both controlled at the atomic level. A device with a peak current density of 183 kA/cm² in conjunction with a peak-to-valley current ratio (PVCR) of 2.07 is observed, which is the best result reported for nitride-based resonant tunneling diodes. Microwave oscillation at room temperature was observed with a fundamental frequency of 0.31 GHz and an output power of 5.37 μW, verifying the high repeatability and robustness of our devices. The switching behavior measurement was successfully carried out, featuring rise and fall times on the order of picoseconds, which can be used in high-speed digital circuits. Limited by the measuring equipment and the layer structure, the switching time can be further improved. In general, this article presents a novel nitride device with multiple negative differential resistance regions driven by the resonant tunneling mechanism, which can be used in the high-speed multiple-valued logic field with reduced circuit complexity, demonstrating a new solution for nitride devices to break through the limitations of binary logic.
Keywords: GaN resonant tunneling diode, negative differential resistance, multiple-valued logic system, switching time, peak-to-valley current ratio
Procedia PDF Downloads 100
236 Understanding How Posting and Replying Behaviors in Social Media Differentiate the Social Capital Cultivation Capabilities of Users
Authors: Jung Lee
Abstract:
This study identifies how the cultivation capabilities of social capital influence the overall attitudes of social media users and how these influences differ across user groups. First, the cultivation capabilities of social capital are identified from three aspects, namely social capital accessibility, potentiality and sensitivity. These three types of social capital acquisition capabilities collectively represent how social media users perceive the social media environment in terms of possibilities for social capital creation. These three capabilities are hypothesized to influence social media satisfaction and continued use intention. Next, two essential activities in social media are identified, namely posting and replying, to categorise social media users based on behavioral patterns. Various social media activities consist of combinations of these two basic activities. Posting represents the broadcasting aspect of social media, whereas replying represents the communicative aspect. Using these two behaviors, we develop a usage pattern matrix and categorize users into four groups, from communicators to observers. By applying the usage pattern matrix to the capability model, we argue that posting behavior generally has a positive moderating effect on the attitudes of social media users, whereas replying behavior occasionally exhibits a negative moderating effect. These different moderating effects of posting and replying behavior are explained by the different levels of social capital sensitivity and expectation of individuals. When a person highly expects social capital from social media, he or she posts actively. However, when one is highly sensitive to social capital, he or she actively responds and replies to the postings of other people, because such acts create longer and more interactive relationships. A total of 512 social media users were invited to answer the survey. They were asked about their attitudes toward social media and how much social capital they expect from its use, and they were asked to indicate their general social media usage pattern for user categorization. The results confirmed that most of the hypotheses were supported. The three types of social capital cultivation capabilities are significant determinants of social media attitudes, and the two social media activities (i.e., posting and replying) exhibited different moderating effects on attitudes. This study offers the following discussion points. First, three types of social capital cultivation capabilities were identified. Despite the numerous concerns about social media, such as whether it is a decent and real environment that produces social capital, this study confirms that people explicitly expect and experience social capital value from social media. Second, posting and replying activities are two building blocks of social media activities. These two activities are useful in explaining the different attitudes of social media users and predicting future usage.
Keywords: social media, social capital, social media satisfaction, social media use intention
Procedia PDF Downloads 191
235 A Digital Twin Approach to Support Real-time Situational Awareness and Intelligent Cyber-physical Control in Energy Smart Buildings
Authors: Haowen Xu, Xiaobing Liu, Jin Dong, Jianming Lian
Abstract:
Emerging smart buildings often employ cyberinfrastructure, cyber-physical systems, and Internet of Things (IoT) technologies to increase the automation and responsiveness of building operations for better energy efficiency and lower carbon emissions. These operations include the control of Heating, Ventilation, and Air Conditioning (HVAC) and lighting systems, which are often considered a major source of energy consumption in both commercial and residential buildings. Developing energy-saving control models for optimizing HVAC operations usually requires the collection of high-quality instrumental data from iterations of in-situ building experiments, which can be time-consuming and labor-intensive. This abstract describes a digital twin approach to automate building energy experiments for optimizing HVAC operations through the design and development of an adaptive web-based platform. The platform is created to enable (a) automated data acquisition from a variety of IoT-connected HVAC instruments, (b) real-time situational awareness through domain-based visualizations, (c) adaptation of HVAC optimization algorithms based on experimental data, (d) sharing of experimental data and model predictive controls through web services, and (e) cyber-physical control of individual instruments in the HVAC system using outputs from different optimization algorithms. Through the digital twin approach, we aim to replicate a real-world building and its HVAC systems in an online computing environment to automate the development of building-specific model predictive controls and collaborative experiments in buildings located in different climate zones in the United States. We present two case studies to demonstrate our platform's capability for real-time situational awareness and cyber-physical control of the HVAC in the flexible research platforms on the Oak Ridge National Laboratory (ORNL) main campus. Our platform is developed using an adaptive and flexible architecture design, rendering the platform generalizable and extendable to support HVAC optimization experiments in different types of buildings across the nation.
Keywords: energy-saving buildings, digital twins, HVAC, cyber-physical system, BIM
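As a schematic of the data-acquisition and cyber-physical control loop described above, the sketch below polls a sensor endpoint, applies a trivial placeholder controller, and posts a setpoint command. The URLs, payload fields, and control rule are entirely hypothetical; the platform's actual web services and model predictive controllers are not specified here.

```python
# Schematic polling/control loop for a digital-twin HVAC platform.
# Endpoint URLs, payload fields, and the control rule are hypothetical placeholders.
import time
import requests

SENSOR_URL = "https://example-twin.local/api/zones/1/sensors"    # hypothetical endpoint
COMMAND_URL = "https://example-twin.local/api/zones/1/setpoint"  # hypothetical endpoint

def decide_setpoint(zone_temp_c: float) -> float:
    """Placeholder rule standing in for a model predictive controller."""
    return 23.0 if zone_temp_c > 24.0 else 24.5

while True:
    reading = requests.get(SENSOR_URL, timeout=5).json()
    setpoint = decide_setpoint(reading["zone_temp_c"])
    requests.post(COMMAND_URL, json={"cooling_setpoint_c": setpoint}, timeout=5)
    time.sleep(300)   # acquire data and actuate every 5 minutes
```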
Procedia PDF Downloads 110
234 Effect of Cutting Tools and Working Conditions on the Machinability of Ti-6Al-4V Using Vegetable Oil-Based Cutting Fluids
Authors: S. Gariani, I. Shyha
Abstract:
Cutting titanium alloys is usually accompanied by low productivity, poor surface quality, short tool life and high machining costs. This is due to the excessive generation of heat at the cutting zone and difficulties in heat dissipation caused by the relatively low thermal conductivity of this metal. Cooling applications in machining processes are crucial, as many operations cannot be performed efficiently without cooling. Improving machinability, increasing productivity, and enhancing surface integrity and part accuracy are the main advantages of cutting fluids. Conventional fluids such as mineral oil-based, synthetic and semi-synthetic fluids are the most common cutting fluids in the machining industry. Although these cutting fluids are beneficial to industry, they pose a great threat to human health and the ecosystem. Vegetable oils (VOs) are being investigated as a potential source of environmentally favourable lubricants, due to a combination of biodegradability, good lubricity, low toxicity, high flash points, low volatility, high viscosity indices and thermal stability. The fatty acids of vegetable oils are known to provide thick, strong, and durable lubricant films. These strong lubricating films give the vegetable oil base stock a greater capability to absorb pressure and a high load-carrying capacity. This paper details preliminary experimental results when turning Ti-6Al-4V. The impact of various VO-based cutting fluids, cutting tool materials and working conditions was investigated. A full factorial experimental design involving 24 tests was employed to evaluate the influence of process variables on average surface roughness (Ra), tool wear and chip formation. In general, Ra varied between 0.5 and 1.56 µm, and the Vasco1000 cutting fluid presented comparable performance to the other fluids in terms of surface roughness, while the uncoated coarse-grain WC carbide tool achieved lower flank wear at all cutting speeds. On the other hand, all tool tips were subjected to uniform flank wear during the whole cutting trials. Additionally, the formed chip thickness ranged between 0.1 and 0.14 mm, with a noticeable decrease in chip size when higher cutting speeds were used.
Keywords: cutting fluids, turning, Ti-6Al-4V, vegetable oils, working conditions
Procedia PDF Downloads 279
233 Optimization of Dez Dam Reservoir Operation Using Genetic Algorithm
Authors: Alireza Nikbakht Shahbazi, Emadeddin Shirali
Abstract:
Since optimization issues of water resources are complicated due to the variety of decision-making criteria and objective functions, it is sometimes impossible to resolve them through regular optimization methods, or doing so is time- and money-consuming. Therefore, the use of modern tools and methods is inevitable in resolving such problems. An accurate and essential utilization policy has to be determined in order to use natural resources such as water reservoirs optimally. Water reservoir programming studies aim to determine the final cultivated land area based on predefined agricultural models and water requirements. The dam utilization rule curve is also provided in such studies. The basic information applied in water reservoir programming studies generally includes meteorological, hydrological, agricultural and water reservoir related data, and the geometric characteristics of the reservoir. The Dez dam water resource system was simulated using this basic information in order to determine the capability of its reservoir to meet the objectives of the proposed plan. As a metaheuristic method, a genetic algorithm was applied in order to derive utilization rule curves (intersecting the reservoir volume). MATLAB software was used to solve the aforesaid model. Rule curves were first obtained through the genetic algorithm. Then the significance of using rule curves, and the reduction of decision variables in the system, was determined through system simulation and by comparing the results with the optimization results (Standard Operating Procedure). One of the most essential issues in the optimization of a complicated water resource system is the increasing number of variables; therefore, a lot of time is required to find an optimum answer, and in some cases no desirable result is obtained. In this research, intersecting the reservoir volume has been applied as a modern approach to reduce the number of variables. Water reservoir programming studies were performed based on basic information, general hypotheses and standards, applying a monthly simulation technique over a statistical period of 30 years. Results indicated that the application of rule curves prevents extreme shortages and decreases the monthly shortages.
Keywords: optimization, rule curve, genetic algorithm method, Dez dam reservoir
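The genetic-algorithm search over rule-curve parameters can be outlined as follows: a population of candidate rule curves (here, monthly release fractions) is evolved by selection, crossover, and mutation against a penalty for water shortages. The inflows, demands, storage bounds, and GA settings below are invented for illustration; the actual study couples the GA to a monthly simulation of the Dez reservoir in MATLAB.

```python
# Toy genetic algorithm over monthly release fractions (illustrative data, not the Dez dam model).
import numpy as np

rng = np.random.default_rng(0)
inflow = rng.uniform(50, 200, size=12)      # assumed monthly inflows
demand = np.full(12, 120.0)                 # assumed monthly demands

def shortage(rule):
    """Fitness: total unmet demand over one year for a candidate rule curve."""
    storage, short = 500.0, 0.0
    for m in range(12):
        release = min(storage + inflow[m], rule[m] * demand[m])
        short += max(demand[m] - release, 0.0)
        storage = min(storage + inflow[m] - release, 1000.0)   # assumed storage cap
    return short

pop = rng.uniform(0.5, 1.2, size=(40, 12))  # initial population of rule curves
for gen in range(200):
    fitness = np.array([shortage(ind) for ind in pop])
    parents = pop[np.argsort(fitness)[:20]]                    # selection: keep the best half
    cut = rng.integers(1, 12, size=20)
    children = np.array([np.r_[parents[i][:c], parents[(i + 1) % 20][c:]]
                         for i, c in enumerate(cut)])           # one-point crossover
    children += rng.normal(0, 0.02, children.shape)             # mutation
    pop = np.vstack([parents, children])

best = pop[np.argmin([shortage(ind) for ind in pop])]
print("Best rule curve (release fraction of demand per month):", np.round(best, 2))
```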
Procedia PDF Downloads 265
232 Artificial Membrane Comparison for Skin Permeation in Skin PAMPA
Authors: Aurea C. L. Lacerda, Paulo R. H. Moreno, Bruna M. P. Vianna, Cristina H. R. Serra, Airton Martin, André R. Baby, Vladi O. Consiglieri, Telma M. Kaneko
Abstract:
The modified Franz cell is the most widely used model for in vitro permeation studies; however, it still presents some disadvantages. Thus, some alternative methods have been developed, such as Skin PAMPA, a bio-artificial membrane assay that has been applied to estimate the skin penetration of xenobiotics based on a high-throughput (HT) permeability model. Skin PAMPA's greatest advantage is that it allows more tests to be carried out in a fast and inexpensive way. The membrane system mimics the characteristics of the stratum corneum, which is the primary skin barrier. The barrier properties are given by corneocytes embedded in a multilamellar lipid matrix. This layer is the main penetration route through the paracellular permeation pathway, and it consists of a mixture of cholesterol, ceramides, and fatty acids as the dominant components. However, there is no consensus on the membrane composition. The objective of this work was to compare the performance of different bio-artificial membranes for studying permeation in the Skin PAMPA system. Material and methods: In order to mimic the lipid composition present in the human stratum corneum, six membranes were developed. The membrane composition was an equimolar mixture of cholesterol, ceramides 1-O-C18:1, C22, and C20, plus fatty acids C20 and C24. The membrane integrity assay was based on the transport of Brilliant Cresyl Blue, which has low permeability, and Lucifer Yellow, which has very poor permeability and should effectively be completely rejected. The membrane characterization was performed using confocal laser Raman spectroscopy, with a stabilized laser at 785 nm, a 10-second integration time and 2 accumulations. The membrane behaviour in the PAMPA system was statistically evaluated, and all of the compositions showed integrity and permeability. The confocal Raman spectra, obtained in the 800-1200 cm-1 region associated with the C-C stretches of the carbon scaffold of the stratum corneum lipids, showed a similar pattern for all the membranes. The ceramides, long-chain fatty acids and cholesterol in equimolar ratio permitted lipid mixtures with self-organization capability, similar to that occurring in the stratum corneum, to be obtained. Conclusion: The artificial biological membranes studied for Skin PAMPA were similar to one another and showed properties comparable to the stratum corneum.
Keywords: bio-artificial membranes, comparison, confocal Raman, skin PAMPA
Procedia PDF Downloads 509
231 Effect of Locally Injected Mesenchymal Stem Cells on Bone Regeneration of Rat Calvaria Defects
Authors: Gileade P. Freitas, Helena B. Lopes, Alann T. P. Souza, Paula G. F. P. Oliveira, Adriana L. G. Almeida, Paulo G. Coelho, Marcio M. Beloti, Adalberto L. Rosa
Abstract:
Bone tissue presents a great capacity to regenerate when injured by trauma, infectious processes, or neoplasia. However, the extent of the injury may exceed the inherent tissue regeneration capability, demanding some kind of additional intervention. In this scenario, cell therapy has emerged as a promising alternative to treat challenging bone defects. This study aimed at evaluating the effect of local injection of bone marrow-derived mesenchymal stem cells (BM-MSCs) and adipose tissue-derived mesenchymal stem cells (AT-MSCs) on bone regeneration of rat calvaria defects. BM-MSCs and AT-MSCs were isolated and characterized by the expression of surface markers; cell viability was evaluated after injection through a 21G needle. Defects of 5 mm in diameter were created in the calvaria, and after two weeks a single injection of BM-MSCs, AT-MSCs or vehicle (PBS without cells; Control) was carried out. Cells were tracked by bioluminescence, and at 4 weeks post-injection bone formation was evaluated by micro-computed tomography (μCT) and histology, by nanoindentation, and through the gene expression of bone remodeling markers. The data were evaluated by one-way analysis of variance (p≤0.05). BM-MSCs and AT-MSCs presented characteristics of mesenchymal stem cells, maintained viability after passing through a 21G needle and remained in the defects until day 14. In general, injection of both BM-MSCs and AT-MSCs resulted in higher bone formation compared to the Control. Additionally, this bone tissue displayed an elastic modulus and hardness similar to those of pristine calvarial bone. The expression of all evaluated genes involved in bone formation was upregulated in bone tissue formed by BM-MSCs compared to AT-MSCs, while genes involved in bone resorption were upregulated in AT-MSC-formed bone. We show that cell therapy based on the local injection of BM-MSCs or AT-MSCs is effective in delivering viable cells that displayed local engraftment and induced a significant improvement in bone healing. Despite differences in the molecular cues observed between BM-MSCs and AT-MSCs, both cells were capable of forming bone tissue in comparable amounts and with comparable properties. These findings may drive cell therapy approaches toward the complete bone regeneration of challenging sites.
Keywords: cell therapy, mesenchymal stem cells, bone repair, cell culture
Procedia PDF Downloads 184
230 Sensitivity Analysis of the Heat Exchanger Design in Net Power Oxy-Combustion Cycle for Carbon Capture
Authors: Hirbod Varasteh, Hamidreza Gohari Darabkhani
Abstract:
Global warming and its impact on climate change is one of the main challenges of the current century. Global warming is mainly due to the emission of greenhouse gases (GHG), and carbon dioxide (CO2) is known to be the major contributor to the GHG emission profile. Whilst the energy sector is the primary source of CO2 emissions, Carbon Capture and Storage (CCS) is believed to be the solution for controlling these emissions. Oxyfuel combustion (oxy-combustion) is one of the major technologies for capturing CO2 from power plants. For gas turbines, several oxy-combustion power cycles (oxyturbine cycles) have been investigated by means of thermodynamic analysis. The NetPower cycle is one of the leading oxyturbine power cycles, with almost full carbon capture capability from a natural gas fired power plant. In this manuscript, a sensitivity analysis of the heat exchanger design in the NetPower cycle is carried out by means of process modelling. The heat capacity variation and supercritical CO2 with gaseous admixtures are considered in a multi-zone analysis with Aspen Plus software. It is found that the heat exchanger design plays a major role in increasing the efficiency of the NetPower cycle. A pinch-point analysis is performed to extract the composite and grand composite curves for the heat exchanger. In this paper, the relationship between the cycle efficiency and the minimum approach temperature (∆Tmin) of the heat exchanger has also been evaluated. An increase in ∆Tmin causes a decrease in the temperature of the recycled flue gases (RFG) and an overall decrease in the required power for the recycled gas compressor. The main challenge in the design of heat exchangers in power plants is the trade-off between capital and operational costs. To achieve a lower ∆Tmin, a larger heat exchanger is required; this means a higher capital cost but better heat recovery and a lower operational cost. To balance the two, ∆Tmin is selected at the minimum point of the combined capital and operational cost curves. This study provides an insight into the performance analysis and operational conditions of the NetPower oxy-combustion cycle based on its heat exchanger design.
Keywords: carbon capture and storage, oxy-combustion, netpower cycle, oxy turbine cycles, zero emission, heat exchanger design, supercritical carbon dioxide, oxy-fuel power plant, pinch point analysis
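The capital-versus-operational tradeoff behind the choice of ∆Tmin can be illustrated with the basic sizing relation Q = U·A·LMTD: a smaller approach temperature shrinks the log-mean temperature difference and inflates the required exchanger area, while recovering more heat. The sketch below is a deliberately crude single-exchanger illustration; the duty, U-value, stream temperatures, and the fixed-duty assumption are placeholders and not results from the Aspen Plus model.

```python
# Illustration of the dTmin trade-off via Q = U*A*LMTD (placeholder numbers, duty held fixed).
import numpy as np

Q = 50e6                     # exchanger duty, W (assumed)
U = 250.0                    # overall heat-transfer coefficient, W/(m^2*K) (assumed)
T_hot_in, T_hot_out, T_cold_in = 700.0, 120.0, 60.0   # assumed stream temperatures, degC

for dTmin in (5.0, 10.0, 20.0, 40.0):
    T_cold_out = T_hot_in - dTmin          # cold outlet pinched against the hot inlet
    dT1, dT2 = T_hot_in - T_cold_out, T_hot_out - T_cold_in
    lmtd = (dT1 - dT2) / np.log(dT1 / dT2) if dT1 != dT2 else dT1
    area = Q / (U * lmtd)
    print(f"dTmin = {dTmin:4.1f} K  ->  LMTD = {lmtd:5.1f} K, required area = {area:7.0f} m^2")
```

Smaller ∆Tmin values return larger required areas (higher capital cost) but a hotter recycled stream, which is the heat-recovery benefit weighed against it in the study.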
Procedia PDF Downloads 204
229 Bioinformatics High Performance Computation and Big Data
Authors: Javed Mohammed
Abstract:
Right now, biomedical infrastructure lags well behind the curve. Our healthcare system is dispersed and disjointed, medical records are a bit of a mess, and we do not yet have the capacity to store and process the enormous amounts of data coming our way from widespread whole-genome sequencing. And then there are privacy issues. Despite these infrastructure challenges, some researchers are plunging into biomedical Big Data now, in hopes of extracting new and actionable knowledge. They are delving into molecular-level data to discover biomarkers that help classify patients based on their response to existing treatments, and pushing their results out to physicians in novel and creative ways. Computer scientists and biomedical researchers are able to transform data into models and simulations that will enable scientists for the first time to gain a profound understanding of the deepest biological functions. Solving biological problems may require high-performance computing (HPC), due either to the massive parallel computation required to solve a particular problem or to algorithmic complexity that may range from difficult to intractable. Many problems involve seemingly well-behaved polynomial-time algorithms (such as all-to-all comparisons) but have massive computational requirements due to the large data sets that must be analyzed. High-throughput techniques for DNA sequencing and analysis of gene expression have led to exponential growth in the amount of publicly available genomic data. With the increased availability of genomic data, traditional database approaches are no longer sufficient for rapidly performing life science queries involving the fusion of data types. Computing systems are now so powerful that it is possible for researchers to consider modeling the folding of a protein or even the simulation of an entire human body. This paper emphasizes computational biology's growing need for high-performance computing and Big Data. It illustrates how HPC is indispensable in meeting the scientific and engineering challenges of the twenty-first century, and how protein folding (the structure and function of proteins) and phylogeny reconstruction (the evolutionary history of a group of genes) can use HPC that provides sufficient capability for evaluating or solving more limited but meaningful instances. The article also points to solutions for optimization problems and to the benefits of Big Data for computational biology, and it illustrates the current state of the art and the future generation of HPC computing with Big Data in biology.
Keywords: high performance, big data, parallel computation, molecular data, computational biology
Procedia PDF Downloads 364
228 Test Method Development for Evaluation of Process and Design Effect on Reinforced Tube
Authors: Cathal Merz, Gareth O’Donnell
Abstract:
Coil reinforced thin-walled (CRTW) tubes are used in medicine to treat problems affecting blood vessels within the body through minimally invasive procedures. The CRTW tube considered in this research makes up part of such a device; it is inserted into the patient via the femoral or brachial arteries and manually navigated to the site in need of treatment. This procedure removes the need for open surgery but is limited by the reduction in blood vessel lumen diameter and the increase in tortuosity of blood vessels deep in the brain. In order to maximize the capability of these procedures, CRTW tube devices are being manufactured with decreasing wall thicknesses in order to deliver treatment deeper into the body and to allow passage of other devices through the inner diameter. This introduces significant stresses into the device materials, which has resulted in an observed increase in the proximal segment of the device breaking into two separate pieces after it has failed by buckling. As there is currently no international standard for measuring the mechanical properties of these CRTW tube devices, it is difficult to analyze this problem accurately. The aim of the current work is to address this discrepancy in the biomedical device industry by developing a measurement system that can be used to quantify the effect of process and design changes on CRTW tube performance, aiding the development of better performing, next-generation devices. Using materials testing frames, micro-computed tomography (micro-CT) imaging, experiment planning, analysis of variance (ANOVA), t-tests and regression analysis, test methods have been developed for assessing the impact of process and design changes on the device. The major findings of this study are an insight into the suitability of buckle and three-point bend tests for measuring the effect of varying processing factors on the device's performance, and guidelines for interpreting the output data from the test methods. The findings of this study are of significant interest with respect to verifying and validating key process and design changes associated with the device structure and material condition. Test method integrity evaluation is explored throughout. Keywords: neurovascular catheter, coil reinforced tube, buckling, three-point bend, tensile
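For readers unfamiliar with the statistical workflow named above, the sketch below shows the kind of t-test and one-way ANOVA comparison that could be run on bend-test output. The peak-load values and process settings are fabricated placeholders, not data from this study.

```python
# Sketch of comparing three-point bend results across process settings.
# The load values below are fabricated placeholders for illustration.
import numpy as np
from scipy import stats

process_a = np.array([4.1, 4.3, 3.9, 4.2, 4.0, 4.4])   # peak load [N], setting A
process_b = np.array([3.6, 3.8, 3.7, 3.5, 3.9, 3.6])   # peak load [N], setting B

t_stat, p_val = stats.ttest_ind(process_a, process_b, equal_var=False)
print(f"Welch t = {t_stat:.2f}, p = {p_val:.4f}")

# One-way ANOVA generalizes the comparison to more than two factor levels.
process_c = np.array([4.0, 4.1, 3.9, 4.2, 4.1, 4.0])
f_stat, p_anova = stats.f_oneway(process_a, process_b, process_c)
print(f"ANOVA F = {f_stat:.2f}, p = {p_anova:.4f}")
```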
Procedia PDF Downloads 117
227 Analysis of Wheel Lock up Effects on Skidding Distance for Heavy Vehicles
Authors: Mahdieh Zamzamzadeh, Ahmad Abdullah Saifizul, Rahizar Ramli
Abstract:
Road accidents involving heavy vehicles have shown worrying trends and, year after year, have raised concern and awareness about road and transport safety, especially in developing countries like Malaysia. Road crash statistics continue to show that many factors affect the capability of a heavy vehicle to stop within a safe distance and ultimately prevent traffic crashes. Changes in road condition due to weather variations, together with vehicle dynamic specifications such as loading condition and speed, are the main risk factors because they affect a heavy vehicle's braking performance, leading to loss of control and an inability to stop the vehicle, and in many cases to wheel lock-up and consequently skidding. Predicting heavy vehicle skidding distance is crucial for accident reconstruction and roadside safety engineers. Despite this, formal tools for studying heavy vehicle skidding distance before the vehicle stops completely are very limited, and most researchers have only considered braking distance in their studies. As a possible new tool, this work presents the iterative use of vehicle dynamics simulations to study heavy vehicle-roadway interaction in order to predict the effects of wheel lock-up on skidding distance and safety. This research addresses the influence of vehicle and road conditions on skidding distance after wheel lock-up and presents a precise analysis of the skidding phenomenon. The vehicle speed, vehicle loading condition and road friction parameters were all varied in a simulation-based analysis. In order to simulate the wheel lock-up situation, a heavy vehicle model was constructed and simulated using multibody vehicle dynamics simulation software, and careful analysis was made of the conditions which caused the skidding distance to increase or decrease, using a method that predicts skidding distance as part of braking distance. Across many simulations, the results revealed the relationship between the heavy vehicle loading condition, various speeds and road coefficients of friction, and their interaction effect on the skidding distance. A number of results are presented which illustrate how overloading of a heavy vehicle can seriously affect the skidding distance. Moreover, the simulation results give the skid mark length, which is a necessary input during accident reconstruction involving emergency braking. Keywords: accident reconstruction, braking, heavy vehicle, skidding distance, skid mark, wheel lock up
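As a point of reference for the full multibody simulations described above, the sketch below uses the elementary locked-wheel friction estimate d = v^2 / (2*mu*g). It ignores load transfer and brake dynamics, and the friction coefficients are assumed illustrative values, not those from the study.

```python
# Back-of-envelope skidding distance for a locked-wheel slide: d = v^2 / (2*mu*g).
# This simple friction model is only an illustration; the study itself uses a
# multibody vehicle dynamics simulation rather than this formula.
G = 9.81  # gravitational acceleration [m/s^2]

def skid_distance(speed_kmh: float, mu: float) -> float:
    v = speed_kmh / 3.6                # convert km/h to m/s
    return v * v / (2.0 * mu * G)      # metres

for speed in (60, 80, 100):            # km/h
    for mu in (0.8, 0.5, 0.3):         # dry, wet, poor surface (assumed values)
        print(f"{speed:>3} km/h, mu={mu}: {skid_distance(speed, mu):6.1f} m")
```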
Procedia PDF Downloads 499
226 On the Optimality Assessment of Nano-Particle Size Spectrometry and Its Association to the Entropy Concept
Authors: A. Shaygani, R. Saifi, M. S. Saidi, M. Sani
Abstract:
Particle size distribution, the most important characteristic of aerosols, is obtained through electrical characterization techniques. The dynamics of charged nano-particles under the influence of the electric field in an electrical mobility spectrometer (EMS) reveals the size distribution of these particles. The accuracy of this measurement is influenced by the flow conditions, geometry, electric field and particle charging process, and therefore by the transfer function (transfer matrix) of the instrument. In this work, a wire-cylinder corona charger was designed, and the combined field-diffusion charging process of injected poly-disperse aerosol particles was numerically simulated as a prerequisite for the study of a multi-channel EMS. The result, a cloud of particles with non-uniform charge distribution, was introduced to the EMS. The flow pattern and electric field in the EMS were simulated using computational fluid dynamics (CFD) to obtain particle trajectories in the device and thereby calculate the signal reported by each electrometer. Based on the output signals (which result from the bombardment of particles and the transfer of their charges as currents), we proposed a modification to the size of the detecting rings (which are connected to the electrometers) in order to evaluate particle size distributions more accurately. Based on the capability of the system to transfer information about the size distribution of the injected particles, we proposed a benchmark for assessing the optimality of the design. This method applies the concept of von Neumann entropy and borrows the definition of entropy from information theory (Shannon entropy) to measure optimality. Entropy, in the Shannon sense, is the 'average amount of information contained in an event, sample or character extracted from a data stream'. Evaluating the responses (signals) obtained from various configurations of detecting rings, the configuration which gave the best predictions of the size distributions of the injected particles was the modified configuration; it was also the one with the maximum amount of entropy. A reasonable consistency was also observed between the accuracy of the predictions and the entropy content of each configuration. In this method, entropy is extracted from the transfer matrix of the instrument for each configuration. Ultimately, various clouds of particles were introduced into the simulations and the predicted size distributions were compared to the exact size distributions. Keywords: aerosol nano-particle, CFD, electrical mobility spectrometer, von Neumann entropy
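The sketch below shows one plausible way such an entropy benchmark could be computed from a transfer matrix, pairing a Shannon entropy of the normalized channel responses with a von Neumann entropy of a density-matrix-like normalization. The matrix is random placeholder data, and the exact construction used by the authors is not specified in the abstract, so this is purely an illustrative reading.

```python
# Sketch: entropy of a transfer matrix as a design-optimality metric.
# The matrix below is a random placeholder, not the simulated EMS transfer matrix.
import numpy as np

rng = np.random.default_rng(1)
T = rng.random((8, 16))            # channels x particle-size bins (assumed shape)

# Shannon entropy of each channel's normalized response, averaged over channels.
P = T / T.sum(axis=1, keepdims=True)
shannon = -(P * np.log2(P)).sum(axis=1).mean()

# Von Neumann entropy of a density-matrix-like normalization of T T^T.
rho = T @ T.T
rho /= np.trace(rho)
eig = np.linalg.eigvalsh(rho)
eig = eig[eig > 1e-12]
von_neumann = -(eig * np.log2(eig)).sum()

print(f"mean Shannon entropy: {shannon:.3f} bits; von Neumann entropy: {von_neumann:.3f} bits")
```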
Procedia PDF Downloads 343
225 Estimates of Freshwater Content from ICESat-2 Derived Dynamic Ocean Topography
Authors: Adan Valdez, Shawn Gallaher, James Morison, Jordan Aragon
Abstract:
Global climate change has impacted atmospheric temperatures, contributing to rising sea levels, decreasing sea ice, and increased freshening of high-latitude oceans. This freshening has contributed to increased stratification, inhibiting local mixing and nutrient transport and modifying regional circulations in polar oceans. In recent years, the Western Arctic has seen an increase in freshwater volume at an average rate of 397 ± 116 km3/year. The majority of the freshwater volume resides in the Beaufort Gyre surface lens, driven by anticyclonic wind forcing, sea ice melt, and Arctic river runoff. The total climatological freshwater content is typically defined as water fresher than a salinity of 34.8. The near-isothermal nature of Arctic seawater and non-linearities in the equation of state for near-freezing waters result in a salinity-driven pycnocline, as opposed to the temperature-driven density structure seen at lower latitudes. In this study, we investigate the relationship between freshwater content and remotely sensed dynamic ocean topography (DOT). In-situ measurements of freshwater content are useful in providing information on the freshening rate of the Beaufort Gyre; however, their collection is costly and time consuming. DOT derived from NASA's Advanced Topographic Laser Altimeter System (ATLAS) and freshwater content derived from air-expendable CTDs (AXCTDs) are used to develop a linear regression model. In-situ data for the regression model are collected across the 150° West meridian, which typically defines the centerline of the Beaufort Gyre. Two freshwater content models are determined by integrating the freshwater volume between the surface and an isopycnal corresponding to reference salinities of 28.7 and 34.8. These salinities correspond to those of the winter pycnocline and the total climatological freshwater content, respectively. Using each model, we determine the strength of the linear relationship between freshwater content and satellite-derived DOT. The results of this modeling study could provide a future capability to predict freshwater volume changes in the Beaufort-Chukchi Sea without in-situ methods. Successful employment of ICESat-2's DOT-based approximation of freshwater content could potentially reduce reliance on field deployment platforms to characterize physical ocean properties. Keywords: ICESat-2, dynamic ocean topography, freshwater content, Beaufort Gyre
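The sketch below illustrates the two computational steps implied above: integrating freshwater content relative to a reference salinity over a profile, and regressing in-situ freshwater content against satellite DOT. The salinity profile and the DOT/FWC pairs are synthetic examples, not the AXCTD or ICESat-2 data used in the study.

```python
# Sketch of (1) freshwater content as a salinity integral and (2) a DOT regression.
# Profile shape, reference handling and DOT values are synthetic examples.
import numpy as np

S_REF = 34.8                                   # reference salinity (from the abstract)

def freshwater_content(depth_m, salinity):
    """Trapezoidal integral of (S_ref - S)/S_ref from the surface downward."""
    frac = np.clip((S_REF - salinity) / S_REF, 0.0, None)
    return np.sum(0.5 * (frac[1:] + frac[:-1]) * np.diff(depth_m))  # metres

depth = np.linspace(0.0, 400.0, 200)
salinity = 28.0 + 7.0 * (depth / 400.0)        # made-up profile, fresher at surface
fwc = freshwater_content(depth, salinity)

# Linear regression of in-situ FWC against satellite DOT (synthetic pairs).
dot = np.array([0.10, 0.18, 0.25, 0.33, 0.41])         # m
fwc_obs = np.array([8.0, 11.5, 14.9, 18.2, 22.0])      # m
slope, intercept = np.polyfit(dot, fwc_obs, 1)
print(f"example profile FWC = {fwc:.1f} m; FWC ~ {slope:.1f}*DOT + {intercept:.1f}")
```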
Procedia PDF Downloads 87
224 Integrating Virtual Reality and Building Information Model-Based Quantity Takeoffs for Supporting Construction Management
Authors: Chin-Yu Lin, Kun-Chi Wang, Shih-Hsu Wang, Wei-Chih Wang
Abstract:
A construction superintendent needs to know not only the quantities of cost items or materials completed each day, in order to develop a daily report or calculate the daily progress (earned value), but also the quantities of materials (e.g., reinforcing steel and concrete) to be ordered (or moved onto the jobsite) for performing the in-progress or ready-to-start construction activities (e.g., erection of reinforcing steel and concrete pouring). These daily construction management tasks require great effort to extract accurate quantities in a short time (they usually must be completed right before the end of each workday). As a result, most superintendents can only provide these quantity data based either on what they see on the site (high inaccuracy) or on the extraction of quantities from two-dimensional (2D) construction drawings (high time consumption). Hence, the current practice of providing the quantities completed each day needs improvement in terms of both accuracy and efficiency. Recently, three-dimensional (3D) building information model (BIM) techniques have been widely applied to support the construction quantity takeoff (QTO) process. The capability of virtual reality (VR) allows a building to be viewed from a first-person viewpoint. Thus, this study proposes an innovative system that integrates VR (using 'Unity') and BIM (using 'Revit') to extract quantities in support of the daily construction management tasks described above. The use of VR allows a system user to be present in a virtual building and thus to assess the construction progress more objectively from the office. This VR- and BIM-based system is also facilitated by an integrated database (consisting of the information and data associated with the BIM model, QTO, and costs). Each day, a superintendent can work through a BIM-based virtual building to quickly identify (via a developed VR shooting function) the building components (or objects) that are in progress or finished on the jobsite. He then specifies a percentage of completion (e.g., 20%, 50% or 100%) for each identified building object based on his observation of the jobsite. Next, the system generates the quantities completed that day by multiplying the specified percentage by the full quantities of the cost items (or materials) associated with the identified object. A building construction project located in northern Taiwan is used as a case study to test the benefits (i.e., accuracy and efficiency) of the proposed system in quantity extraction for supporting the development of daily reports and the ordering of construction materials. Keywords: building information model, construction management, quantity takeoffs, virtual reality
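The daily roll-up step described above (completed quantity = completion percentage x full BIM quantity) is simple enough to sketch. The object names, quantities and percentages below are invented; the real system would read them from the integrated Revit/Unity database rather than from hard-coded dictionaries.

```python
# Sketch of the daily quantity roll-up: completed = completion% x full BIM quantity.
# Object names, quantities and percentages are invented placeholders.
full_quantities = {                     # full QTO per BIM object (assumed units)
    "column_C12": {"rebar_kg": 850.0, "concrete_m3": 3.2},
    "slab_S03":   {"rebar_kg": 2400.0, "concrete_m3": 18.5},
}
observed_completion = {"column_C12": 1.00, "slab_S03": 0.50}   # from the VR walkthrough

daily_totals = {}
for obj, pct in observed_completion.items():
    for item, qty in full_quantities[obj].items():
        daily_totals[item] = daily_totals.get(item, 0.0) + pct * qty

for item, qty in daily_totals.items():
    print(f"{item}: {qty:.1f} completed today")
```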
Procedia PDF Downloads 132
223 Detection of Aflatoxin B1 Producing Aspergillus flavus Genes from Maize Feed Using Loop-Mediated Isothermal Amplification (LAMP) Technique
Authors: Sontana Mimapan, Phattarawadee Wattanasuntorn, Phanom Saijit
Abstract:
Aflatoxin contamination in maize, one of several agricultural crops grown for livestock feed, is still a problem throughout the world, mainly under hot and humid weather conditions such as those in Thailand. In this study, Aspergillus flavus (A. flavus), the key fungus for aflatoxin production, especially aflatoxin B1 (AFB1), isolated from naturally infected maize was identified and characterized according to colony morphology and PCR using the ITS, beta-tubulin and calmodulin genes. The strains were analysed for the presence of four aflatoxin biosynthesis genes (Ver1, Omt1, Nor1, and aflR) in relation to their capability to produce AFB1. Aflatoxin production was then confirmed using an immunoaffinity column technique. A loop-mediated isothermal amplification (LAMP) assay was applied as an innovative technique for rapid detection of the target nucleic acid. The reaction condition was optimized at 65°C for 60 min, and calcein fluorescent reagent was added before amplification. The LAMP results showed clear differences between positive and negative reactions in end-point analysis, visible to the naked eye under daylight and UV light. In daylight, the samples with AFB1-producing A. flavus genes developed a yellow to green color, while those without the genes retained the orange color. When excited with UV light, the positive samples became visible by bright green fluorescence. LAMP reactions remained positive with purified target DNA down to dilutions of 10⁻⁶. The reaction products were then confirmed and visualized with 1% agarose gel electrophoresis. In this regard, 50 maize samples were collected from dairy farms and tested for the presence of the four aflatoxigenic biosynthesis genes using the LAMP technique. The results were positive in 18 samples (36%) and negative in 32 samples (64%). All of the samples were rechecked by PCR, and the results were the same as for LAMP, indicating 100% specificity. Additionally, when compared with the immunoaffinity column-based aflatoxin analysis, there was a significant correlation between the LAMP results and the aflatoxin analysis (r = 0.83, P < 0.05), which suggested that positive maize samples were likely to be high-risk feed. In conclusion, the LAMP assay developed in this study provides a simple and rapid approach for detecting AFB1-producing A. flavus genes in maize and appears to be a promising tool for predicting potential aflatoxigenic risk in livestock feed. Keywords: aflatoxin B1, Aspergillus flavus genes, maize, loop-mediated isothermal amplification
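The correlation check reported above (r = 0.83 between LAMP calls and aflatoxin levels) can be reproduced in form with a few lines of code. The 20 sample values below are fabricated for illustration and are not the study's data.

```python
# Sketch of correlating binary LAMP calls with measured aflatoxin levels,
# plus a simple agreement check against a confirmatory PCR result.
# All values are fabricated placeholders.
import numpy as np
from scipy import stats

lamp_positive = np.array([1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1])
afb1_ppb      = np.array([32, 41, 2, 28, 0, 3, 55, 37, 1, 4, 25, 2, 60, 5, 1, 30, 44, 3, 2, 48])

r, p = stats.pearsonr(lamp_positive, afb1_ppb)   # point-biserial correlation
print(f"r = {r:.2f}, p = {p:.4f}")

pcr_positive = lamp_positive.copy()              # abstract reports full concordance
agreement = np.mean(lamp_positive == pcr_positive)
print(f"LAMP/PCR agreement: {agreement:.0%}")
```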
Procedia PDF Downloads 240
222 Antioxidant, Hypoglycemic and Hypotensive Effects Affected by Various Molecular Weights of Cold Water Extract from Pleurotus Citrinopileatus
Authors: Pao-Huei Chen, Shu-Mei Lin, Yih-Ming Weng, Zer-Ran Yu, Be-Jen Wang
Abstract:
Pancreatic α-amylase and intestinal α-glucosidase are the critical enzymes for the breakdown of complex carbohydrates into di- or mono-saccharides, and they play an important role in modulating postprandial blood sugar. Angiotensin-converting enzyme (ACE) converts inactive angiotensin I into active angiotensin II, which subsequently increases blood pressure by triggering vasoconstriction and aldosterone secretion. Thus, inhibition of carbohydrate-digesting enzymes and ACE helps the management of blood glucose and blood pressure, respectively. Studies have shown that Pleurotus citrinopileatus (PC), an edible mushroom commonly cultivated in Asian countries, exerts anticancer, immune-improving, antioxidative, hypoglycemic and hypolipidemic effects. Previous studies have also shown that fractions of various molecular weights (MW) from extracts may differ in biological activity owing to varying contents of bioactive components. Thus, the objective of this study is to investigate the in vitro antioxidant, hypoglycemic and hypotensive effects, and the distribution of active compounds, of various MW fractions of the cold water extract from P. citrinopileatus (CWEPC). CWEPC was fractionated into four MW fractions, PC-I (<1 kDa), PC-II (1-3.5 kDa), PC-III (3.5-10 kDa), and PC-IV (>10 kDa), using an ultrafiltration system. The physiological activities, including antioxidant activities and the inhibition capabilities against pancreatic α-amylase, intestinal α-glucosidase, and hypertension-linked ACE, and the active components, including polysaccharide, protein, and phenolic contents, of CWEPC and the four fractions were determined. The results showed that fractions with lower MW exerted higher antioxidant activity (p<0.05), which was positively correlated with the levels of total phenols. In contrast, the inhibitory effects of the PC-IV fraction on the activities of α-amylase, α-glucosidase, and ACE were significantly higher than those of CWEPC and the other three low-MW fractions (<10 kDa), which was more related to protein content. The inhibition capabilities of CWEPC and PC-IV against α-amylase activity were 1/13.4 and 1/2.7, respectively, relative to that of acarbose (positive control). However, the inhibitory ability of PC-IV against α-glucosidase (IC50 = 0.5 mg/mL) was significantly higher than that of acarbose (IC50 = 1.7 mg/mL). Kinetic data revealed that the PC-IV fraction followed non-competitive inhibition of α-glucosidase activity. In conclusion, the distribution of various bioactive components contributes to the functions of the different MW fractions in preventing oxidative stress and modulating blood pressure and glucose. Keywords: α-amylase, angiotensin converting enzyme, α-glucosidase, Pleurotus citrinopileatus
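IC50 values like those quoted above are typically read off a fitted dose-response curve. The sketch below fits a simple Hill-type model to fabricated concentration/inhibition pairs purely to show the procedure, not to reproduce the reported 0.5 mg/mL figure.

```python
# Sketch: estimating an IC50 by fitting a Hill-type dose-response model.
# Concentrations and inhibition percentages are illustrative, not measured values.
import numpy as np
from scipy.optimize import curve_fit

conc = np.array([0.05, 0.1, 0.25, 0.5, 1.0, 2.0, 5.0])       # mg/mL (assumed)
inhib = np.array([8.0, 15.0, 30.0, 50.0, 68.0, 82.0, 93.0])  # % inhibition (assumed)

def hill(c, ic50, n):
    # % inhibition rising from 0 to 100 with concentration c
    return 100.0 * c**n / (ic50**n + c**n)

(ic50, n), _ = curve_fit(hill, conc, inhib, p0=[0.5, 1.0])
print(f"fitted IC50 ~ {ic50:.2f} mg/mL (Hill slope {n:.2f})")
```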
Procedia PDF Downloads 460
221 Component Test of Martensitic/Ferritic Steels and Nickel-Based Alloys and Their Welded Joints under Creep and Thermo-Mechanical Fatigue Loading
Authors: Daniel Osorio, Andreas Klenk, Stefan Weihe, Andreas Kopp, Frank Rödiger
Abstract:
Future power plants currently face high design requirements due to worsening climate change and environmental restrictions, which demand high operational flexibility, superior thermal performance, minimal emissions, and higher cyclic capability. The aim of this paper is, therefore, to experimentally investigate the creep and thermo-mechanical material behavior of improved materials and their welded joints at component scale under near-to-service operating conditions; these materials are promising candidates for application in highly efficient and flexible future power plants. They promise an increase in flexibility and a reduction in manufacturing costs by providing enhanced creep strength and, therefore, the possibility of wall thickness reduction. In the temperature range between 550°C and 625°C, the investigation focuses on the in-phase thermo-mechanical fatigue behavior of dissimilar welded joints of conventional materials (the ferritic and martensitic steels T24 and T92) to nickel-based alloys (A617B and HR6W) by means of membrane test panels. The temperature and external load are varied in phase during the test, while the internal pressure remains constant. In the temperature range between 650°C and 750°C, the investigation focuses on the creep behavior under multiaxial stress loading of similar and dissimilar welded joints of high-temperature-resistant nickel-based alloys (A740H, A617B, and HR6W) by means of a thick-walled component test. In this case, the temperature, the external axial load, and the internal pressure remain constant during testing. Numerical simulations are used to estimate the axial component load needed to induce meaningful damage evolution without causing total component failure. Metallographic investigations after testing will support understanding of the damage mechanism and of the influence of the thermo-mechanical load and multiaxiality on the microstructural changes and on the creep and TMF strength. Keywords: creep, creep-fatigue, component behaviour, weld joints, high temperature material behaviour, nickel-alloys, high temperature resistant steels
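To indicate the kind of estimate behind choosing the external axial load for a thick-walled component test, the sketch below combines the closed-form Lamé stresses for an internally pressurized thick-walled cylinder with a superimposed axial force. The geometry, pressure, and target axial stress are assumed numbers, not the actual test-rig parameters, and the study itself relies on numerical simulation rather than this closed-form estimate.

```python
# Sketch: Lame thick-walled cylinder stresses plus a superimposed axial load,
# the kind of estimate behind sizing a component test load. All numbers assumed.
import math

r_i, r_o = 0.050, 0.070        # inner/outer radius [m] (assumed)
p = 25e6                       # internal pressure [Pa] (assumed)

area = math.pi * (r_o**2 - r_i**2)                        # load-bearing cross-section
sigma_axial_pressure = p * r_i**2 / (r_o**2 - r_i**2)     # closed-end axial stress
sigma_hoop_inner = p * (r_o**2 + r_i**2) / (r_o**2 - r_i**2)

target_axial = 90e6            # desired total axial stress [Pa] (assumed)
extra_force = (target_axial - sigma_axial_pressure) * area
print(f"hoop stress at bore: {sigma_hoop_inner/1e6:.1f} MPa")
print(f"external axial force needed: {extra_force/1e3:.1f} kN")
```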
Procedia PDF Downloads 119