Search results for: geometric constraint solver
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1240

160 Topographic Survey of a Property Located near the Locality of Gircov, Romania

Authors: Carmen Georgeta Dumitrache

Abstract:

Terrestrial measurement science studies the totality of field operations and computations carried out to represent the land surface on a plan or map, in a specific cartographic projection and at a given topographic scale. With the development of society, land measurements have evolved, serving both a utilitarian goal tied to economic activity and a scientific goal related to determining the form and dimensions of the Earth. Field measurement, data processing, and the proper representation of planimetry and landforms on drawings and maps rely on topographic and geodesic instruments, computation, and graphical plotting, which require theoretical and practical knowledge from different areas of science and technology. Using topographic and geodetic instruments designed to measure angles and distances precisely requires knowledge of geometric optics, precision mechanics, strength of materials, and more. Processing the results of field measurements requires calculation methods based on notions of geometry, trigonometry, algebra, mathematical analysis, and computer science. To illustrate topographic measurements, a survey was established for a property located near the locality of Gircov, Romania. We determined the total surface of plan T30 and of each parcel/plot, and also traced the coordinates of a parcel in the field. The purposes of the planimetric survey were: exact determination of the bounding surface; analytical calculation of the surface; comparison of the determined surface with the one registered in the property documents; drawing up a location and delineation plan, with neighbouring features and contour distances, highlighting the parcels comprising the property; drawing up a similar location and delineation plan for one of the parcels; and tracing the outline points of that parcel in the field. The ultimate goal of this work was to determine and represent the surface, and also to detach a plot from the total surface while respecting the surface condition imposed by the beneficiary's property deed.
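
For reference, the "analytical calculation of the surface" named above is conventionally carried out with Gauss's (shoelace) area formula applied to the parcel corner coordinates; the following is the standard surveying formula, not an excerpt from the paper:

    S = \frac{1}{2} \left| \sum_{i=1}^{n} x_i \left( y_{i+1} - y_{i-1} \right) \right|

where (x_i, y_i) are the corner coordinates taken in traverse order and the indices are cyclic (y_{n+1} = y_1, y_0 = y_n).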

Keywords: topography, surface, coordinate, modeling

Procedia PDF Downloads 257
159 Physicochemical Investigation of Caffeic Acid and Caffeinates with Chosen Metals (Na, Mg, Al, Fe, Ru, Os)

Authors: Włodzimierz Lewandowski, Renata Świsłocka, Aleksandra Golonko, Grzegorz Świderski, Monika Kalinowska

Abstract:

Caffeic acid (3,4-dihydroxycinnamic acid) occurs in free form or as ester conjugates in many fruits, vegetables and seasonings, including plants used for medical purposes. Caffeic acid is present in propolis, a substance with exceptional healing properties used in natural medicine since ancient times. The antioxidant, antibacterial, anti-inflammatory and anticarcinogenic properties of caffeic acid are widely described in the literature. The biological activity of chemical compounds can be modified by the synthesis of their derivatives or metal complexes, and the structure of the compounds determines their biological properties. This work is a continuation of a broader topic concerning the correlation between the electronic charge distribution and the biological (anticancer and antioxidant) activity of chosen phenolic acids and their metal complexes. In the framework of this study, new metal complexes of sodium, magnesium, aluminium, iron(III), ruthenium(III) and osmium(III) with caffeic acid were synthesized. The spectroscopic properties of these compounds were studied by means of FT-IR, FT-Raman, UV-Vis, and ¹H and ¹³C NMR. Quantum-chemical calculations (at the B3LYP/LANL2DZ level) of caffeic acid and selected complexes were performed. Moreover, the antioxidant properties of the synthesized complexes were studied in relation to selected stable radicals (DPPH and ABTS reduction methods). On the basis of differences in the number, intensity and location of bands in the IR, Raman, UV-Vis and NMR spectra of caffeic acid and its metal complexes, the effect of the metal cations on the electronic system of the ligand is discussed. The geometry, theoretical spectra and electronic charge distribution were calculated with the Gaussian 09 program. The geometric aromaticity indices (Aj, the normalized function of the variance in bond lengths; BAC, the bond alternation coefficient; HOMA, the harmonic oscillator model of aromaticity; and I₆, Bird's index) were calculated, and the changes in the aromaticity of caffeic acid and its complexes are discussed. This work was financially supported by the National Science Centre, Poland, under research project number 2014/13/B/NZ7/02-352.
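
For orientation, the HOMA index listed above is conventionally defined as follows (the standard definition; the parameters are tabulated per bond type and are not quoted from this abstract):

    \mathrm{HOMA} = 1 - \frac{\alpha}{n} \sum_{i=1}^{n} \left( R_{\mathrm{opt}} - R_i \right)^2

where n is the number of bonds in the ring, R_i are the individual bond lengths, R_opt is the optimal aromatic bond length, and α is a normalization constant chosen so that HOMA equals 1 for a fully aromatic ring and 0 for a hypothetical Kekulé structure.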

Keywords: antioxidant properties, caffeic acid, metal complexes, spectroscopic methods

Procedia PDF Downloads 216
158 Building Information Modeling-Based Information Exchange to Support Facilities Management Systems

Authors: Sandra T. Matarneh, Mark Danso-Amoako, Salam Al-Bizri, Mark Gaterell

Abstract:

Today's facilities are ever more sophisticated, and the need for available and reliable information for operation and maintenance activities is vital. The key challenge for facilities managers is to have real-time, accurate and complete information to perform their day-to-day activities and to provide their senior management with accurate information for decision-making. Currently, various technology platforms, data repositories and database systems such as Computer-Aided Facility Management (CAFM) are used for these purposes in different facilities. In most current practice, the data is extracted from paper construction documents and re-entered manually into one of these computerized information systems. Construction Operations Building information exchange (COBie) is a non-proprietary data format that contains the non-geometric asset data captured and collected during the design and construction phases for use by owners and facility managers. Recently, software vendors have developed add-in applications to generate the COBie spreadsheet automatically. However, most of these add-ins can generate only a limited amount of COBie data, so considerable time is still required to enter the remaining data manually to complete the COBie spreadsheet. Some of the data that cannot be generated by these COBie add-ins is essential for facilities managers' day-to-day activities, such as job sheets, which include preventive maintenance schedules. To facilitate seamless data transfer between BIM models and facilities management systems, we developed a framework that enables automated data generation from the data extracted directly from BIM models to an external web database, and then enables different stakeholders to access the external web database and enter the required asset data directly, generating a rich COBie spreadsheet that contains most of the asset data required for efficient facilities management operations. The proposed framework is part of ongoing research and will be demonstrated and validated on a typical university building. Moreover, the proposed framework supplements the existing body of knowledge in the facilities management domain by providing a novel framework that facilitates seamless data transfer between BIM models and facilities management systems.
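
A minimal sketch of the data flow the framework describes is given below, assuming hypothetical field names patterned on the COBie Component sheet and using SQLite as a stand-in for the external web database; the paper does not publish its schema, and the BIM extraction step is stubbed out:

    import csv
    import sqlite3

    # Components as they might arrive from a BIM-model extraction (stubbed;
    # field names are illustrative, modelled on the COBie Component sheet).
    components = [
        {"Name": "AHU-01", "TypeName": "Air Handling Unit",
         "Space": "Plant Room 1", "SerialNumber": "SN-12345",
         "InstallationDate": "2023-05-02", "WarrantyStartDate": "2023-06-01"},
    ]

    con = sqlite3.connect("asset_web_db.sqlite")  # stand-in for the external web database
    con.execute("""CREATE TABLE IF NOT EXISTS component
                   (Name TEXT, TypeName TEXT, Space TEXT, SerialNumber TEXT,
                    InstallationDate TEXT, WarrantyStartDate TEXT)""")
    con.executemany("INSERT INTO component VALUES (:Name, :TypeName, :Space, "
                    ":SerialNumber, :InstallationDate, :WarrantyStartDate)",
                    components)
    con.commit()

    # After stakeholders fill in the remaining asset data through the web
    # front end, the enriched table is exported as a COBie-style sheet.
    rows = con.execute("SELECT * FROM component").fetchall()
    with open("COBie_Component.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["Name", "TypeName", "Space", "SerialNumber",
                         "InstallationDate", "WarrantyStartDate"])
        writer.writerows(rows)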

Keywords: building information modeling, BIM, facilities management systems, interoperability, information management

Procedia PDF Downloads 115
157 Design and Manufacture of Removable Nosecone Tips with Integrated Pitot Tubes for High Power Sounding Rocketry

Authors: Bjorn Kierulf, Arun Chundru

Abstract:

Over the past decade, collegiate rocketry teams have emerged across the country with various goals: space, liquid-fueled flight, etc. A critical piece of the development of knowledge within a club is the use of so-called "sounding rockets," whose goal is to take in-flight measurements that inform future rocket design. Common measurements include acceleration from inertial measurement units (IMUs) and altitude from barometers. With a properly tuned filter, these measurements can be used to find velocity, but they are susceptible to noise, offset, and filter settings. Instead, velocity can be measured more directly and more instantaneously using a pitot tube, which operates by measuring the stagnation pressure. At supersonic speeds, an additional thermodynamic property is necessary to constrain the upstream state; one possibility is the stagnation temperature, measured by a thermocouple in the pitot tube. The routing of the pitot tube from the nosecone tip down to a pressure transducer is complicated by the nosecone's structure. Commercial off-the-shelf (COTS) nosecones come with a removable metal tip (without a pitot tube). This provides the opportunity to make custom tips with integrated measurement systems without making the nosecone from scratch. The main design constraint is how the nosecone tip is held down onto the nosecone: by the tension in a threaded rod anchored to a bulkhead below. Because the threaded rod connects to a threaded hole in the center of the nosecone tip, the pitot tube follows a winding path, and the pressure fitting is off-center. Two designs are presented in the paper: one with a curved pitot tube, and a coaxial design that eliminates the winding path by routing pressure through a structural tube. Additionally, three manufacturing methods are presented for these designs: bound powder filament metal 3D printing, stereolithography (SLA) 3D printing, and traditional machining, employing three different materials: copper, steel, and a proprietary resin. These manufacturing methods and materials are relatively low cost and thus accessible to student researchers. The designs and materials cover multiple use cases, based on how fast the sounding rocket is expected to travel and how important heating effects are, both to measure them and to avoid melting. The paper includes drawings showing key features, an overview of the design changes necessitated by manufacture, and a look at the successful use of these nosecone tips and the data they have gathered to date.
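
For context, the velocity measurement described above follows from the standard pitot-static relations (textbook formulas, not taken from the paper). In incompressible subsonic flow,

    v = \sqrt{\frac{2 (p_0 - p_\infty)}{\rho}}

where p_0 is the stagnation pressure read through the pitot tube, p_∞ the static pressure, and ρ the air density. At supersonic speeds a normal shock stands ahead of the probe, the Rayleigh pitot formula relates the measured p_0 to the upstream Mach number, and one additional quantity such as the stagnation temperature is required to recover the velocity itself, which is why the abstract calls for a thermocouple in the tip.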

Keywords: additive manufacturing, machining, pitot tube, sounding rocketry

Procedia PDF Downloads 164
156 Purpose-Driven Collaborative Strategic Learning

Authors: Mingyan Hong, Shuozhao Hou

Abstract:

Collaborative Strategic Learning (CSL) teaches students to use learning strategies while working cooperatively. Student strategies include the following steps: defining the learning task and purpose; conducting ongoing negotiation of the learning materials by deciding "click" (I get it and I can teach it: green card; I get it: yellow card) or "clunk" (I don't get it: red card) at the end of each learning unit; "getting the gist" of the most important parts of the learning materials; and "wrapping up" key ideas. The paper shows how to help students of mixed achievement levels apply learning strategies while learning content-area materials in small groups. The design of CSL is based on social constructivism and Vygotsky's best-known concept, the Zone of Proximal Development (ZPD). The ZPD is the distance between the actual acquisition level, as determined by individual problem solving, and the potential acquisition level, similar to Krashen's (1980) i+1, as determined through problem solving under a facilitator's guidance or in group work with more capable members (Vygotsky, 1978). Vygotsky claimed that the learner's ideal learning environment is in the ZPD. An ideal teacher or more knowledgeable other (MKO) should be able to recognize a learner's ZPD and facilitate development beyond it; the MKO can then withdraw support step by step until the learner can perform the task without aid. Stephen Krashen (1980) proposed the input hypothesis, including the i+1 hypothesis. The input hypothesis models are an application of the ZPD to second language acquisition and remain widely recognized today. Krashen's (2019) optimal language learning environment further developed the application of the ZPD and added the component of strategic group learning, which is composed of desirable learning materials that learners are motivated to learn and desirable group members who are more capable and therefore able to offer meaningful input to the learners. The Purpose-Driven Collaborative Strategic Learning model is a strategic integration of the ZPD, the i+1 hypothesis model, and the Optimal Language Learning Environment model. It is purpose-driven to ensure that group members are motivated. It is collaborative so that an optimal learning environment can be created in which meaningful input is generated from meaningful conversation. It is strategic because facilitators in the model strategically assign each member a meaningful and collaborative role (e.g., team leader, technician, problem solver, appraiser), offer group learning instruments so that the learning process is structured, and integrate group learning with team building to ensure the holistic development of each participant. Using data collected from first- and second-year college English courses, this presentation demonstrates how the purpose-driven collaborative strategic learning model is implemented in the second/foreign language classroom, drawing on qualitative data from questionnaires and interviews. In particular, the presentation shows how second/foreign language learners grow from functioning with the aid of a facilitator or more capable peer to performing without aid. The implication of this research is that the purpose-driven collaborative strategic learning model can be used not only in language learning but also in any subject area.

Keywords: collaborative, strategic, optimal input, second language acquisition

Procedia PDF Downloads 127
155 On-Site Coaching of Freshly Graduated Nurses to Improve the Quality of Clinical Handover and Avoid Clinical Error

Authors: Sau Kam Adeline Chan

Abstract:

The World Health Organization has listed "Communication during Patient Care Handovers" as one of its top five patient safety initiatives. Clinical handover means the transfer of accountability and responsibility for clinical information from one health professional to another. The main goal of clinical handover is to convey a patient's current condition and treatment plan accurately. Ineffective communication at the point of care is globally regarded as a main cause of sentinel events. Situation, Background, Assessment and Recommendation (SBAR), a communication tool, is widely regarded as effective in healthcare settings. Nonetheless, scenario-based programs in nursing school or SBAR workshops alone are not enough for freshly graduated nurses to apply the tool competently in complex clinical practice: what to convey, and in what depth, during the handover process is not easy to learn. As such, on-site coaching is essential to upgrade their expertise in the use of SBAR and, ultimately, to avoid clinical error. On-site coaching of all freshly graduated nurses in the use of SBAR in clinical handover commenced in August 2014. During the preceptorship period, freshly graduated nurses were coached by a preceptor, after which they were gradually assigned to take care of a group of patients independently. Nurse leaders joined their shift handover process at the patient's bedside, and feedback and support were given accordingly. Discrepancies in their clinical handover process were shared with them and documented for further improvement work. Owing to nurse-leader manpower constraints, about 30 coaching sessions were provided to each nurse in a year. A staff satisfaction survey was conducted to gauge their feelings about the coaching and to look into areas for further improvement, and the number of clinical errors avoided was documented as well. The nurses reported a significant improvement, particularly in their confidence and knowledge in the clinical handover process. In addition, a sense of empowerment developed when liaising with senior and experienced nurses. Their proficiency in applying SBAR was enhanced, and they became more alert to the critical criteria of an effective clinical handover. Most importantly, the accuracy of transferring the patient's condition improved, and repetition of information was avoided; clinical errors were prevented, and quality patient care was ensured. Using SBAR as a communication tool looks simple, but the tool only provides a framework to guide the handover process. Without on-site training, loopholes in clinical handover still exist, patient safety is affected, and clinical errors still happen.

Keywords: freshly graduated nurse, competency of clinical handover, quality, clinical error

Procedia PDF Downloads 148
154 Virtual Engineers on Wheels: Transitioning from Mobile to Online Outreach

Authors: Kauser Jahan, Jason Halvorsen, Kara Banks, Kara Natoli, Elizabeth McWeeney, Brittany LeMasney, Nicole Caramanna, Justin Hillman, Christopher Hauske, Meghan Sparks

Abstract:

Virtual Engineers on Wheels (ViEW) is a revised version of our established mobile K-12 outreach program, Engineers on Wheels, adapted to address the pandemic. ViEW's goal has stayed the same as in prior years: to provide K-12 students and educators with the resources necessary to pique interest in the expanding fields of engineering. In these trying times, the outreach has adapted its medium of instruction to fit the online approach to teaching. In the midst of COVID-19, providing a safe transfer of information has become a constraint: the focus has become how to uphold quality instruction without diminishing the safety of those involved, promoting proper health practices, and giving hope to students as well as their families. Furthermore, ViEW has created resources on effective strategies that minimize COVID-19 risk factors and inform families that there is still a promising future ahead. To attain these goals while staying true to the hands-on learning that is so crucial to young minds, the approach is online video lectures followed by experiments within different engineering disciplines. ViEW has created a comprehensive website that students can use to explore the different fields of study. One of the experiments entails teaching about drone usage and how it might play a role in the future of unmanned deliveries. Other experiments focus on the differences in mask materials, their effectiveness, and their environmental outlook. Performing experiments from home gives students a safe environment to learn at their own pace while still providing quality instruction that would normally be achieved in the classroom. Contact information is readily available on the website to provide interested parties with a means to ask questions. As it currently stands, women and certain minority groups are underrepresented in engineering/STEM-related fields, so alongside the desire to grow interest, helping balance the scales is one of ViEW's main priorities. In previous years, ViEW surveyed students before and after instruction to see if their perception of engineering had changed. In general, the understanding is that exposure to engineering/STEM at a young age increases the chances that it will be pursued later in life.

Keywords: STEM, engineering outreach, teaching pedagogy, pandemic

Procedia PDF Downloads 129
153 A Method to Identify the Critical Delay Factors for Building Maintenance Projects of Institutional Buildings: Case Study of Eastern India

Authors: Shankha Pratim Bhattacharya

Abstract:

In general, building repair and renovation projects are minor in nature and receive less attention, as the primary cost involved is relatively small. Although building repair and maintenance projects look simple, they involve much complexity during execution. Much of the present research indicates that uncertain situations are usually linked with maintenance projects; these may not be read properly in the planning stage and finally lead to time overrun. Building repair and maintenance become essential and periodic after commissioning of the building. In institutional buildings, regular maintenance projects also include addition, alteration, and modification activities. Increases in student admissions, new departments and sections, new laboratories and workshops, and upgrades of existing laboratories are very common in institutional buildings in developing nations like India. Such projects become critical because they involve space problems, architectural design issues, structural modification, etc. One of the prime factors in institutional building maintenance and modification projects is the time constraint: mostly, the work must be executed within a specific non-working period. The present research considered only institutional buildings in the eastern part of India to analyse repair and maintenance project delays. A general survey was conducted among technical institutes to find the causes and corresponding nature of construction delay factors. Five technical institutes are considered in the present study, with repair, renovation, modification and extension types of projects. Construction delay factors are subdivided into four categories: material, manpower (workers), contract, and site. The survey data were collected on the nature of delay responsible for a specific project and on the absolute amount of delay through the proposed and actual durations of work. In the first stage of the paper, a relative importance index (RII) is proposed for the delay factors, and the occurrence of the delay factors is also judged by frequency and severity. The delay factors are then rated and linked with the type of work. In the second stage, a regression analysis is executed to establish an empirical relationship between the actual duration of a project and the percentage of delay; it also indicates the impact of the factors responsible for delay. Ultimately, the present paper makes an effort to identify the critical delay factors for repair and renovation projects in eastern Indian institutional buildings.
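
For reference, the relative importance index used in delay-factor surveys of this kind is commonly computed as follows (the standard survey formula; the paper's exact weighting scheme is not stated in the abstract):

    \mathrm{RII} = \frac{\sum_{i=1}^{N} W_i}{A \times N}

where W_i is the rating given by respondent i, A is the highest possible rating, and N is the number of respondents; the delay factors are then ranked by their RII values.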

Keywords: delay factor, institutional building, maintenance, relative importance index, regression analysis, repair

Procedia PDF Downloads 250
152 Inversely Designed Chipless Radio Frequency Identification (RFID) Tags Using Deep Learning

Authors: Madhawa Basnayaka, Jouni Paltakari

Abstract:

Fully passive backscattering chipless RFID tags are an emerging wireless technology with low cost, longer reading distance, and fast automatic identification without human interference, unlike already available technologies such as optical barcodes. The design optimization of chipless RFID tags is crucial, as it requires replacing the integrated chips found in conventional RFID tags with printed geometric designs; these designs enable data encoding and decoding through backscattered electromagnetic (EM) signatures. The applications of chipless RFID tags have been limited by the constraints of data encoding capacity and the difficulty of designing accurate yet efficient configurations. The traditional approach to finding design parameters for a desired EM response involves iteratively adjusting the parameters and simulating until the desired EM spectrum is achieved; however, traditional numerical simulation methods are inefficient for optimizing design parameters because of their speed and resource consumption. In this work, a deep neural network (DNN) is utilized to establish a correlation between the EM spectrum and the dimensional parameters of nested concentric rings, specifically square and octagonal. The proposed bi-directional DNN has two simultaneously running neural networks, namely spectrum prediction and design parameter prediction. First, the spectrum prediction DNN was trained to minimize the mean squared error (MSE). After training, the spectrum prediction DNN was able to accurately predict the EM spectrum for given input design parameters within a few seconds. Then, the trained spectrum prediction DNN was connected to the design parameter prediction DNN, and the two networks were trained simultaneously. For the first time in chipless tag design, design parameters were predicted accurately for a desired EM spectrum after training the bi-directional DNN. The model was evaluated using a randomly generated spectrum, and a tag was manufactured using the predicted geometrical parameters. The manufactured tags were successfully tested in the laboratory. The number of iterative computer simulations has been significantly decreased by this approach. Therefore, highly efficient and ultrafast bi-directional DNN models enable rapid design of complicated chipless RFID tags.
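
The two-network arrangement the abstract describes matches what the inverse-design literature calls a tandem network; the sketch below illustrates that idea under assumed layer sizes, data, and training details, none of which are specified in the abstract:

    import torch
    import torch.nn as nn

    # Hypothetical dimensions: 8 geometric parameters in, 128 spectrum samples out.
    N_PARAMS, N_SPECTRUM = 8, 128

    def mlp(n_in, n_out):
        return nn.Sequential(nn.Linear(n_in, 256), nn.ReLU(),
                             nn.Linear(256, 256), nn.ReLU(),
                             nn.Linear(256, n_out))

    forward_net = mlp(N_PARAMS, N_SPECTRUM)   # spectrum prediction DNN
    inverse_net = mlp(N_SPECTRUM, N_PARAMS)   # design parameter prediction DNN
    loss_fn = nn.MSELoss()

    # Stage 1: train the forward (spectrum prediction) network alone on
    # simulated (parameters, spectrum) pairs; random stand-in data here.
    params = torch.rand(1024, N_PARAMS)
    spectra = torch.rand(1024, N_SPECTRUM)
    opt_f = torch.optim.Adam(forward_net.parameters(), lr=1e-3)
    for _ in range(200):
        opt_f.zero_grad()
        loss = loss_fn(forward_net(params), spectra)
        loss.backward()
        opt_f.step()

    # Stage 2: freeze the forward net and train the inverse net through it,
    # so predicted parameters must reproduce the target spectrum.
    for p in forward_net.parameters():
        p.requires_grad_(False)
    opt_i = torch.optim.Adam(inverse_net.parameters(), lr=1e-3)
    for _ in range(200):
        opt_i.zero_grad()
        loss = loss_fn(forward_net(inverse_net(spectra)), spectra)
        loss.backward()
        opt_i.step()

    # Inference: propose a geometry for a desired spectrum.
    desired = torch.rand(1, N_SPECTRUM)
    predicted_geometry = inverse_net(desired)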

Keywords: artificial intelligence, chipless RFID, deep learning, machine learning

Procedia PDF Downloads 50
151 Non-Linear Finite Element Investigation on the Behavior of CFRP Strengthened Steel Square HSS Columns under Eccentric Loading

Authors: Tasnuba Binte Jamal, Khan Mahmud Amanat

Abstract:

Carbon fiber-reinforced polymer (CFRP) composite materials have proven to have valuable properties and to be suitable for the construction of new buildings and the upgrading of existing ones, owing to their effectiveness, ease of implementation, and more. In the present study, a numerical finite element investigation was conducted using ANSYS 18.1 to study the behavior of square AISC HSS sections under eccentric compressive loading, strengthened with CFRP materials. A three-dimensional finite element model of a square HSS section was developed using shell elements, and the CFRP strengthening was incorporated by adding an additional layer of shell elements. Both material and geometric nonlinearities were included in the model. The developed model was first applied to simulate experimental studies by past researchers; good agreement was found between the current analysis and the past experimental results, which establishes the acceptability and validity of the model for further investigation. The study then focused on selected non-compact AISC square HSS columns, observing the effects of the number of CFRP layers, the amount of eccentricity, and the cross-sectional geometry on the strength gain of those columns. Load was applied at distances equal to the column dimension and to twice the column dimension. It was observed that CFRP strengthening is comparatively effective at smaller eccentricities; for medium-sized sections, strengthening likewise tends to be effective at smaller eccentricities. For relatively large AISC square HSS columns, increasing the number of CFRP layers from 1 to 3 yields a strength gain of approximately 1 to 38% over the unstrengthened section for smaller eccentricities and slenderness ratios ranging from 27 to 54. For medium-sized square HSS sections, the effectiveness of CFRP strengthening increases by approximately 12 to 162%. The findings of the present study provide a better understanding of the behavior of HSS sections strengthened with CFRP and subjected to eccentric compressive load.

Keywords: CFRP strengthening, eccentricity, finite element model, square hollow section

Procedia PDF Downloads 144
150 Feature Selection Approach for the Classification of Hydraulic Leakages in Hydraulic Final Inspection Using Machine Learning

Authors: Christian Neunzig, Simon Fahle, Jürgen Schulz, Matthias Möller, Bernd Kuhlenkötter

Abstract:

Manufacturing companies face global competition and enormous cost pressure. The use of machine learning applications can help reduce production costs and create added value. Predictive quality secures product quality through data-supported predictions, using machine learning models as a basis for decisions on test results. Machine learning methods are able to process large amounts of data, deal with unfavourable row-column ratios, detect dependencies between the covariates and the given target, and assess the multidimensional influence of all input variables on the target. Real production data are often subject to highly fluctuating boundary conditions and unbalanced data sets; changes in production data manifest themselves in trends, systematic shifts, and seasonal effects. Thus, machine learning applications require intensive pre-processing and feature selection. Data pre-processing includes rule-based data cleaning, the application of dimensionality reduction techniques, and the identification of comparable data subsets. Within the real data set of Bosch hydraulic valves used here, the comparability of production conditions within certain time periods can be identified by applying a concept drift method. Furthermore, a classification model is developed to evaluate feature importance in different subsets within the identified time periods. By selecting comparable and stable features, the number of features used can be significantly reduced without a strong decrease in predictive power. The use of cross-process production data along the value chain of hydraulic valves is a promising approach to predicting the quality characteristics of workpieces. In this research, the AdaBoost classifier is used to predict the leakage of hydraulic valves based on geometric gauge blocks from machining, mating data from assembly, and hydraulic measurement data from end-of-line testing. In addition, the most suitable methods are selected, and accurate quality predictions are achieved.
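
The importance-based feature selection described above can be sketched as follows with scikit-learn, using synthetic data in place of the machining, assembly, and end-of-line measurements (this is an illustrative pipeline, not the Bosch setup itself):

    import numpy as np
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in: 40 candidate features, binary leak/no-leak target.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(2000, 40))
    y = (X[:, 3] + 0.5 * X[:, 17] + rng.normal(scale=0.5, size=2000) > 0).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    # Fit on all features, then rank them by the boosted ensemble's importances.
    clf = AdaBoostClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    ranking = np.argsort(clf.feature_importances_)[::-1]

    # Keep only the top-k features and refit: far fewer inputs,
    # ideally with little loss of predictive power.
    top_k = ranking[:5]
    clf_small = AdaBoostClassifier(n_estimators=200,
                                   random_state=0).fit(X_tr[:, top_k], y_tr)
    print("all features:", clf.score(X_te, y_te),
          "top-5 features:", clf_small.score(X_te[:, top_k], y_te))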

Keywords: classification, machine learning, predictive quality, feature selection

Procedia PDF Downloads 162
149 Geometric, Energetic and Topological Analysis of (Ethanol)₉-Water Heterodecamers

Authors: Jennifer Cuellar, Angie L. Parada, Kevin N. S. Chacon, Sol M. Mejia

Abstract:

The purification of bio-ethanol through distillation is an unresolved issue in the biofuel industry because of the formation of the ethanol-water azeotrope, which increases the number of steps in the purification process and consequently the production costs. Understanding the nature of the mixture at the molecular level could therefore provide new insights for improving the current methods and/or designing new, more efficient purification methods. For that reason, the present study focuses on the evaluation and analysis of (ethanol)₉-water heterodecamers, the systems with the minimum molecular proportion that represents the azeotropic concentration (96% m/m ethanol). The computational modelling was carried out at the B3LYP-D3/6-311++G(d,p) level in Gaussian 09. Initial explorations of the potential energy surface were done through two methods, simulated annealing runs and molecular dynamics trajectories, besides intuitive structures obtained from smaller (ethanol)n-water heteroclusters, n = 7, 8 and 9. The energetic ordering of the seven stable heterodecamers identifies the most stable heterodecamer (Hdec-1) as a structure with a bicyclic geometry formed by O-H---O hydrogen bonds (HBs), in which the water acts as a double proton donor. Hdec-1 combines 1 water molecule with the same quantity of every ethanol conformer, i.e., 3 trans, 3 gauche-1 and 3 gauche-2; its abundance is 89%, and its decamerization energy is -80.4 kcal/mol, i.e., 13 kcal/mol more stable than the least stable heterodecamer. Besides, as a way to understand why methanol does not form an azeotropic mixture with water, analogous systems ((ethanol)₁₀, (methanol)₁₀, and (methanol)₉-water) were optimized. Topological analysis of the electron density reveals that Hdec-1 forms 33 weak interactions in total: 11 O-H---O, 8 C-H---O and 2 C-H---C hydrogen bonds and 12 H---H interactions. The strength and abundance of the less conventional interactions (H---H, C-H---O and C-H---C) seem to explain the preference of ethanol for forming heteroclusters instead of clusters. Besides, the O-H---O HBs present a significant covalent character according to topological parameters such as the Laplacian of the electron density and the ratio between the potential and kinetic energy densities evaluated at the bond critical points, which take negative values and values between 1 and 2, respectively.
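
For reference, the decamerization energy quoted above is conventionally defined as the cluster's electronic energy relative to its isolated monomers (the usual definition, stated here as a sketch rather than quoted from the paper):

    \Delta E_{\mathrm{dec}} = E_{(\mathrm{EtOH})_9\text{-}\mathrm{H_2O}} - \left( 9\,E_{\mathrm{EtOH}} + E_{\mathrm{H_2O}} \right)

so that more negative values, such as the -80.4 kcal/mol of Hdec-1, indicate stronger binding.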

Keywords: ADMP, DFT, ethanol-water azeotrope, Grimme dispersion correction, simulated annealing, weak interactions

Procedia PDF Downloads 103
148 Risk and Reliability Based Probabilistic Structural Analysis of Railroad Subgrade Using Finite Element Analysis

Authors: Asif Arshid, Ying Huang, Denver Tolliver

Abstract:

The finite element (FE) method, coupled with ever-increasing computational power, has substantially advanced the reliability of deterministic three-dimensional structural analyses of structures with uniform material properties. However, a railway trackbed is made up of a diverse group of materials including steel, wood, rock and soil, each with its own varying level of heterogeneity and imperfections. The application of probabilistic methods to trackbed structural analysis, incorporating material and geometric variability, remains deeply underexplored. The authors developed and validated a three-dimensional FE-based numerical trackbed model, and in this study they investigated the influence of variability in the Young's modulus and thicknesses of the granular layers (ballast and subgrade) on the reliability index (β-index) of the subgrade layer. The influence of these factors is accounted for by changing their coefficients of variation (COV) while keeping their means constant; the variations are formulated using a Gaussian normal distribution. Two failure mechanisms in the subgrade, namely progressive shear failure and excessive plastic deformation, are examined. Preliminary results of the risk-based probabilistic analysis for progressive shear failure revealed that variation in ballast depth is the most influential factor for the vertical stress at the top of the subgrade surface. In the case of excessive plastic deformation in the subgrade layer, variations in the subgrade's own depth and Young's modulus proved most important, while ballast properties remained almost irrelevant. For both failure modes, it is also observed that the reliability index for subgrade failure increases with increasing COV of ballast depth and subgrade Young's modulus. The findings of this work are of particular significance for studying the combined effect of construction imperfections and variations in ground conditions on the structural performance of railroad trackbeds and for evaluating the associated risk. In addition, the work provides a tool to supplement deterministic analysis procedures and decision-making for railroad maintenance.
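
A Monte Carlo sketch of the kind of reliability index described above is given below; the response function, means, and COVs are made-up surrogates for illustration, not the paper's FE model:

    import numpy as np

    rng = np.random.default_rng(1)
    n = 100_000

    # Gaussian inputs with illustrative means and COVs.
    E_sub = rng.normal(60e6, 0.15 * 60e6, n)    # subgrade Young's modulus [Pa], COV 15%
    h_bal = rng.normal(0.35, 0.10 * 0.35, n)    # ballast depth [m], COV 10%

    sigma_allow = 90e3                          # allowable vertical stress [Pa]
    # Hypothetical surrogate for the FE-computed stress at the subgrade top.
    sigma_top = 120e3 * np.exp(-2.5 * h_bal) * (60e6 / E_sub) ** 0.1

    g = sigma_allow - sigma_top                 # limit state: failure when g < 0
    beta = g.mean() / g.std()                   # first-order reliability index
    print(f"beta = {beta:.2f}, P(failure) ~ {np.mean(g < 0):.4f}")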

Keywords: finite element analysis, numerical modeling, probabilistic methods, risk and reliability analysis, subgrade

Procedia PDF Downloads 139
147 Fuzzy Optimization for Identifying Anticancer Targets in Genome-Scale Metabolic Models of Colon Cancer

Authors: Feng-Sheng Wang, Chao-Ting Cheng

Abstract:

Developing a drug from conception to launch is costly and time-consuming; computer-aided methods can reduce research costs and accelerate the development process during the early drug discovery and development stages. This study developed a fuzzy multi-objective hierarchical optimization framework for identifying potential anticancer targets in a metabolic model. First, RNA-seq expression data of colorectal cancer samples and their healthy counterparts were used to reconstruct tissue-specific genome-scale metabolic models. The aim of the optimization framework was to identify anticancer targets that lead to cancer cell death and to evaluate the metabolic flux perturbations in normal cells caused by the treatment. Four objectives were established in the framework to evaluate cancer cell mortality under treatment, to minimize side effects causing toxicity-induced tumorigenesis in normal cells, and to keep metabolic perturbations small. Through fuzzy set theory, the multi-objective optimization problem was converted into a trilevel maximizing decision-making (MDM) problem, and nested hybrid differential evolution was applied to solve it using two nutrient media, in order to identify anticancer targets in the genome-scale metabolic model of colorectal cancer. Using Dulbecco's Modified Eagle Medium (DMEM), the computational results reveal that the identified anticancer targets were mostly involved in cholesterol biosynthesis, pyrimidine and purine metabolism, the glycerophospholipid biosynthetic pathway, and the sphingolipid pathway. Using Ham's medium, however, the genes involved in cholesterol biosynthesis were unidentifiable. A comparison of the uptake reactions for DMEM and Ham's medium revealed that no cholesterol uptake reaction is included in DMEM. Two additional media, DMEM with a cholesterol uptake reaction added and Ham's medium with it excluded, were then used to investigate the relationship of tumor cell growth with nutrient components and anticancer target genes. The genes involved in cholesterol biosynthesis proved identifiable if no cholesterol uptake reaction was present in the culture medium, but became unidentifiable if such a reaction was included.
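
For orientation, fuzzy multi-objective frameworks of this kind typically follow Zimmermann's max-min formulation (a standard form, not quoted from the paper): each objective Z_k is mapped to a membership grade, and the scalarized problem maximizes the worst satisfaction level,

    \mu_k(Z_k) = \frac{Z_k - Z_k^{\min}}{Z_k^{\max} - Z_k^{\min}}, \qquad \max_{v} \; \min_k \; \mu_k\!\left( Z_k(v) \right)

where v denotes the metabolic flux distribution and Z_k^{min}, Z_k^{max} are the anti-ideal and ideal values of objective k.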

Keywords: cancer metabolism, genome-scale metabolic model, constraint-based model, multilevel optimization, fuzzy optimization, hybrid differential evolution

Procedia PDF Downloads 80
146 Integrated Geophysical Surveys for Sinkhole and Subsidence Vulnerability Assessment in the West Rand Area of Johannesburg

Authors: Ramoshweu Melvin Sethobya, Emmanuel Chirenje, Mihlali Hobo, Simon Sebothoma

Abstract:

The recent surge in residential infrastructure development around the metropolitan areas of South Africa has made it necessary for thorough geotechnical assessments to be conducted prior to site development, to ensure human and infrastructure safety. This paper appraises the successful application of multi-method geophysical techniques to the delineation of sinkhole vulnerability in a residential landscape. ERT, MASW, VES, magnetic and gravity surveys were conducted to assist in mapping sinkhole vulnerability, using an existing sinkhole as a constraint, at Venterspost town, west of the city of Johannesburg. The combination of different geophysical techniques and the integration of their results proved useful for delineating the lithologic succession around the sinkhole locality and for determining the geotechnical characteristics of each layer and its contribution to the development of sinkholes, subsidence and cavities in the vicinity of the site. The results also assisted in determining the possible depth extension of the existing sinkhole and in locating sites where other similar karstic features and sinkholes could form. The ERT, VES and MASW surveys revealed dolomitic bedrock at varying depths around the site, exhibiting high resistivity values in the range 2500-8000 ohm.m and correspondingly high velocities in the range 1000-2400 m/s. The dolomite was found to be overlain by a weathered, chert-poor dolomite layer with resistivities in the range 250-2400 ohm.m and velocities of 500-600 m/s, from which the large sinkhole collapsed. A 2.5D high-resolution shear wave velocity (Vs) map of the study area was compiled from 2D MASW profiles, offering insights into the lithological setup conducive to the formation of various types of karstic features around the site. 3D magnetic models of the site highlighted regions of possible subsurface interconnection between the existing large sinkhole and the other subsidence feature at the site, and a number of depth slices were used to detail the conditions near the sinkhole with increasing depth. Gravity survey results mapped the possible formational pathways for the development of new karstic features around the site. Overall, the combination and correlation of different geophysical techniques proved useful in delineating the site's geotechnical characteristics and mapping the possible depth extent of the existing sinkhole.

Keywords: resistivity, magnetics, sinkhole, gravity, karst, delineation, VES

Procedia PDF Downloads 80
145 Multiscale Modeling of Damage in Textile Composites

Authors: Jaan-Willem Simon, Bertram Stier, Brett Bednarcyk, Evan Pineda, Stefanie Reese

Abstract:

Textile composites, in which the reinforcing fibers are woven or braided, have become very popular in numerous applications in the aerospace, automotive, and maritime industries. These textile composites are advantageous due to their ease of manufacture, damage tolerance, and relatively low cost. However, physics-based modeling of the mechanical behavior of textile composites is challenging. Compared to their unidirectional counterparts, textile composites introduce additional geometric complexities, which cause significant local stress and strain concentrations. Since these internal concentrations are primary drivers of nonlinearity, damage, and failure within textile composites, they must be taken into account in order for the models to be predictive. The macro-scale approach to modeling textile-reinforced composites treats the whole composite as an effective, homogenized material. This approach is very computationally efficient, but it cannot be considered predictive beyond the elastic regime because the complex microstructural geometry is not considered; further, it can, at best, offer a phenomenological treatment of nonlinear deformation and failure. In contrast, the meso-scale approach explicitly considers the internal geometry of the reinforcing tows, so their interaction and the effects of their curved paths can be modeled. The tows are treated as effective (homogenized) materials, requiring the use of anisotropic material models to capture their behavior. Finally, the micro-scale approach goes one level lower, modeling the individual filaments that constitute the tows. This paper will compare meso- and micro-scale approaches to modeling the deformation, damage, and failure of textile-reinforced polymer matrix composites. For the meso-scale approach, the woven composite architecture will be modeled using the finite element method, and an anisotropic damage model for the tows will be employed to capture the local nonlinear behavior. For the micro-scale, two different models will be used, one based on the finite element method and the other making use of an embedded semi-analytical approach. The goal will be the comparison and evaluation of these approaches to modeling textile-reinforced composites in terms of accuracy, efficiency, and utility.

Keywords: multiscale modeling, continuum damage model, damage interaction, textile composites

Procedia PDF Downloads 354
144 Electromagnetic Simulation Based on Drift and Diffusion Currents for Real-Time Systems

Authors: Alexander Norbach

Abstract:

This paper describes the use of an advanced simulation environment based on electronic systems (microcontroller, operational amplifiers, and FPGA). The simulation may be used for all dynamic systems with diffusion and ionisation behaviour. With an additionally required observer structure, the system performs parallel real-time simulation, based on a diffusion model and on a state-space representation for the other dynamics. The proposed model may be used for electrodynamic effects, including ionising effects and eddy-current distribution. With the proposed method, it is possible to calculate the spatial distribution of the electromagnetic fields in real time; the spatial temperature distribution may be obtained for further purposes as well. With this system, uncertainties, unknown initial states and disturbances may be determined. This provides more precise estimates of the system states and, additionally, estimates of the ionising disturbances that occur due to radiation effects. The results have shown that a system can also be developed and adapted specifically for space systems, with real-time calculation of the radiation effects alone. Electronic systems can be damaged by impacts with charged-particle flux in space or in a radiation environment. In order to be able to react to these processes, the presence of ionising radiation and the dose must be calculated within a short time, and all available sensors should be used to observe the spatial distributions. From the measured values and the known locations of the sensors, the entire distribution can be calculated retroactively and more accurately. From the formation and type of ionisation and its direct effect on the systems, prevention processes can be activated, up to shutdown. The results show the possibility of performing higher-quality and faster simulations independent of the kind of system, including space systems and radiation environments. The paper additionally gives an overview of the diffusion effects and their mechanisms. For the modelling and derivation of equations, the extended current equation is used; the quantity K represents the proposed drifting charge-density vector. The extended diffusion equation was derived; it shows a quantising character and has a form similar to the Klein-Gordon equation. Such PDEs (partial differential equations) are analytically solvable given an initial distribution (Cauchy problem) and boundary conditions (Dirichlet boundary condition). For a simpler structure, a transfer function for the B- and E-fields was calculated analytically. With known discretised responses g₁(k·Ts) and g₂(k·Ts), the electric current or voltage may be calculated using a convolution; g₁ is the direct function and g₂ is a recursive function. The analytical results are accurate enough for the calculation of fields with diffusion effects. Within the scope of this work, a model for the electromagnetic diffusion effects of arbitrary current waveforms has been developed. The advantage of the proposed calculation of diffusion is its real-time capability, which is not really achievable with the FEM programs available today. It makes sense, in the further course of this research, to apply these methods and to investigate them thoroughly.
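
The direct/recursive convolution with g₁(k·Ts) and g₂(k·Ts) can be read as a standard difference equation with an FIR (direct) and an IIR (recursive) part; the following is an interpretive sketch with placeholder kernels, since the abstract does not give the identified responses explicitly:

    import numpy as np

    # Interpretive sketch: y[k] = sum_j g1[j]*u[k-j] + sum_j g2[j]*y[k-1-j],
    # i.e. a direct part g1 and a recursive part g2, sampled at Ts.
    Ts = 1e-6
    g1 = np.array([0.5, 0.3, 0.15])          # placeholder direct function g1(k*Ts)
    g2 = np.array([0.4, -0.05])              # placeholder recursive function g2(k*Ts)

    u = np.zeros(50)
    u[0] = 1.0                               # unit impulse as the input waveform

    y = np.zeros_like(u)
    for k in range(len(u)):
        acc = sum(g1[j] * u[k - j] for j in range(len(g1)) if k - j >= 0)
        acc += sum(g2[j] * y[k - 1 - j] for j in range(len(g2)) if k - 1 - j >= 0)
        y[k] = acc                           # field-proportional output at t = k*Ts

    print(y[:10])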

Keywords: advanced observer, electrodynamics, systems, diffusion, partial differential equations, solver

Procedia PDF Downloads 131
143 Object-Scene: Deep Convolutional Representation for Scene Classification

Authors: Yanjun Chen, Chuanping Hu, Jie Shao, Lin Mei, Chongyang Zhang

Abstract:

Traditional image classification is based on encoding schemes (e.g., the Fisher Vector or the Vector of Locally Aggregated Descriptors) over low-level image features (e.g., SIFT, HoG). Compared to these low-level local features, the deep convolutional features obtained at the mid-level layers of convolutional neural networks (CNNs) carry richer information but lack geometric invariance. In scene classification, objects are scattered with different sizes, categories, layouts, numbers, and so on; it is crucial to find the distinctive objects in a scene as well as their co-occurrence relationships. In this paper, we propose a method that takes advantage of both deep convolutional features and the traditional encoding scheme while taking object-centric and scene-centric information into consideration. First, to exploit object-centric and scene-centric information, two CNNs trained separately on ImageNet and on the Places dataset are used as pre-trained models to extract deep convolutional features at multiple scales, producing dense local activations. By analyzing the performance of the different CNNs at multiple scales, we find that each CNN works better in a different scale range; a scale-wise CNN adaptation is reasonable, since objects in a scene appear at their own specific scales. Second, a Fisher kernel is applied to aggregate a global representation at each scale, and these are merged into a single vector by a post-processing step called scale-wise normalization. The essence of the Fisher Vector lies in the accumulation of first- and second-order differences; hence, scale-wise normalization followed by average pooling balances the influence of each scale, since different amounts of features are extracted at each. Third, the Fisher Vector representation built on the deep convolutional features is fed to a linear Support Vector Machine, a simple yet efficient way to classify the scene categories. Experimental results show that scale-specific feature extraction and normalization with CNNs trained on object-centric and scene-centric datasets boost the results from 74.03% to 79.43% on MIT Indoor67 when only two scales are used (compared to results at a single scale). The result is comparable to state-of-the-art performance, which suggests that the representation can be applied to other visual recognition tasks.
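
A compact sketch of the encoding pipeline described above is given below; the GMM size, feature dimensions, and the stubbed CNN extractor are all placeholders rather than the paper's settings:

    import numpy as np
    from sklearn.mixture import GaussianMixture
    from sklearn.svm import LinearSVC

    def fisher_vector(X, gmm):
        """First- and second-order Fisher Vector of local features X (N x D)."""
        N, D = X.shape
        q = gmm.predict_proba(X)                       # soft assignments, N x K
        mu, var, w = gmm.means_, gmm.covariances_, gmm.weights_
        fv = []
        for k in range(gmm.n_components):
            d = (X - mu[k]) / np.sqrt(var[k])          # normalized deviations
            u = (q[:, k, None] * d).sum(0) / (N * np.sqrt(w[k]))
            v = (q[:, k, None] * (d**2 - 1)).sum(0) / (N * np.sqrt(2 * w[k]))
            fv.extend([u, v])
        fv = np.concatenate(fv)
        fv = np.sign(fv) * np.sqrt(np.abs(fv))         # power normalization
        return fv / (np.linalg.norm(fv) + 1e-12)       # L2 normalization per scale

    def conv_features(image, scale):
        """Stand-in for dense mid-level CNN activations at one scale."""
        return np.random.randn(100 * scale, 64)

    gmm = GaussianMixture(n_components=16, covariance_type="diag",
                          random_state=0).fit(np.random.randn(5000, 64))

    # Encode per scale, normalize per scale, then average-pool across scales.
    def encode(image, scales=(1, 2)):
        return np.mean([fisher_vector(conv_features(image, s), gmm)
                        for s in scales], axis=0)

    X_train = np.stack([encode(None) for _ in range(20)])
    y_train = np.arange(20) % 2
    clf = LinearSVC().fit(X_train, y_train)            # linear SVM classifier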

Keywords: deep convolutional features, Fisher Vector, multiple scales, scale-specific normalization

Procedia PDF Downloads 331
142 Visualising Charles Bonnet Syndrome: Digital Co-Creation of Pseudohallucinations

Authors: Victoria H. Hamilton

Abstract:

Charles Bonnet Syndrome (CBS) is a condition in which a person experiences pseudohallucinations that fill in visual information lost to any type of sight loss. CBS arises from an epiphenomenal process, with the physical actions of sight resulting in the mental formation of images. These pseudohallucinations, referred to as visions by the CBS community, manifest in a wide range of forms, from complex scenes to simple geometric shapes. To share these unique visual experiences, a remote co-creation website was created where CBS participants communicated their lived experiences. This created a reflexive process, and we worked to produce true representations of these interesting and little-known phenomena. Digital reconstruction of the visions is utilised because it echoes the vivid, experiential, movie-like nature of what is being perceived. This paper critically analyses co-creation as a method for making digital assets, and the implications of the participants' vision impairments and the application of ethical safeguards are examined in this context. Importantly, this is research into a medical syndrome conducted for a non-medical, practice-based design purpose. CBS research to date has primarily been conducted in the ophthalmic, neurological, and psychiatric fields and approached with the primary concerns of those specialties. This research contributes a distinct approach incorporating practice-based digital design, autoethnography, and phenomenology. Autoethnography and phenomenology combine as a foundation, the first bringing understanding and insights, balanced by the second's philosophical, bigger-picture, and established approach. With further refining, it is anticipated that the research may be applied to other conditions in which articulating internal experiences proves challenging and the use of digital methods could aid communication. Both the research and CBS communities will benefit from the insights regarding the relationship between cognitive perceptions and the vision process. This research combines the digital visualisation of visions with an interest in the link between metaphor, embodied cognition, and image. The argument for a link between CBS visions and metaphor may appear evident due to the cross-category mapping of images that is necessary for comprehension: both CBS visions and metaphors are experiences of picturing images, often with lateral connections and imaginative associations.

Keywords: Charles Bonnet Syndrome, digital design, visual hallucinations, visual perception

Procedia PDF Downloads 44
141 The Impact of Artificial Intelligence on Digital Factory

Authors: Mona Awad Wanis Gad

Abstract:

The method of factory planning has changed a great deal, particularly when it comes to planning the factory building itself. Factory planning has the task of designing products, plants, processes, organisation, areas, and the construction of a factory. Regular restructuring is becoming more important in order to preserve the competitiveness of a factory. Regulations in new areas, shorter life cycles of products and production technology, as well as a VUCA world (Volatility, Uncertainty, Complexity and Ambiguity), lead to more frequent restructuring measures within a factory. A digital factory model is the planning foundation for rebuilding measures and becomes a critical tool. Furthermore, digital building models are increasingly being used in factories to support facility management and manufacturing processes. First, different types of digital factory models are investigated, and their properties and usability for various use cases are analysed. Within the scope of the research, point cloud models, building information models, photogrammetry models, and models enriched with sensor data are examined. It is investigated which digital models permit a simple integration of sensor data and where the differences lie. Then, possible application areas of digital factory models are determined by a survey, and the respective digital factory models are assigned to the application areas. Finally, an application case from maintenance is selected and implemented with the help of the most suitable digital factory model. It is shown how a fully digitalised maintenance process can be supported by a digital factory model through the provision of data; among other functions, the digital factory model is used for indoor navigation, data provision, and the display of sensor data. In summary, the paper proposes a structuring of digital factory models that concentrates on the geometric representation of a factory building and its technical facilities, demonstrates a practical application case, and evaluates the systematic selection of digital factory models with the corresponding application cases.

Keywords: augmented reality, building information modeling, digital factory model, factory planning, restructuring, photogrammetry, maintenance

Procedia PDF Downloads 37
140 Polymeric Sustained Biodegradable Patch Formulation for Wound Healing

Authors: Abhay Asthana, Gyati Shilakari Asthana

Abstract:

It is patient compliance and stability, in combination with controlled drug delivery and biocompatibility, that form the core features in the present research and development of a sustained biodegradable patch formulation intended for wound healing. The aim was to impart sustained degradation, a sterile formulation, significant folding endurance, elasticity, biodegradability, bio-acceptability, and strength. The optimized formulation was developed using components including polymers (hydroxypropyl methylcellulose, ethyl cellulose, and gelatin), citric acid-PEG-citric acid (CPEGC) triblock dendrimers, and active curcumin. The polymeric mixture was dissolved in geometric order in a suitable medium through continuous stirring under ambient conditions. With continued stirring, curcumin was added with the aid of DCM and methanol in an optimized ratio to obtain a homogeneous dispersion. The dispersion was sonicated at an optimized frequency for a given time and later cast into patch form. All steps were carried out under strict aseptic conditions. The formulations obtained in the acceptable working range were selected based on thickness, uniformity of drug content, smooth texture, flexibility, and brittleness. The optimized patch, kept on stability using butter paper in a sterile pack, displayed a folding endurance in the range of 20 to 23 folds without any evidence of cracking at room temperature (RT) (24 ± 2°C). The patch displayed acceptable parameters after stability studies conducted under refrigerated conditions (8 ± 0.2°C) and at RT (24 ± 2°C) for up to 90 days. Further, no significant changes were observed in critical parameters such as elasticity, biodegradability, drug release, and drug content during stability studies conducted at RT (24 ± 2°C) for 45 and 90 days. The drug content was in the range of 95 to 102%, the moisture content did not exceed 19.2%, and the patch passed the content uniformity test. The percentage cumulative drug release was found to be 80% in 12 h and matched the biodegradation rate, with drug release correlation factor R² > 0.9. The biodegradable patch formulation developed shows promising results in terms of stability and release profiles.

Keywords: sustained biodegradation, wound healing, polymers, stability

Procedia PDF Downloads 332
139 Simulation of Focusing of Diamagnetic Particles in Ferrofluid Microflows with a Single Set of Overhead Permanent Magnets

Authors: Shuang Chen, Zongqian Shi, Jiajia Sun, Mingjia Li

Abstract:

Microfluidics is a technology in which small amounts of fluids are manipulated using channels with dimensions of tens to hundreds of micrometers. At present, this technology is required for several applications in fields including disease diagnostics, genetic engineering, and environmental monitoring. Among these applications, the manipulation of microparticles and cells in microfluidic devices, and especially their separation, has attracted considerable attention. In a magnetic field, separation methods include positive and negative magnetophoresis. By comparison, negative magnetophoresis is a label-free technology with many advantages, e.g., easy operation, low cost, and simple design. Before particles or cells are separated, focusing them into a single tight stream is usually a necessary upstream operation. In this work, the focusing of diamagnetic particles in ferrofluid microflows with a single set of overhead permanent magnets is investigated numerically. The geometric model of the simulation is based on the configuration of previous experiments. The straight microchannel is 24 mm long and has a rectangular cross-section 100 μm wide and 50 μm deep. Spherical diamagnetic particles 10 μm in diameter are suspended in the ferrofluid. The initial concentration of the ferrofluid c₀ is 0.096%, and the flow rate of the ferrofluid is 1.8 mL/h. The magnetic field is induced by five identical rectangular neodymium-iron-boron permanent magnets (1/8 × 1/8 × 1/8 in.) and is calculated by the equivalent charge source (ECS) method. The flow of the ferrofluid is governed by the Navier-Stokes equations. The trajectories of the particles are solved by the discrete phase model (DPM) in the ANSYS FLUENT program, and the positions of the diamagnetic particles are recorded in a transient simulation. Our simulation shows results consistent with the mentioned experiments: diamagnetic particles are gradually focused in the ferrofluid under the magnetic field. In addition, diamagnetic particle focusing is studied by varying the flow rate of the ferrofluid; in agreement with the experiments, focusing improves as the flow rate increases. Furthermore, the influence of other factors on focusing is investigated, e.g., the width and depth of the microchannel, the concentration of the ferrofluid, and the diameter of the diamagnetic particles.
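
For intuition about the physics being simulated (this is not the authors' ANSYS FLUENT/ECS implementation), a minimal sketch of the negative magnetophoretic force on a diamagnetic particle in a ferrofluid, F = μ₀VΔχ∇(H²)/2, evaluated on a toy one-dimensional field profile, might look like this; the susceptibilities and field profile are assumptions:

```python
# Minimal sketch: negative magnetophoretic force on a diamagnetic particle
# suspended in a ferrofluid, F = mu0 * V * (chi_p - chi_f) * grad(H^2) / 2.
import numpy as np

MU0 = 4e-7 * np.pi                 # vacuum permeability (T*m/A)
d = 10e-6                          # particle diameter (m), as in the abstract
V = np.pi * d**3 / 6               # particle volume (m^3)
chi_p = -9.0e-6                    # particle susceptibility (assumed)
chi_f = 1.2e-3                     # dilute-ferrofluid susceptibility (assumed)

y = np.linspace(0, 100e-6, 201)    # position across the channel width (m)
H = 4e4 * np.exp(-y / 50e-6)       # toy field-magnitude profile (A/m)

# Central-difference gradient of H^2; since chi_p < chi_f, the force points
# toward the field minimum, which is what focuses the particles.
F = 0.5 * MU0 * V * (chi_p - chi_f) * np.gradient(H**2, y)
print(f"peak |F| = {np.abs(F).max():.2e} N")
```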

Keywords: diamagnetic particle, focusing, microfluidics, permanent magnet

Procedia PDF Downloads 130
138 Developing Pedagogy for Argumentation and Teacher Agency: An Educational Design Study in the UK

Authors: Zeynep Guler

Abstract:

Argumentation and the production of scientific arguments are essential components for helping students become scientifically literate by engaging them in constructing and critiquing ideas. Incorporating argumentation into science classrooms is challenging and can be a long-term process for both students and teachers. Students have difficulty engaging in tasks that require them to craft arguments, evaluate them to seek weaknesses, and revise them. Teachers also struggle to facilitate argumentation when they have underdeveloped science practices, underdeveloped pedagogical knowledge for teaching science as argumentation, or underdeveloped teaching practice with argumentation (or a combination of all three). Thus, there is a need to support teachers in developing a pedagogy for science teaching as argumentation, in planning and implementing teaching practice that facilitates argumentation, and in becoming more agentic in this regard. Looking specifically at the experience of agency within education, it is arguable that agency is necessary for teachers' renegotiation of professional purposes and practices in the light of changing educational practices. This study investigated how science teachers develop a pedagogy for argumentation, both individually and with their colleagues, and how teachers become more agentic (or not) through active engagement with their contexts-for-action (an ecological understanding of agency) in order to positively influence or change their practice and their students' engagement with argumentation over two academic years. Through an educational design study, this research was conducted with three secondary science teachers (key stage 3, year 7, students aged 11-12) in the UK to find out whether similar or different patterns of developing pedagogy for argumentation and of becoming more agentic emerge as the teachers plan and implement a cycle of activities while teaching science with argumentation. Data from video and audio recordings of classroom practice and from open-ended interviews with the science teachers were analysed using content analysis. The findings indicated that all the science teachers perceived strong agency in their opportunities to develop and apply pedagogical practices within the classroom. The teachers proactively shaped their practices and classroom contexts in ways that went over and above the amendments to their pedagogy. They demonstrated some outcomes in developing pedagogy for argumentation and in becoming more agentic in their teaching as a result of collaboration with their colleagues and the researcher; some appeared more agentic than others. The role of collaboration between colleagues was seen as crucial for the teachers' practice in the schools: close collaboration and support from other teachers in planning and implementing new educational innovations were seen as crucial for the development of pedagogy and for becoming more agentic in practice. The teachers needed to understand the importance of scientific argumentation, but also how it can be planned and integrated into classroom practice. They also perceived constraints arising from their lack of competence and knowledge in posing appropriate questions to help the students engage in argumentation and in supporting the students' construction of oral and written arguments.

Keywords: argumentation, teacher professional development, teacher agency, students' construction of argument

Procedia PDF Downloads 133
137 A Four-Step Ortho-Rectification Procedure for Geo-Referencing Video Streams from a Low-Cost UAV

Authors: B. O. Olawale, C. R. Chatwin, R. C. D. Young, P. M. Birch, F. O. Faithpraise, A. O. Olukiran

Abstract:

Ortho-rectification is the process of geometrically correcting an aerial image so that its scale is uniform. The ortho-image formed by the process is corrected for lens distortion, topographic relief, and camera tilt, and can be used to measure true distances because it is an accurate representation of the Earth's surface. Ortho-rectification and geo-referencing are essential to pinpoint the exact location of targets in video imagery acquired from a UAV platform. This can only be achieved by comparing such video imagery with an existing digital map, and that comparison is only possible once the image has been ortho-rectified into the same coordinate system as the map. The video image sequences from the UAV platform must be geo-registered, that is, each video frame must carry the necessary camera information before the ortho-rectification process is performed. Each rectified image frame can then be mosaicked together to form a seamless image map covering the selected area, which can be compared with an existing map for geo-referencing. In this paper, we present a four-step ortho-rectification procedure for real-time geo-referencing of video data from a low-cost UAV equipped with a multi-sensor system. The basic steps of the real-time ortho-rectification are: (1) decompilation of the video stream into individual frames; (2) determination of the interior camera orientation parameters; (3) determination of the relative exterior orientation parameters of each video frame with respect to the others; (4) determination of the absolute exterior orientation parameters, using a self-calibration adjustment with the aid of a mathematical model. Each ortho-rectified video frame is then mosaicked together to produce a 2-D planimetric map, which can be compared with a well-referenced existing digital map for the purposes of geo-referencing and aerial surveillance. A test field located in Abuja, Nigeria was used to test our method: fifteen minutes of video and telemetry data were collected with the UAV and processed using the four-step ortho-rectification procedure. The results demonstrated that geometric measurements of the control field from the ortho-images are more reliable than those from the original perspective photographs when used to pinpoint the exact location of targets in the video imagery acquired by the UAV. The 2-D planimetric accuracy, when compared with the 6 control points measured by a GPS receiver, is between 3 and 5 meters.
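
A minimal sketch of steps (1) and (2), assuming OpenCV is available; the camera matrix, distortion coefficients, and file name are placeholders, since the actual interior orientation would come from camera calibration:

```python
# Hedged sketch of steps (1) and (2) of the four-step procedure: decompiling
# a video stream into frames and removing lens distortion using the interior
# orientation. K and dist below are assumed values, not calibration results.
import cv2
import numpy as np

K = np.array([[1200.0, 0.0, 640.0],      # assumed focal length (px)
              [0.0, 1200.0, 360.0],      # and principal point
              [0.0, 0.0, 1.0]])
dist = np.array([-0.12, 0.05, 0.0, 0.0, 0.0])  # assumed distortion terms

cap = cv2.VideoCapture("uav_stream.mp4")       # hypothetical file name
frame_id = 0
while True:
    ok, frame = cap.read()                     # step (1): frame decompilation
    if not ok:
        break
    undistorted = cv2.undistort(frame, K, dist)  # step (2): remove distortion
    cv2.imwrite(f"frame_{frame_id:05d}.png", undistorted)
    frame_id += 1
cap.release()
```

Steps (3) and (4) would then recover exterior orientation per frame before mosaicking, which is beyond this fragment.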

Keywords: geo-referencing, ortho-rectification, video frame, self-calibration

Procedia PDF Downloads 478
136 Generating 3D Battery Cathode Microstructures using Gaussian Mixture Models and Pix2Pix

Authors: Wesley Teskey, Vedran Glavas, Julian Wegener

Abstract:

Generating battery cathode microstructures is an important area of research, given the proliferation of automotive batteries. Currently, finite element analysis (FEA) is often used to simulate battery cathode microstructures before physical batteries are manufactured and tested to verify the simulation results. A key drawback of FEA, however, is that it is very slow in terms of computational runtime. Generative AI offers a decisive speed advantage over FEA and can therefore evaluate very large numbers of candidate microstructures; from the AI-generated candidates, a subset of promising microstructures can then be selected for further validation using FEA. Leveraging this speed advantage allows for a better final microstructure selection, because many more candidates can be evaluated. In the approach presented, 3D battery cathode candidate microstructures are generated using Gaussian mixture models (GMMs) and pix2pix. The approach first uses GMMs to generate a population of spheres (representing the 'active material' of the cathode). Once the spheres have been sampled from the GMM, they are placed within a microstructure. Subsequently, pix2pix sweeps over the 3D microstructure iteratively, slice by slice, and adds detail to determine which portions of the microstructure become electrolyte and which become binder. In this manner, each subsequent slice of the microstructure is evaluated by pix2pix, with the previously processed layers of the microstructure as its inputs. Feeding the previously fully processed layers into pix2pix ensures that candidate microstructures represent a realistic physical reality: the locations of electrolyte and binder in each layer must reasonably match those in the previous layers to ensure geometric continuity. Using this approach, a 10x to 100x speed increase was achieved when generating candidate microstructures with AI compared to an FEA-only approach. A key metric for evaluating microstructures was the specific power that a microstructure would be able to produce. The best generative AI result was a 12% increase in specific power for a candidate microstructure compared to what an FEA-only approach produced, and this increase was verified by FEA simulation.
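
A minimal sketch of the first stage only, under the assumption that sphere descriptors (centre coordinates and radius) are the GMM features; the training array stands in for real tomography data, and the pix2pix stage is omitted:

```python
# Sketch of the GMM stage (not the authors' pipeline): fit a Gaussian Mixture
# Model to observed sphere descriptors (x, y, z, radius), sample new spheres,
# and voxelize them as "active material" in a 3D grid.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Stand-in for sphere descriptors extracted from real microstructure data:
observed = rng.uniform([0, 0, 0, 2], [64, 64, 64, 6], size=(500, 4))

gmm = GaussianMixture(n_components=8, random_state=0).fit(observed)
spheres, _ = gmm.sample(300)                      # new candidate spheres

grid = np.zeros((64, 64, 64), dtype=np.uint8)     # 0 = not yet assigned
zz, yy, xx = np.mgrid[0:64, 0:64, 0:64]
for cx, cy, cz, r in spheres:
    mask = (xx - cx)**2 + (yy - cy)**2 + (zz - cz)**2 <= r**2
    grid[mask] = 1                                # 1 = active material
print(f"active-material volume fraction: {grid.mean():.2f}")
```

A pix2pix model would then sweep this grid slice by slice to assign the remaining voxels to electrolyte or binder, conditioned on the previously processed slices.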

Keywords: finite element analysis, Gaussian mixture models, generative design, pix2pix, structural design

Procedia PDF Downloads 107
135 Skin-Dose Mapping for Patients Undergoing Interventional Radiology Procedures: Clinical Experimentations versus a Mathematical Model

Authors: Aya Al Masri, Stefaan Carpentier, Fabrice Leroy, Thibault Julien, Safoin Aktaou, Malorie Martin, Fouad Maaloul

Abstract:

Introduction: During an interventional radiology (IR) procedure, the patient's skin dose may become high enough for burns, necrosis, and ulceration to appear. In order to prevent these deterministic effects, an accurate calculation of the patient's skin-dose mapping is essential. For most machines, the dose-area product (DAP) and the fluoroscopy time are the only information available to the operator, and these two parameters are a very poor indicator of the peak skin dose. We developed a mathematical model that reconstructs the magnitude (delivered dose), shape, and localization of each irradiation field on the patient's skin. If a critical dose is exceeded, the system generates warning alerts. We present the results of its comparison with clinical studies. Materials and methods: The skin-dose mapping of our mathematical model was compared with clinical studies in two stages. 1. First, clinical tests were performed on patient phantoms. Gafchromic films were placed on the table of the IR machine under PMMA plates (thickness = 20 cm) that simulate the patient. After irradiation, the film darkening is proportional to the radiation dose received by the patient's back and reflects the shape of the X-ray field. After film scanning and analysis, the exact dose value can be obtained at each point of the mapping. Four experiments were performed, constituting a total of 34 acquisition incidences covering all possible exposure configurations. 2. Second, clinical trials were carried out on real patients during real chronic total occlusion (CTO) procedures, for a total of 80 cases. Gafchromic films were placed at the backs of the patients. We compared the dose values, as well as the distribution and shape of the irradiation fields, between the skin-dose mapping of our mathematical model and the Gafchromic films. Results: The comparison of the dose values shows a difference of less than 15%. Moreover, our model shows very good geometric accuracy: all fields have the same shape, size, and location (uncertainty < 5%). Conclusion: This study shows that our model is a reliable tool to warn physicians when a high radiation dose is reached. Thus, deterministic effects can be avoided.
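
For illustration only (this is not the model described above), accumulating rectangular field footprints on a discretised skin plane and raising an alert at a threshold dose could look as follows; all field positions, sizes, doses, and the threshold are assumptions:

```python
# Illustrative sketch: accumulate rectangular irradiation-field footprints on
# a 2-D skin grid and flag when the peak skin dose crosses an alert threshold.
import numpy as np

grid = np.zeros((400, 400))          # skin map, 1 mm pixels (assumed)
ALERT_GY = 2.0                       # warning threshold in Gy (assumed)

# (x0, y0, width, height, dose in Gy) for each acquisition incidence
fields = [(100, 120, 150, 100, 0.8),
          (130, 140, 150, 100, 0.9),
          (160, 150, 120, 90, 0.7)]

for x0, y0, w, h, dose in fields:
    grid[y0:y0 + h, x0:x0 + w] += dose   # overlapping fields accumulate dose

peak = grid.max()
if peak > ALERT_GY:
    print(f"ALERT: peak skin dose {peak:.2f} Gy exceeds {ALERT_GY} Gy")
```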

Keywords: clinical experimentation, interventional radiology, mathematical model, patient's skin-dose mapping

Procedia PDF Downloads 140
134 Fabrication and Characterisation of Additive Manufactured Ti-6Al-4V Parts by Laser Powder Bed Fusion Technique

Authors: Norica Godja, Andreas Schindel, Luka Payrits, Zsolt Pasztor, Bálint Hegedüs, Petr Homola, Jan Horňas, Jiří Běhal, Roman Ruzek, Martin Holzleitner, Sascha Senck

Abstract:

In order to reduce fuel consumption and CO₂ emissions in the aviation sector, innovative solutions are being sought to reduce the weight of aircraft, including additive manufacturing (AM). Of particular importance are the excellent mechanical properties required for aircraft structures. Ti6Al4V alloys, with their high mechanical properties in relation to weight, can reduce the weight of aircraft structures compared to structures made of steel and aluminium. Currently, conventional processes such as casting and CNC machining are used to obtain the desired structures, resulting in high raw-material removal, which in turn leads to higher costs and environmental impact. AM offers advantages in terms of weight, lead time, design, and functionality, and enables the realisation of alternative geometric shapes with high mechanical properties. However, current technological shortcomings mean that AM has not yet been approved for structural components with high safety requirements. An assessment of damage tolerance for AM parts is required, and quality control needs to be improved. Pores and other defects cannot be completely avoided at present, but they should be kept to a minimum during manufacture. The mechanical properties of the manufactured parts can be further improved by various treatments. The influence of different treatment methods (heat treatment, CNC milling, electropolishing, chemical polishing) and operating parameters was investigated by scanning electron microscopy with energy-dispersive X-ray spectroscopy (SEM/EDX), X-ray diffraction (XRD), electron backscatter diffraction (EBSD), and focused ion beam (FIB) measurements, taking into account surface roughness, possible anomalies in the chemical composition of the surface, and possible cracks. The results of the characterisation of the as-built and treated samples are discussed and presented in this paper. These results were generated within the framework of the 3TANIUM project, which is financed by the EU under contract number 101007830.

Keywords: Ti6Al4V alloys, laser powder bed fusion, damage tolerance, heat treatment, electropolishing, potential cracking

Procedia PDF Downloads 85
133 Facies Sedimentology and Astronomical Calibration of the Reineche Member (Lutetian)

Authors: Jihede Haj Messaoud, Hamdi Omar, Hela Fakhfakh Ben Jemia, Chokri Yaich

Abstract:

The Upper Lutetian alternating marl-limestone succession of the Reineche Member was deposited on a warm, shallow carbonate platform that permitted Nummulites proliferation. A high-resolution study of the 30-meter-thick, Nummulites-bearing Reineche Member, cropping out in Central Tunisia (Jebel Siouf), has been undertaken on its pronounced cyclical sedimentary sequences in order to investigate the periodicity of the cycles and the related orbital-scale oceanic and climatic changes. The palaeoenvironmental and palaeoclimatic record is preserved in several proxies obtained through high-resolution sampling and laboratory measurement and analysis, namely magnetic susceptibility (MS) and carbonate content, in conjunction with wireline logging tools. Time-series analysis of the proxies permits establishing the orders of cyclicity present in the studied interval, which can be linked to orbital cycles. The MS record provides a high-resolution proxy for relative sea-level change in the Late Lutetian strata. Spectral analysis of the MS fluctuations confirmed the orbital forcing through the presence of the complete suite of orbital frequencies: precession at 23 ka, obliquity at 41 ka, and notably the two modes of eccentricity at 100 and 405 ka. Based on the two periodic sedimentary cycles detected by wavelet analysis of the proxy fluctuations, which coincide with the long-term 405 ka eccentricity cycle, the Reineche Member spanned 0.8 Myr. Wireline logging tools, such as gamma-ray and sonic logs, were used as proxies to decipher cyclicity and trends in sedimentation and contribute to identifying and correlating units. They are used to constrain the highest-frequency cyclicity, which is modulated by a long-wavelength cycling apparently controlled by clay content. Interpreted as the result of variations in carbonate productivity, the marl-limestone couplets are suggested to represent the sedimentary response to orbital forcing. The calculation of cycle durations through the Reineche Member serves as a geochronometer and permits astronomical calibration of the geologic time scale. Furthermore, MS coupled with carbonate content and fossil occurrences provides strong evidence for combined detrital-input and marine surface carbonate-productivity cycles. These two synchronous processes were driven by the precession index and 'fingerprinted' in the basic marl-limestone couplets, modulated by orbital eccentricity.
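
A minimal sketch of the spectral step, run here on a synthetic magnetic-susceptibility series rather than the measured one; the depth-to-age conversion uses the sedimentation rate implied by the abstract's own figures (30 m over 0.8 Myr, i.e. 37.5 m/Myr):

```python
# Hedged illustration: search a (synthetic) magnetic-susceptibility series
# sampled along depth for the orbital periodicities named in the abstract.
import numpy as np
from scipy.signal import lombscargle

sed_rate = 30.0 / 0.8                      # m/Myr, from the abstract's figures
depth = np.linspace(0.0, 30.0, 600)        # m
age = depth / sed_rate * 1e3               # ka

# Synthetic MS proxy built from eccentricity (405, 100 ka), obliquity (41 ka),
# and precession (23 ka) plus noise; real data would replace this line.
ms = (np.sin(2*np.pi*age/405) + 0.6*np.sin(2*np.pi*age/100)
      + 0.4*np.sin(2*np.pi*age/41) + 0.3*np.sin(2*np.pi*age/23)
      + 0.2*np.random.default_rng(1).normal(size=age.size))

periods = np.linspace(15, 500, 2000)                     # trial periods (ka)
power = lombscargle(age, ms - ms.mean(), 2*np.pi/periods)  # angular freqs
for p in (405, 100, 41, 23):                             # expected peaks
    print(f"{p:>4} ka: power {power[np.argmin(abs(periods - p))]:.1f}")
```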

Keywords: magnetic susceptibility, cyclostratigraphy, orbital forcing, spectral analysis, Lutetian

Procedia PDF Downloads 294
132 Reverse Engineering of a Secondary Structure of a Helicopter: A Study Case

Authors: Jose Daniel Giraldo Arias, Camilo Rojas Gomez, David Villegas Delgado, Gullermo Idarraga Alarcon, Juan Meza Meza

Abstract:

Reverse engineering processes are widely used in industry, with the main goal of determining the materials and the manufacturing route used to produce a component. Many characterization techniques and computational tools are used to obtain this information. A case study of reverse engineering applied to a secondary sandwich-hybrid structure used in a helicopter is presented. The methodology consists of five main steps, which can be applied to any similar component: collecting information about the service conditions of the part; disassembly and dimensional characterization; functional characterization; material-properties characterization; and manufacturing-process characterization, which together provide all the traceability evidence for the materials and processes of aeronautical products that ensures their airworthiness. A detailed explanation of each step is given. The criticality and functionality of each part were analysed, together with state-of-the-art information and information obtained from interviews with the technical groups of the helicopter's operators; 3D optical scanning, standard and advanced materials characterization techniques, and finite element simulation allow all the characteristics of the materials used in the manufacture of the component to be obtained. It was found that most of the materials are quite common in the aeronautical industry, including Kevlar, carbon and glass fibers, aluminum honeycomb core, epoxy resin, and epoxy adhesive. The stacking sequence and fiber volume fraction are critical for the mechanical behavior; an acid digestion method was used to determine them. This also helps in determining the manufacturing technique, which in this case was vacuum bagging. Samples of the material were manufactured and subjected to mechanical and environmental tests. These results were compared with those obtained during reverse engineering, allowing the conclusion that the materials and manufacturing route were correctly determined. Tooling for manufacture was designed and produced according to the geometry and process requirements. The part was manufactured, and the required mechanical and environmental tests were performed. Finally, geometric characterization and non-destructive techniques verified the quality of the part.
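
A short sketch of the acid-digestion arithmetic (in the style of ASTM D3171 matrix digestion); the masses and densities below are illustrative, not values from the study:

```python
# Fiber volume fraction from acid digestion: the matrix is digested away,
# the remaining fibers are weighed, and volumes follow from the densities.
def fiber_volume_fraction(m_composite_g, m_fiber_g, rho_composite, rho_fiber):
    """Vf = (m_fiber / rho_fiber) / (m_composite / rho_composite)."""
    v_fiber = m_fiber_g / rho_fiber          # fiber volume (cm^3)
    v_total = m_composite_g / rho_composite  # specimen volume (cm^3)
    return v_fiber / v_total

# Example with assumed numbers for a carbon/epoxy coupon:
vf = fiber_volume_fraction(m_composite_g=2.00, m_fiber_g=1.30,
                           rho_composite=1.55, rho_fiber=1.78)
print(f"fiber volume fraction: {vf:.2f}")   # ~0.57 for these assumed inputs
```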

Keywords: reverse engineering, sandwich-structured composite parts, helicopter, mechanical properties, prototype

Procedia PDF Downloads 418
131 Comparison of Methodologies to Compute the Probabilistic Seismic Hazard Involving Faults and Associated Uncertainties

Authors: Aude Gounelle, Gloria Senfaute, Ludivine Saint-Mard, Thomas Chartier

Abstract:

The long-term deformation rates of faults are not fully captured by probabilistic seismic hazard assessment (PSHA). PSHA models that use catalogues to develop area or smoothed-seismicity sources are limited by the data available to constrain future earthquake activity rates. The integration of faults into PSHA can at least partially address long-term deformation. However, careful treatment of fault sources is required, particularly in low-strain-rate regions, where the estimated seismic hazard levels are highly sensitive to assumptions concerning fault geometry, segmentation, and slip rate. When integrating faults into PSHA, various constraints on earthquake rates from geologic and seismologic data have to be satisfied; in low-strain-rate regions, where such data are scarce, this is especially challenging. Including faults in PSHA requires converting the geologic and seismologic data into fault geometries and slip rates, and then into earthquake activity rates. Several approaches exist for translating slip rates into earthquake activity rates. In the most frequently used approach, the background earthquakes are handled with a truncated model, in which earthquakes with a magnitude lower than or equal to a threshold magnitude (Mw) occur in the background zone, at a rate defined by the earthquake catalogue, while magnitudes higher than the threshold are located on the fault, at a rate defined by the average slip rate of the fault. As highlighted by several studies, seismic events with magnitudes above the selected threshold may occur in the background and not only on the fault, especially in regions of slow tectonic deformation. It is also known that several sections of a fault, or several faults, can rupture during a single fault-to-fault rupture. It is therefore essential to apply a consistent modelling procedure that allows a large set of possible fault-to-fault ruptures to occur aleatorily in the hazard model while reflecting the individual slip rate of each section of the fault. In 2019, a tool named SHERIFS (Seismic Hazard and Earthquake Rates in Fault Systems) was published. The tool implements a methodology that calculates the earthquake rates in a fault system by converting the slip-rate budget of each fault into rupture rates for all possible single-fault and fault-to-fault ruptures. The objective of this paper is to compare the SHERIFS method with another frequently used model, to analyse the impact on the seismic hazard, and, through sensitivity studies, to better understand the influence of key parameters and assumptions. For this application, a simplified but realistic case study was selected in an area of moderate to high seismicity (southeast France) where the fault is assumed to have a low strain rate.
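
To make the slip-rate-to-activity-rate conversion concrete, here is a simplified moment-budget sketch (a common building block behind tools like SHERIFS, not its actual code); the fault geometry, slip rate, and characteristic magnitude are assumptions:

```python
# Simplified sketch: convert a fault slip rate into an earthquake rate via a
# seismic-moment budget, assuming all moment is released in characteristic
# events of one magnitude. Real models distribute moment across a
# magnitude-frequency distribution and many rupture scenarios.
import math

MU = 3.0e10                       # crustal shear modulus (Pa)
length_m = 60e3                   # fault length (assumed)
width_m = 15e3                    # seismogenic width (assumed)
slip_rate = 1.0e-3                # 1 mm/yr expressed in m/yr

# Annual seismic-moment budget of the fault (N*m per year)
m0_rate = MU * length_m * width_m * slip_rate

def moment_from_mw(mw):
    """Hanks & Kanamori (1979): M0 [N*m] = 10^(1.5*Mw + 9.1)."""
    return 10 ** (1.5 * mw + 9.1)

mw_char = 6.8                     # assumed characteristic magnitude
rate = m0_rate / moment_from_mw(mw_char)
print(f"Mw {mw_char} rate: {rate:.2e} /yr (return period ~{1/rate:,.0f} yr)")
```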

Keywords: deformation rates, faults, probabilistic seismic hazard, PSHA

Procedia PDF Downloads 66