Search results for: shape error
988 Investigations on the Influence of Web Openings on the Load Bearing Behavior of Steel Beams
Authors: Felix Eyben, Simon Schaffrath, Markus Feldmann
Abstract:
A building should maximize the potential for use through its design. Therefore, flexible use is always important when designing a steel structure. To create flexibility, steel beams with web openings are increasingly used, because these offer the advantage that cables, pipes and other technical equipment can easily be routed through without detours, allowing for more space-saving and aesthetically pleasing construction. This can also significantly reduce the height of ceiling systems. Until now, beams with web openings were not explicitly considered in the European standard. However, this is to be done with the new EN 1993-1-13, in which design rules for different opening forms are defined. In order to further develop the design concepts, beams with web openings under bending are therefore to be investigated in terms of damage mechanics as part of a German national research project aiming to optimize the verifications for steel structures based on a wider database and a validated damage prediction. For this purpose, fundamental factors influencing the load-bearing behavior of girders with web openings under bending load were first investigated numerically without taking material damage into account. Various parameter studies were carried out for this purpose. The factors under study included the opening shape, size and position as well as structural aspects such as the span length, arrangement of stiffeners and loading situation. The load-bearing behavior is evaluated using the resulting load-deformation curves. These results are compared with the design rules and critically analyzed. Experimental tests are also planned based on these results. Moreover, the implementation of damage mechanics in the form of the modified Bai-Wierzbicki model was examined. Once the experimental tests have been carried out, the numerical models will be validated and further influencing factors will be investigated on the basis of parametric studies.
Keywords: damage mechanics, finite element, steel structures, web openings
Procedia PDF Downloads 174
987 Stability Design by Geometrical Nonlinear Analysis Using Equivalent Geometric Imperfections
Authors: S. Fominow, C. Dobert
Abstract:
The present article describes the research that deals with the development of equivalent geometric imperfections for the stability design of steel members considering lateral-torsional buckling. The application of these equivalent imperfections takes into account the stiffness-reducing effects due to inelasticity and residual stresses, which lead to a reduction of the load carrying capacity of slender members and structures. This allows the application of a simplified design method, that is performed in three steps. Application of equivalent geometric imperfections, determination of internal forces using geometrical non-linear analysis (GNIA) and verification of the cross-section resistance at the most unfavourable location. All three verification steps are closely related and influence the results. The derivation of the equivalent imperfections was carried out in several steps. First, reference lateral-torsional buckling resistances for various rolled I-sections, slenderness grades, load shapes and steel grades were determined. This was done either with geometric and material non-linear analysis with geometrical imperfections and residual stresses (GMNIA) or for standard cases based on the equivalent member method. With the aim of obtaining identical lateral-torsional buckling resistances as the reference resistances from the application of the design method, the required sizes for equivalent imperfections were derived. For this purpose, a program based on the FEM method has been developed. Based on these results, several proposals for the specification of equivalent geometric imperfections have been developed. These differ in the shape of the applied equivalent geometric imperfection, the model of the cross-sectional resistance and the steel grade. The proposed design methods allow a wide range of applications and a reliable calculation of the lateral-torsional buckling resistances, as comparisons between the calculated resistances and the reference resistances have shown.Keywords: equivalent geometric imperfections, GMNIA, lateral-torsional buckling, non-linear finite element analysis
Procedia PDF Downloads 156
986 Depth Camera Aided Dead-Reckoning Localization of Autonomous Mobile Robots in Unstructured GNSS-Denied Environments
Authors: David L. Olson, Stephen B. H. Bruder, Adam S. Watkins, Cleon E. Davis
Abstract:
In global navigation satellite system (GNSS)-denied settings such as indoor environments, autonomous mobile robots are often limited to dead-reckoning navigation techniques to determine their position, velocity, and attitude (PVA). Localization is typically accomplished by employing an inertial measurement unit (IMU), which, while precise in nature, accumulates errors rapidly and severely degrades the localization solution. Standard sensor fusion methods, such as Kalman filtering, aim to fuse precise IMU measurements with accurate aiding sensors to establish a precise and accurate solution. In indoor environments, where GNSS is unavailable and no other a priori information about the environment is known, effective sensor fusion is difficult to achieve, as accurate aiding sensor choices are sparse. However, an opportunity arises by employing a depth camera in the indoor environment. A depth camera can capture point clouds of the surrounding floors and walls. Extracting attitude from these surfaces can serve as an accurate aiding source, which directly combats errors that arise due to gyroscope imperfections. This configuration for sensor fusion leads to a dramatic reduction of PVA error compared to traditional aiding sensor configurations. This paper provides the theoretical basis for the depth camera aiding sensor method, initial expectations of performance benefit via simulation, and a hardware implementation, thus verifying its validity. Hardware implementation is performed on the Quanser Qbot 2™ mobile robot, with a Vector-Nav VN-200™ IMU and Kinect™ camera from Microsoft.
Keywords: autonomous mobile robotics, dead reckoning, depth camera, inertial navigation, Kalman filtering, localization, sensor fusion
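The aiding concept described above can be illustrated with a minimal sketch: a single-axis Kalman filter in which a biased gyroscope is propagated at high rate and periodically corrected by an attitude estimate taken from a depth-camera plane fit. This is not the filter used on the Qbot 2 hardware; the state vector, noise levels, and rates below are illustrative assumptions.

```python
import numpy as np

# Minimal single-axis attitude/gyro-bias Kalman filter sketch.
# State x = [attitude angle (rad), gyro bias (rad/s)].
dt = 0.01                      # IMU sample period (assumed)
F = np.array([[1.0, -dt],      # angle propagates with the biased gyro rate
              [0.0, 1.0]])     # bias modeled as a random walk
B = np.array([[dt], [0.0]])
H = np.array([[1.0, 0.0]])     # depth camera observes attitude directly
Q = np.diag([1e-6, 1e-8])      # process noise (assumed)
R = np.array([[1e-4]])         # depth-camera attitude noise (assumed)

x = np.zeros((2, 1))
P = np.eye(2) * 1e-3

def predict(gyro_rate):
    """Propagate the state with a raw gyro measurement."""
    global x, P
    x = F @ x + B * gyro_rate
    P = F @ P @ F.T + Q

def update(cam_attitude):
    """Correct drift with an attitude estimate from a depth-camera plane fit."""
    global x, P
    y = np.array([[cam_attitude]]) - H @ x          # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                  # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P

# Example: 1 s of gyro data with a constant bias, camera fix every 10 samples.
rng = np.random.default_rng(0)
true_bias = 0.02
for k in range(100):
    predict(gyro_rate=0.0 + true_bias + rng.normal(0, 0.001))
    if k % 10 == 0:
        update(cam_attitude=0.0 + rng.normal(0, 0.01))
print("estimated gyro bias:", float(x[1]))
```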
Procedia PDF Downloads 207
985 AI-Driven Solutions for Optimizing Master Data Management
Authors: Srinivas Vangari
Abstract:
In the era of big data, ensuring the accuracy, consistency, and reliability of critical data assets is crucial for data-driven enterprises. Master Data Management (MDM) plays a crucial role in this endeavor. This paper investigates the role of Artificial Intelligence (AI) in enhancing MDM, focusing on how AI-driven solutions can automate and optimize various stages of the master data lifecycle. By integrating AI (Quantitative and Qualitative Analysis) into processes such as data creation, maintenance, enrichment, and usage, organizations can achieve significant improvements in data quality and operational efficiency. Quantitative analysis is employed to measure the impact of AI on key metrics, including data accuracy, processing speed, and error reduction. For instance, our study demonstrates an 18% improvement in data accuracy and a 75% reduction in duplicate records across multiple systems post-AI implementation. Furthermore, AI’s predictive maintenance capabilities reduced data obsolescence by 22%, as indicated by statistical analyses of data usage patterns over a 12-month period. Complementing this, a qualitative analysis delves into the specific AI-driven strategies that enhance MDM practices, such as automating data entry and validation, which resulted in a 28% decrease in manual errors. Insights from case studies highlight how AI-driven data cleansing processes reduced inconsistencies by 25% and how AI-powered enrichment strategies improved data relevance by 24%, thus boosting decision-making accuracy. The findings demonstrate that AI significantly enhances data quality and integrity, leading to improved enterprise performance through cost reduction, increased compliance, and more accurate, real-time decision-making. These insights underscore the value of AI as a critical tool in modern data management strategies, offering a competitive edge to organizations that leverage its capabilities.Keywords: artificial intelligence, master data management, data governance, data quality
Procedia PDF Downloads 19
984 Teacher Knowledge: Unbridling Teacher Agency in the Context of Professional Development for Transformative Teaching and Learning
Authors: Bernice Badal
Abstract:
This article addresses a persistent challenge related to teacher agency in knowledge acquisition in professional development (PD) workshops in contexts of educational change, given that scholarship identifies a need for more teacher involvement and amplification of teacher's voices. Theoretical concepts are drawn from Bandura’s Social cognitive theory, incorporating the triadic causation model of agency to examine the reciprocal nature of the context, teacher characteristics, and systemic influences that shape how knowledge is transmitted and acquired in PD workshops. This qualitative study, using a mix of classroom observations and interviews, explored the political, contextual, and personal characteristics of teacher agency in PD through an analysis of data extracted from a PhD study. The narratives of six teachers from three township schools are examined to show how PD efforts in South Africa have failed to take account of the holistic development of teacher agency in knowledge dissemination and how this shapes teacher self-efficacy beliefs about being able to masterfully apply the tenets of the reform. Agency, teacher voice, and contextual considerations were used as markers of the quality of the training provided to understand how knowledge is acquired and meaning is made. The findings suggest that systemic influences of institutionally imposed PD offer partial understandings of the reform, which is offered in traditional formats that do not consider teacher empowerment in knowledge production and the development of teacher agency. Common in all the participants’ responses is the need for more information and training on the prescribed approach for teaching English as a second language; however, this paper holds the view that more information may not solve teachers’ dilemmas. Accordingly, it recommends a restructuring of the programme with facilitators being more cognisant of teacher agency for the development of transformative teachers. The findings of the study contribute to the field of teacher knowledge, teacher training, and professional development in the context of educational reforms.Keywords: teacher professional development, teacher voice, teacher agency, educational reforms, teacher knowledge
Procedia PDF Downloads 71
983 The Role of Metaphor in Communication
Authors: Fleura Shkëmbi, Valbona Treska
Abstract:
In elementary school, we learn that a metaphor is a decorative linguistic device just for poets. We now know, however, that it is also a crucial tactic that individuals employ to understand the universe, from fundamental ideas like time and causation to the most pressing societal challenges of today. Metaphor is the use of language to refer to something other than what it was originally intended for, or what it "literally" means, in order to suggest a similarity or establish a connection between the two. According to a study on metaphor and its effect on decision-making, people do not identify metaphors as relevant in their decisions; instead, they refer to more "substantive" (typically numerical) facts as the basis for their problem-solving decisions. Every day, metaphors saturate our lives via language, cognition, and action. Scholars in this tradition argue that our conceptions shape our views and interactions with others and that concepts define our reality. Metaphor is thus a highly helpful tool both for describing our experiences to others and for forming notions for ourselves. In therapeutic contexts, the shared goal of metaphors appears to be twofold. The cognitivist approach to metaphor regards it as one of the fundamental foundations of human communication. The benefits and disadvantages of utilizing the metaphor differ depending on the target domain that the metaphor portrays. The challenge of creating messages and surroundings that affect customers' notions of abstract ideas in a variety of industries, including health, hospitality, romance, and money, has been studied for decades in marketing and consumer psychology. The aim of this study is to examine, through a systematic literature review, the role of the metaphor in communication and in advertising. This study offers a selected analysis of this literature, concentrating on research on customer attitudes and product appraisal. The analysis of the data identifies potential research questions. With theoretical and applied implications for marketing, design, and persuasion, this study sheds light on how, when, and for whom metaphoric communications are powerful.
Keywords: metaphor, communication, advertising, cognition, action
Procedia PDF Downloads 99
982 Water Droplet Impact on Vibrating Rigid Superhydrophobic Surfaces
Authors: Jingcheng Ma, Patricia B. Weisensee, Young H. Shin, Yujin Chang, Junjiao Tian, William P. King, Nenad Miljkovic
Abstract:
Water droplet impact on surfaces is a ubiquitous phenomenon in both nature and industry. The transfer of mass, momentum and energy can be influenced by the time of contact between droplet and surface. In order to reduce the contact time, we study the influence of substrate motion prior to impact on the dynamics of droplet recoil. Using optical high-speed imaging, we investigated the impact dynamics of macroscopic water droplets (~2 mm) on rigid nanostructured superhydrophobic surfaces vibrating at 60-300 Hz with amplitudes of 0-3 mm. In addition, we studied the influence of the phase of the substrate at the moment of impact on the total contact time. We demonstrate that substrate vibration can alter droplet dynamics and decrease the total contact time by as much as 50% compared to impact on stationary rigid superhydrophobic surfaces. Impact analysis revealed that the vibration frequency mainly affected the maximum contact time, while the amplitude of vibration had little direct effect on the contact time. Through mathematical modeling, we show that the oscillation amplitude influences the probability density function of droplet impact at a given phase, and thus indirectly influences the average contact time. We also observed more vigorous droplet splashing and breakup during impact at larger amplitudes. Through semi-empirical mathematical modeling, we describe the relationship between contact time and vibration frequency, phase, and amplitude of the substrate. We also show that the maximum acceleration during the impact process is better suited as a threshold parameter for the onset of splashing than a Weber-number criterion. This study not only provides new insights into droplet impact physics on vibrating surfaces but also develops guidelines for the rational design of surfaces to achieve controllable droplet wetting in applications utilizing vibration.
Keywords: contact time, impact dynamics, oscillation, pear-shape droplet
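A minimal sketch of the phase-statistics argument above: droplets released at times uncorrelated with the substrate motion are dropped onto a sinusoidally vibrating surface, and the substrate phase at the moment of impact is collected. The amplitude, frequency, and release height are illustrative assumptions rather than the experimental values, and the simple free-fall/intersection model stands in for the full impact dynamics.

```python
import numpy as np

# Monte Carlo sketch: distribution of substrate phase at the moment of impact
# for droplets released at random times onto a vertically vibrating surface.
g = 9.81          # m/s^2
A = 1.5e-3        # vibration amplitude, m (assumed)
f = 150.0         # vibration frequency, Hz (assumed)
h0 = 0.05         # release height above the mean substrate level, m (assumed)

rng = np.random.default_rng(1)
phases_at_impact = []
for _ in range(5000):
    phi0 = rng.uniform(0.0, 2.0 * np.pi)            # random initial phase
    t = np.linspace(0.0, 0.15, 30000)
    z_drop = h0 - 0.5 * g * t**2                    # free-falling droplet
    z_sub = A * np.sin(2.0 * np.pi * f * t + phi0)  # vibrating substrate surface
    hit = np.argmax(z_drop <= z_sub)                # first sample where they meet
    phase = (2.0 * np.pi * f * t[hit] + phi0) % (2.0 * np.pi)
    phases_at_impact.append(phase)

hist, edges = np.histogram(phases_at_impact, bins=12, density=True)
for lo, hi, p in zip(edges[:-1], edges[1:], hist):
    print(f"phase {lo:4.2f}-{hi:4.2f} rad : density {p:.3f}")
```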
Procedia PDF Downloads 454
981 A Critical Exploration of Dominant Perspectives Regarding Inclusion and Disability: Shifts Toward Meaningful Approaches
Authors: Luigi Iannacci
Abstract:
This study critically explores how inclusion and disability are presently and problematically configured within education. As such, the pedagogies, discourses, and practices that shape this configuration are examined to forward a reconceptualization of disability as it relates to education and the inclusion of students with special needs in mainstream classroom contexts. The study examines how the dominant medical/deficit model of disability positions students with special needs and advocates for a shift towards a social/critical model of disability as applied to education and classrooms. This is demonstrated through a critical look at how language, processes, and ‘interventions’ name and address the deficits that people who have a disability are presumed to have and, as such, conceptualize these deficits as inherent flaws in need of ‘fixing.’ The study will demonstrate the necessary shifts in thinking, language and practice required to forward a critical/social model of disability. The ultimate aim of this research is to offer a much-needed reconceptualization of inclusion that recognizes disability as epistemology, identity, and diversity through a critical exploration of dominant discourses that impact language, policy, instruction and, ultimately, the experiences students with disabilities have within mainstream classrooms. The presentation seeks to explore disability as neurodiversity and therefore to elucidate how people with disabilities can demonstrate these ways of knowing within inclusive education that avoids superficial approaches that are not responsive to their needs. This research is, therefore, of interest and use to educators teaching at the elementary, secondary, and in-service levels as well as graduate students and scholars working in the areas of inclusion, special education, and literacy. Ultimately, the presentation attempts to foster a social justice and human rights-focused approach to inclusion that is responsive to students with disabilities and, as such, ensures a reconceptualization of present language, understandings and practices that continue to configure disability in problematic ways.
Keywords: inclusion, disability, critical approach, social justice
Procedia PDF Downloads 76
980 The Relationships between Energy Consumption, Carbon Dioxide (CO2) Emissions, and GDP for Egypt: Time Series Analysis, 1980-2010
Authors: Jinhoa Lee
Abstract:
The relationships between environmental quality, energy use and economic output have created growing attention over the past decades among researchers and policy makers. Focusing on the empirical aspects of the role of carbon dioxide (CO2) emissions and energy use in affecting the economic output, this paper is an effort to fulfill the gap in a comprehensive case study at a country level using modern econometric techniques. To achieve the goal, this country-specific study examines the short-run and long-run relationships among energy consumption (using disaggregated energy sources: crude oil, coal, natural gas, electricity), CO2 emissions and gross domestic product (GDP) for Egypt using time series analysis from the year 1980-2010. To investigate the relationships between the variables, this paper employs the Augmented Dickey-Fuller (ADF) test for stationarity, Johansen maximum likelihood method for co-integration and a Vector Error Correction Model (VECM) for both short- and long-run causality among the research variables for the sample. The long-run equilibrium in the VECM suggests some negative impacts of the CO2 emissions and the coal and natural gas use on the GDP. Conversely, a positive long-run causality from the electricity consumption to the GDP is found to be significant in Egypt during the period. In the short-run, some positive unidirectional causalities exist, running from the coal consumption to the GDP, and the CO2 emissions and the natural gas use. Further, the GDP and the electricity use are positively influenced by the consumption of petroleum products and the direct combustion of crude oil. Overall, the results support arguments that there are relationships among environmental quality, energy use, and economic output in both the short term and long term; however, the effects may differ due to the sources of energy, such as in the case of Egypt for the period of 1980-2010.Keywords: CO2 emissions, Egypt, energy consumption, GDP, time series analysis
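A sketch of the three-step econometric workflow described above (ADF unit-root tests, Johansen cointegration test, VECM estimation), written with statsmodels. The series below are synthetic placeholders standing in for the Egyptian annual data for 1980-2010; the variable names and lag choices are illustrative assumptions.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller
from statsmodels.tsa.vector_ar.vecm import coint_johansen, VECM

# Synthetic placeholder series sharing one stochastic trend (31 "years").
rng = np.random.default_rng(0)
n = 31
trend = np.cumsum(rng.normal(0.02, 0.05, n))
df = pd.DataFrame({
    "gdp":  trend + rng.normal(0, 0.02, n),
    "co2":  0.8 * trend + rng.normal(0, 0.02, n),
    "coal": 0.5 * trend + rng.normal(0, 0.02, n),
})

# 1) ADF unit-root test on each series in levels.
for col in df:
    stat, pval, *_ = adfuller(df[col], autolag="AIC")
    print(f"ADF {col}: stat={stat:.2f}, p={pval:.3f}")

# 2) Johansen cointegration test (constant term, one lagged difference).
jres = coint_johansen(df, det_order=0, k_ar_diff=1)
print("trace statistics:", np.round(jres.lr1, 2))
print("5% critical vals:", np.round(jres.cvt[:, 1], 2))

# 3) VECM with one cointegrating relation: beta holds the long-run
#    equilibrium relation, alpha the short-run adjustment coefficients.
vecm = VECM(df, k_ar_diff=1, coint_rank=1, deterministic="co").fit()
print("long-run relation (beta):", np.round(vecm.beta.ravel(), 3))
print("adjustment coefficients (alpha):", np.round(vecm.alpha.ravel(), 3))
```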
Procedia PDF Downloads 615
979 Chromatographic Preparation and Performance on Zinc Ion Imprinted Monolithic Column and Its Adsorption Property
Authors: X. Han, S. Duan, C. Liu, C. Zhou, W. Zhu, L. Kong
Abstract:
The ionic imprinting technique produces a three-dimensional rigid structure with fixed pore sizes, formed by the binding interactions of ions and functional monomers with the ions used as the template; it therefore has a high level of recognition toward the ionic template. To prepare a monolithic column by in-situ polymerization, the template, functional monomers, cross-linking agent and initiating agent are dissolved in solution and injected into the column tube; the mixture then polymerizes at a set temperature, and after the synthesis the unreacted template and solvent are washed out. Monolithic columns are easy to prepare, consume little material, and are cost-effective, with fast mass transfer and versatile chemical functionality. However, monolithic columns still face problems in practical application: efficiency is low, quantitative analysis cannot be performed accurately because the peaks are wide and show tailing, the choice of polymerization systems is limited, and theoretical foundations are lacking. Thus, the optimization of components and preparation methods is an important research direction. During the preparation of ion-imprinted monolithic columns, the pore-forming agent generates the porous structure of the polymer, which influences its physical properties; moreover, it directly determines the stability and selectivity of the polymerization reaction. The compounds formed in the pre-polymerization reaction directly determine the recognition and screening capabilities of the imprinted polymer; thus, the choice of pore-forming agent is critical in the preparation of imprinted monolithic columns. This article mainly focuses on how different pore-forming agents affect the enrichment performance of the zinc ion imprinted monolithic column for zinc ions.
Keywords: high performance liquid chromatography (HPLC), ionic imprinting, monolithic column, pore-forming agent
Procedia PDF Downloads 214
978 “It Plays a Huge Role”: Examining Dual Language Teachers’ Conceptions of Language, Culture and Sociocultural Competence
Authors: Giselle Martinez Negrette
Abstract:
Language and culture mutually shape and reflect the human experience. In the learning process, this connection creates and sustains the shared world of learners and educators. Dual Language (DL) programs exemplify this relationship by placing language and culture at the center of their educational approach. These programs, originally conceived to advance social justice in education, aim to foster bilingualism, biliteracy, academic development and sociocultural competence, emphasizing the inseparability of linguistic and cultural growth. Furthermore, because DL programs serve children from diverse cultural, ethnic, and socioeconomic backgrounds, they operate as spaces where linguistic skills and sociocultural understandings are actively cultivated, negotiated, and celebrated. Against this background, this paper examines how two DL teachers see language and culture shaping and reflecting the educational experience, and how their understandings of the relationship influence their mediation of sociocultural competence in their classrooms. This qualitative study employs critical discourse analysis to study in detail participants’ narratives seeking to uncover their perspectives on the “politics” surrounding language use and cultural understandings in their school contexts. Our findings show that these educators are not only keenly aware of the pivotal role that language and culture play in multilingual students’ learning journeys, but they have identified the sociolinguistic “games” taking place in their classrooms. We contend these understandings are pivotal for the critical development of sociocultural competence in DL programs. This study provides DL educators with important conceptual and pedagogical insights regarding the intersection between language and culture in their classrooms and seeks to encourage them to analyze their roles as supporters or opponents of transformative rupture opportunities to contest inequities in educationKeywords: sociocultural competence, critical discourse analysis, dual language programs, language, culture
Procedia PDF Downloads 3
977 Useful Characteristics of Pleurotus Mushroom Hybrids
Authors: Suvalux Chaichuchote, Ratchadaporn Thonghem
Abstract:
The Pleurotus mushroom is one of the most popular edible mushrooms in Thailand. It is much favored by consumers due to its delicious taste and high nutritional value. It is commonly used as an ingredient in several dishes. The commercially cultivated strain grown in most farms is the Pleurotus sp. Hed Bhutan, which is widely distributed to mushroom farms throughout the country and can be cultivated almost all year round. However, mushroom growers demand a wider range of cultivated strains; therefore, strain improvement should be carried out for their benefit. In this study, we used a di-mon mating method for hybrid production, with Hed Bhutan (P-3) as the dikaryon material and monokaryotic mycelia isolated by single spore isolation from basidiospores of three other Pleurotus sp. Three hybrids, P-3XSA-6, P-3XSB-24 and P-3XSE-5, were selected from the 12 that hybridized successfully. They performed well in terms of fruiting body characteristics over three cultivation cycles, including the number of days to growth, time to pinning, color and shape of fruiting bodies, and yield. For the genetic study, genomic DNA of Hed Bhutan (P-3) and the three hybrids was extracted. The primer pair ITS1 and ITS4 was used to amplify the region coding for ITS1, ITS2 and 5.8S rRNA. Comparison of the amplified sequences with DNA databases revealed that Hed Bhutan (P-3), as well as the P-3XSA-6, P-3XSB-24 and P-3XSE-5 hybrids, was Pleurotus pulmonarius. Furthermore, Hed Bhutan (P-3) and the three hybrids were distributed to three small-scale farms with mushroom farming experience in the countryside. To address this, one hundred and twenty mushroom bags of each strain were supplied to them. The findings, obtained by interview, indicated that two mushroom farmers were satisfied with the P-3XSA-6 and P-3XSB-24 hybrids, thanks to their simultaneous fruiting time and good yield. The other was satisfied with the P-3XSB-24 hybrid due to its good yield and with the P-3XSE-5 hybrid thanks to its gradual fruiting, which allows frequent harvests. Overall, the farmers adopted all the hybrids, as well as the Hed Bhutan (P-3) strain, for commercial cultivation.
Keywords: dikaryon, monokaryon, pleurotus, strain improvement
Procedia PDF Downloads 254
976 Flexible PVC Based Nanocomposites With the Incorporation of Electric and Magnetic Nanofillers for the Shielding Against EMI and Thermal Imaging Signals
Authors: H. M. Fayzan Shakir, Khadija Zubair, Tingkai Zhao
Abstract:
Electromagnetic (EM) waves are used widely nowadays. Cell phone signals, Wi-Fi signals, wireless telecommunications, etc., all use EM waves, which in turn create EM pollution. EM pollution can cause serious effects on both human health and nearby electronic devices. EM waves have electric and magnetic components that disturb the flow of charged particles in both the human nervous system and electronic devices. The shielding of both humans and electronic devices is a prime concern today. EM waves can cause headaches, anxiety, suicide and depression, nausea, fatigue and loss of libido in humans, and malfunctioning in electronic devices. Polyaniline (PANI) and polypyrrole (PPY) were successfully synthesized by chemical polymerization using ammonium persulfate and DBSNa as oxidants, respectively. Barium ferrites (BaFe) were also prepared by the co-precipitation method and calcined at 1050 °C for 8 h. Nanocomposite thin films with various combinations and compositions of polyvinylchloride (PVC), PANI, PPY and BaFe were prepared. The X-ray diffraction technique was first used to confirm the successful fabrication of all nanofillers, a particle size analyzer was used to measure their exact size, and scanning electron microscopy was used to examine their shape. According to Electromagnetic Interference theory, electrical conductivity is the prime property required for Electromagnetic Interference shielding. The 4-probe technique was then used to evaluate the DC conductivity of all samples. Samples with high concentrations of PPY and PANI exhibited remarkably increased electrical conductivity due to the formation of an interconnected network structure inside the polyvinylchloride matrix, which was also confirmed by SEM analysis. Less than 1% transmission was observed in the whole NIR region (700-2500 nm). Also, an Electromagnetic Interference shielding effectiveness below -80 dB was observed in the microwave region (0.1 GHz to 20 GHz).
Keywords: nanocomposites, polymers, EMI shielding, thermal imaging
Procedia PDF Downloads 106
975 Formulation and Characterization of NaCS-PDMDAAC Capsules with Immobilized Chlorella vulgaris for Phycoremediation of Palm Oil Mill Effluent
Authors: Quin Emparan, Razif Harun, Dayang R. A. Biak, Rozita Omar, Michael K. Danquah
Abstract:
Cultivation of immobilized microalgae cells is on the rise for biotechnological applications. In this study, cultivation of Chlorella vulgaris was carried out in the form of suspended free-cell and immobilized cells system. NaCS-PDMDAAC capsules were used to immobilize C. vulgaris. Initially, the synthesized NaCS with C. vulgaris culture were prepared at various concentration of 5- 20% (w/v) using a 6% hardening solution (PDMDAAC) to investigate the capsules' gel stability and suitability for microalgae cells growth. Then, the capsules produced from 15% NaCS with C. vulgaris culture were furthered investigated using 5%, 10%, and 15% (w/v) of PDMDAAC solution. The capsules' gel stability was evaluated through dissolution time and loss of uniform spherical shape of capsules, while suitability for microalgae cells growth was evaluated through the optical density of microalgae. In this study, the 15% NaCS-10% PDMDAAC capsules were found to be the most suitable to sustain the capsules' gel stability and microalgae cells growth in MLA. For that reason, the C. vulgaris immobilized in the 15% NaCS-10% PDMDAAC capsules were further characterized using physicochemical analysis in terms of morphological, carbon (C), hydrogen (H) and nitrogen (N), Fourier transform-infrared (FT-IR), scanning electron microscopy-energy dispersive X-ray (SEM-EDX), zeta potential and Brunauer-Emmet-Teller (BET) analyses. The results revealed that the presence of sulfonates in the synthesized NaCS and NaCS-PDMDAAC capsules without and with C. vulgaris proves that cellulose alcohol group was successfully bonded by sulfo group. Besides that, immobilized microalgae cells have a smaller cell size of 6.29 ± 1.09 µm and zeta potential of -11.93 ± 0.91 mV than suspended free-cells microalgae culture. It can be summarized that immobilization of C. vulgaris in the 15% NaCS-10% PDMDAAC capsules are relevant as a bioremediator for wastewater treatment purposes due to its suitable size of pore and capsules as well as structural and compositional properties.Keywords: biological capsules, immobilized cultivation, microalgae, physico-chemical analysis
Procedia PDF Downloads 172
974 Analysis of a IncResU-Net Model for R-Peak Detection in ECG Signals
Authors: Beatriz Lafuente Alcázar, Yash Wani, Amit J. Nimunkar
Abstract:
Cardiovascular Diseases (CVDs) are the leading cause of death globally, and around 80% of sudden cardiac deaths are due to arrhythmias or irregular heartbeats. The majority of these pathologies are revealed by either short-term or long-term alterations in the electrocardiogram (ECG) morphology. The ECG is the main diagnostic tool in cardiology. It is a non-invasive, pain-free procedure that measures the heart’s electrical activity and allows the detection of abnormal rhythms and underlying conditions. A cardiologist can diagnose a wide range of pathologies based on alterations in the ECG waveform, but human interpretation is subjective and prone to error. Moreover, ECG records can be quite long, which further complicates visual diagnosis and delays disease detection. In this context, deep learning methods have risen as a promising strategy to extract relevant features and eliminate individual subjectivity in ECG analysis. They facilitate the processing of large data sets and can provide early and precise diagnoses. Therefore, the cardiology field is one of the areas that can most benefit from the implementation of deep learning algorithms. In the present study, a deep learning algorithm is trained following a novel approach, using a combination of different databases as the training set. The goal of the algorithm is to detect R-peaks in ECG signals. Its performance is further evaluated in ECG signals with different origins and features to test the model’s ability to generalize its outcomes. Performance of the model for detection of R-peaks in clean and noisy ECGs is presented. The model is able to detect R-peaks in the presence of various types of noise and when presented with data it has not been trained on. It is expected that this approach will increase the effectiveness and capacity of cardiologists to detect divergences in the normal cardiac activity of their patients.
Keywords: arrhythmia, deep learning, electrocardiogram, machine learning, R-peaks
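The abstract does not reproduce the IncResU-Net architecture, so the sketch below is only a generic 1-D encoder-decoder with a single skip connection (a deliberate simplification, not the model analyzed in the paper) that maps an ECG segment to a per-sample R-peak probability. Layer widths, kernel sizes, and the 360 Hz sampling rate are assumptions.

```python
import torch
import torch.nn as nn

# Minimal 1-D encoder-decoder sketch for per-sample R-peak probability.
# This is NOT the IncResU-Net from the paper; sizes are assumptions.
class TinyRPeakNet(nn.Module):
    def __init__(self, ch=16):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv1d(1, ch, 9, padding=4), nn.ReLU())
        self.pool = nn.MaxPool1d(2)
        self.enc2 = nn.Sequential(nn.Conv1d(ch, 2 * ch, 9, padding=4), nn.ReLU())
        self.up = nn.Upsample(scale_factor=2, mode="linear", align_corners=False)
        self.dec = nn.Sequential(nn.Conv1d(3 * ch, ch, 9, padding=4), nn.ReLU(),
                                 nn.Conv1d(ch, 1, 1))

    def forward(self, x):             # x: (batch, 1, samples)
        e1 = self.enc1(x)             # kept as a U-Net-style skip connection
        e2 = self.enc2(self.pool(e1))
        d = self.up(e2)
        d = torch.cat([d, e1], dim=1)
        return torch.sigmoid(self.dec(d))   # per-sample R-peak probability

# Usage sketch: 10 s of ECG at an assumed 360 Hz sampling rate.
ecg = torch.randn(1, 1, 3600)
probs = TinyRPeakNet()(ecg)
peaks = (probs.squeeze() > 0.5).nonzero().squeeze(-1)
print(probs.shape, "candidate peak samples:", peaks[:5])
```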
Procedia PDF Downloads 186
973 Modeling of Bipolar Charge Transport through Nanocomposite Films for Energy Storage
Authors: Meng H. Lean, Wei-Ping L. Chu
Abstract:
The effects of ferroelectric nanofiller size, shape, loading, and polarization on bipolar charge injection, transport, and recombination through amorphous and semicrystalline polymers are studied. A 3D particle-in-cell model extends the classical electrical double layer representation to treat ferroelectric nanoparticles. Metal-polymer charge injection assumes Schottky emission and Fowler-Nordheim tunneling, migration through field-dependent Poole-Frenkel mobility, and recombination with Monte Carlo selection based on collision probability. A boundary integral equation method is used for solution of the Poisson equation, coupled with a second-order predictor-corrector scheme for robust time integration of the equations of motion. The stability criterion of the explicit algorithm conforms to the Courant-Friedrichs-Lewy limit. Trajectories of charges that make it through the film are curvilinear paths that meander through the interspaces. Results indicate that charge transport behavior depends on nanoparticle polarization, with anti-parallel orientation showing the highest leakage conduction and lowest level of charge trapping in the interaction zone. The simulation prediction of a size range of 80 to 100 nm to minimize attachment and maximize conduction is validated by theory. Attached charge fractions go from 2.2% to 97% as nanofiller size is decreased from 150 nm to 60 nm. The computed conductivity of 0.4 × 10⁻¹⁴ S/cm is in agreement with published data for plastics. Charge attachment is increased with spheroids due to the increase in surface area, and especially so for oblate spheroids, showing the influence of larger cross-sections. Charge attachment to nanofillers and nanocrystallites increases with vol.% loading or degree of crystallinity, and saturates at about 40 vol.%.
Keywords: nanocomposites, nanofillers, electrical double layer, bipolar charge transport
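The injection and transport laws named above have standard textbook forms; a small sketch evaluating them is given below. The barrier height, relative permittivity, Richardson constant, temperature, and field values are illustrative assumptions, not the parameters used in the particle-in-cell simulations.

```python
import numpy as np

# Textbook forms of the injection/transport laws named in the abstract.
q    = 1.602e-19        # C
kB   = 1.381e-23        # J/K
h    = 6.626e-34        # J s
m_e  = 9.109e-31        # kg
eps0 = 8.854e-12        # F/m

T      = 300.0          # K
eps_r  = 3.0            # relative permittivity of the polymer (assumed)
phi_B  = 1.0 * q        # injection barrier, J (1 eV assumed)
A_star = 1.2e6          # Richardson constant, A m^-2 K^-2 (free-electron value)

def schottky_J(E):
    """Schottky (thermionic) emission current density, A/m^2."""
    dphi = np.sqrt(q**3 * E / (4.0 * np.pi * eps_r * eps0))  # image-force lowering
    return A_star * T**2 * np.exp(-(phi_B - dphi) / (kB * T))

def fowler_nordheim_J(E):
    """Fowler-Nordheim tunneling current density, A/m^2."""
    pre = q**3 * E**2 / (8.0 * np.pi * h * phi_B)
    expo = -8.0 * np.pi * np.sqrt(2.0 * m_e) * phi_B**1.5 / (3.0 * h * q * E)
    return pre * np.exp(expo)

def poole_frenkel_factor(E):
    """Field-dependent Poole-Frenkel enhancement of the mobility."""
    beta_pf = np.sqrt(q**3 / (np.pi * eps_r * eps0))
    return np.exp(beta_pf * np.sqrt(E) / (kB * T))

for E in (1e7, 1e8, 5e8):  # applied field, V/m (illustrative)
    print(f"E={E:.0e} V/m  J_schottky={schottky_J(E):.3e}  "
          f"J_FN={fowler_nordheim_J(E):.3e}  PF factor={poole_frenkel_factor(E):.2f}")
```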
Procedia PDF Downloads 354
972 Comparison of Agree Method and Shortest Path Method for Determining the Flow Direction in Basin Morphometric Analysis: Case Study of Lower Tapi Basin, Western India
Authors: Jaypalsinh Parmar, Pintu Nakrani, Bhaumik Shah
Abstract:
A Digital Elevation Model (DEM) is gridded elevation data representing the ground surface. DEMs can be used in GIS applications such as hydrological modelling, flood forecasting, morphometric analysis and surveying. For morphometric analysis, the stream network plays a very important role. DEMs often lack accuracy and cannot match field data as required for accurate morphometric analysis. The present study focuses on comparing the Agree method and the conventional Shortest path method for deriving morphometric parameters in the flat region of the Lower Tapi Basin, which is located in western India. For the present study, open-source SRTM data (Shuttle Radar Topography Mission, 1 arc-second resolution) and toposheets issued by the Survey of India (SOI) were used to determine the linear morphometric aspects (stream order, number of streams, stream length, bifurcation ratio, mean stream length, mean bifurcation ratio, stream length ratio, length of overland flow, constant of channel maintenance), the areal aspects (drainage density, stream frequency, drainage texture, form factor, circularity ratio, elongation ratio, shape factor) and the relief aspects (relief ratio, gradient ratio and basin relief) for 53 catchments of the Lower Tapi Basin. The stream network was digitized from the available toposheets. The Agree DEM was created using the SRTM data and the stream network from the toposheets. The results obtained were used to demonstrate a comparison between the two methods in the flat areas.
Keywords: agree method, morphometric analysis, lower Tapi basin, shortest path method
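A sketch of how the linear, areal, and relief parameters listed above follow from stream-order summaries and basin geometry, using the standard Horton/Schumm definitions. The stream counts, lengths, and basin dimensions are placeholders, not values from the Lower Tapi Basin catchments.

```python
import math

# Placeholder stream-order summaries and basin geometry for one catchment.
stream_count  = {1: 58, 2: 14, 3: 4, 4: 1}           # Nu per Strahler order
stream_length = {1: 96.0, 2: 41.0, 3: 18.0, 4: 9.0}  # km per order
area_km2      = 210.0        # basin area A
perimeter_km  = 78.0         # basin perimeter P
basin_len_km  = 26.0         # basin length Lb
relief_m      = 340.0        # basin relief H

orders = sorted(stream_count)
bifurcation = [stream_count[o] / stream_count[o + 1] for o in orders[:-1]]
mean_rb = sum(bifurcation) / len(bifurcation)

total_len = sum(stream_length.values())
total_num = sum(stream_count.values())

Dd = total_len / area_km2                       # drainage density (km/km^2)
Fs = total_num / area_km2                       # stream frequency
Rf = area_km2 / basin_len_km**2                 # form factor
Rc = 4 * math.pi * area_km2 / perimeter_km**2   # circularity ratio
Re = (2.0 / basin_len_km) * math.sqrt(area_km2 / math.pi)  # elongation ratio
C  = 1.0 / Dd                                   # constant of channel maintenance
Lg = 1.0 / (2.0 * Dd)                           # length of overland flow
Rh = (relief_m / 1000.0) / basin_len_km         # relief ratio

print(f"mean bifurcation ratio  = {mean_rb:.2f}")
print(f"drainage density Dd     = {Dd:.2f} km/km^2")
print(f"stream frequency Fs     = {Fs:.2f} /km^2")
print(f"form factor Rf          = {Rf:.2f}")
print(f"circularity ratio Rc    = {Rc:.2f}")
print(f"elongation ratio Re     = {Re:.2f}")
print(f"channel maintenance C   = {C:.2f} km^2/km")
print(f"length of overland flow = {Lg:.2f} km")
print(f"relief ratio Rh         = {Rh:.4f}")
```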
Procedia PDF Downloads 239
971 Comparison of the Anthropometric Obesity Indices in Prediction of Cardiovascular Disease Risk: Systematic Review and Meta-analysis
Authors: Saeed Pourhassan, Nastaran Maghbouli
Abstract:
Statement of the problem: The relationship between obesity and cardiovascular diseases has been studied widely (1). The distribution of fat tissue has gained attention in relation to cardiovascular risk factors during long-term research (2). The American College of Cardiology/American Heart Association (ACC/AHA) score is the most widely used and most reliable cardiovascular risk (CVR) assessment tool (3). This study aimed to determine which anthropometric index better discriminates high-CVR patients from low-risk ones using the ACC/AHA score, and to find the best index as a CVR predictor for both genders across different races and countries. Methodology & theoretical orientation: The literature in PubMed, Scopus, Embase, Web of Science, and Google Scholar was searched by two independent investigators using the keywords "anthropometric indices," "cardiovascular risk," and "obesity." The search strategy was limited to studies published prior to January 2022 as full texts in the English language. Studies using the ACC/AHA risk assessment tool for CVR and including at least two anthropometric indices (classical and novel) were included. Study characteristics and data were extracted. The relative risks were pooled using a random-effects model. The analysis was repeated in subgroups. Findings: The pooled relative risks for 7 studies with 16,348 participants were 1.56 (1.35-1.72) for BMI, 1.67 (1.36-1.83) for WC (waist circumference), 1.72 (1.54-1.89) for WHR (waist-to-hip ratio), 1.60 (1.44-1.78) for WHtR (waist-to-height ratio), 1.61 (1.37-1.82) for ABSI (a body shape index) and 1.63 (1.32-1.89) for CI (conicity index). Considering gender, WC among women and WHR among men showed the highest RR. The heterogeneity of the studies was moderate (I²: 56%), which was not decreased by subgroup analysis. Some indices, such as VAI and LAP, were evaluated in only one study. Conclusion & significance: This meta-analysis showed WHR could predict CVR better in comparison to BMI or WHtR. Some new indices like CI and ABSI are less accurate than WHR and WC. Among women, WC seems to be a better choice to predict cardiovascular disease risk.
Keywords: obesity, cardiovascular disease, risk assessment, anthropometric indices
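A minimal sketch of random-effects pooling of relative risks, using the DerSimonian-Laird estimator (an assumption about the exact estimator used) together with the I² heterogeneity statistic. The per-study relative risks and confidence intervals are placeholders, not the seven studies included in the meta-analysis.

```python
import math

# Placeholder per-study results: (RR, lower 95% CI, upper 95% CI).
studies = [
    (1.62, 1.31, 2.00),
    (1.48, 1.20, 1.83),
    (1.80, 1.42, 2.28),
    (1.55, 1.18, 2.04),
]

logs = [math.log(rr) for rr, _, _ in studies]
ses  = [(math.log(hi) - math.log(lo)) / (2 * 1.96) for _, lo, hi in studies]
w    = [1.0 / se**2 for se in ses]                     # fixed-effect weights

# Cochran's Q, between-study variance tau^2 (DerSimonian-Laird), and I^2.
fixed = sum(wi * yi for wi, yi in zip(w, logs)) / sum(w)
Q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, logs))
df = len(studies) - 1
C = sum(w) - sum(wi**2 for wi in w) / sum(w)
tau2 = max(0.0, (Q - df) / C)
I2 = (max(0.0, (Q - df) / Q) * 100) if Q > 0 else 0.0

w_re = [1.0 / (se**2 + tau2) for se in ses]            # random-effects weights
pooled = sum(wi * yi for wi, yi in zip(w_re, logs)) / sum(w_re)
se_pooled = math.sqrt(1.0 / sum(w_re))

rr = math.exp(pooled)
lo = math.exp(pooled - 1.96 * se_pooled)
hi = math.exp(pooled + 1.96 * se_pooled)
print(f"pooled RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f}), I^2 = {I2:.0f}%")
```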
Procedia PDF Downloads 102
970 Optimization of Biomass Components from Rice Husk Treated with Trichophyton Soudanense and Trichophyton Mentagrophyte and Effect of Yeast on the Bio-Ethanol Yield
Authors: Chukwuma S. Ezeonu, Ikechukwu N. E. Onwurah, Uchechukwu U. Nwodo, Chibuike S. Ubani, Chigozie M. Ejikeme
Abstract:
Trichophyton soudanense and Trichophyton mentagrophyte were isolated from the rice mill environment, cultured, and used singly and as a di-culture in the treatment of measured quantities of preheated rice husk. Under the optimized conditions studied, a carboxymethylcellulase (CMCellulase) activity of 57.61 µg/ml/min was optimal for crude enzymes of Trichophyton mentagrophyte heat-pretreated rice husk at 50 °C and 80 °C. A duration of 120 hours (5 days) gave the highest CMCellulase activity of 75.84 µg/ml/min for the crude enzyme of Trichophyton mentagrophyte heat-pretreated rice husk. However, a duration of 96 hours (4 days) gave the maximum activity of 58.21 µg/ml/min for the crude enzyme of Trichophyton soudanense heat-pretreated rice husk. The highest CMCellulase activities of 67.02 µg/ml/min and 69.02 µg/ml/min at pH 5 were recorded for crude enzymes of monocultures of Trichophyton soudanense (TS) and Trichophyton mentagrophyte (TM) heat-pretreated rice husk, respectively. Analysis of the biomass components showed that rice husk cooled after heating and then treated with Trichophyton mentagrophyte gave the highest cellulose yield of 44.50 ± 10.90 (% ± standard error of mean). The maximum total lignin value of 28.90 ± 1.80 (% ± SEM) was obtained from preheated rice husk treated with the di-culture of Trichophyton soudanense and Trichophyton mentagrophyte (TS+TM). A hemicellulose content of 30.50 ± 2.12 (% ± SEM) was obtained from preheated rice husk treated with Trichophyton soudanense (TS); a lignin value of 28.90 ± 1.80 from preheated rice husk treated with the di-culture (TS+TM); and a carbohydrate content of 16.79 ± 9.14 (% ± SEM), with reducing and non-reducing sugar values of 2.66 ± 0.45 and 14.13 ± 8.69 (% ± SEM), from preheated rice husk treated with Trichophyton mentagrophyte (TM). All the values listed above were the highest obtained for each rice husk treatment. The preheated rice husk treated with Trichophyton mentagrophyte (TM) and fermented with palm wine yeast gave the highest bio-ethanol yield of 11.11 ± 0.21 (% ± standard deviation).
Keywords: Trichophyton soudanense, Trichophyton mentagrophyte, biomass, bioethanol, rice husk
Procedia PDF Downloads 680
969 Preparation and Characterization of Calcium Phosphate Cement
Authors: W. Thepsuwan, N. Monmaturapoj
Abstract:
Calcium phosphate cements (CPCs) are among the most attractive bioceramics due to their moldability and ability to be shaped to fill complicated bony cavities or small dental defects. In this study, CPCs were produced using mixtures of tetracalcium phosphate (TTCP, Ca4O(PO4)2) and dicalcium phosphate anhydrous (DCPA, CaHPO4) in an equimolar ratio (1/1) with aqueous solutions of acetic acid (C2H4O2) and disodium hydrogen phosphate dihydrate (Na2HPO4.2H2O), in combination with sodium alginate, in order to improve their moldability. The concentrations of the aqueous solutions and sodium alginate were varied to investigate the effects of the different aqueous solutions and alginate contents on the properties of the cements. The cement paste was prepared by mixing cement powder (P) with aqueous solution (L) at a P/L ratio of 1.0 g/0.35 ml. X-ray diffraction (XRD) was used to analyze the phase formation of the cements. Setting times and compressive strength of the set CPCs were measured using the Gilmore apparatus and a universal testing machine, respectively. The results showed that CPCs could be produced using both basic (Na2HPO4.2H2O) and acidic (C2H4O2) solutions. XRD results showed the precipitation of hydroxyapatite in all cement samples. No change in phase formation was observed among cements using different concentrations of Na2HPO4.2H2O solutions. With increasing concentration of the acidic solution, samples contained less hydroxyapatite and more dicalcium phosphate dihydrate, which led to a shorter setting time. Samples with sodium alginate exhibited higher crystallization of hydroxyapatite than those without alginate, resulting in a shorter setting time in the basic solution but a longer setting time in the acidic solution. The stronger cement was attained from samples using the acidic solution with sodium alginate; however, its strength was still lower than that obtained with the basic solution.
Keywords: calcium phosphate cements, TTCP, DCPA, hydroxyapatite, properties
Procedia PDF Downloads 390
968 Prospective Cohort Study on Sequential Use of Catheter with Misoprostol vs Misoprostol Alone for Second Trimester Medical Abortion
Authors: Hanna Teklu Gebregziabher
Abstract:
Background: A variety of techniques for medical termination of second-trimester pregnancy can be used, but there is no consensus about which is the best. Most evidence suggests that the combined use of an intracervical Foley catheter and vaginal misoprostol is a safe, effective, and acceptable method for termination of second-trimester pregnancy, comparable to the mifepristone-misoprostol combination regimen but with lower cost and no additional maternal risks. Nevertheless, the use of mifepristone and misoprostol alone, with no other procedure, is still the most common approach in different institutions for second-trimester termination. Methods: A cross-sectional comparative prospective study design was employed on women who were admitted for second-trimester medical abortion and in whom medical abortion failed or there was no change in cervical status 24 hours after the first dose of misoprostol. The study was conducted at St. Paulose Hospital Millennium Medical College. A sample of 44 participants in each arm was necessary for a two-tailed test with a type 1 error of 5%, 80% statistical power, and a 1:1 ratio between groups. Thus, a total of 94 cases, 47 from each arm, were recruited. Data were entered and cleaned using Epi-info, analyzed using SPSS version 29.0 statistical software, and presented in descriptive and tabular forms. Different variables were cross-tabulated and compared for significant differences using the chi-square test and the independent t-test. Results: There was a significant difference between the two groups in induction-to-expulsion time and the number of doses used. The mean ± SD induction-to-expulsion time was 48.09 ± 11.86 for those who used misoprostol alone and 36.7 ± 6.772 for those who used a trans-cervical catheter sequentially with misoprostol. Conclusion: The use of a trans-cervical Foley catheter in conjunction with misoprostol in a sequential manner is a more effective, safe, and easily accessible procedure. In addition, the catheter costs less than misoprostol and is readily available. As a good substitute, we advise using the trans-cervical catheter even for medical abortions performed in the second trimester.
Keywords: second trimester, medical abortion, catheter, misoprostol
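The reported group comparison can be re-checked from the summary statistics quoted above (mean ± SD, n = 47 per arm) with an independent-samples t-test; the sketch below uses Welch's unequal-variance form, which is an assumption, as the abstract does not state which variant was applied.

```python
from scipy import stats

# Independent t-test recomputed from the quoted summary statistics.
res = stats.ttest_ind_from_stats(
    mean1=48.09, std1=11.86, nobs1=47,    # misoprostol alone
    mean2=36.70, std2=6.772, nobs2=47,    # catheter + misoprostol
    equal_var=False,                      # Welch's test (assumed variant)
)
print(f"t = {res.statistic:.2f}, p = {res.pvalue:.2e}")
```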
Procedia PDF Downloads 46
967 Determinants of Success of University Industry Collaboration in the Science Academic Units at Makerere University
Authors: Mukisa Simon Peter Turker, Etomaru Irene
Abstract:
This study examined factors determining the success of University-Industry Collaboration (UIC) in the science academic units (SAUs) at Makerere University. This was prompted by concerns about weak linkages between industry and the academic units at Makerere University. The study examined institutional, relational, output, and framework factors determining the success of UIC in the science academic units at Makerere University. The study adopted a predictive cross-sectional survey design. Data was collected using a questionnaire survey from 172 academic staff from the six SAUs at Makerere University. Stratified, proportionate, and simple random sampling techniques were used to select the samples. The study used descriptive statistics and linear multiple regression analysis to analyze data. The study findings reveal a coefficient of determination (R-square) of 0.403 at a significance level of 0.000, suggesting that UIC success was 40.3% at a standardized error of estimate of 0.60188. The strength of association between Institutional factors, Relational factors, Output factors, and Framework factors, taking into consideration all interactions among the study variables, was at 64% (R= 0.635). Institutional, Relational, Output and Framework factors accounted for 34% of the variance in the level of UIC success (adjusted R2 = 0.338). The remaining variance of 66% is explained by factors other than Institutional, Relational, Output, and Framework factors. The standardized coefficient statistics revealed that Relational factors (β = 0.454, t = 5.247, p = 0.000) and Framework factors (β = 0.311, t = 3.770, p = 0.000) are the only statistically significant determinants of the success of UIC in the SAU in Makerere University. Output factors (β = 0.082, t =1.096, p = 0.275) and Institutional factors β = 0.023, t = 0.292, p = 0.771) turned out to be statistically insignificant determinants of the success of UIC in the science academic units at Makerere University. The study concludes that Relational Factors and Framework Factors positively and significantly determine the success of UIC, but output factors and institutional factors are not statistically significant determinants of UIC in the SAUs at Makerere University. The study recommends strategies to consolidate Relational and Framework Factors to enhance UIC at Makerere University and further research on the effects of Institutional and Output factors on the success of UIC in universities.Keywords: university-industry collaboration, output factors, relational factors, framework factors, institutional factors
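A sketch of the reported analysis: ordinary least squares on standardized variables, so that the fitted coefficients are the beta weights quoted above. The data frame is a synthetic placeholder, not the survey responses; only the structure of the model mirrors the study.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic placeholder data: 172 respondents, four factor scores.
rng = np.random.default_rng(42)
n = 172
df = pd.DataFrame({
    "institutional": rng.normal(size=n),
    "relational":    rng.normal(size=n),
    "output":        rng.normal(size=n),
    "framework":     rng.normal(size=n),
})
# Outcome constructed so relational and framework factors dominate (assumed).
df["uic_success"] = (0.45 * df["relational"] + 0.31 * df["framework"]
                     + rng.normal(scale=0.8, size=n))

# Standardize all variables so the fitted coefficients are beta weights.
z = (df - df.mean()) / df.std(ddof=0)
X = sm.add_constant(z[["institutional", "relational", "output", "framework"]])
model = sm.OLS(z["uic_success"], X).fit()
print(model.summary().tables[1])
print(f"R^2 = {model.rsquared:.3f}, adj. R^2 = {model.rsquared_adj:.3f}")
```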
Procedia PDF Downloads 61
966 Model for Calculating Traffic Mass and Deceleration Delays Based on Traffic Field Theory
Authors: Liu Canqi, Zeng Junsheng
Abstract:
This study identifies two typical bottlenecks that occur when a vehicle cannot change lanes: car following and car stopping. The ideas of traffic field and traffic mass are presented in this work. When there are other vehicles in front of the target vehicle within a particular distance, a force is created that affects the target vehicle's driving speed. The characteristics of the driver and the vehicle collectively determine the traffic mass; the driving speed of the vehicle and external variables have no bearing on this. From a physical level, this study examines the vehicle's bottleneck when following a car, identifies the outside factors that have an impact on how it drives, takes into account that the vehicle will transform kinetic energy into potential energy during deceleration, and builds a calculation model for traffic mass. The energy-time conversion coefficient is created from an economic standpoint utilizing the social average wage level and the average cost of motor fuel. Vissim simulation program measures the vehicle's deceleration distance and delays under the Wiedemann car-following model. The difference between the measured value of deceleration delay acquired by simulation and the theoretical value calculated by the model is compared using the conversion calculation model of traffic mass and deceleration delay. The experimental data demonstrate that the model is reliable since the error rate between the theoretical calculation value of the deceleration delay obtained by the model and the measured value of simulation results is less than 10%. The article's conclusion is that the traffic field has an impact on moving cars on the road and that physical and socioeconomic factors should be taken into account while studying vehicle-following behavior. The deceleration delay value of a vehicle's driving and traffic mass have a socioeconomic relationship that can be utilized to calculate the energy-time conversion coefficient when dealing with the bottleneck of cars stopping and starting.Keywords: traffic field, social economics, traffic mass, bottleneck, deceleration delay
Procedia PDF Downloads 67
965 Tool for Maxillary Sinus Quantification in Computed Tomography Exams
Authors: Guilherme Giacomini, Ana Luiza Menegatti Pavan, Allan Felipe Fattori Alves, Marcela de Oliveira, Fernando Antonio Bacchim Neto, José Ricardo de Arruda Miranda, Seizo Yamashita, Diana Rodrigues de Pina
Abstract:
The maxillary sinus (MS), part of the paranasal sinus complex, is one of the most enigmatic structures in modern humans. The literature has suggested that MSs function as olfaction accessories, to heat or humidify inspired air, for thermoregulation, to impart resonance to the voice, and others. Thus, the real function of the MS is still uncertain. Furthermore, the MS anatomy is complex and varies from person to person. Many diseases may affect the development process of sinuses. The incidence of rhinosinusitis and other pathoses in the MS is comparatively high, so volume analysis has clinical value. Providing volume values for MS could be helpful in evaluating the presence of any abnormality and could be used for treatment planning and evaluation of the outcome. Computed tomography (CT) has allowed a more exact assessment of this structure, which enables a quantitative analysis. However, this is not always possible in the clinical routine, and if possible, it involves much effort and/or time. Therefore, it is necessary to have a convenient, robust, and practical tool correlated with the MS volume, allowing clinical applicability. Nowadays, the available methods for MS segmentation are manual or semi-automatic. Additionally, manual methods present inter and intraindividual variability. Thus, the aim of this study was to develop an automatic tool to quantify the MS volume in CT scans of paranasal sinuses. This study was developed with ethical approval from the authors’ institutions and national review panels. The research involved 30 retrospective exams of University Hospital, Botucatu Medical School, São Paulo State University, Brazil. The tool for automatic MS quantification, developed in Matlab®, uses a hybrid method, combining different image processing techniques. For MS detection, the algorithm uses a Support Vector Machine (SVM), based on features such as pixel value, spatial distribution, shape and others. The detected pixels are used as seed points for a region growing (RG) segmentation. Then, morphological operators are applied to reduce false-positive pixels, improving the segmentation accuracy. These steps are applied to all slices of the CT exam, obtaining the MS volume. To evaluate the accuracy of the developed tool, the automatic method was compared with manual segmentation realized by an experienced radiologist. For comparison, we used Bland-Altman statistics, linear regression, and the Jaccard similarity coefficient. From the statistical analyses for the comparison between both methods, the linear regression showed a strong association and low dispersion between variables. The Bland-Altman analyses showed no significant differences between the analyzed methods. The Jaccard similarity coefficient was > 0.90 in all exams. In conclusion, the developed tool to quantify MS volume proved to be robust, fast, and efficient when compared with manual segmentation. Furthermore, it avoids the intra- and inter-observer variations caused by manual and semi-automatic methods. As future work, the tool will be applied in clinical practice. Thus, it may be useful in the diagnosis and treatment determination of MS diseases.
Keywords: maxillary sinus, support vector machine, region growing, volume quantification
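Two of the building blocks named above, intensity-based region growing from a seed pixel and the Jaccard similarity used to compare automatic and manual masks, are sketched below on a toy 2-D "slice". The image, seed location, and tolerance are assumptions for illustration; the actual tool works on full CT volumes with SVM-selected seeds and morphological post-processing.

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol=40):
    """Grow a region of pixels whose intensity is within tol of the seed."""
    mask = np.zeros(img.shape, dtype=bool)
    ref = float(img[seed])
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        if mask[r, c] or abs(float(img[r, c]) - ref) > tol:
            continue
        mask[r, c] = True
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < img.shape[0] and 0 <= cc < img.shape[1] and not mask[rr, cc]:
                queue.append((rr, cc))
    return mask

def jaccard(a, b):
    """Jaccard similarity coefficient between two binary masks."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 1.0

# Toy "slice": a dark air-filled cavity (low HU) inside brighter tissue.
img = np.full((64, 64), 200, dtype=np.int16)
img[20:45, 18:40] = -950                      # simulated air cavity
auto = region_grow(img, seed=(30, 30), tol=100)
manual = np.zeros_like(auto)
manual[20:45, 18:40] = True                   # pretend manual segmentation
print("segmented pixels:", auto.sum(), " Jaccard vs manual:", round(jaccard(auto, manual), 3))
```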
Procedia PDF Downloads 504964 A Survey on Students' Intentions to Dropout and Dropout Causes in Higher Education of Mongolia
Authors: D. Naranchimeg, G. Ulziisaikhan
Abstract:
The student dropout problem has not been investigated recently within Mongolian higher education. Dropping out is a personal decision, but it may cause unemployment and other social problems, including a lower quality of life, because students who have not completed a degree cannot find better-paid jobs. The research aims to determine the percentage of at-risk students, to understand the reasons for dropping out, and to find a way to predict it. The study is based on students of the Mongolian National University of Education, including its Arkhangai branch school, the National University of Mongolia, the Mongolian University of Life Sciences, the Mongolian University of Science and Technology, the Mongolian National University of Medical Science, Ikh Zasag International University, and Dornod University. We conducted a paper survey using random sampling, surveying about 100 students per university. The margin of error was 4%, the confidence level 90%, and the sample size 846; 56 students were excluded from the study because of missing data on the questionnaire. The survey comprised 17 questions, 4 of which were demographic. The survey shows that 1.4% of the students always thought about dropping out, whereas 61.8% thought about it sometimes. The results also suggest that students’ dropout intentions are not related to their sex, marital or social status, or the peer and faculty climate, whereas they depend slightly on the chosen specialization. Finally, the paper presents the reasons for dropping out given by the students. The two main reasons are personal reasons related to choosing the wrong study program or not liking the chosen course (50.38%), and financial difficulties (42.66%). These findings reveal the importance of early prevention of dropout where possible, combined with increased attention to high school students in choosing the right study program for them and targeted financial support for those who are at risk. Keywords: at risk students, dropout, faculty climate, Mongolian universities, peer climate
Procedia PDF Downloads 397963 Development of Wave-Dissipating Block Installation Simulation for Inexperienced Worker Training
Authors: Hao Min Chuah, Tatsuya Yamazaki, Ryosui Iwasawa, Tatsumi Suto
Abstract:
In recent years, with the advancement of digital technology, the movement to introduce so-called ICT (Information and Communication Technology), such as computer technology and network technology, to civil engineering and construction sites has been accelerating. As part of this movement, attempts are being made in various situations to reproduce actual sites inside computers and to use them for design and construction planning, as well as for training inexperienced engineers. The installation of wave-dissipating blocks on coasts is a type of work that has traditionally been carried out by skilled workers based on years of experience and is one of the tasks that is difficult for inexperienced workers to perform on site. Wave-dissipating blocks are structures designed to protect coasts and beaches from erosion by reducing the energy of ocean waves. They usually weigh more than 1 t and are installed by being suspended from a crane, so training inexperienced workers on-site would be time-consuming and costly. In this paper, therefore, a block installation simulator is developed based on Unity 3D, a game development engine. The simulator computes porosity, defined here as the ratio of the total volume of the wave-dissipating blocks inside the structure to the volume of the ideal final shape of the structure. Using this porosity evaluation, the simulator can determine how well the user has installed the blocks. A voxelization technique is used to calculate the porosity of the structure, simplifying the calculations, and other techniques, such as raycasting and box overlapping, are employed for accurate simulation. In the near future, the simulator will incorporate an automatic block installation algorithm based on combinatorial optimization and compare the user's block installation with the installation computed by the algorithm. Keywords: 3D simulator, porosity, user interface, voxelization, wave-dissipating blocks
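As a rough illustration of the voxel-based porosity evaluation described above, the following Python sketch computes the ratio of block volume inside the structure to the volume of the ideal final shape from two boolean voxel grids. The grid resolution, the box-shaped envelope, and the sphere-shaped stand-ins for the blocks are assumptions for demonstration; the actual simulator voxelizes Unity 3D meshes.

```python
# Minimal sketch of voxel-based porosity (fill ratio, as defined in the abstract).
import numpy as np

def voxel_grid(extent, resolution):
    """Cell-centre coordinates of a cubic voxel grid covering [0, extent)^3."""
    centres = (np.arange(resolution) + 0.5) * (extent / resolution)
    return np.meshgrid(centres, centres, centres, indexing="ij")

def porosity(block_mask, envelope_mask):
    """Ratio of block volume inside the envelope to the ideal structure volume."""
    inside = np.logical_and(block_mask, envelope_mask)
    return inside.sum() / envelope_mask.sum()

if __name__ == "__main__":
    extent, res = 10.0, 100                      # 10 m cube sampled with 0.1 m voxels
    X, Y, Z = voxel_grid(extent, res)

    # Ideal final structure: a simple box-shaped mound 4 m high (assumption).
    envelope = Z < 4.0

    # Installed blocks, idealised here as spheres at given positions and radii.
    blocks = np.zeros_like(envelope, dtype=bool)
    placements = [((2.5, 2.5, 1.0), 1.0), ((5.0, 5.0, 1.2), 1.1), ((7.5, 4.0, 0.9), 0.9)]
    for (cx, cy, cz), r in placements:
        blocks |= (X - cx) ** 2 + (Y - cy) ** 2 + (Z - cz) ** 2 <= r ** 2

    print(f"porosity (fill ratio) = {porosity(blocks, envelope):.3f}")
```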
Procedia PDF Downloads 103962 Mike Hat: Coloured-Tape-in-Hat as a Head Circumference Measuring Instrument for Early Detection of Hydrocephalus in an Infant
Authors: Nyimas Annissa Mutiara Andini
Abstract:
Every year, children develop hydrocephalus during the first year of life. If it is not treated, hydrocephalus can lead to brain damage, loss of mental and physical abilities, and even death. To treat it, a proper diagnosis must first be made, ideally detecting hydrocephalus early. One examination that can be used is head circumference measurement. Increased head circumference is a first and main sign of hydrocephalus, especially in infants (0-1 year of age). Head circumference is the measurement around the largest part of a child's head: the distance above the eyebrows and ears and around the back of the head, taken with a measuring tape. If an infant's head circumference is larger than normal, the infant may have hydrocephalus. With early diagnosis and timely treatment, most children can recover successfully. There are some problems with early detection of hydrocephalus using a regular tape for head circumference measurement; one of them is the infant’s comfort. The infant must remain comfortable throughout the measurement to obtain a proper result, and a helpful item, such as a hat, can be used for this. This paper describes the possibility of using a mike hat, a coloured-tape-in-hat, as a head circumference measuring instrument for early detection of hydrocephalus in an infant. At birth, an infant's head circumference is about 35 centimeters. In the first three months, it grows about 2 centimeters per month; in the second three months, about 1 centimeter per month; and over the following six months, about 0.5 centimeters per month, ending at an average of 47 centimeters. This formula is compared with the WHO head circumference growth chart. The tape-in-hat works like an upper-arm circumference measuring tape and measures about 47 centimeters around. It contains twelve different colours ranged by age; if the measurement falls outside the normal colour for the infant's age, the infant may have hydrocephalus. This examination should be done monthly. If two consecutive measurements remain in the abnormal head circumference range, or if the head circumference grows rapidly, the infant should be referred to a pediatrician. A pink hat is provided for girls and a blue hat for boys. Based on this, the measurement can be used to support early detection of hydrocephalus in an infant. Keywords: head circumference, hydrocephalus, infant, mike hat
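The growth rule quoted above (35 cm at birth, then +2 cm/month, +1 cm/month, and +0.5 cm/month over successive periods, reaching about 47 cm at one year) can be written down directly; the short Python sketch below does so and flags measurements outside an assumed tolerance band. The ±2 cm flag threshold and the function names are illustrative assumptions, not clinical cut-offs from the paper.

```python
# Sketch of the growth rule quoted above: 35 cm at birth, +2 cm/month for months 1-3,
# +1 cm/month for months 4-6, +0.5 cm/month for months 7-12 (about 47 cm at one year).

def expected_circumference_cm(age_months: int) -> float:
    """Expected head circumference for infants aged 0-12 months."""
    if not 0 <= age_months <= 12:
        raise ValueError("rule covers the first year of life only")
    growth = 0.0
    for month in range(1, age_months + 1):
        if month <= 3:
            growth += 2.0
        elif month <= 6:
            growth += 1.0
        else:
            growth += 0.5
    return 35.0 + growth

def flag_measurement(age_months: int, measured_cm: float, tolerance_cm: float = 2.0) -> str:
    """Compare a measurement against the expected value for that month's colour band.
    The tolerance is an illustrative assumption, not a clinical cut-off."""
    expected = expected_circumference_cm(age_months)
    if abs(measured_cm - expected) <= tolerance_cm:
        return "within normal colour band"
    return "outside normal band - repeat next month and refer if still abnormal"

if __name__ == "__main__":
    for age in (0, 3, 6, 12):
        print(age, "months:", expected_circumference_cm(age), "cm")  # expect 35.0, 41.0, 44.0, 47.0
    print(flag_measurement(6, 49.5))
```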
Procedia PDF Downloads 267961 Selection of Optimal Reduced Feature Sets of Brain Signal Analysis Using Heuristically Optimized Deep Autoencoder
Authors: Souvik Phadikar, Nidul Sinha, Rajdeep Ghosh
Abstract:
In brainwave research using electroencephalogram (EEG) signals, finding the most relevant and effective feature set for identifying activities in the human brain remains a big challenge because of the random nature of the signals. The feature extraction method is a key issue in solving this problem. Finding features that give distinctive pictures for different activities and similar ones for the same activity is very difficult, especially as the number of activities grows. Classifier accuracy depends on the quality of this feature set. Further, more features result in high computational complexity, while fewer features compromise performance. In this paper, a novel idea for the selection of an optimal feature set using a heuristically optimized deep autoencoder is presented. Using various feature extraction methods, a vast number of features are extracted from the EEG signals and fed to the autoencoder deep neural network. The autoencoder encodes the input features into a small set of codes. To avoid the vanishing gradient problem and the need to normalize the dataset, a meta-heuristic search algorithm is used to minimize the mean squared error (MSE) between the encoder input and the decoder output. To reduce the feature set to a smaller one, 4 hidden layers are considered in the autoencoder network; hence it is called the Heuristically Optimized Deep Autoencoder (HO-DAE). In this method, no features are rejected; all the features are combined into the responses of the hidden layers. The results reveal that higher accuracy can be achieved using the optimal reduced features. The proposed HO-DAE is also compared with a regular autoencoder to test the performance of both. The performance of the proposed method is validated and compared with two other methods recently reported in the literature, which reveals that the proposed method is far better than both in terms of classification accuracy. Keywords: autoencoder, brainwave signal analysis, electroencephalogram, feature extraction, feature selection, optimization
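To make the idea concrete, the following minimal numpy sketch tunes a small symmetric autoencoder with a derivative-free heuristic search that minimizes the reconstruction MSE, so no gradient computation is involved. The layer sizes, the random-perturbation hill-climbing search, and the synthetic feature matrix are stand-ins; the paper's specific meta-heuristic and exact architecture are not reproduced.

```python
# Minimal numpy sketch of the HO-DAE idea: autoencoder weights tuned by a
# derivative-free heuristic search that minimises reconstruction MSE.
import numpy as np

rng = np.random.default_rng(0)

# Layer sizes: input -> encoder layers -> code layer -> decoder layers -> output
SIZES = [16, 12, 8, 4, 8, 12, 16]

def init_params():
    weights = [rng.normal(0, 0.3, (m, n)) for m, n in zip(SIZES[:-1], SIZES[1:])]
    biases = [np.zeros(n) for n in SIZES[1:]]
    return weights, biases

def forward(x, weights, biases):
    """Return the reconstruction and the code-layer activations (reduced features)."""
    a, code = x, None
    for i, (W, b) in enumerate(zip(weights, biases)):
        a = np.tanh(a @ W + b)
        if i == len(weights) // 2 - 1:   # middle layer = compressed feature code
            code = a
    return a, code

def mse(x, weights, biases):
    recon, _ = forward(x, weights, biases)
    return np.mean((recon - x) ** 2)

def heuristic_search(x, iters=3000, step=0.05):
    """Random-perturbation hill climbing: keep a perturbation only if MSE improves."""
    weights, biases = init_params()
    best = mse(x, weights, biases)
    for _ in range(iters):
        cand_w = [W + rng.normal(0, step, W.shape) for W in weights]
        cand_b = [b + rng.normal(0, step, b.shape) for b in biases]
        err = mse(x, cand_w, cand_b)
        if err < best:
            weights, biases, best = cand_w, cand_b, err
    return weights, biases, best

if __name__ == "__main__":
    # Stand-in "EEG feature" matrix: 200 samples x 16 extracted features in [-1, 1].
    X = np.tanh(rng.normal(0, 1, (200, 16)))
    w, b, err = heuristic_search(X)
    _, codes = forward(X, w, b)
    print(f"final reconstruction MSE: {err:.4f}, reduced feature set shape: {codes.shape}")
```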
Procedia PDF Downloads 114960 Finite Element Modeling of Aortic Intramural Haematoma Shows Size Matters
Authors: Aihong Zhao, Priya Sastry, Mark L Field, Mohamad Bashir, Arvind Singh, David Richens
Abstract:
Objectives: Intramural haematoma (IMH) is one of the pathologies, along with acute aortic dissection, that present as Acute Aortic Syndrome (AAS). Evidence suggests that, unlike aortic dissection, some intramural haematomas may regress with medical management. However, intramural haematomas have traditionally been managed like acute aortic dissections. Given that some of these pathologies may regress with conservative management, it would be useful to be able to identify those that may not need high-risk emergency intervention. A computational aortic model was used in this study to try to identify intramural haematomas at risk of progression to aortic dissection. Methods: We created a computational model of the aorta with luminal blood flow. Reports in the literature have identified 11 mm as the radial clot thickness associated with a heightened risk of progression of intramural haematoma. Accordingly, haematomas of varying sizes were implanted in the modelled aortic wall to test this hypothesis. The model was exposed to physiological blood flows, and the stresses and strains in each layer of the aortic wall were recorded. Results: The size and shape of the clot were seen to affect the magnitude of aortic stresses. The greatest stresses and strains were recorded in the intima of the model. When the haematoma exceeded 10 mm in all dimensions, the stress on the intima reached breaking point. Conclusion: Intramural clot size appears to be a contributory factor affecting aortic wall stress. Our computer simulation corroborates clinical evidence in the literature proposing that an IMH diameter greater than 11 mm may be predictive of progression. This preliminary report suggests that finite element modelling of the aortic wall may be a useful process by which to examine putative variables important in predicting progression or regression of intramural haematoma. Keywords: intramural haematoma, acute aortic syndrome, finite element analysis
Procedia PDF Downloads 431959 Active Control Effects on Dynamic Response of Elevated Water Storage Tanks
Authors: Ali Etemadi, Claudia Fernanda Yasar
Abstract:
Elevated water storage tank structures (EWSTs) are tall, heavy structural systems that are very vulnerable to seismic vibrations. In past earthquake events, many of these structures exhibited poor performance and experienced severe damage. The dynamic analysis of EWSTs under earthquake loads is, therefore, of significant importance for the design of the structure and a key issue for the development of modern methods, such as active control design. In this study, a reduced model of EWSTs based on a tuned mass damper (TMD) model is explained. Vibration analysis of the structure under seismic excitation is presented and then used to propose an active vibration controller. MATLAB/Simulink is employed for the dynamic analysis of the system and for control of the seismic response. Single-degree-of-freedom (SDOF) and two-degree-of-freedom (2DOF) models of EWSTs are used to study the concept of active vibration control, and lab-scale experimental models similar to a pendulum are applied to suppress vibrations in EWSTs under seismic excitation. One of the most important phenomena in liquid storage tanks is the oscillation of the fluid due to movements of the tank body caused by base motion during an earthquake. Simulation results illustrate that EWST vibration can be reduced by means of an input shaping technique that takes into account the dominant mode shape of the structure. Simulations with which to guide many of our designs are presented in detail. A simple and effective real-time control for seismic vibration damping can therefore be designed and built in practice. Keywords: elevated water storage tank, tuned mass damper model, real-time control, shaping control, seismic vibration control, Laplace transform
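The reduced model described above can be illustrated with a single-degree-of-freedom sketch: the Python code below integrates the relative motion of the tank mass under a harmonic base acceleration and derives the two-impulse zero-vibration (ZV) shaper parameters from the dominant mode. The structural parameters, the excitation, and the use of a ZV shaper as the input shaping example are illustrative assumptions, not the study's values or its controller.

```python
# Sketch of a reduced SDOF elevated-tank model under base acceleration, plus the
# two-impulse ZV shaper parameters derived from the dominant mode (illustrative values).
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative structural parameters for the reduced model (assumptions)
m, k, zeta = 5.0e4, 2.0e6, 0.02            # mass (kg), stiffness (N/m), damping ratio
wn = np.sqrt(k / m)                        # natural frequency of the dominant mode (rad/s)
c = 2.0 * zeta * np.sqrt(k * m)            # viscous damping coefficient (N*s/m)

def base_acc(t, amplitude=1.0, freq_hz=1.0):
    """Harmonic ground acceleration standing in for a seismic record (m/s^2)."""
    return amplitude * np.sin(2.0 * np.pi * freq_hz * t)

def sdof(t, y):
    """Relative displacement/velocity of the tank mass: m*x'' + c*x' + k*x = -m*a_g."""
    x, v = y
    return [v, (-c * v - k * x) / m - base_acc(t)]

def zv_shaper(wn, zeta):
    """Two-impulse zero-vibration shaper (amplitudes and times) for the dominant mode."""
    wd = wn * np.sqrt(1.0 - zeta ** 2)
    K = np.exp(-zeta * np.pi / np.sqrt(1.0 - zeta ** 2))
    return [1.0 / (1.0 + K), K / (1.0 + K)], [0.0, np.pi / wd]

if __name__ == "__main__":
    t = np.linspace(0.0, 20.0, 2000)
    sol = solve_ivp(sdof, (t[0], t[-1]), [0.0, 0.0], t_eval=t, max_step=0.01)
    amps, times = zv_shaper(wn, zeta)
    print(f"dominant mode: {wn / (2 * np.pi):.2f} Hz, "
          f"peak drift: {np.abs(sol.y[0]).max() * 1e3:.1f} mm")
    print(f"ZV shaper impulses: amplitudes {np.round(amps, 3)}, times {np.round(times, 3)} s")
```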
Procedia PDF Downloads 152