Search results for: advanced numerical modelling
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7039

889 Numerical Response of Coaxial HPGe Detector for Skull and Knee Measurement

Authors: Pabitra Sahu, M. Manohari, S. Priyadharshini, R. Santhanam, S. Chandrasekaran, B. Venkatraman

Abstract:

Radiation workers in reprocessing plants have a potential for internal exposure to actinides and fission products. Radionuclides such as americium, lead, polonium, and europium are bone seekers and accumulate in the skeleton. Since a large share of the skeletal content lies in the skull (13%) and the knee (22%), measurements of old intakes have to be carried out over the skull and knee. At the Indira Gandhi Centre for Atomic Research, a twin HPGe-based actinide monitor is used for the measurement of actinides present in bone. Efficiency estimation, one of the prerequisites for quantifying radionuclides, requires anthropomorphic phantoms, which are very scarce. Hence, in this study, efficiency curves for the twin HPGe-based actinide monitoring system are established theoretically using the FLUKA Monte Carlo method and the ICRP adult male voxel phantom. For skull measurement, the detector is placed over the forehead; for knee measurement, one detector is placed over each knee. The efficiency values for radionuclides present in the knee and skull vary from 3.72E-04 to 4.19E-04 CPS/photon and from 5.22E-04 to 7.07E-04 CPS/photon, respectively, over the energy range 17 to 3000 keV. The established efficiency curves show that the efficiency initially increases up to 100 keV and then decreases. The skull efficiency values are 4% to 63% higher than those of the knee, depending on the energy, for all energies except 17.74 keV; the reason is the closeness of the detector to the skull compared to the knee. At 17.74 keV, however, the knee efficiency exceeds that of the skull because of the higher attenuation in the thicker skull bones. The Minimum Detectable Activity (MDA) for 241Am present in the skull and knee is 9 Bq; 239Pu has an MDA of 950 Bq for the knee and 1270 Bq for the skull for a counting time of 1800 s. This paper discusses the simulation method and the results obtained in the study.
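Although the abstract does not give the underlying formula, detection limits of this kind are conventionally derived with Currie's MDA expression; the sketch below shows that calculation in Python. The background counts, counting time, efficiency, and photon yield are illustrative placeholders, not values from the study.

```python
import math

def mda_bq(background_counts, count_time_s, efficiency_cps_per_photon, photon_yield):
    """Currie minimum detectable activity (95% confidence), in Bq.

    MDA = (2.71 + 4.65 * sqrt(B)) / (t * eps * y)
    where B is the background count in the peak region over the counting
    time t, eps the absolute counting efficiency (counts per photon), and
    y the photon emission probability per decay.
    """
    return (2.71 + 4.65 * math.sqrt(background_counts)) / (
        count_time_s * efficiency_cps_per_photon * photon_yield
    )

# Illustrative values only (not from the study): 1800 s count,
# efficiency ~5e-4 CPS/photon, 59.5 keV line of 241Am (y ~ 0.36).
print(f"MDA ~ {mda_bq(400, 1800, 5.0e-4, 0.36):.0f} Bq")
```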

Keywords: FLUKA Monte Carlo method, ICRP adult male voxel phantom, knee, skull

Procedia PDF Downloads 38
888 The Preceptorship Experience and Clinical Competence of Final Year Nursing Students

Authors: Susan Ka Yee Chow

Abstract:

Effective clinical preceptorship affects students' competence and fosters their growth in applying theoretical knowledge and skills in clinical settings. Any gap between the expected and actual learning experience reduces nursing students' interest in clinical practice and has negative consequences for their clinical performance. This cross-sectional study compares the differences between the preferred and actual preceptorship experiences of final year nursing students and examines the relationship between the actual preceptorship experience and the perceived clinical competence of students in a tertiary institution. Participants were final year bachelor of nursing students at a self-financing tertiary institution in Hong Kong. The instrument used to measure the effectiveness of clinical preceptorship was developed by the participating institution; the scale consisted of five items on a 5-point Likert scale, covering goal development, critical thinking, learning objectives, asking questions, and providing feedback to students. The Clinical Competence Questionnaire by Liou & Cheng (2014) was used to examine students' perceived clinical competence; that scale consists of 47 items in four domains, namely nursing professional behaviours; skill competence: general performance; skill competence: core nursing skills; and skill competence: advanced nursing skills. A total of 193 questionnaires were returned, a response rate of 89%. A paired t-test was used to compare the differences between preferred and actual preceptorship experiences. The results showed significant differences (p<0.001) for all five questions, with the mean preferred scores higher than the actual scores. The largest mean differences were found for the goal-acceptance and feedback items. The Pearson correlation coefficient was used to examine the relationships. The results showed moderate correlations between nursing professional behaviours and both asking questions and providing feedback; providing useful feedback to students correlated moderately with all domains of the Clinical Competence Questionnaire (r = 0.269 - 0.345). It is concluded that nursing students do not have a positive perception of the clinical preceptorship: their perceptions differ significantly from their expectations. If students were given more opportunities to ask questions in a pedagogical atmosphere, their perceived clinical competence and learning outcomes could improve as a result.
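For readers unfamiliar with the two statistics named above, the following minimal Python sketch reproduces the analysis pattern (a paired t-test on preferred vs. actual scores, then a Pearson correlation) on synthetic stand-in data, not the study's data.

```python
import numpy as np
from scipy import stats

# Synthetic stand-in data (not the study's): preferred vs. actual
# preceptorship ratings for 193 students on a 1-5 scale.
rng = np.random.default_rng(0)
preferred = rng.integers(3, 6, size=193).astype(float)
actual = np.clip(preferred - np.abs(rng.normal(0.5, 0.8, size=193)), 1, 5)

# Paired t-test comparing preferred and actual scores.
t_stat, p_val = stats.ttest_rel(preferred, actual)
print(f"paired t = {t_stat:.2f}, p = {p_val:.4f}")

# Pearson correlation relating a preceptorship item to a competence score.
competence = 0.3 * actual + rng.normal(3.0, 0.5, size=193)
r, p_r = stats.pearsonr(actual, competence)
print(f"Pearson r = {r:.3f}, p = {p_r:.4f}")
```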

Keywords: clinical preceptor, clinical competence, clinical practicum, nursing students

Procedia PDF Downloads 119
887 Improving Fingerprinting-Based Localization System Using Generative AI

Authors: Getaneh Berie Tarekegn

Abstract:

A precise localization system is crucial for many artificial intelligence Internet of Things (AI-IoT) applications in the era of smart cities, including traffic monitoring, emergency alarming, environmental monitoring, location-based advertising, intelligent transportation, and smart health care. The most common method for providing continuous positioning services in outdoor environments is a global navigation satellite system (GNSS). Due to non-line-of-sight propagation, multipath, and weather conditions, GNSS does not perform well in dense urban, urban, and suburban areas. This paper proposes a generative AI-based positioning scheme for large-scale wireless settings using fingerprinting techniques. We present a semi-supervised deep convolutional generative adversarial network (S-DCGAN)-based radio map construction method for real-time device localization. The scheme also employs a reliable signal-fingerprint feature extraction method based on t-distributed stochastic neighbor embedding (t-SNE), which extracts dominant features while eliminating noise from hybrid WLAN and long-term evolution (LTE) fingerprints. The proposed scheme reduced the workload of site surveying required to build the fingerprint database by up to 78.5% and significantly improved positioning accuracy. The results show that the average positioning error of GAILoc is less than 0.39 m, and more than 90% of the errors are less than 0.82 m. According to the numerical results, SRCLoc improves positioning performance and reduces radio map construction costs significantly compared to traditional methods.
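The t-SNE feature-extraction step mentioned above can be sketched in a few lines; the sketch below uses scikit-learn on randomly generated stand-in fingerprints (the real hybrid WLAN/LTE data and the S-DCGAN pipeline are not reproduced here).

```python
import numpy as np
from sklearn.manifold import TSNE

# Synthetic stand-in for hybrid WLAN/LTE RSSI fingerprints:
# 500 survey points x 40 signal features (not real survey data).
rng = np.random.default_rng(1)
fingerprints = rng.normal(-70.0, 10.0, size=(500, 40))

# t-SNE projects the noisy, high-dimensional fingerprints into a
# low-dimensional embedding, as in the feature-extraction step.
embedding = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(fingerprints)
print(embedding.shape)  # (500, 2)
```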

Keywords: location-aware services, feature extraction technique, generative adversarial network, long short-term memory, support vector machine

Procedia PDF Downloads 40
886 Application of Artificial Intelligence in Market and Sales Network Management: Opportunities, Benefits, and Challenges

Authors: Mohamad Mahdi Namdari

Abstract:

In today's rapidly changing and evolving business competition, companies and organizations require advanced and efficient tools to manage their markets and sales networks. Big data analysis, quick response in competitive markets, process and operations optimization, and forecasting customer behavior are among the concerns of executive managers. Artificial intelligence, as one of the emerging technologies, provides extensive capabilities in this regard. The use of artificial intelligence in market and sales network management can lead to improved efficiency, increased decision-making accuracy, and enhanced customer satisfaction. Specifically, AI algorithms can analyze vast amounts of data, identify complex patterns, and offer strategic suggestions to improve sales performance. However, many companies are still far from leveraging this technology effectively, and those that do face challenges in fully exploiting AI's potential in market and sales network management. It appears that a lack of knowledge of this technology among the general public, and even among the managerial and academic communities, has caused managerial structures to lag behind the progress and development of artificial intelligence. Additionally, high costs, fear of change and employee resistance, a lack of quality data production processes, the need to update structures and processes, implementation issues, the need for specialized skills and technical equipment, and ethical and privacy concerns are among the factors preventing widespread use of this technology in organizations. Clarifying and explaining this technology, especially to the academic, managerial, and elite communities, can pave the way for a transformative beginning. The aim of this research is to elucidate the capacities of artificial intelligence in market and sales network management, identify its opportunities and benefits, and examine the existing challenges and obstacles. The research seeks to provide managers with a framework for enhancing market and sales network performance by leveraging AI capabilities. The results can help managers and decision-makers adopt more effective strategies for business growth and development by better understanding the capabilities and limitations of artificial intelligence.

Keywords: artificial intelligence, market management, sales network, big data analysis, decision-making, digital marketing

Procedia PDF Downloads 16
885 Current Aspects of 21st Century Primary School Music Education in South Korea: Zoltán Kodály Concept

Authors: Kyung Hwa Shin

Abstract:

Primary school music education plays a crucial role in nurturing students' musical abilities and fostering a lifelong appreciation for music. As we move through the 21st century, it becomes imperative to explore advanced approaches that can effectively engage and empower students in the realm of music. This study aims to shed light on aspects of primary school music education in South Korea, with a specific focus on the incorporation of the Zoltán Kodály concept. The concept, developed by the Hungarian composer and educator Zoltán Kodály (Kodály, 1974), advocates a holistic music education that integrates singing, movement, and music literacy, and it has gained recognition worldwide for its effectiveness in developing musicianship and enhancing music learning experiences. This study delves into the ways in which the Kodály concept has been adapted and implemented in the context of South Korean primary school music education. It highlights the benefits of this approach in nurturing students' musical skills, fostering creativity, and promoting cultural understanding through music, and it considers how the delivery of a Kodály-based curriculum can respond to the challenges posed by the 21st-century digital age. Drawing on prior research, pedagogical practices, and case studies, the study provides valuable insights into the practical applications of the Kodály concept in South Korean primary school music education. It discusses the impact of this approach on student engagement, motivation, and achievement, as well as the role of teachers in facilitating effective implementation. Additionally, it addresses the professional development opportunities available to music educators to enhance their pedagogical skills in line with the Kodály philosophy. Ultimately, it aims to inspire and empower educators, policymakers, and researchers to embrace the Zoltán Kodály concept as a transformative and forward-thinking approach to primary school music education in the 21st century. By embracing current aspects and progressive methodologies, South Korea can continue to strengthen its music education system and cultivate a generation of musically literate and culturally enriched individuals.

Keywords: primary school music education, Zoltán Kodály concept, 21st century, South Korea, music literacy, pedagogy, curriculum

Procedia PDF Downloads 75
884 Geomatic Techniques to Filter Vegetation from Point Clouds

Authors: M. Amparo Núñez-Andrés, Felipe Buill, Albert Prades

Abstract:

More and more frequently, geomatics techniques such as terrestrial laser scanning and digital photogrammetry, either terrestrial or from drones, are being used to obtain digital terrain models (DTM) for monitoring geological phenomena that cause natural disasters, such as landslides, rockfalls, and debris flows. One of the main multitemporal analyses developed from these models is the quantification of volume changes on slopes and hillsides, whether caused by erosion, fall, or land movement in the source area or by sedimentation in the deposition zone. To carry out this task, it is necessary to filter from the point clouds all elements that do not belong to the slopes. Among these elements, vegetation stands out, as it has the greatest presence and changes constantly, both seasonally and daily, being affected by factors such as wind. One of the best-known indexes to detect vegetation in an image is the NDVI (Normalized Difference Vegetation Index), which is obtained from the combination of the infrared and red channels and therefore requires a multispectral camera. These cameras are generally of lower resolution than conventional RGB cameras, while their cost is much higher; we therefore have to look for alternative indices based on RGB. In this communication, we present the results obtained in the Georisk project (PID2019‐103974RB‐I00/MCIN/AEI/10.13039/501100011033) by using the GLI (Green Leaf Index) and ExG (Excess Green) indices, as well as the change to the Hue-Saturation-Value (HSV) color space, in which the H coordinate provides the most information for vegetation filtering. These filters are applied both to the images, creating binary masks to be used when applying the SfM algorithms, and to the point cloud obtained directly by the photogrammetric process without any previous filter, or to the one obtained by TLS (Terrestrial Laser Scanning). In this last case, we have also worked with a Riegl VZ400i sensor that allows the reception, as in aerial LiDAR, of several returns of the signal, information that can be used for classification of the point cloud. After applying all the techniques in different locations, the results show that the color-based filters allow correct filtering in those areas where the presence of shadows is not excessive and there is a contrast between the color of the slope lithology and the vegetation. As noted above, when using the HSV color space, it is the H coordinate that responds best for this filtering. Finally, the use of the various returns of the TLS signal allows filtering with some limitations.
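The RGB-only indices named above have standard definitions (GLI = (2G - R - B)/(2G + R + B); ExG = 2g - r - b on normalized chromatic coordinates), so a minimal masking sketch looks as follows; the thresholds are illustrative and would in practice be tuned per scene (lighting, lithology color, shadows).

```python
import numpy as np

def vegetation_masks(rgb, gli_thresh=0.05, exg_thresh=0.05):
    """Binary vegetation masks from RGB-only indices.

    GLI = (2G - R - B) / (2G + R + B)
    ExG = 2g - r - b, with r, g, b the chromatic coordinates.
    """
    rgb = rgb.astype(float)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    s = r + g + b + 1e-9
    gli = (2 * g - r - b) / (2 * g + r + b + 1e-9)
    exg = 2 * (g / s) - (r / s) - (b / s)
    return gli > gli_thresh, exg > exg_thresh

# Example with a random stand-in image (a real orthophoto would be loaded instead).
img = np.random.randint(0, 256, size=(100, 100, 3), dtype=np.uint8)
mask_gli, mask_exg = vegetation_masks(img)
print(mask_gli.mean(), mask_exg.mean())
```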

Keywords: RGB index, TLS, photogrammetry, multispectral camera, point cloud

Procedia PDF Downloads 134
883 Finite Element and Experimental Investigation on Vibration Analysis of Laminated Composite Plates

Authors: Azad Mohammed Ali Saber, Lanja Saeed Omer

Abstract:

The present study deals with finite element (FE) and experimental investigations of the vibration behavior of carbon fiber-polyester laminated plates. The finite element simulation is performed using APDL (ANSYS Parametric Design Language) macro codes in software version 19. The SOLID185 layered structural element, which has eight nodes, is adopted in this analysis. The experimental work uses the hand lay-up method to fabricate composite laminate plates with different numbers of layers and orientation angles. Symmetric samples include four layers (0°/90°)s and six layers (0°/90°/0°)s and (0°/0°/90°)s. Antisymmetric samples include one layer (0°), (45°); two layers (0°/90°), (-45°/45°); three layers (0°/90°/0°); four layers (0°/90°)2, (-45°/45°)2; five layers (0°/90°)2.5; and six layers (0°/90°)3, (-45°/45°)3. The experimental investigation uses a modal analysis technique with a Fast Fourier Transform analyzer (FFT), Pulse platform, impact hammer, and accelerometer to obtain the frequency response functions. The influences of different parameters such as the number of layers, aspect ratio, modulus ratio, ply orientation, and boundary conditions on the dynamic behavior of the CFRP plates are studied; the 1st, 2nd, and 3rd natural frequencies are observed to be lowest for the cantilever boundary condition (CFFF) and highest for the fully clamped boundary condition (CCCC). Experimental results show that the natural frequencies of laminated plates depend significantly on the type of boundary condition due to the restraint effect at the edges. Good agreement is achieved between the finite element and experimental results. All results indicate that any increase in aspect ratio decreases the natural frequency of the CFRP plate, while any increase in modulus ratio or number of layers increases the fundamental natural frequency of vibration.
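As background to how ply angle and stacking sequence enter such a model, the sketch below assembles the classical lamination theory A, B, D stiffness matrices for a given layup; the material constants are nominal carbon/polyester-like values, not the properties measured in the study.

```python
import numpy as np

def abd_matrices(plies, e1, e2, g12, nu12, t_ply):
    """Classical lamination theory A, B, D matrices for a layup.

    `plies` lists ply angles in degrees, bottom to top; each ply has
    thickness t_ply. Q-bar follows the standard transformation
    Qbar = T^-1 Q R T R^-1 with R = diag(1, 1, 2).
    """
    nu21 = nu12 * e2 / e1
    d = 1 - nu12 * nu21
    q = np.array([[e1 / d, nu12 * e2 / d, 0],
                  [nu12 * e2 / d, e2 / d, 0],
                  [0, 0, g12]])
    n = len(plies)
    z = np.linspace(-n * t_ply / 2, n * t_ply / 2, n + 1)
    A, B, D = np.zeros((3, 3)), np.zeros((3, 3)), np.zeros((3, 3))
    R = np.diag([1.0, 1.0, 2.0])
    for k, theta in enumerate(plies):
        c, s = np.cos(np.radians(theta)), np.sin(np.radians(theta))
        T = np.array([[c**2, s**2, 2 * c * s],
                      [s**2, c**2, -2 * c * s],
                      [-c * s, c * s, c**2 - s**2]])
        qbar = np.linalg.inv(T) @ q @ R @ T @ np.linalg.inv(R)
        A += qbar * (z[k + 1] - z[k])
        B += qbar * (z[k + 1]**2 - z[k]**2) / 2
        D += qbar * (z[k + 1]**3 - z[k]**3) / 3
    return A, B, D

# Symmetric (0°/90°)s layup with nominal ply properties (Pa, m).
A, B, D = abd_matrices([0, 90, 90, 0], 130e9, 10e9, 5e9, 0.3, 0.125e-3)
print(np.round(B, 3))  # coupling matrix B ~ 0 for a symmetric laminate
```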

Keywords: vibration, composite materials, finite element, APDL ANSYS

Procedia PDF Downloads 29
882 Thermo-Hydro-Mechanical-Chemical Coupling in Enhanced Geothermal Systems: Challenges and Opportunities

Authors: Esmael Makarian, Ayub Elyasi, Fatemeh Saberi, Olusegun Stanley Tomomewo

Abstract:

Geothermal reservoirs (GTRs) have garnered global recognition as a sustainable energy source. Thermo-Hydro-Mechanical-Chemical (THMC) coupling proves to be a practical and effective method for optimizing production in GTRs. The study outcomes demonstrate that THMC coupling serves as a versatile and valuable tool, offering in-depth insights into GTRs and enhancing their operational efficiency. This is achieved by analyzing temperature and pressure changes and their impacts on mechanical properties, structural integrity, fracture aperture, permeability, and heat extraction efficiency. Moreover, THMC coupling facilitates the assessment of the potential benefits and risks associated with different geothermal technologies, considering the complex thermal, hydraulic, mechanical, and chemical interactions within the reservoirs. However, using THMC coupling in GTRs presents a multitude of challenges. These include accurately modeling and predicting behavior given the interconnected nature of the processes, limited data availability leading to uncertainties, the risk that induced seismic events pose to nearby communities, scaling and mineral deposition reducing operational efficiency, and questions about the reservoirs' long-term sustainability. In addition, material degradation, environmental impacts, technical challenges in monitoring and control, accurate assessment of resource potential, and regulatory and social acceptance further complicate geothermal projects. Addressing these multifaceted challenges is crucial for the successful and sustainable utilization of geothermal energy resources. This paper aims to illuminate the challenges and opportunities associated with THMC coupling in enhanced geothermal systems. Practical solutions and strategies for mitigating these challenges are discussed, emphasizing the need for interdisciplinary approaches, improved data collection and modeling techniques, and advanced monitoring and control systems. Overcoming these challenges is imperative for unlocking the full potential of geothermal energy and making a substantial contribution to the global energy transition and sustainable development.

Keywords: geothermal reservoirs, THMC coupling, interdisciplinary approaches, challenges and opportunities, sustainable utilization

Procedia PDF Downloads 53
881 Linguistic Politeness in Higher Education Teaching Chinese as an Additional Language

Authors: Leei Wong

Abstract:

Changes in globalized contexts precipitate changing perceptions of linguistic politeness practices. Within these changing contexts, misunderstanding or stereotypification of politeness norms may lead to negative consequences such as hostility or even communication breakdown. With China's rising influence, the country offers a vast potential market for global economic development and diplomatic relations and opportunities for intercultural interaction, and many people outside China are subsequently learning Chinese. These trends bring both opportunities and pitfalls for intercultural communication, including within the important field of politeness awareness. One internationally recognized benchmark for the study and classification of languages, the updated 2018 CEFR (Common European Framework of Reference for Languages) Companion Volume New Descriptors (CEFR/CV), classifies politeness as a B1 (intermediate) level descriptor on the scale of Politeness Conventions. This gives some indication of the relevance of politeness awareness within new globalized contexts for fostering better intercultural communication. This study specifically examines bald-on-record politeness strategies presented in current beginner TCAL textbooks used in Australian tertiary education through content analysis. The investigation involves purposive sampling of commercial textbooks published in America and China, followed by interpretive content analysis. The philosophical position of this study is therefore located within an interpretivist ontology, with a subjectivist epistemological perspective. It sets out to illuminate the characteristics of Chinese bald-on-record strategies that are deemed significant in the present-world context by Chinese textbook writers and curriculum designers. The data reveal significant findings concerning politeness strategies in the beginner-stage curriculum and also open the way for further research on politeness strategies in intermediate and advanced level textbooks for additional language learners. This study will be useful for language teachers and language teachers-in-training by generating awareness and providing insights and advice on the teaching and learning of bald-on-record politeness strategies. Authors of textbooks may also benefit from the findings, as awareness is raised of the need to include reference to understanding politeness in language, and of how this might be approached.

Keywords: linguistic politeness, higher education, Chinese language, additional language

Procedia PDF Downloads 94
880 Comparison of Methodologies to Compute the Probabilistic Seismic Hazard Involving Faults and Associated Uncertainties

Authors: Aude Gounelle, Gloria Senfaute, Ludivine Saint-Mard, Thomas Chartier

Abstract:

The long-term deformation rates of faults are not fully captured by Probabilistic Seismic Hazard Assessment (PSHA). PSHA approaches that use catalogues to develop area or smoothed-seismicity sources are limited by the data available to constrain future earthquake activity rates. The integration of faults in PSHA can at least partially address the long-term deformation. However, careful treatment of fault sources is required, particularly in low strain rate regions, where estimated seismic hazard levels are highly sensitive to assumptions concerning fault geometry, segmentation, and slip rate. When integrating faults in PSHA, various constraints on earthquake rates from geologic and seismologic data have to be satisfied; for low strain rate regions, where such data are scarce, this is especially challenging. Integrating faults in PSHA requires converting the geologic and seismologic data into fault geometries and slip rates, and then into earthquake activity rates. Several approaches exist for translating slip rates into earthquake activity rates. In the most frequently used approach, the background earthquakes are handled with a truncated approach, in which earthquakes with a magnitude lower than or equal to a threshold magnitude (Mw) occur in the background zone, at a rate defined by the rate in the earthquake catalogue, while magnitudes higher than the threshold are located on the fault, at a rate defined using the average slip rate of the fault. As highlighted by several studies, seismic events with magnitudes above the selected threshold may nevertheless occur in the background and not only on the fault, especially in regions of slow tectonic deformation. It is also known that several sections of a fault, or several faults, can rupture during a single fault-to-fault rupture. It is then essential to apply a consistent modelling procedure that allows a large set of possible fault-to-fault ruptures to occur randomly in the hazard model while reflecting the individual slip rate of each section of the fault. In 2019, a tool named SHERIFS (Seismic Hazard and Earthquake Rates in Fault Systems) was published. The tool uses a methodology that converts the slip-rate budget of each fault in a fault system into rupture rates for all possible single-fault and fault-to-fault ruptures. The objective of this paper is to compare the SHERIFS method with another frequently used model, to analyse the impact on the seismic hazard and, through sensitivity studies, to better understand the influence of key parameters and assumptions. For this application, a simplified but realistic case study was selected in an area of moderate to high seismicity (southeast France) where the fault is assumed to have a low strain rate.
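To make the slip-rate-to-activity-rate conversion concrete, the sketch below shows the simplest single-magnitude (characteristic earthquake) version of the moment-balance calculation; the shear modulus, fault dimensions, and slip rate are illustrative, and SHERIFS itself distributes the budget over many rupture scenarios rather than a single one.

```python
MU = 3.0e10  # crustal shear modulus, Pa (typical assumed value)

def moment_magnitude_to_m0(mw):
    """Hanks-Kanamori relation: log10(M0 [N*m]) = 1.5*Mw + 9.05."""
    return 10 ** (1.5 * mw + 9.05)

def characteristic_rate(fault_area_m2, slip_rate_m_per_yr, mw_char):
    """Annual rate of characteristic earthquakes that balances the
    fault's seismic moment budget M0_dot = mu * A * s_dot."""
    moment_rate = MU * fault_area_m2 * slip_rate_m_per_yr  # N*m per yr
    return moment_rate / moment_magnitude_to_m0(mw_char)

# Illustrative low-strain fault: 30 km x 12 km, 0.1 mm/yr, Mw 6.5.
rate = characteristic_rate(30e3 * 12e3, 1.0e-4, 6.5)
print(f"~{rate:.2e} events/yr (recurrence ~{1 / rate:.0f} yr)")
```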

Keywords: deformation rates, faults, probabilistic seismic hazard, PSHA

Procedia PDF Downloads 46
879 Advanced Technology for Natural Gas Liquids (NGL) Recovery Using Residue Gas Split

Authors: Riddhiman Sherlekar, Umang Paladia, Rachit Desai, Yash Patel

Abstract:

The competitive scenario of the oil and gas market challenges today's plant designers to achieve designs that meet client expectations with shrinking budgets, safety requirements, and operating flexibility. Natural Gas Liquids have three main industrial uses: as fuels, as petrochemical feedstock, or as refinery blends that can be further processed and sold as straight-run cuts, such as naphtha, kerosene, and gas oil. NGL extraction is not a chemical reaction. It involves the separation of heavier hydrocarbons from the main gas stream through pressure and temperature reduction, which, depending upon the degree of NGL extraction, may involve a cryogenic process. Earlier technologies, i.e., short-cycle dry desiccant adsorption, Joule-Thomson or low-temperature refrigeration, and lean oil absorption, have delivered only 40 to 45% ethane recoveries, which is unsatisfactory in the current downturn market. Here, a new technology is suggested for boosting recoveries to up to 95% for ethane+ and up to 99% for propane+ components. Cryogenic plants provide reboiling to demethanizers by using part of the inlet feed gas, or inlet feed split. If the two stream temperatures are not similar, there is lost work in the mixing operation unless the designer has access to some proprietary design. The concept introduced in this process consists of reboiling the demethanizer with the residue gas, or residue gas split. The innovation is that the process does not use the typical inlet gas feed split type of flow arrangement to reboil the demethanizer or deethanizer column, but instead uses an open heat pump scheme to that effect, with the residue gas compressor providing the heat pump effect. The heat pump stream is then further cooled and enters the top section of the column as a cold reflux. Because of the nature of this design, the process offers the opportunity to operate at full ethane rejection or recovery. The scheme is also very adaptable to revamping existing facilities. This advancement not only enhances recoveries but also provides operational flexibility, optimizes heat exchange, reduces equipment cost, and opens a future for innovative designs while keeping execution costs low.

Keywords: deethanizer, demethanizer, residue gas, NGL

Procedia PDF Downloads 257
878 Avoidance and Selectivity in the Acquisition of Arabic as a Second/Foreign Language

Authors: Abeer Heider

Abstract:

This paper explores and classifies the different kinds of avoidance that students commonly exhibit in the acquisition of Arabic as a second/foreign language and suggests specific strategies to help students lessen their avoidance tendencies, in the hope of streamlining the learning process. Students most commonly use avoidance strategies in grammar and word choice. These different types of strategies have different implications and naturally require different approaches. Thus, the question remains as to the most effective way to help students improve their Arabic and how teachers can efficiently utilize these techniques. It is hoped that this research will contribute to understanding the role of avoidance in the field of second language acquisition in general, and as a type of input. Some researchers also note that similarity between L1 and L2 may be problematic as well, since the learner may doubt that such similarity indeed exists and consequently avoid the identical constructions or elements (Jordens, 1977; Kellermann, 1977, 1978, 1986). In an effort to resolve this issue, a case study is being conducted. The present case study attempts to provide a broader analysis of what is acquired than is usually the case, analyzing the learners' accomplishments in terms of the three-part framework of the components of communicative competence suggested by Michael Canale: grammatical competence, sociolinguistic competence, and discourse competence. The subjects of the study are 15 students who came to study Arabic at Qatar University of Cairo; all are at the advanced level, having been at the intermediate level in Arabic when they arrived in Qatar for the first time. The study uses a discourse-analytic method to examine how the first language affects students' production and output in the second language, and how and when students use avoidance methods in their learning. The study will be conducted through Fall 2015 by analyzing audio recordings, around 30 clips, made throughout the entire semester. The students use supplementary listening and speaking materials. The group will be tested at the end of the term to assess any measurable difference between the techniques. Questionnaires will be administered to teachers and students before and after the semester to assess any change in attitude toward avoidance and selectivity methods. Responses to these questionnaires are analyzed and discussed to assess the relative merits of the aforementioned avoidance and selectivity strategies. Implications and recommendations for teacher training are proposed.

Keywords: the second language acquisition, learning languages, selectivity, avoidance

Procedia PDF Downloads 271
877 The Strategies and Mediating Processes of Learning the Inflectional Morphology in English: A Case Study for Taiwanese English Learners

Authors: Hsiu-Ling Hsu, En-Minh (John) Lan

Abstract:

Pronunciation has received more and more attention from language researchers and teachers because it is important for effective, or even successful, communication. Consistently and correctly producing verbal morphology orally, such as the English regular past tense inflection, has been a big challenge for FL learners. This research aims to explore EFL (English as a foreign language) learners' developmental trajectory for inflectional morphology, that is, the mediating processes and strategies EFL learners use to attain the native-like prosodic structure of inflectional morphemes (e.g., the -ed and -s suffixes), by comparing differences among EFL learners at different English levels. The research adopted a self-repair analysis and the Prosodic Transfer Hypothesis, with its three developmental stages, as the theoretical framework. To answer the research questions, we conducted two experiments, a written grammatical tense test (Experiment 1) and a read-aloud oral production task (Experiment 2), and recruited 30 participants divided into three groups: low, middle, and advanced EFL learners. Experiment 1 was conducted to ensure that participants had learned the rules for forming the English regular past tense, and Experiment 2 was carried out to compare data across the EFL learner groups at different English levels. The EFL learners' self-repair data showed at least four interesting findings. First, low achievers were more sensitive to the plural suffix -s than to the past tense suffix -ed; middle achievers exhibited greater responsiveness to the past tense suffix, while high achievers demonstrated equal sensitivity to both suffixes. Second, two strategies used by EFL learners to produce verbs and nouns with inflectional morphemes were to delete an internal syllable and to divide a four-syllable verb (e.g., 'graduated') into two prosodic structures (e.g., 'gradu' and 'ated', or 'gradua' and 'ted'). Third, true vowel epenthesis was found only in the low achievers. Fourth, fortition (native-like sound) was observed in the low and middle achievers. These findings and the self-repair data disclose mediating processes between the developmental stages and provide insight into how Taiwanese EFL learners attain the adjunction prosodic structures of inflectional morphemes in English.

Keywords: inflectional morphology, prosodic structure, developmental trajectory, strategies and mediating processes, English as a foreign language

Procedia PDF Downloads 56
876 Estimation of Mobility Parameters and Threshold Voltage of an Organic Thin Film Transistor Using an Asymmetric Capacitive Test Structure

Authors: Rajesh Agarwal

Abstract:

Carrier mobility at the organic/insulator interface is essential to the performance of organic thin film transistors (OTFTs). The present work describes the estimation of field-dependent mobility (FDM) parameters and the threshold voltage of an OTFT using a simple, easy-to-fabricate, two-terminal asymmetric capacitive test structure and admittance measurements. Conventionally, transfer characteristics are used to estimate the threshold voltage of an OTFT assuming field-independent mobility (FIDM). Yet this technique breaks down and fails to give accurate results for devices with high contact resistance and field-dependent mobility. In this work, a new technique is presented for the characterization of a long channel organic capacitor (LCOC). The proposed technique enables accurate estimation of the mobility enhancement factor (γ), the threshold voltage (V_th), and the band mobility (µ₀) using capacitance-voltage (C-V) measurements on an OTFT. It also removes the need to fabricate short-channel OTFTs or metal-insulator-metal (MIM) structures for C-V measurements. To understand the behavior of the devices and to ease the analysis, a transmission line compact model is developed. A 2-D numerical simulation was carried out to illustrate the correctness of the model. Results show that the proposed technique estimates the device parameters accurately even in the presence of contact resistance and field-dependent mobility. Pentacene/poly(4-vinylphenol)-based top-contact, bottom-gate OTFTs were fabricated to illustrate the operation and advantages of the proposed technique. A small signal with frequency varying from 1 kHz to 5 kHz and a gate potential ranging from +40 V to -40 V were applied to the devices for the measurements.
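The field-dependent mobility model behind the parameters named above is commonly written mu(Vg) = mu0 * (Vg - Vth)^gamma. As a hedged illustration (fitting synthetic data, not the paper's extraction from admittance measurements), the sketch below recovers mu0 and gamma from mobility-vs-overdrive data.

```python
import numpy as np
from scipy.optimize import curve_fit

def fdm_mobility(v_ov, mu0, gamma):
    """Field-dependent mobility mu = mu0 * (Vg - Vth)^gamma, written in
    terms of the overdrive voltage v_ov = |Vg - Vth|."""
    return mu0 * v_ov ** gamma

# Synthetic mobility-vs-overdrive data (illustrative only, not values
# extracted from the fabricated pentacene devices).
rng = np.random.default_rng(2)
v_ov = np.linspace(2, 40, 25)
mu = 0.02 * v_ov ** 0.4 * (1 + 0.03 * rng.normal(size=v_ov.size))

(mu0, gamma), _ = curve_fit(fdm_mobility, v_ov, mu, p0=(0.01, 0.5))
print(f"band mobility mu0 ~ {mu0:.4f} cm^2/Vs, enhancement factor gamma ~ {gamma:.2f}")
```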

Keywords: capacitance, mobility, organic, thin film transistor

Procedia PDF Downloads 154
875 Airborne CO₂ Lidar Measurements for Atmospheric Carbon and Transport: America (ACT-America) Project and Active Sensing of CO₂ Emissions over Nights, Days, and Seasons 2017-2018 Field Campaigns

Authors: Joel F. Campbell, Bing Lin, Michael Obland, Susan Kooi, Tai-Fang Fan, Byron Meadows, Edward Browell, Wayne Erxleben, Doug McGregor, Jeremy Dobler, Sandip Pal, Christopher O'Dell, Ken Davis

Abstract:

The Active Sensing of CO₂ Emissions over Nights, Days, and Seasons (ASCENDS) CarbonHawk Experiment Simulator (ACES) is a NASA Langley Research Center instrument, funded by NASA's Science Mission Directorate, that seeks to advance technologies critical to measuring atmospheric column carbon dioxide (CO₂) mixing ratios in support of the NASA ASCENDS mission. The ACES instrument, an intensity-modulated continuous-wave (IM-CW) lidar, was designed for high-altitude aircraft operations and can be directly applied to space instrumentation to meet the ASCENDS mission requirements. The ACES design demonstrates advanced technologies critical for developing an airborne simulator and spaceborne instrument with lower platform consumption of size, mass, and power, and with improved performance. The Atmospheric Carbon and Transport – America (ACT-America) project is an Earth Venture Suborbital-2 (EVS-2) mission sponsored by the Earth Science Division of NASA's Science Mission Directorate. A major objective is to enhance knowledge of the sources/sinks and transport of atmospheric CO₂ through the application of remote and in situ airborne measurements of CO₂ and other atmospheric properties on a range of spatial and temporal scales. ACT-America consists of five campaigns to measure regional carbon and evaluate transport under various meteorological conditions in three regions of the continental United States. Regional CO₂ distributions of the lower atmosphere were observed from the C-130 aircraft by the Harris Corp. Multi-Frequency Fiber Laser Lidar (MFLL) and the ACES lidar. The airborne lidars provide unique data that complement the more traditional in situ sensors. This presentation shows the applications of CO₂ lidars in support of these science needs.
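At its core, IM-CW lidar ranging correlates the received intensity waveform against the transmitted modulation. The following toy sketch illustrates that matched-filter step on a synthetic swept-frequency signal; the waveform parameters and noise levels are invented, not ACES or MFLL specifics.

```python
import numpy as np

fs = 1.0e6                                  # sample rate, Hz
t = np.arange(0, 2e-3, 1 / fs)
# Illustrative swept-frequency intensity modulation.
tx = np.sin(2 * np.pi * (50e3 + 25e6 * t) * t)

# Simulated return: attenuated copy delayed by 7 samples (7 us round
# trip at 1 MHz sampling, i.e. ~1.05 km range) plus noise.
rng = np.random.default_rng(3)
rx = 0.3 * np.roll(tx, 7) + 0.05 * rng.normal(size=t.size)

# Matched-filter (cross-correlation) delay estimation, the core of
# IM-CW lidar range processing.
corr = np.correlate(rx, tx, mode="full")
lag = int(np.argmax(corr)) - (t.size - 1)
c = 2.998e8
print(f"delay = {lag / fs * 1e6:.1f} us, range ~ {lag / fs * c / 2:.0f} m")
```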

Keywords: CO₂ measurement, IMCW, CW lidar, laser spectroscopy

Procedia PDF Downloads 148
874 A One-Dimensional Modeling Analysis of the Influence of Swirl and Tumble Coefficient in a Single-Cylinder Research Engine

Authors: Mateus Silva Mendonça, Wender Pereira de Oliveira, Gabriel Heleno de Paula Araújo, Hiago Tenório Teixeira Santana Rocha, Augusto César Teixeira Malaquias, José Guilherme Coelho Baeta

Abstract:

Stricter legislation and greater public demands regarding gas emissions and their effects on the environment and on human health have made the automotive industry reinforce research focused on reducing levels of contamination. This reduction can be achieved through improvements in internal combustion engines that lower both specific fuel consumption and air pollutant emissions. These improvements can be obtained through numerical simulation, a technique that works together with experimental tests. The aim of this paper is to build, with the support of the GT-Suite software, a one-dimensional model of a single-cylinder research engine to analyze the impact of the variation of swirl and tumble coefficients on the performance and air pollutant emissions of the engine. Initially, the discharge coefficient is calculated with the Converge CFD 3D software, given that it is an input parameter in GT-Power. Mesh sensitivity tests are made on the 3D geometry built for this purpose, using the mass flow rate at the valve as a reference. In the one-dimensional simulation, the non-predictive combustion model called Three Pressure Analysis (TPA) is adopted, and data such as the mass trapped in the cylinder, the heat release rate, and the accumulated released energy are calculated, so that validation can be performed by comparing these data with those obtained experimentally. Finally, the swirl and tumble coefficients are introduced in their corresponding objects so that their influence can be observed in comparison with the results obtained previously.
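For orientation, the heat release rate referred to above is conventionally obtained from a first-law (apparent) analysis of the in-cylinder pressure trace. The sketch below shows that calculation on a synthetic motored-like trace, for which dQ/dtheta should be near zero, a common sanity check; it is a generic illustration, not the GT-Power TPA implementation.

```python
import numpy as np

def apparent_heat_release(theta_deg, p_pa, v_m3, gamma=1.35):
    """First-law apparent heat-release rate from a pressure trace:
    dQ/dtheta = gamma/(gamma-1)*p*dV/dtheta + 1/(gamma-1)*V*dp/dtheta.
    """
    dv = np.gradient(v_m3, theta_deg)
    dp = np.gradient(p_pa, theta_deg)
    return gamma / (gamma - 1) * p_pa * dv + 1 / (gamma - 1) * v_m3 * dp

# Illustrative motored-like trace (not engine data): cosine volume for
# a 0.5 L cylinder with compression ratio ~12, and an isentropic
# pressure curve built with the same gamma.
theta = np.linspace(-180, 180, 721)
v_c = 0.5e-3 / 11.0                          # clearance volume
v = v_c + 0.5e-3 * (1 - np.cos(np.radians(theta))) / 2
p = 1.0e5 * (v.max() / v) ** 1.35
q = apparent_heat_release(theta, p, v)
# For a purely isentropic trace, dQ/dtheta is ~0 everywhere.
print(f"max |dQ/dtheta| = {np.abs(q).max():.3e} J/deg")
```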

Keywords: 1D simulation, single-cylinder research engine, swirl coefficient, three pressure analysis, tumble coefficient

Procedia PDF Downloads 91
873 An Extended Domain-Specific Modeling Language for Marine Observatory Relying on Enterprise Architecture

Authors: Charbel Aoun, Loic Lagadec

Abstract:

A Sensor Network (SN) can be considered as operating in two phases: (1) observation/measuring, meaning the accumulation of the gathered data at each sensor node; (2) transferring the collected data to some processing center (e.g., fusion servers) within the SN. An underwater sensor network can therefore be defined as a sensor network deployed underwater that monitors underwater activity. The deployed sensors, such as hydrophones, are responsible for registering underwater activity and transferring it to more advanced components. The process of data exchange between the aforementioned components defines the Marine Observatory (MO) concept, which provides information on ocean state, phenomena, and processes. The first step towards the implementation of this concept is defining the environmental constraints and the required tools and components (marine cables, smart sensors, data fusion servers, etc.). The logical and physical components used in these observatories perform critical functions such as the localization of underwater moving objects. These functions can be orchestrated with other services (e.g., military or civilian reaction). In this paper, we present an extension to our MO meta-model that is used to generate a design tool (ArchiMO). We propose new constraints to be taken into consideration at design time. We illustrate our proposal with an example from the MO domain. Additionally, we generate the corresponding simulation code using our self-developed domain-specific model compiler. On the one hand, this illustrates our approach of relying on an Enterprise Architecture (EA) framework that respects multiple views, the perspectives of stakeholders, and domain specificity. On the other hand, it helps reduce both the complexity and the time spent in the design activity, while preventing design modeling errors when porting this activity into the MO domain. In conclusion, this work aims to demonstrate that the design of complex systems can be improved by using MDE technologies and a domain-specific modeling language with the associated tooling. The major improvement is to provide an early validation step via models and a simulation approach to consolidate the system design.

Keywords: smart sensors, data fusion, distributed fusion architecture, sensor networks, domain specific modeling language, enterprise architecture, underwater moving object, localization, marine observatory, NS-3, IMS

Procedia PDF Downloads 164
872 An Exploratory Study of Vocational High School Students’ Needs in Learning English

Authors: Yi-Hsuan Gloria Lo

Abstract:

The educational objective of vocational high schools (VHSs) is to equip VHS students with practical skills and knowledge that can be applied in the job market. However, with the increasing number of technological universities over the past two decades, the majority of VHS students have chosen to pursue higher education rather than enter the job market. VHS English education has thus confronted a dilemma: should an English for specific purposes (ESP) approach, which aligns with the educational goal of VHS education, be taken, or should an English for general purposes (EGP) approach, which prepares VHS students for advanced studies in universities, be followed? While ESP theorists have proposed that ESP can be taught to secondary learners, little is known about VHS students' perspectives on this ESP-versus-EGP dilemma. Scant research has investigated the different facets of students' needs (necessities, wants, and lacks) for both ESP and EGP in terms of the four language skills and the factors that contribute to any differences. To address the gap in the literature, 100 VHS students responded to statements related to their necessities, wants, and lacks in learning ESP and EGP on a 6-point Likert scale, and six VHS students were interviewed to tap into the reasons behind the different facets of their needs for learning EGP and ESP. The statistical analysis indicates that, at this stage of learning English, the VHS subjects believed that EGP was more necessary than ESP and that EGP was more desirable than ESP; however, they reported being more lacking in ESP than in EGP learning. Regarding EGP, the results show that the VHS subjects rated speaking as their most necessary skill, speaking as the most desirable skill, and writing as the most lacking skill. A significant difference was found between perceived learning necessities and lacks and between perceived wants and lacks; no statistical difference was found between necessities and wants. In the aspect of ESP, the results indicate that the VHS subjects marked reading as their most necessary skill, speaking as the most desirable skill, and writing as the most lacking skill. A significant difference exists between their perceived necessities and lacks and between their wants and lacks; however, there is no statistically significant difference between their perceived necessities and wants. Despite the lack of a significant difference between learning necessities and wants, the qualitative interview data reveal that the reasons for their perceived necessities and wants were different. The findings confirm previous research demonstrating that 'needs' is a multiple and conflicting construct: what VHS students felt most lacking was not necessarily what they believed they should learn or would like to learn, and although no statistical difference was found, different reasons were attributed to their perceived necessities and wants. Both theoretical and practical implications are drawn and discussed for ESP research in general and for teaching ESP in VHSs in particular.

Keywords: vocational high schools (VHSs), English for General Purposes (EGP), English for Specific Purposes (ESP), needs analysis

Procedia PDF Downloads 163
871 Realistic Modeling of the Preclinical Small Animal Using Commercial Software

Authors: Su Chul Han, Seungwoo Park

Abstract:

With the increasing incidence of cancer, the technology and modalities of radiotherapy have advanced, and the importance of preclinical models in cancer research is increasing. Furthermore, small animal dosimetry is an essential part of evaluating the relationship between the absorbed dose in a preclinical small animal and the biological effect in a preclinical study. In this study, we carried out realistic modeling of a preclinical small animal phantom that makes it possible to verify the irradiated dose using commercial software. The small animal phantom was modeled from the 4D digital Moby whole-body mouse phantom. To manipulate the Moby phantom in commercial software (Mimics, Materialise, Leuven, Belgium), we converted it to DICOM CT image files with Matlab, and the two-dimensional CT images were converted to a three-dimensional image, making it possible to segment and crop the CT images in sagittal, coronal, and axial views. The CT images of the small animal were modeled by the following process. Based on the profile line values, thresholding was carried out to make a mask connecting all the regions within the same threshold range. Using this thresholding method, we segmented the images into three parts (bone, body (tissue), lung); to separate neighboring pixels between lung and body (tissue), we used the region-growing function of the Mimics software. We acquired a 3D object by 3D calculation from the segmented images. The generated 3D object was smoothed by a remeshing operation (smoothing factor 0.4, 5 iterations), and the edge mode was selected to perform triangle reduction (tolerance 0.1 mm, edge angle 15 degrees, 5 iterations). The processed 3D object file was converted to an STL file for output on a 3D printer. We then modified the 3D small animal file using 3-matic Research (Materialise, Leuven, Belgium) to make space for radiation dosimetry chips. The resulting realistic small animal phantom had a width of 2.631 cm, a thickness of 2.361 cm, and a length of 10.817 cm. The Mimics software provided efficient 3D object generation and easy conversion to STL files. The development of a small preclinical animal phantom should increase the reliability of absorbed dose verification in small animals for preclinical studies.
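The threshold-plus-region selection workflow described above can be approximated in a few lines of Python; the sketch below uses a random CT-like volume and scipy's connected-component labelling as a crude stand-in for Mimics' thresholding and region-growing tools, and the HU cut-offs are illustrative.

```python
import numpy as np
from scipy import ndimage

# Illustrative CT-like volume in Hounsfield units (random stand-in; the
# study used CT slices converted from the Moby phantom).
rng = np.random.default_rng(4)
ct = rng.normal(0, 30, size=(64, 64, 64))
ct[20:40, 20:40, 20:40] += 700        # a "bone-like" block

# Threshold masks analogous to the bone / tissue / lung split.
bone = ct > 300
tissue = (ct > -500) & (ct <= 300)
lung = ct <= -500

# Region growing is approximated here by keeping only the connected
# component that contains a seed voxel.
labels, n = ndimage.label(bone)
seed = (30, 30, 30)
bone_region = labels == labels[seed]
print(n, bone_region.sum())
```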

Keywords: mimics, preclinical small animal, segmentation, 3D printer

Procedia PDF Downloads 358
870 Treating Voxels as Words: Word-to-Vector Methods for fMRI Meta-Analyses

Authors: Matthew Baucum

Abstract:

With the increasing popularity of fMRI as an experimental method, psychology and neuroscience can greatly benefit from advanced techniques for summarizing and synthesizing large amounts of data from brain imaging studies. One promising avenue is automated meta-analysis, in which natural language processing methods are used to identify the brain regions consistently associated with certain semantic concepts (e.g., "social", "reward") across large corpora of studies. This study builds on this approach by demonstrating how, in fMRI meta-analyses, individual voxels can be treated as vectors in a semantic space and evaluated for their "proximity" to terms of interest. In this technique, a low-dimensional semantic space is built from brain imaging study texts, allowing words in each text to be represented as vectors (where words that frequently appear together are near each other in the semantic space). Consequently, each voxel in a brain mask can be represented as a normalized vector sum of all of the words in the studies that showed activation in that voxel. The entire brain mask can then be visualized in terms of each voxel's proximity to a given term of interest (e.g., "vision", "decision making") or collection of terms (e.g., "theory of mind", "social", "agent"), as measured by the cosine similarity between the voxel's vector and the term vector (or the average of multiple term vectors). Analysis can also proceed in the opposite direction, allowing word cloud visualizations of the nearest semantic neighbors for a given brain region. This approach allows for continuous, fine-grained metrics of voxel-term associations, and relies on state-of-the-art "open vocabulary" methods that go beyond mere word counts. An analysis of over 11,000 neuroimaging studies from an existing meta-analytic fMRI database demonstrates that this technique can be used to recover known neural bases for multiple psychological functions, suggesting this method's utility for efficient, high-level meta-analyses of localized brain function. While automated text analytic methods are no replacement for deliberate, manual meta-analyses, they seem to show promise for the efficient aggregation of large bodies of scientific knowledge, at least on a relatively general level.
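The voxel-as-vector construction is easy to state in code: the sketch below builds a toy voxel vector as the normalized sum of word vectors from the studies activating that voxel and ranks terms by cosine similarity. The 50-dimensional random vectors stand in for embeddings trained on real study texts.

```python
import numpy as np

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)

# Toy semantic space and vocabulary (random stand-ins for trained embeddings).
rng = np.random.default_rng(5)
vocab = ["social", "reward", "vision", "decision"]
word_vec = {w: rng.normal(size=50) for w in vocab}

# Each voxel's vector is the normalized sum of the word vectors of all
# studies reporting activation at that voxel.
studies_at_voxel = [["social", "reward"], ["social", "decision"]]
v = np.zeros(50)
for words in studies_at_voxel:
    v += sum(word_vec[w] for w in words)
v /= np.linalg.norm(v)

# Rank terms by proximity to this voxel.
for w in sorted(vocab, key=lambda w: -cosine(v, word_vec[w])):
    print(w, round(cosine(v, word_vec[w]), 3))
```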

Keywords: FMRI, machine learning, meta-analysis, text analysis

Procedia PDF Downloads 436
869 Numerical Investigation into Capture Efficiency of Fibrous Filters

Authors: Jayotpaul Chaudhuri, Lutz Goedeke, Torsten Hallenga, Peter Ehrhard

Abstract:

Purification of gases from aerosols or airborne particles via filters is widely applied in industry and in our daily lives. This separation, especially in the micron and submicron size range, is a necessary step to protect the environment and human health. Fibrous filters are often employed due to their low cost and high efficiency. For designing any filter, the two most important performance parameters are capture efficiency and pressure drop. Since the capture efficiency is directly proportional to the pressure drop, which leads to higher operating costs, a detailed investigation of the separation mechanism is required to optimize the filter design, i.e., to achieve a high capture efficiency with a lower pressure drop. Therefore, a two-dimensional flow simulation around a single fiber, using Ansys CFX and Matlab, is employed to gain insight into the separation process. Instead of simulating a solid fiber, the present Ansys CFX model uses a fictitious domain approach, implementing a momentum loss model for the fiber. This approach was chosen to avoid creating a new mesh for each fiber size, thereby saving the time and effort of re-meshing. In a first step, only the flow of the continuous fluid around the fiber is simulated in Ansys CFX; the flow field data are then extracted and imported into Matlab, and the particle trajectory is calculated in a Matlab routine. This calculation is a Lagrangian, one-way coupled approach with all relevant forces acting on the particle. The key parameters for the simulation in both Ansys CFX and Matlab are the porosity ε, the diameter ratio of particle and fiber D, the fluid Reynolds number Re, the particle Reynolds number Rep, the Stokes number St, the Froude number Fr, and the density ratio of fluid and particle ρf/ρp. The simulation results were then compared to single fiber theory from the literature.
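A minimal version of such a Lagrangian, one-way coupled trajectory calculation is sketched below in Python, with Stokes drag only (the other BBO-equation forces are dropped) and an analytic potential flow standing in for the exported CFX field; the geometry, particle properties, and time step are illustrative.

```python
import numpy as np

def track_particle(x0, v0, fluid_vel, tau_p, fiber_radius, dt=1e-6, steps=6000):
    """One-way coupled particle with Stokes drag, dv/dt = (u_f(x) - v)/tau_p,
    integrated with a semi-implicit Euler step (stable even if dt > tau_p).
    Added mass, Basset history, and gravity are neglected in this sketch.
    """
    x, v = np.asarray(x0, float).copy(), np.asarray(v0, float).copy()
    for _ in range(steps):
        if np.hypot(*x) <= fiber_radius:
            return x, True           # particle hits the fiber: captured
        k = dt / tau_p
        v = (v + k * fluid_vel(x)) / (1 + k)
        x = x + v * dt
    return x, False

# Illustrative 2D potential flow around a cylinder (stand-in for the
# CFX flow field around the fiber).
a, u_inf = 50e-6, 0.1
def fluid_vel(x):
    z = complex(x[0], x[1])
    if abs(z) < a:
        return np.zeros(2)
    w = u_inf * (1 - (a / z) ** 2)   # dW/dz = u - i*v
    return np.array([w.real, -w.imag])

# Particle response time tau_p = rho_p * d_p^2 / (18 * mu_f),
# here a 1 um water droplet in air.
tau_p = 1000.0 * (1e-6) ** 2 / (18 * 1.8e-5)
end, captured = track_particle([-5 * a, 0.3 * a], [u_inf, 0.0], fluid_vel, tau_p, a)
print("captured" if captured else f"escaped, final position {end}")
```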

Keywords: BBO-equation, capture efficiency, CFX, Matlab, fibrous filter, particle trajectory

Procedia PDF Downloads 194
868 Identification and Management of Septic Arthritis of the Untouched Glenohumeral Joint

Authors: Sumit Kanwar, Manisha Chand, Gregory Gilot

Abstract:

Background: Septic arthritis of the shoulder has infrequently been discussed, and infection of the untouched shoulder has not heretofore been described in focus. We present four patients with glenohumeral septic arthritis. Methods: Case 1: a 59-year-old male with left shoulder pain in the anterior, posterior, and superior aspects. Case 2: a 60-year-old male with fever, chills, and generalized muscle aches. Case 3: a 70-year-old male with right shoulder pain about the anterior and posterior aspects. Case 4: a 55-year-old male with global right shoulder pain, swelling, and limited range of motion (ROM). Results: In case 1, the left shoulder was affected. On physical examination, swelling was notable, and there was global tenderness with painful ROM. Lab values indicated an erythrocyte sedimentation rate (ESR) of 96 and a C-reactive protein (CRP) of 304.30. Imaging studies were performed, and MRI indicated a high suspicion for an abscess with osteomyelitis of the humeral head. In our second case, the left arm was affected; the patient had swelling, global tenderness, and painful ROM, the ESR was 38 and the CRP was 14.9, and X-ray showed severe arthritis. Case 3 differed in that the right arm was affected; again, global tenderness and painful ROM were observed, the ESR was 94 and the CRP was 10.6, and X-ray displayed an eroded glenoid space. In our fourth case, the right shoulder was affected, with global tenderness and painful, limited ROM; the ESR was 108 and the CRP was 2.4, and the X-ray was non-significant. Discussion: Monoarticular septic arthritis of the virgin glenohumeral joint is seldom diagnosed in clinical practice. Common denominators include elevated ESR, painful and limited ROM, and involvement of the dominant arm. The male population is more frequently affected, with an average age of 57. Septic arthritis is managed with incision and drainage or needle aspiration of synovial fluid, supplemented with 3-6 weeks of intravenous antibiotics. Due to better irrigation and joint visualization, arthroscopy is preferred; open surgical drainage may be indicated if the above methods fail. Conclusion: If a middle-aged male presents with vague anterior or posterior shoulder pain, elevated inflammatory markers, and a low-grade fever, an X-ray should be performed. If this displays degenerative joint disease, a complete further workup with advanced imaging, such as MRI, CT, or ultrasound, should follow. If these imaging modalities display anterior joint space effusion with soft tissue involvement, septic arthritis of the untouched glenohumeral joint can be suspected, and surgery is indicated.

Keywords: glenohumeral joint, identification, infection, septic arthritis, shoulder

Procedia PDF Downloads 412
867 Vibration Analysis of FGM Sandwich Panel with Cut-Outs Using Refined Higher-Order Shear Deformation Theory (HSDT) Based on Isogeometric Analysis

Authors: Lokanath Barik, Abinash Kumar Swain

Abstract:

This paper presents a vibration analysis of an FGM sandwich structure with a complex profile, governed by a refined higher-order shear deformation theory (RHSDT), using isogeometric analysis (IGA). Functionally graded sandwich plates have a wide range of applications in the aerospace, defence, and aircraft industries due to their ability to distribute material functions to influence the thermo-mechanical properties as desired. In practical applications, these structures generally have intricate profiles, and their response to loads is significantly affected by cut-outs. IGA is primarily a NURBS-based technique that is effective in solving higher-order differential equations due to its inherent C1 continuity imposition in the solution space for a single patch. Complex structures generally require multiple patches to represent the geometry accurately, and hence there is a loss of continuity at adjoining patch junctions. Patch coupling is therefore needed to maintain continuity requirements throughout the domain. In this work, a novel strong coupling approach is provided that generates a well-defined NURBS-based model while achieving continuity. The methodology is validated by comparing the free vibration analysis of sandwich plates with the existing literature. The results are in good agreement with the analytical solution for different plate configurations and power-law indexes. Numerical examples of rectangular and annular plates are discussed with various boundary conditions. Additionally, parametric studies are provided by varying the aspect ratio and porosity ratio and examining their influence on the natural frequency of the plate.
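For context, the power-law index mentioned in the validation typically enters through the standard through-thickness gradation rule P(z) = (Pc - Pm) * (z/h + 1/2)^n + Pm; the sketch below evaluates it for a few indexes with illustrative alumina/aluminium values, not the study's material data.

```python
import numpy as np

def fgm_property(z, h, p_ceramic, p_metal, n):
    """Power-law through-thickness gradation of an FGM plate:
    P(z) = (Pc - Pm) * (z/h + 1/2)^n + Pm, for -h/2 <= z <= h/2.
    """
    return (p_ceramic - p_metal) * (z / h + 0.5) ** n + p_metal

# Young's modulus across a 10 mm plate for several power-law indexes
# (alumina ~380 GPa / aluminium ~70 GPa, illustrative values).
z = np.linspace(-0.005, 0.005, 5)
for n in (0.5, 1, 5):
    print(n, np.round(fgm_property(z, 0.01, 380e9, 70e9, n) / 1e9, 1), "GPa")
```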

Keywords: vibration analysis, FGM sandwich structure, multipatch geometry, patch coupling, IGA

Procedia PDF Downloads 63
866 Chinese Students’ Use of Corpus Tools in an English for Academic Purposes Writing Course: Influence on Learning Behaviour, Performance Outcomes and Perceptions

Authors: Jingwen Ou

Abstract:

Writing for academic purposes in a second or foreign language poses a significant challenge for non-native speakers, particularly at the tertiary level, where L2 students' English academic writing is often hindered by difficulties with academic discourse, including vocabulary, academic register, and organization. The past two decades have witnessed a rise in the popularity of the data-driven learning (DDL) approach in EAP writing instruction. In light of this trend, this study aims to improve the integration of DDL into English for academic purposes (EAP) writing classrooms by investigating Chinese college students' perceptions of corpus tools for improving EAP writing and by exploring their corpus consultation behaviours during training, thereby providing insights into corpus-assisted EAP instruction for DDL practitioners. The study has three main foci: 1) the influence of corpus tools on learning behaviours, 2) the influence of corpus tools on students' academic writing performance outcomes, and 3) students' perceptions of such tools and potential changes in those perceptions. Three corpus tools, CQPWeb, Sketch Engine, and LancsBox X, are selected for investigation because empirical research on patterns of learners' engagement with a combination of multiple corpora is scarce. The research adopts a pre-test/post-test design to evaluate students' academic writing performance before and after the intervention. Twenty participants will be divided into an intervention group and a non-intervention group, and three corpus training workshops will be delivered at the beginning, middle, and end of a semester. An online survey and three separate focus-group interviews are designed to investigate students' perceptions of the use of corpus tools for improving academic writing skills, particularly the rhetorical functions in different essay sections. Insights from students' consultation sessions indicated difficulties with DDL practice, including insufficient time to complete all tasks, struggles with the technical set-up, unfamiliarity with the DDL approach, and difficulty with some advanced corpus functions. Findings from the main study aim to provide pedagogical insights and training resources for EAP practitioners and learners.
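
The kind of corpus consultation the workshops train, a keyword-in-context (concordance) query, can be sketched in a few lines. The study's tools (CQPWeb, Sketch Engine, LancsBox X) are web or desktop applications, so the snippet below uses NLTK purely as an accessible stand-in, with an invented mini-corpus:

```python
from nltk.text import Text

# Hypothetical mini-corpus; a real DDL session would query millions of
# words of academic text rather than a toy sample.
sample = (
    "This study investigates academic writing. The results suggest that "
    "corpus tools support academic vocabulary learning. However, the "
    "results indicate that training is needed before learners benefit."
).split()

# KWIC view: every occurrence of the node word with its left/right context,
# the display format learners inspect to induce usage patterns.
Text(sample).concordance("results", width=60)
```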

Keywords: corpus linguistics, data-driven learning, English for academic purposes, tertiary education in China

Procedia PDF Downloads 44
865 Impact of Intelligent Transportation System on Planning, Operation and Safety of Urban Corridor

Authors: Sourabh Jain, S. S. Jain

Abstract:

Intelligent transportation systems (ITS) apply technology to develop user-friendly transportation systems that extend the safety and efficiency of urban transportation in developing countries. These systems involve vehicles, drivers, passengers, road operators, and managers of transport services, all interacting with each other and the surroundings to boost the security and capacity of road systems. The goal of urban corridor management using ITS is to improve mobility, safety, and the productivity of the transportation system within the available facilities through the integrated application of advanced monitoring, communications, computing, display, and control technologies, both in the vehicle and on the road. ITS is a product of the revolution in information and communications technologies that is the hallmark of the digital age; its basic technology is oriented along three main directions: communications, information, and integration, with information acquisition (collection), processing, integration, and sorting as its basic activities. In this paper, an attempt has been made to interpret and evaluate the performance of a 27.4 km study corridor with eight intersections and four flyovers; the corridor consists of six-lane and eight-lane divided road sections. Two categories of data were collected: traffic data (traffic volume, spot speed, delay) and road characteristics data (number of lanes, lane width, bus stops, mid-block sections, intersections, flyovers). The instruments used for data collection were a video camera, stopwatch, radar gun, and mobile GPS (GPS Tracker Lite). The analysis identified peak and off-peak hours, congestion and level of service (LOS) at mid-block sections, and delay, followed by plotting of speed contours. The paper proposes urban corridor management strategies, based on sensors integrated into both vehicles and roads, that are efficiently executable, cost-effective, and familiar to road users. Such strategies can reduce congestion, fuel consumption, and pollution, and thus provide comfort, safety, and efficiency to users.
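
As an illustration of the LOS interpretation step, a minimal sketch follows; the volume-to-capacity thresholds and the observations are hypothetical placeholders, not the study's data or the exact HCM bands:

```python
# Sketch: classify level of service (LOS) at mid-block sections from
# observed volumes and capacities using the volume-to-capacity (v/c)
# ratio. Band limits below are illustrative; real HCM thresholds vary
# by facility type.

def los_from_vc(volume_pcu_hr, capacity_pcu_hr):
    vc = volume_pcu_hr / capacity_pcu_hr
    bands = [(0.30, "A"), (0.50, "B"), (0.70, "C"), (0.85, "D"), (1.00, "E")]
    for limit, grade in bands:
        if vc <= limit:
            return vc, grade
    return vc, "F"   # oversaturated flow

# Hypothetical peak-hour observations (volume, capacity) in PCU/hr
sections = {"midblock-1": (3200, 5400), "midblock-2": (5100, 5400)}
for name, (vol, cap) in sections.items():
    vc, grade = los_from_vc(vol, cap)
    print(f"{name}: v/c = {vc:.2f} -> LOS {grade}")
```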

Keywords: ITS strategies, congestion, planning, mobility, safety

Procedia PDF Downloads 171
864 Conduction Transfer Functions for the Calculation of Heat Demands in Heavyweight Facade Systems

Authors: Mergim Gasia, Bojan Milovanovica, Sanjin Gumbarevic

Abstract:

Better energy performance of the building envelope is one of the most important aspects of energy savings if the goals set by the European Union are to be achieved. Dynamic heat transfer simulations are used to calculate building energy consumption because they give more realistic energy demands than stationary calculations, which do not take the building's thermal mass into account. The software used for these dynamic simulations relies on analytical models, since fully numerical models are computationally impractical for longer simulation periods. The analytical models used in this research fall into the category of conduction transfer functions (CTFs). The two methods for calculating CTFs covered by this research are the Laplace method and the State-Space method. The literature review showed that the main disadvantage of these methods is that they are inadequate for heavyweight façade elements and for the shorter time steps used in the calculation. The algorithms for both the Laplace and State-Space methods are implemented in Mathematica, and the results are compared with those from EnergyPlus and TRNSYS, since these programs use similar algorithms to calculate a building's energy demand. This research assesses the efficiency of the Laplace and State-Space methods in calculating the energy demand of heavyweight building elements at shorter sampling times, and it also provides the means to improve the algorithms used by these methods. The finite difference method (FDM) is used as the reference for the boundary heat flux density. Even though dynamic heat transfer simulations are superior to calculations based on stationary boundary conditions, they have their limitations and will give unsatisfactory results if not used properly.
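
For readers unfamiliar with the State-Space method, the sketch below shows its core mechanics on a single homogeneous layer: finite differences in space yield dT/dt = A T + B u, which is then discretized exactly in time with the matrix exponential. The material data and sampling time are illustrative assumptions, not the paper's façade build-up:

```python
import numpy as np
from scipy.linalg import expm

# Minimal sketch of the State-Space method for 1D wall conduction.
# Space is discretized by finite differences; the ODE system
# dT/dt = A T + B u (u = surface temperatures) is then discretized
# exactly in time via the matrix exponential.

n = 20                                     # interior nodes
L, k, rho, c = 0.30, 1.6, 2400.0, 880.0    # concrete-like layer (assumed)
dx = L / (n + 1)
alpha = k / (rho * c)                      # thermal diffusivity
a = alpha / dx**2

# Tridiagonal conduction matrix; B couples end nodes to the surfaces.
A = a * (np.diag(-2 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1))
B = np.zeros((n, 2))
B[0, 0] = a                                # inside surface temperature
B[-1, 1] = a                               # outside surface temperature

dt = 300.0                                 # 5-minute sampling time
Ad = expm(A * dt)                          # exact discrete transition matrix
Bd = np.linalg.solve(A, (Ad - np.eye(n)) @ B)

T = np.full(n, 20.0)                       # initial uniform temperature
u = np.array([20.0, 0.0])                  # step change on the outside
for _ in range(48):                        # march 4 hours
    T = Ad @ T + Bd @ u

q_in = k * (T[0] - u[0]) / dx              # inner-surface conduction flux
print(f"inner-surface heat flux after 4 h: {q_in:.2f} W/m2")
```

With a heavyweight layer like this, the inner-surface flux barely responds after four hours, which is exactly the thermal-mass effect that stationary calculations miss.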

Keywords: Laplace method, state-space method, conduction transfer functions, finite difference method

Procedia PDF Downloads 118
863 Ocean Planner: A Web-Based Decision Aid to Design Measures to Best Mitigate Underwater Noise

Authors: Thomas Folegot, Arnaud Levaufre, Léna Bourven, Nicolas Kermagoret, Alexis Caillard, Roger Gallou

Abstract:

Concern about the negative impacts of anthropogenic noise on ocean ecosystems has increased over recent decades, and with it the willingness to regulate noise-generating activities, of which shipping is one of the most significant. Dealing with ship noise requires knowledge not only of the noise from individual ships but also of how that noise is distributed in time and space within the habitats of concern. Marine mammals, as well as fish, sea turtles, larvae, and invertebrates, depend largely on sound to hunt, feed, avoid predators, socialize and communicate during reproduction, and defend territory. In the marine environment, sight is only useful up to a few tens of meters, whereas sound can propagate over hundreds or even thousands of kilometers. Directive 2008/56/EC of the European Parliament and of the Council of June 17, 2008, known as the Marine Strategy Framework Directive (MSFD), requires the Member States of the European Union to take the necessary measures to reduce the impacts of maritime activities in order to achieve and maintain good environmental status of the marine environment. Ocean Planner is a web-based platform that provides regulators, managers of protected or sensitive areas, and other stakeholders with a decision-support tool for anticipating and quantifying the effectiveness of management measures in reducing or modifying the distribution of underwater noise, in response to Descriptor 11 of the MSFD and to the Marine Spatial Planning Directive. Built on the operational sound modelling tool Quonops Online Service, Ocean Planner allows the user, via an intuitive geographical interface, to define management measures at local (Marine Protected Area, Natura 2000 site, harbor, etc.) or global (Particularly Sensitive Sea Area) scales, seasonal (regulation over a period of time) or permanent, and partial (focused on some maritime activities) or complete (all maritime activities). Speed limits, exclusion areas, traffic separation schemes (TSS), and vessel sound level limits are among the measures supported by the tool. Ocean Planner helps decide which measures are most effective for maintaining or restoring the biodiversity and functioning of coastal seabed ecosystems, keeping sensitive areas in a good state of conservation, and maintaining or restoring populations of marine species.
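
To give a feel for the kind of quantity such a tool predicts, the sketch below estimates how a speed limit changes received ship-noise levels under simple spherical spreading; the source levels, the per-knot reduction, and the absorption coefficient are all illustrative assumptions, not the Quonops model:

```python
import numpy as np

# Back-of-the-envelope sketch of a speed-limit measure's effect on
# received ship noise. All numbers are assumed for illustration.

def received_level(source_db, r_m, absorption_db_per_km=0.04):
    # spherical spreading (20 log10 r) plus linear absorption
    return source_db - 20 * np.log10(r_m) - absorption_db_per_km * r_m / 1000.0

sl_15kn = 175.0               # assumed broadband source level at 15 kn, dB re 1 uPa @ 1 m
sl_10kn = sl_15kn - 5 * 1.0   # assume ~1 dB reduction per knot slowed

for r in (1_000, 10_000, 50_000):
    print(f"r = {r:>6} m: {received_level(sl_15kn, r):.1f} dB -> "
          f"{received_level(sl_10kn, r):.1f} dB with speed limit")
```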

Keywords: underwater noise, marine biodiversity, marine spatial planning, mitigation measures, prediction

Procedia PDF Downloads 113
862 A Simple and Empirical Refraction Correction Method for UAV-Based Shallow-Water Photogrammetry

Authors: I GD Yudha Partama, A. Kanno, Y. Akamatsu, R. Inui, M. Goto, M. Sekine

Abstract:

Aerial photogrammetry of shallow-water bottoms has the potential to be an efficient, high-resolution survey technique for shallow-water topography, thanks to the advent of convenient UAVs and automatic image processing techniques (Structure-from-Motion (SfM) and Multi-View Stereo (MVS)). However, it suffers from systematic overestimation of the bottom elevation due to light refraction at the air-water interface. In this study, we present an empirical method, usable with common software, to correct for the effect of refraction after the usual SfM-MVS processing. The method uses the empirical relation between measured true depth and estimated apparent depth to generate a correction factor, which is then used to convert apparent water depth into refraction-corrected (real-scale) water depth. To examine its effectiveness, we applied the method to two river sites and compared the RMS errors in the corrected bottom elevations with those obtained by three existing methods. The results show that the presented method is more effective than two of them: the method that applies no correction factor and the method that uses the refractive index of water (1.34) as the correction factor. Compared with the remaining method, which adds an offset term after calculating the correction factor, the presented method performs better at Site 2 and worse at Site 1. However, we found this linear-regression approach to be unstable when the training data used for calibration are limited, and, according to our numerical experiment, it suffers from a large negative bias in the correction factor when the estimated apparent water depth is affected by noise. Overall, the accuracy of a refraction correction method depends on factors such as location, image acquisition, and GPS measurement conditions, and the most effective method can be selected by statistical model selection (e.g., leave-one-out cross-validation).
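
A minimal sketch of the core regression step follows, with synthetic depth pairs standing in for field calibration data; it fits the multiplicative correction factor by least squares through the origin and compares it against the fixed 1.34 factor:

```python
import numpy as np

# Sketch of the empirical refraction correction: fit a multiplicative
# factor relating apparent (SfM-MVS) depth to measured true depth, then
# compare with the fixed refractive-index factor 1.34. Depth values are
# synthetic stand-ins for field calibration data.

apparent = np.array([0.21, 0.35, 0.48, 0.60, 0.74, 0.90])   # m
true     = np.array([0.29, 0.47, 0.66, 0.81, 1.00, 1.22])   # m

# Least-squares slope through the origin: true ~ f * apparent
f_fit = (apparent @ true) / (apparent @ apparent)

corrected_fit  = f_fit * apparent
corrected_1p34 = 1.34 * apparent

rmse = lambda z: np.sqrt(np.mean((z - true) ** 2))
print(f"fitted correction factor: {f_fit:.3f}")
print(f"RMSE fitted: {rmse(corrected_fit):.3f} m, "
      f"RMSE 1.34: {rmse(corrected_1p34):.3f} m")
```

With only a handful of calibration pairs, as here, the fitted factor is sensitive to noise in the apparent depths, which is the instability the abstract reports.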

Keywords: bottom elevation, MVS, river, SfM

Procedia PDF Downloads 295
861 Evaluation of the Impact of Community-Based Disaster Risk Management Applied in Landslide-Prone Areas: Reference to Badulla District

Authors: S. B. D. Samarasinghe, Malini Herath

Abstract:

Participatory planning is an important process for making decisions and choosing the best alternatives for community welfare and the development of society, built on interaction between the community and professionals. People's involvement is considered the key element of participatory planning. Participatory planning is now used in many fields: not only spatial planning but also disaster management, poverty alleviation, housing, and more. Disaster management was traditionally practised as a top-down approach, and the issues this raised led to its conversion to a bottom-up approach. Several approaches can aid disaster management; Community-Based Disaster Risk Management (CBDRM) is a participatory approach to risk management that has often been applied successfully in other disaster-prone countries. In the local context, CBDRM has been applied to prevent disease outbreaks as well as disasters such as landslides, tsunamis, and floods, and three years ago Sri Lanka initiated the CBDRM approach to minimize landslide vulnerability. Hence, this study focuses on the impact of CBDRM approaches on landslide hazards and identifies their successes and failures from the perspectives of both the implementing parties and the community. The research is based on a qualitative method combined with a descriptive research approach. An evaluation framework was prepared via a literature review, and case studies were selected from landslide CBDRM programmes implemented by the Disaster Management Center and the National Building Research Organization in Badulla; their processes were then evaluated. Data were collected through interviews and informal discussions, and the responses were quantified using the Relative Effectiveness Index; the resulting values were used to rank the programmes' effectiveness and identify their successes, failures, and influencing factors. The results show several failures on the part of both the implementing parties and the community, and overcoming these factors can pave the way for better conduct of future CBDRM programmes.
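
The ranking step can be sketched as follows, assuming the Relative Effectiveness Index follows the usual relative-index formula (sum of Likert weights over the maximum attainable sum); the programme names and scores below are hypothetical:

```python
# Sketch of a Relative Effectiveness Index (REI) from Likert-scale
# responses: REI = sum(weights) / (max_weight * n_respondents).
# Programme names and scores are invented for illustration.

def rei(responses, max_weight=5):
    return sum(responses) / (max_weight * len(responses))

programs = {
    "awareness sessions":  [5, 4, 4, 5, 3, 4],
    "evacuation drills":   [3, 4, 2, 3, 3, 4],
    "early-warning setup": [5, 5, 4, 4, 5, 4],
}

# Rank programmes from most to least effective by REI.
for name, scores in sorted(programs.items(), key=lambda kv: -rei(kv[1])):
    print(f"{name:>20}: REI = {rei(scores):.2f}")
```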

Keywords: community-based disaster risk management, disaster management, preparedness, landslide

Procedia PDF Downloads 131
860 Mathematics as the Foundation for the STEM Disciplines: Different Pedagogical Strategies Addressed

Authors: Marion G. Ben-Jacob, David Wang

Abstract:

There is a mathematics requirement for entry-level college and university students, especially those who plan to study STEM (science, technology, engineering, and mathematics). Most of them take College Algebra, and to continue their studies, they need to succeed in this course. Different pedagogical strategies are employed to promote student success. There is, of course, the traditional method of teaching: lecture, examples, and problems for students to solve. The Emporium Model, another pedagogical approach, replaces traditional lectures with a learning resource center featuring interactive software and on-demand personalized assistance. This presentation will compare these two pedagogical methods and report the results of a study comparing them. Math is the foundation for science, technology, and engineering. In STEM, it is generally used to find patterns in data; these patterns can be used to test relationships, draw general conclusions, and model the real world. Solutions to STEM problems are analyzed, reasoned about, and interpreted using mathematical abilities in an assortment of real-world scenarios. This presentation will examine specific examples of how math is used in the different STEM disciplines. Math becomes practical in science when it is used to model natural and artificial experiments, identify a problem, and develop a solution for it. As we analyze data, we use math to find the statistical correlation between a cause and its effect; professionals who rely on it in this way include data scientists, biologists, and geologists. Without math, most technology would not be possible: math is the basis of binary representation, and without programs the hardware alone is of little use, while addition, subtraction, multiplication, and division appear in almost every program written. Mathematical algorithms are inherent in software as well. Mechanical engineers analyze scientific data to design robots by applying math and using software. Electrical engineers use math to help design and test electrical equipment, create computer simulations, and design new products. Chemical engineers often use mathematics in the lab, where advanced computer software aids their research and production processes by modelling theoretical synthesis techniques and the properties of chemical compounds. Mastery of mathematics is crucial for success in the STEM disciplines, and pedagogical research on formative strategies and the necessary topics to cover is essential.

Keywords: emporium model, mathematics, pedagogy, STEM

Procedia PDF Downloads 61