Search results for: ground motion modeling
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6912


672 Argos-Linked Fastloc GPS Reveals the Resting Activity of Migrating Sea Turtles

Authors: Gail Schofield, Antoine M. Dujon, Nicole Esteban, Rebecca M. Lester, Graeme C. Hays

Abstract:

Variation in diel movement patterns during migration provides information on the strategies used by animals to maximize energy efficiency and ensure the successful completion of migration. For instance, many flying and terrestrial species stop to rest and refuel at regular intervals along the migratory route, or at transitory ‘stopover’ sites, depending on resource availability. However, in cases where stopping is not possible (such as over or through deep, open oceans, or over deserts and mountains), non-stop travel is required, with animals needing to develop strategies to rest while actively traveling. Recent advances in biologging technologies have identified mid-flight microsleeps by swifts in Africa during the 10-month non-breeding period, and the use of lateralized sleep behavior by orcas and bottlenose dolphins during migration. Here, highly accurate locations obtained by Argos-linked Fastloc-GPS transmitters of adult green (n=8 turtles, 9487 locations) and loggerhead (n=46 turtles, 47,588 locations) sea turtles migrating around a thousand kilometers (over several weeks) from breeding to foraging grounds across the Indian and Mediterranean oceans were used to identify potential resting strategies. Stopovers were only documented for seven turtles, lasting up to 6 days; thus, this strategy was not commonly used, possibly due to the lack of potential ‘shallow’ (< 100 m seabed depth) sites along routes. However, observations of the day versus night speed of travel indicated that turtles might use other mechanisms to rest. For instance, turtles travelled on average 31% slower at night compared to day during oceanic crossings. Slower travel speeds at night might be explained by turtles swimming in a less direct line at night and/or deeper dives reducing their forward motion, as indicated through studies using Argos-linked transmitters and accelerometers. Furthermore, within the first 24 h of entering waters shallower than 100 m towards the end of migration (the depth at which sea turtles can swim and rest on the seabed), some individuals travelled 72% slower at night, repeating this behavior intermittently (each time for a one-night duration at 3–6-day intervals) until reaching the foraging grounds. If the turtles were, in fact, resting on the seabed at this point, they could be inactive for up to 8 hours, facilitating protracted periods of rest after several weeks of constant swimming. Turtles might not rest every night once within these shallower depths, due to the time constraints of reaching foraging grounds and restoring depleted energetic reserves (as sea turtles are capital breeders, they tend not to feed for several months during migration to and from the breeding grounds and while breeding). In conclusion, access to data-rich, highly accurate Argos-linked Fastloc-GPS provided information about differences in day versus night activity at different stages of migration, allowing us, for the first time, to compare the strategies used by a marine vertebrate with those of terrestrial and flying species. However, the question of what resting strategies are used by individuals that remain in oceanic waters to forage remains open; answering it will require combinations of highly accurate Argos-linked Fastloc-GPS transmitters and accelerometry or time-depth recorders deployed on sufficient numbers of individuals.
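
For readers who want to see how such a day-versus-night comparison can be derived from Fastloc-GPS fixes, the short sketch below computes travel speeds between consecutive locations with pandas and contrasts daytime and nighttime means. The column names, fixes, and the fixed 06:00-18:00 daytime window are invented for illustration; the study's actual processing pipeline is not reproduced here.

```python
# Minimal sketch: day vs. night travel speed from GPS fixes (hypothetical data layout).
import numpy as np
import pandas as pd

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points in kilometres."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * np.arcsin(np.sqrt(a))

# Assumed columns: turtle id, timestamp, latitude, longitude.
fixes = pd.DataFrame({
    "turtle": ["T1"] * 4,
    "time": pd.to_datetime(["2021-07-01 10:00", "2021-07-01 14:00",
                            "2021-07-01 22:00", "2021-07-02 02:00"]),
    "lat": [35.10, 35.20, 35.28, 35.32],
    "lon": [25.00, 25.15, 25.25, 25.30],
})

fixes = fixes.sort_values(["turtle", "time"])
prev = fixes.groupby("turtle").shift(1)
dist_km = haversine_km(prev["lat"], prev["lon"], fixes["lat"], fixes["lon"])
hours = (fixes["time"] - prev["time"]).dt.total_seconds() / 3600.0
fixes["speed_kmh"] = dist_km / hours

# Crude day/night split (a real analysis would use local sunrise/sunset times).
fixes["period"] = np.where(fixes["time"].dt.hour.between(6, 18), "day", "night")

means = fixes.dropna().groupby("period")["speed_kmh"].mean()
slowdown = 100 * (1 - means.get("night", np.nan) / means.get("day", np.nan))
print(means)
print(f"Night travel is ~{slowdown:.0f}% slower than day in this toy example.")
```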

Keywords: argos-linked fastloc GPS, data loggers, migration, resting strategy, telemetry

Procedia PDF Downloads 154
671 TRAC: A New Software-Based Track Circuit for Traffic Regulation

Authors: Jérôme de Reffye, Marc Antoni

Abstract:

Following the development of the ERTMS system, we think it is interesting to develop another software-based track circuit system which would fit secondary railway lines, with a straightforward implementation and a low sensitivity to rail-wheel impedance variations. We called this track circuit 'Track Railway by Automatic Circuits.' To be internationally implemented, this system must not have any mechanical component and must be compatible with existing track circuit systems. For example, the system is independent from the French 'Joints Isolants Collés' that isolate track sections from one another, and it is equally independent from the axle counters used in Germany ('Counting Axles', in French 'compteur d'essieux'). This track circuit is fully interoperable. Such universality is obtained by replacing the mechanical train detection system with a space-time filtering of train position. The various track sections are defined by the frequency of a continuous signal. The set of frequencies related to the track sections is a set of orthogonal functions in a Hilbert space. Thus, the failure probability of track section separation is precisely calculated on the basis of the signal-to-noise ratio. The SNR is a function of the level of traction current conducted by the rails. This is the reason why we developed a very powerful algorithm to reject noise and jamming and obtain an SNR compatible with the precision required for the track circuit and the SIL 4 safety level. The SIL 4 level is thus reachable by an adjustment of the set of orthogonal functions. Our major contributions to railway signalling engineering are: i) Train space localization is precisely defined by a calibration system. The operation bypasses the GSM-R radio system of the ERTMS system. Moreover, the track circuit is naturally protected against radio-type jammers. After the calibration operation, the track circuit is autonomous. ii) A mathematical topology adapted to train space localization, following the train through a linear time filtering of the received signal. Track sections are numerically defined and can be modified with a software update. The system was numerically simulated, and the results were beyond our expectations. We achieved a precision of one meter. Rail-ground and rail-wheel impedance sensitivity analyses gave excellent results. Results are now complete and ready to be published. This work was initiated as a research project of the French Railways developed by the Pi-Ramses Company under SNCF contract and required five years to obtain the results. This track circuit is already at Level 3 of the ERTMS system, and it will be much cheaper to implement and to operate. The traffic regulation is based on variable-length track sections. As traffic grows, the maximum speed is reduced, and the track section lengths decrease. This is possible if the elementary track section is correctly defined for the minimum speed and if every track section is able to emit with variable frequencies.
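
The section-identification idea described above, a set of mutually orthogonal continuous signals separated by their frequencies, can be illustrated with a simple correlation receiver. The sketch below uses invented frequencies, sampling rate, and noise level (it is not the TRAC implementation); it projects a noisy received signal onto each candidate tone and selects the section whose projection dominates.

```python
# Minimal sketch: identifying a track section from a set of orthogonal tones.
import numpy as np

fs = 10_000.0                      # sampling rate [Hz] (assumed)
t = np.arange(0, 1.0, 1.0 / fs)    # 1 s observation window
section_freqs = [50.0, 150.0, 250.0, 350.0]   # orthogonal over the window (assumed)

# Simulated received signal: tone of section 2 plus traction-current-like noise.
rng = np.random.default_rng(0)
received = np.sin(2 * np.pi * section_freqs[2] * t) + 1.5 * rng.standard_normal(t.size)

# Correlation (matched-filter) receiver: project onto each candidate tone.
scores = []
for f in section_freqs:
    ref = np.sin(2 * np.pi * f * t)
    scores.append(abs(np.dot(received, ref)) / np.dot(ref, ref))

detected = int(np.argmax(scores))
separation_db = 10 * np.log10(max(scores) / (np.mean(sorted(scores)[:-1]) + 1e-12))
print(f"scores = {np.round(scores, 3)}")
print(f"detected section index = {detected}, separation ~ {separation_db:.1f} dB")
```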

Keywords: track section, track circuits, space-time crossing, adaptive track section, automatic railway signalling

Procedia PDF Downloads 331
670 Gauging Floral Resources for Pollinators Using High Resolution Drone Imagery

Authors: Nicholas Anderson, Steven Petersen, Tom Bates, Val Anderson

Abstract:

Under the multiple-use management regime established in the United States for federally owned lands, government agencies have come under pressure from commercial apiaries to grant permits for the summer pasturing of honeybees on government lands. Federal agencies have struggled to integrate honeybees into their management plans and have little information to make regulations that resolve how many colonies should be allowed in a single location and at what distance sets of hives should be placed. Many conservation groups have voiced their concerns regarding the introduction of honeybees to these natural lands, as they may outcompete and displace native pollinating species. Assessing the quality of an area in regard to its floral resources, pollen, and nectar can be important when attempting to create regulations for the integration of commercial honeybee operations into a native ecosystem. Areas with greater floral resources may be able to support larger numbers of honeybee colonies, while poorer resource areas may be less resilient to introduced disturbances. Attempts are made in this study to determine flower cover using high-resolution drone imagery to help assess the floral resource availability to pollinators in high-elevation, tall forb communities. This knowledge will help in determining the potential that different areas may have for honeybee pasturing and honey production. Roughly 700 images were captured at 23 m above ground level using a drone equipped with a Sony QX1 RGB 20-megapixel camera. These images were stitched together using Pix4D, resulting in a 60 m diameter high-resolution mosaic of a tall forb meadow. Using the program ENVI, a supervised maximum likelihood classification was conducted to calculate the percentage of total flower cover and flower cover by color (blue, white, and yellow). A complete vegetation inventory was taken on site, and the major flowers contributing to each color class were noted. An accuracy assessment was performed on the classification, yielding an 89% overall accuracy and a Kappa statistic of 0.855. With this level of accuracy, drones provide an affordable and time-efficient method for the assessment of floral cover in large areas. The next step of this project will be to determine the average pollen and nectar loads carried by each flower species. The addition of this knowledge will result in a quantifiable method of measuring the pollen and nectar resources of entire landscapes. This information will not only help land managers determine stocking rates for honeybees on public lands but also has applications in the agricultural setting, aiding producers in the determination of the number of honeybee colonies necessary for proper pollination of fruit and nut crops.
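
The accuracy assessment reported above (89% overall accuracy, Kappa statistic of 0.855) follows the standard confusion-matrix workflow for a supervised classification. A minimal sketch of that assessment step, using a handful of made-up reference and predicted labels rather than the study's pixels, is:

```python
# Minimal sketch: overall accuracy and Cohen's kappa for a supervised classification.
from sklearn.metrics import accuracy_score, cohen_kappa_score, confusion_matrix

classes = ["background", "blue", "white", "yellow"]

# Hypothetical reference (ground-truth) and classified labels for a few sample points.
reference = ["blue", "blue", "white", "yellow", "background", "white",
             "yellow", "background", "blue", "white"]
predicted = ["blue", "white", "white", "yellow", "background", "white",
             "yellow", "background", "blue", "yellow"]

cm = confusion_matrix(reference, predicted, labels=classes)
print("confusion matrix (rows = reference, cols = predicted):")
print(cm)
print(f"overall accuracy = {accuracy_score(reference, predicted):.3f}")
print(f"Cohen's kappa    = {cohen_kappa_score(reference, predicted):.3f}")
```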

Keywords: honeybee, flower, pollinator, remote sensing

Procedia PDF Downloads 140
669 Optimum Dimensions of Hydraulic Structure Foundations and Protections Using a Genetic Algorithm Coupled with an Artificial Neural Network Model

Authors: Dheyaa W. Abbood, Rafa H. AL-Suhaili, May S. Saleh

Abstract:

A model using artificial neural networks and the genetic algorithm technique is developed for obtaining the optimum dimensions of the foundation length and protections of small hydraulic structures. The procedure involves optimizing an objective function comprising a weighted summation of the state variables. The decision variables considered in the optimization are the upstream and downstream cutoff lengths and their angles of inclination, the foundation length, and the length of the downstream soil protection. These were obtained for a given maximum difference in head, depth of the impervious layer, and degree of anisotropy. The optimization was carried out subject to constraints that ensure a safe structure against the uplift pressure force and a sufficient protection length at the downstream side of the structure to overcome an excessive exit gradient. The Geo-Studio software was used to analyze 1200 different cases. For each case, the length of protection and volume of structure required to satisfy the safety factors mentioned previously were estimated. An ANN model was developed and verified using these cases' input-output sets as its database. A MATLAB code was written to perform a genetic algorithm optimization coupled with this ANN model using a formulated optimization model. A sensitivity analysis was done for selecting the cross-over probability, the mutation probability and level, the population size, the position of the crossover, and the weights distribution for all the terms of the objective function. Results indicate that the factor that most affects the optimum solution is the population size required. The minimum value of this parameter that gives a stable global optimum solution is 30,000, while the other variables have little effect on the optimum solution.
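
The coupling described above, a genetic algorithm searching over the decision variables while a trained ANN stands in for the seepage analysis, can be sketched as follows. The variable bounds, the synthetic training set standing in for the Geo-Studio cases, and the single-objective form are assumptions made for illustration; this is not the authors' MATLAB model.

```python
# Minimal sketch: genetic algorithm coupled with an ANN surrogate of the seepage analysis.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

# Decision variables (assumed, for illustration): upstream cutoff length, downstream
# cutoff length, foundation length, downstream protection length [m].
bounds = np.array([[2.0, 10.0], [2.0, 10.0], [10.0, 40.0], [5.0, 25.0]])

# Synthetic "analysis results" used to train the surrogate; in the study these came
# from 1200 Geo-Studio cases (objective = weighted structure volume + protection).
X_train = rng.uniform(bounds[:, 0], bounds[:, 1], size=(500, 4))
y_train = (0.6 * X_train[:, 2] + 0.3 * X_train[:, 3]
           + 0.1 * (X_train[:, 0] + X_train[:, 1]) + rng.normal(0, 0.2, 500))
ann = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=3000,
                   random_state=0).fit(X_train, y_train)

def objective(pop):
    return ann.predict(pop)          # the surrogate replaces the full seepage model

pop_size, n_gen, p_mut = 60, 40, 0.1
pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(pop_size, 4))
for _ in range(n_gen):
    fit = objective(pop)
    # Tournament selection (minimisation).
    idx = rng.integers(0, pop_size, (pop_size, 2))
    parents = pop[np.where(fit[idx[:, 0]] < fit[idx[:, 1]], idx[:, 0], idx[:, 1])]
    # One-point crossover.
    children = parents.copy()
    cut = rng.integers(1, 4, pop_size)
    for i in range(0, pop_size - 1, 2):
        c = cut[i]
        children[i, c:], children[i + 1, c:] = parents[i + 1, c:], parents[i, c:]
    # Mutation within bounds.
    mask = rng.random(children.shape) < p_mut
    children[mask] = rng.uniform(np.broadcast_to(bounds[:, 0], children.shape)[mask],
                                 np.broadcast_to(bounds[:, 1], children.shape)[mask])
    pop = children

best = pop[np.argmin(objective(pop))]
print("best decision variables found:", np.round(best, 2))
```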

Keywords: inclined cutoff, optimization, genetic algorithm, artificial neural networks, geo-studio, uplift pressure, exit gradient, factor of safety

Procedia PDF Downloads 323
668 Formation of Mg-Silicate Scales and Inhibition of Their Scale Formation at Injection Wells in Geothermal Power Plant

Authors: Samuel Abebe Ebebo

Abstract:

Scale precipitation is a major issue for geothermal power plants because it reduces the production rate of geothermal energy. The different chemical and physical conditions at each geothermal power plant can cause scale to precipitate under a particular set of fluid-rock interactions. Depending on the mineral, it is possible to have scale in the production well, steam separators, heat exchangers, reinjection wells, and everywhere in between. The scale consists mainly of smectite and trace amounts of chlorite, magnetite, quartz, hematite, dolomite, aragonite, and amorphous silica. The smectite scale is one of the most difficult scales at injection wells in geothermal power plants. X-ray diffraction and chemical composition analyses identify this smectite as stevensite. The characteristics and the amount of scale in each injection well line differ depending on the fluid chemistry. The smectite scale has been widely distributed in pipelines and surface plants. Mineral-water equilibrium calculations showed that the main factors controlling the saturation indices of smectite are increased pH and dissolved Mg concentration, which lead to precipitation on the equipment surface. This study aims to characterize the scales and geothermal fluids collected from the Onuma geothermal power plant in Akita Prefecture, Japan. Field tests were conducted on October 30–November 3, 2021, at Onuma to determine pH control methods for preventing magnesium silicate scaling, and, as an example, the formation of magnesium silicate hydrates (M-S-H) with an MgO to SiO2 ratio of 1.0 at a pH value of 10 was studied at 25 °C for one day. As a result, M-S-H scale formation could be suppressed, and stevensite formation could also be suppressed, when the pH of the fluid is decreased to less than 8.1, 7.4, and 8 (at 97 °C) in the fluids from O-3Rb and O-6Rb, O-10Rg, and O-12R, respectively. In this context, the scales and fluids collected from injection wells at a geothermal power plant in Japan were analyzed and characterized to understand the formation conditions of Mg-silicate scales with on-site synthesis experiments. From the results of the characterizations and on-site synthesis experiments, the inhibition method for their scale formation is discussed based on geochemical modeling in this study.

Keywords: magnesium silicate, scaling, inhibitor, geothermal power plant

Procedia PDF Downloads 62
667 An Analysis of the Performances of Various Buoys as the Floats of Wave Energy Converters

Authors: İlkay Özer Erselcan, Abdi Kükner, Gökhan Ceylan

Abstract:

The power generated by eight point-absorber-type wave energy converters, each having a different buoy, is calculated in order to investigate the performance of the buoys in this study. The calculations are carried out by modeling three different sea states observed in two different locations in the Black Sea. The floats analyzed in this study have two basic geometries and four different draft/radius (d/r) ratios. The buoys possess the shapes of a semi-ellipsoid and a semi-elliptic paraboloid. Additionally, the draft/radius ratios range from 0.25 to 1 by an increment of 0.25. The radiation forces acting on the buoys due to the oscillatory motions of these bodies are evaluated by employing a 3D panel method along with a distribution of 3D pulsating sources in the frequency domain. On the other hand, the wave forces acting on the buoys, which are taken as the sum of Froude-Krylov forces and diffraction forces, are calculated by using linear wave theory. Furthermore, the wave energy converters are assumed to be taut-moored to the seabed so that the secondary body, which houses a power take-off system, oscillates with much smaller amplitudes compared to the buoy. As a result, it is assumed that there is no significant contribution to the power generation from the motions of the housing body and the only contribution to power generation comes from the buoy. The power take-off systems of the wave energy converters are high-pressure oil hydraulic systems which are identical in terms of their characteristic parameters. The results show that the power generated by wave energy converters which have semi-ellipsoid floats is higher than that of those which have semi-elliptic paraboloid floats in both locations and in all sea states. It is also determined that the power generated by the wave energy converters follows an unsteady pattern, such that it does not simply decrease or increase with changing draft/radius ratios of the floats. Although the highest power level is obtained with a semi-ellipsoid float which has a draft/radius ratio equal to 1, other floats with a draft/radius ratio of 0.25 delivered higher power than the floats with a draft/radius ratio equal to 1 in some cases.

Keywords: Black Sea, buoys, hydraulic power take-off system, wave energy converters

Procedia PDF Downloads 350
666 Machine Learning Approach for Predicting Students’ Academic Performance and Study Strategies Based on Their Motivation

Authors: Fidelia A. Orji, Julita Vassileva

Abstract:

This research aims to develop machine learning models for students' academic performance and study strategy prediction, which could be generalized to all courses in higher education. Key learning attributes (intrinsic, extrinsic, autonomy, relatedness, competence, and self-esteem) used in building the models are chosen based on prior studies, which revealed that the attributes are essential in students’ learning process. Previous studies revealed the individual effects of each of these attributes on students’ learning progress. However, few studies have investigated the combined effect of the attributes in predicting student study strategy and academic performance to reduce the dropout rate. To bridge this gap, we used Scikit-learn in python to build five machine learning models (Decision Tree, K-Nearest Neighbour, Random Forest, Linear/Logistic Regression, and Support Vector Machine) for both regression and classification tasks to perform our analysis. The models were trained, evaluated, and tested for accuracy using 924 university dentistry students' data collected by Chilean authors through quantitative research design. A comparative analysis of the models revealed that the tree-based models such as the random forest (with prediction accuracy of 94.9%) and decision tree show the best results compared to the linear, support vector, and k-nearest neighbours. The models built in this research can be used in predicting student performance and study strategy so that appropriate interventions could be implemented to improve student learning progress. Thus, incorporating strategies that could improve diverse student learning attributes in the design of online educational systems may increase the likelihood of students continuing with their learning tasks as required. Moreover, the results show that the attributes could be modelled together and used to adapt/personalize the learning process.
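
Since the abstract states that five scikit-learn models were built and compared, a minimal sketch of such a comparison for a classification target may be useful. The synthetic dataset below merely stands in for the 924-student data and the six motivation attributes, which are not public.

```python
# Minimal sketch: comparing five scikit-learn classifiers on motivation attributes.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in: 924 students, 6 attributes (intrinsic, extrinsic, autonomy,
# relatedness, competence, self-esteem), binary study-strategy label.
X, y = make_classification(n_samples=924, n_features=6, n_informative=4,
                           n_redundant=0, random_state=42)

models = {
    "decision tree": DecisionTreeClassifier(random_state=0),
    "k-nearest neighbours": make_pipeline(StandardScaler(), KNeighborsClassifier()),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "logistic regression": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "support vector machine": make_pipeline(StandardScaler(), SVC()),
}

for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"{name:24s} accuracy = {scores.mean():.3f} +/- {scores.std():.3f}")
```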

Keywords: classification models, learning strategy, predictive modeling, regression models, student academic performance, student motivation, supervised machine learning

Procedia PDF Downloads 127
665 Physical Activity Self-Efficacy among Pregnant Women with High Risk for Gestational Diabetes Mellitus: A Cross-Sectional Study

Authors: Xiao Yang, Ji Zhang, Yingli Song, Hui Huang, Jing Zhang, Yan Wang, Rongrong Han, Zhixuan Xiang, Lu Chen, Lingling Gao

Abstract:

Aim and Objectives: To examine physical activity self-efficacy, identify its predictors, and further explore the mechanism of action among the predictors in mainland Chinese pregnant women with high risk for gestational diabetes mellitus (GDM). Background: Physical activity could protect pregnant women from developing GDM. Physical activity self-efficacy was the key predictor of physical activity. Design: A cross-sectional study was conducted from October 2021 to May 2022 in Zhengzhou, China. Methods: 252 eligible pregnant women completed the Pregnancy Physical Activity Self-efficacy Scale, the Social Support for Physical Activity Scale, the Knowledge on Physical Activity Questionnaire, the 7-item Generalized Anxiety Disorder scale, the Edinburgh Postnatal Depression Scale, and a socio-demographic data sheet. Multiple linear regression was applied to explore the predictors of physical activity self-efficacy. Structural equation modeling was used to explore the mechanism of action among the predictors. Results: Chinese pregnant women with a high risk for GDM reported a moderate level of physical activity self-efficacy. The best-fit regression analysis revealed four variables that explained 17.5% of the variance in physical activity self-efficacy. Social support for physical activity was the strongest predictor, followed by knowledge of physical activity, intention to do physical activity, and anxiety symptoms. The model analysis indicated that knowledge of physical activity could relieve anxiety and depressive symptoms and thereby increase physical activity self-efficacy. Conclusion: The present study revealed a moderate level of physical activity self-efficacy. Interventions targeting pregnant women with high risk for GDM need to address the predictors of physical activity self-efficacy. Relevance to clinical practice: To facilitate pregnant women with high risk for GDM to engage in physical activity, healthcare professionals may find it useful to assess physical activity self-efficacy and intervene as soon as possible, at the first antenatal visit. Physical activity intervention programs focused on self-efficacy may be conducted in further research.

Keywords: physical activity, gestational diabetes, self-efficacy, predictors

Procedia PDF Downloads 99
664 Surprise Fraudsters Before They Surprise You: A South African Telecommunications Case Study

Authors: Ansoné Human, Nantes Kirsten, Tanja Verster, Willem D. Schutte

Abstract:

Every year the telecommunications industry suffers huge losses due to fraud. Mobile fraud, or more generally telecommunications fraud, is the use of telecommunication products or services to acquire money illegally from a telecommunication company, or the failure to pay such a company. A South African telecommunication operator developed two internal fraud scorecards to mitigate future risks of application fraud events. The scorecards aim to predict the likelihood of an application being fraudulent and to surprise fraudsters before they surprise the telecommunication operator by identifying fraud at the time of application. The scorecards are utilised in the vetting process to evaluate the applicant in terms of the fraud risk the applicant would present to the telecommunication operator. Telecommunication providers can utilise these scorecards to profile customers, as well as isolate fraudulent and/or high-risk applicants. We provide the complete methodology utilised in the development of the scorecards. Furthermore, a Determination and Discrimination (DD) ratio is provided in the methodology to select the most influential variables from a group of related variables. Throughout the development of these scorecards, the following was revealed regarding fraudulent cases and fraudster behaviour within the telecommunications industry: Fraudsters typically target high-value handsets. Furthermore, debit order dates scheduled for the end of the month have the highest fraud probability. The fraudsters target specific stores. Applicants who acquire an expensive package and receive a medium income, as well as applicants who obtain an expensive package and receive a high income, have higher fraud percentages. If, one month prior to application, the status of an account is already in arrears (two months or more), the applicant has a high probability of fraud. The applicants with the highest average spend on calls have a higher probability of fraud. If the amount collected changes from month to month, the likelihood of fraud is higher. Lastly, young and middle-aged applicants have a higher probability of being targeted by fraudsters than applicants of other ages.
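
Application-fraud scorecards of this kind are typically built on logistic regression over applicant and account characteristics. The sketch below, with hypothetical feature names and synthetic data, shows how such a model turns an application into a fraud probability and a points-style score; the operator's actual scorecard and the DD-ratio variable-selection step are not reproduced.

```python
# Minimal sketch: a logistic-regression application-fraud scorecard on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 5000

# Hypothetical application features (loosely mirroring the behaviours listed above).
handset_value = rng.uniform(100, 1500, n)           # high-value handsets riskier
debit_day = rng.integers(1, 29, n)                   # end-of-month debit dates riskier
months_in_arrears = rng.integers(0, 4, n)            # arrears prior to application

logit = (-6 + 0.003 * handset_value
         + 0.05 * (debit_day > 25) * debit_day
         + 0.8 * (months_in_arrears >= 2))
y = rng.random(n) < 1 / (1 + np.exp(-logit))         # synthetic fraud flag

X = np.column_stack([handset_value, debit_day, months_in_arrears])
model = LogisticRegression(max_iter=1000).fit(X, y)

new_app = np.array([[1400.0, 28, 2]])                # expensive handset, month-end debit, arrears
p_fraud = model.predict_proba(new_app)[0, 1]
# Points-style scaling (higher score = lower risk); constants are illustrative.
score = 600 - 50 * np.log2(p_fraud / (1 - p_fraud))
print(f"fraud probability = {p_fraud:.2%}, scorecard score = {score:.0f}")
```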

Keywords: application fraud scorecard, predictive modeling, regression, telecommunications

Procedia PDF Downloads 119
663 Improving Fingerprinting-Based Localization (FPL) System Using Generative Artificial Intelligence (GAI)

Authors: Getaneh Berie Tarekegn, Li-Chia Tai

Abstract:

With the rapid advancement of artificial intelligence, low-power built-in sensors on Internet of Things devices, and communication technologies, location-aware services have become increasingly popular and have permeated every aspect of people’s lives. Global navigation satellite systems (GNSSs) are the default method of providing continuous positioning services for ground and aerial vehicles, as well as consumer devices (smartphones, watches, notepads, etc.). However, the environment affects satellite positioning systems, particularly indoors, in dense urban and suburban cities enclosed by skyscrapers, or when deep shadows obscure satellite signals. This is because (1) indoor environments are more complicated due to the presence of many surrounding objects; (2) reflection within a building is highly dependent on the surrounding environment, including the positions of objects and human activity; and (3) satellite signals cannot reach an indoor environment, as GNSS signals do not have enough power to penetrate building walls. GPS is also highly power-hungry, which poses a severe challenge for battery-powered IoT devices. Due to these challenges, IoT applications are limited. Consequently, precise, seamless, and ubiquitous Positioning, Navigation and Timing (PNT) systems are crucial for many artificial intelligence Internet of Things (AI-IoT) applications in the era of smart cities. Their applications include traffic monitoring, emergency alarming, environmental monitoring, location-based advertising, intelligent transportation, and smart health care. This paper proposes a generative AI-based positioning scheme for large-scale wireless settings using fingerprinting techniques. In this article, we present a novel semi-supervised deep convolutional generative adversarial network (S-DCGAN)-based radio map construction method for real-time device localization. We also employ a reliable signal fingerprint feature extraction method with t-distributed stochastic neighbor embedding (t-SNE), which extracts dominant features while eliminating noise from hybrid WLAN and long-term evolution (LTE) fingerprints. The proposed scheme reduced the workload of site surveying required to build the fingerprint database by up to 78.5% and significantly improved positioning accuracy. The results show that the average positioning error of GAILoc is less than 0.39 m, and more than 90% of the errors are less than 0.82 m. According to the numerical results, SRCLoc improves positioning performance and reduces radio map construction costs significantly compared to traditional methods.
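
The fingerprint feature-extraction step mentioned above, t-SNE applied to hybrid WLAN/LTE fingerprints, can be sketched with scikit-learn. The example below uses random received-signal-strength vectors as a stand-in for real fingerprints; it illustrates only the t-SNE embedding step, not the S-DCGAN radio map construction.

```python
# Minimal sketch: t-SNE embedding of (synthetic) WLAN/LTE RSS fingerprints.
import numpy as np
from sklearn.manifold import TSNE
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)

# Synthetic fingerprints: 300 reference points x 40 access points / cells, RSS in dBm.
n_points, n_aps = 300, 40
fingerprints = rng.normal(-75, 8, size=(n_points, n_aps))

# Standardise, then embed to 2 components; perplexity must be smaller than n_points.
X = StandardScaler().fit_transform(fingerprints)
embedding = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X)

print("embedded fingerprint features shape:", embedding.shape)   # (300, 2)
```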

Keywords: location-aware services, feature extraction technique, generative adversarial network, long short-term memory, support vector machine

Procedia PDF Downloads 46
662 Assessment of Heavy Metals Contamination Levels in Groundwater: A Case Study of the Bafia Agricultural Area, Centre Region Cameroon

Authors: Carine Enow-Ayor Tarkang, Victorine Neh Akenji, Dmitri Rouwet, Jodephine Njdma, Andrew Ako Ako, Franco Tassi, Jules Remy Ngoupayou Ndam

Abstract:

Groundwater is the major water resource in the whole of Bafia, used for drinking, domestic, poultry, and agricultural purposes, and since Bafia is an area of intense agriculture, there is a great necessity for a quality assessment. Bafia is one of the main food suppliers in the Centre region of Cameroon, and so, to meet demand, the farmers make use of fertilizers and other agrochemicals to increase their yield. Less than 20% of the population in Bafia has access to piped-borne water due to the national shortage, and to the authors' best knowledge, very limited studies have been carried out in the area to increase awareness of the groundwater resources. The aim of this study was to assess heavy metal contamination levels in ground and surface waters and to evaluate the effects of agricultural inputs on water quality in the Bafia area. 57 water samples (including 31 wells, 20 boreholes, 4 rivers and 2 springs) were analyzed for their physicochemical parameters, while the collected samples were filtered, acidified with HNO3, and analyzed by ICP-MS for their heavy metal content (Fe, Ti, Sr, Al, Mn). Results showed that most of the water samples are acidic to slightly neutral and moderately mineralized. The Ti concentration was significantly high in the area (mean value 130 µg/L), suggesting another Ti source besides the natural input from titanium oxides. The high amounts of Mn and Al in some cases also pointed to additional input, probably from fertilizers that are used in the farmlands. Most of the water samples were found to be significantly contaminated with heavy metals exceeding the WHO allowable limits (Ti-94.7%, Al-19.3%, Mn-14%, Fe-5.2% and Sr-3.5% above limits), especially around farmlands and topographic low areas. The heavy metal concentration was evaluated using the heavy metal pollution index (HPI), the heavy metal evaluation index (HEI), and the degree of contamination (Cd), while the Ficklin diagram was used to classify the water based on changes in metal content and pH. The high mean values of HPI and Cd (741 and 5, respectively), which exceeded the critical limit, indicate that the water samples are highly contaminated, with intense pollution from Ti, Al and Mn. Based on the HPI and Cd, 93% and 35% of the samples, respectively, are unacceptable for drinking purposes. The point with the lowest HPI value also had the lowest EC (50 µS/cm), indicating lower mineralization and less anthropogenic influence. According to the Ficklin diagram, 89% of the samples fell within the near-neutral low-metal domain, while 9% fell in the near-neutral extreme-metal domain. Two significant factors were extracted from the principal component analysis (PCA), explaining 70.6% of the total variance. The first factor revealed intense anthropogenic activity (especially from fertilizers), while the second factor revealed water-rock interactions. Agricultural activities thus have an impact on the heavy metal content of groundwater in the area; hence, much attention should be given to the affected areas in order to protect human health/life and thus sustainably manage this precious resource.
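
The heavy metal pollution index used above is commonly computed as a weighted mean of sub-indices, HPI = sum(Wi*Qi)/sum(Wi), with unit weights Wi taken as the inverse of each metal's permissible limit Si and sub-indices Qi expressing the measured concentration as a percentage of that limit. The sketch below applies this generic formulation to made-up concentrations and illustrative limits; it is not the authors' exact parameterisation, which may also include ideal (desirable) values.

```python
# Minimal sketch: heavy metal pollution index (HPI) for one water sample.
# HPI = sum(W_i * Q_i) / sum(W_i), with W_i = 1 / S_i and Q_i = 100 * M_i / S_i,
# where M_i is the measured concentration and S_i the permissible limit (same units).
limits_ugL = {"Fe": 300.0, "Mn": 100.0, "Al": 200.0, "Ti": 15.0, "Sr": 4000.0}    # illustrative limits
measured_ugL = {"Fe": 120.0, "Mn": 160.0, "Al": 250.0, "Ti": 130.0, "Sr": 900.0}  # hypothetical sample

num = 0.0
den = 0.0
for metal, S in limits_ugL.items():
    M = measured_ugL[metal]
    W = 1.0 / S                 # unit weight
    Q = 100.0 * M / S           # sub-index (% of permissible limit)
    num += W * Q
    den += W

hpi = num / den
print(f"HPI = {hpi:.1f}  ({'above' if hpi > 100 else 'below'} the commonly used critical value of 100)")
```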

Keywords: Bafia, contamination, degree of contamination, groundwater, heavy metal pollution index

Procedia PDF Downloads 85
661 Mathematics Bridging Theory and Applications for a Data-Driven World

Authors: Zahid Ullah, Atlas Khan

Abstract:

In today's data-driven world, the role of mathematics in bridging the gap between theory and applications is becoming increasingly vital. This abstract highlights the significance of mathematics as a powerful tool for analyzing, interpreting, and extracting meaningful insights from vast amounts of data. By integrating mathematical principles with real-world applications, researchers can unlock the full potential of data-driven decision-making processes. This abstract delves into the various ways mathematics acts as a bridge connecting theoretical frameworks to practical applications. It explores the utilization of mathematical models, algorithms, and statistical techniques to uncover hidden patterns, trends, and correlations within complex datasets. Furthermore, it investigates the role of mathematics in enhancing predictive modeling, optimization, and risk assessment methodologies for improved decision-making in diverse fields such as finance, healthcare, engineering, and social sciences. The abstract also emphasizes the need for interdisciplinary collaboration between mathematicians, statisticians, computer scientists, and domain experts to tackle the challenges posed by the data-driven landscape. By fostering synergies between these disciplines, novel approaches can be developed to address complex problems and make data-driven insights accessible and actionable. Moreover, this abstract underscores the importance of robust mathematical foundations for ensuring the reliability and validity of data analysis. Rigorous mathematical frameworks not only provide a solid basis for understanding and interpreting results but also contribute to the development of innovative methodologies and techniques. In summary, this abstract advocates for the pivotal role of mathematics in bridging theory and applications in a data-driven world. By harnessing mathematical principles, researchers can unlock the transformative potential of data analysis, paving the way for evidence-based decision-making, optimized processes, and innovative solutions to the challenges of our rapidly evolving society.

Keywords: mathematics, bridging theory and applications, data-driven world, mathematical models

Procedia PDF Downloads 75
660 From Text to Data: Sentiment Analysis of Presidential Election Political Forums

Authors: Sergio V Davalos, Alison L. Watkins

Abstract:

User generated content (UGC), such as a website post, has data associated with it: time of the post, gender, location, type of device, and number of words. The text entered in user generated content can provide a valuable dimension for analysis. In this research, each user post is treated as a collection of terms (words). In addition to the number of words per post, the frequency of each term is determined by post and by the sum of occurrences in all posts. This research focuses on one specific aspect of UGC: sentiment. Sentiment analysis (SA) was applied to the content (user posts) of two sets of political forums related to the US presidential elections for 2012 and 2016. Sentiment analysis results in deriving data from the text. This enables the subsequent application of data analytic methods. The SASA (SAIL/SAI Sentiment Analyzer) model was used for sentiment analysis. The application of SASA resulted in a sentiment score for each post. Based on the sentiment scores for the posts, there are significant differences between the content and sentiment of the two sets for the 2012 and 2016 presidential election forums. In the 2012 forums, 38% of the forums started with positive sentiment and 16% with negative sentiment. In the 2016 forums, 29% started with positive sentiment and 15% with negative sentiment. There were also changes in sentiment over time. For both elections, as the election got closer, the cumulative sentiment score became negative. The candidate who won each election appeared in more posts than the losing candidates. In the case of Trump, there were more negative posts than Clinton’s highest number of posts, which were positive. KNIME topic modeling was used to derive topics from the posts. There were also changes in topics and keyword emphasis over time. Initially, the political parties were the most referenced, and as the election got closer, the emphasis changed to the candidates. The SASA method proved to predict sentiment better than four other methods in SentiBench. The research resulted in deriving sentiment data from text. In combination with other data, the sentiment data provided insight and discovery about user sentiment in the US presidential elections for 2012 and 2016.

Keywords: sentiment analysis, text mining, user generated content, US presidential elections

Procedia PDF Downloads 190
659 Subsidiary Entrepreneurial Orientation, Trust in Headquarters and Performance: The Mediating Role of Autonomy

Authors: Zhang Qingzhong

Abstract:

Though there is an increasing number of research studies on the headquarters-subsidiary relationship, with a focus on subsidiaries' contributory role to multinational corporations (MNCs), subsidiary autonomy and the conditions under which autonomy exerts an effect on subsidiary performance still constitute a subject of debate in the literature. The objective of this research is to study the MNC subsidiary autonomy and performance relationship and the effect of subsidiary entrepreneurial orientation and trust on subsidiary autonomy in the China environment, a phenomenon that has not yet been studied. The research addresses the following three questions: (i) Is subsidiary autonomy associated with MNC subsidiary performance in the China environment? (ii) How do subsidiary entrepreneurship and its trust in headquarters affect the level of subsidiary autonomy and its relationship with subsidiary performance? (iii) Does subsidiary autonomy mediate the effects of the subsidiary's entrepreneurship and trust in headquarters on subsidiary performance? In the present study, we have reviewed the literature and conducted semi-structured interviews with multinational corporation (MNC) subsidiary senior executives in China. Building on our insights from the interviews and taking perspectives from four theories, namely the resource-based view (RBV), resource dependency theory, the integration-responsiveness framework, and social exchange theory, as well as the extant articles on subsidiary autonomy, entrepreneurial orientation, trust, and subsidiary performance, we have developed a model and explored the direct and mediating effects of subsidiary autonomy on subsidiary performance within the framework of the MNC. To test the model, we collected and analyzed data from two waves of a cross-industry online survey of 102 subsidiaries of MNCs in China. We used structural equation modeling to test the measurement model, the direct effect model, and the conceptual framework with hypotheses. Our findings confirm that (a) subsidiary autonomy is positively related to subsidiary performance; (b) subsidiary entrepreneurial orientation is positively related to subsidiary autonomy; (c) the subsidiary's trust in headquarters has a positive effect on subsidiary autonomy; (d) subsidiary autonomy mediates the relationship between entrepreneurial orientation and subsidiary performance; (e) subsidiary autonomy mediates the relationship between trust and subsidiary performance. Our study highlights the important role of subsidiary autonomy in leveraging the resource of subsidiary entrepreneurial orientation and its trust relationship with headquarters to achieve high performance. We discuss the theoretical and managerial implications of the findings and propose directions for future research.
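
The mediation structure tested above (entrepreneurial orientation and trust acting on performance through autonomy) can be illustrated with a simple regression-based mediation check on synthetic composite scores. The study itself used structural equation modelling on the survey data, so the sketch below is only a conceptual stand-in with invented scales and effect sizes.

```python
# Minimal sketch: regression-based check of a mediation path (EO -> autonomy -> performance).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(11)
n = 102  # same order of magnitude as the 102 subsidiaries surveyed

# Synthetic composite scores (Likert-style scales, hypothetical).
eo = rng.normal(4.5, 1.0, n)                        # entrepreneurial orientation
trust = rng.normal(4.8, 0.9, n)                     # trust in headquarters
autonomy = 0.4 * eo + 0.3 * trust + rng.normal(0, 0.8, n)
performance = 0.5 * autonomy + rng.normal(0, 0.8, n)

# Path a: predictors -> mediator.
a_model = sm.OLS(autonomy, sm.add_constant(np.column_stack([eo, trust]))).fit()
# Path b and direct effects: mediator + predictors -> outcome.
b_model = sm.OLS(performance,
                 sm.add_constant(np.column_stack([eo, trust, autonomy]))).fit()

print("a-path coefficients (EO, trust -> autonomy):", np.round(a_model.params[1:], 2))
print("b-path coefficient (autonomy -> performance):", round(b_model.params[3], 2))
print("indirect effect of EO via autonomy:", round(a_model.params[1] * b_model.params[3], 2))
```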

Keywords: subsidiary entrepreneurial orientation, trust, subsidiary autonomy, subsidiary performance

Procedia PDF Downloads 186
658 Optimizing the Location of Parking Areas Adapted for Dangerous Goods in the European Road Transport Network

Authors: María Dolores Caro, Eugenio M. Fedriani, Ángel F. Tenorio

Abstract:

The transportation of dangerous goods by lorries throughout Europe must be done using the roads that make up the European Road Transport Network. In this network, there are several parking areas where lorry drivers can park to rest according to the regulations. According to the "European Agreement concerning the International Carriage of Dangerous Goods by Road", parking areas where lorries transporting dangerous goods can park to rest must follow several security stipulations to keep the rest of the road users safe. In this respect, these lorries must be parked in adapted areas with strict and permanent surveillance measures. Moreover, drivers must satisfy several restrictions on resting and driving time. Under these facts, one may expect that there exist enough parking areas for the transport of this type of goods in order to obey the regulations prescribed by the European Union and its member countries. However, the already-existing parking areas are not sufficient to cover all the stops required by drivers transporting dangerous goods. Our main goal is, starting from the already-existing parking areas and the loading-and-unloading locations, to provide an optimal answer to the following question: how many additional parking areas must be built, and where must they be located, to assure that lorry drivers can transport dangerous goods following all the stipulations about security and safety for their stops? The sense of the word “optimal” is due to the fact that we give a global solution for the location of parking areas throughout the whole European Road Transport Network, adjusting the number of additional areas to be as low as possible. To do so, we have modeled the problem using graph theory, since we are working with a road network. As nodes, we have considered the location of each already-existing parking area, each loading-and-unloading area, and each road bifurcation. Each road connecting two nodes is considered as an edge in the graph, whose weight corresponds to the distance between both nodes of the edge. By applying a new efficient algorithm, we have found the additional nodes for the network representing the new parking areas adapted for dangerous goods, under the requirement that the distance between two parking areas must be less than or equal to 400 km.
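
As a small illustration of the graph formulation described above, the sketch below builds a toy road graph with networkx, computes shortest-path distances between consecutive adapted stops along a route, and flags segments longer than 400 km where an additional adapted parking area would be needed. The node names, edge lengths, and the simple along-route rule are invented; the paper's algorithm solves the placement globally over the whole network.

```python
# Minimal sketch: flagging gaps > 400 km between adapted parking areas on a toy road graph.
import networkx as nx

G = nx.Graph()
# Edges: (node, node, distance in km) -- hypothetical network.
edges = [("Load_A", "J1", 120), ("J1", "P1", 150), ("P1", "J2", 180),
         ("J2", "J3", 210), ("J3", "P2", 160), ("P2", "Unload_B", 140)]
G.add_weighted_edges_from(edges, weight="km")

# Stops that already accept dangerous-goods lorries, in route order (assumed).
route_stops = ["Load_A", "P1", "P2", "Unload_B"]
MAX_GAP_KM = 400

for a, b in zip(route_stops, route_stops[1:]):
    d = nx.shortest_path_length(G, a, b, weight="km")
    status = "OK" if d <= MAX_GAP_KM else "NEEDS NEW ADAPTED PARKING AREA"
    print(f"{a} -> {b}: {d} km  [{status}]")
```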

Keywords: trans-european transport network, dangerous goods, parking areas, graph-based modeling

Procedia PDF Downloads 280
657 3D Codes for Unsteady Interaction Problems of Continuum Mechanics in Euler Variables

Authors: M. Abuziarov

Abstract:

The designed software complex is intended for the numerical simulation of fast dynamic processes of interaction between heterogeneous media susceptible to significant deformation. The main challenges in solving such problems are associated with the construction of the numerical meshes. Currently, there are two basic approaches to this problem. One uses a Lagrangian or Lagrangian-Eulerian grid associated with the boundaries of the media; the second is associated with a fixed Eulerian mesh whose boundary cells are cut by the boundaries of the media, which requires the calculation of these cut volumes. Both approaches require complex grid generators and significant time for preparing the code’s data for simulation. In these codes, the problems are solved using two grids, a regular fixed one and a mobile local Lagrangian-Eulerian one (ALE approach) accompanying the contact and free boundaries, the surfaces of shock waves and phase transitions, and other possible features of the solutions, with mutual interpolation of the integrated parameters. For modeling both liquids and gases, and deformable solids, a Godunov scheme of increased accuracy is used in Lagrangian-Eulerian variables, the same for the Euler equations and for the Euler-Cauchy equations describing the deformation of the solid. The increased accuracy of the scheme is achieved by using a 3D space-time dependent solution of the discontinuity problem (a 3D space-time dependent Riemann problem solver). The same solution is used to calculate the interaction at the liquid-solid surface (the fluid-structure interaction problem). The codes do not require complex 3D mesh generators; only the surfaces of the objects to be calculated are given by the user, as STL files created by means of engineering graphics, which greatly simplifies preparing the task and makes the codes convenient to use directly by the designer at the design stage. The results of test solutions and applications related to the generation and propagation of detonation and shock waves and the loading of structures are presented.
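
As a compact illustration of the Godunov-type approach mentioned above, the sketch below solves the 1D Euler equations for a Sod shock tube on a fixed grid with a first-order scheme and an HLL approximate Riemann flux. This is a one-dimensional toy on a single grid; the codes described in the abstract use a 3D space-time Riemann solver on coupled fixed and moving grids, which this sketch does not attempt to reproduce.

```python
# Minimal sketch: first-order Godunov scheme with an HLL flux for the 1D Euler equations.
import numpy as np

gamma = 1.4
nx, cfl, t_end = 200, 0.5, 0.2
x = np.linspace(0.0, 1.0, nx)
dx = x[1] - x[0]

# Sod shock-tube initial data: (rho, u, p).
rho = np.where(x < 0.5, 1.0, 0.125)
u = np.zeros(nx)
p = np.where(x < 0.5, 1.0, 0.1)

def to_conserved(rho, u, p):
    E = p / (gamma - 1) + 0.5 * rho * u**2
    return np.array([rho, rho * u, E])

def to_primitive(U):
    rho, mom, E = U
    u = mom / rho
    p = (gamma - 1) * (E - 0.5 * rho * u**2)
    return rho, u, p

def flux(U):
    rho, u, p = to_primitive(U)
    return np.array([rho * u, rho * u**2 + p, u * (U[2] + p)])

def hll_flux(UL, UR):
    rhoL, uL, pL = to_primitive(UL)
    rhoR, uR, pR = to_primitive(UR)
    cL, cR = np.sqrt(gamma * pL / rhoL), np.sqrt(gamma * pR / rhoR)
    sL = np.minimum(uL - cL, uR - cR)
    sR = np.maximum(uL + cL, uR + cR)
    FL, FR = flux(UL), flux(UR)
    Fhll = (sR * FL - sL * FR + sL * sR * (UR - UL)) / (sR - sL)
    return np.where(sL >= 0, FL, np.where(sR <= 0, FR, Fhll))

U = to_conserved(rho, u, p)
t = 0.0
while t < t_end - 1e-12:
    rho, u, p = to_primitive(U)
    dt = cfl * dx / np.max(np.abs(u) + np.sqrt(gamma * p / rho))
    dt = min(dt, t_end - t)
    F = hll_flux(U[:, :-1], U[:, 1:])            # fluxes at interior cell interfaces
    U[:, 1:-1] -= dt / dx * (F[:, 1:] - F[:, :-1])
    t += dt

rho_final = to_primitive(U)[0]
print("density profile (every 40th cell):", np.round(rho_final[::40], 3))
```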

Keywords: fluid structure interaction, Riemann's solver, Euler variables, 3D codes

Procedia PDF Downloads 437
656 Detecting Natural Fractures and Modeling Them to Optimize Field Development Plan in Libyan Deep Sandstone Reservoir (Case Study)

Authors: Tarek Duzan

Abstract:

Fractures are a fundamental property of most reservoirs. Despite their abundance, they remain difficult to detect and quantify. The most effective characterization of fractured reservoirs is accomplished by integrating geological, geophysical, and engineering data. Detecting fractures and defining their relative contribution is crucial in the early stages of exploration and later in the production of any field, because fractures can completely change our thoughts, efforts, and planning for producing a specific field properly. From the structural point of view, all reservoirs are fractured to some extent. The North Gialo field is thought to be a naturally fractured reservoir to some extent. Historically, naturally fractured reservoirs are more complicated in terms of their exploration and production efforts, and most geologists tend to deny the presence of fractures as an effective variable. Our aim in this paper is to determine the degree of fracturing so that, consequently, our evaluation and planning can be done properly and efficiently from day one. The challenging part in this field is that there are not enough data and no straightforward well testing that can make us completely comfortable with the idea of fracturing; however, we cannot ignore the fractures completely. Logging images, available well testing, and limited core studies are our tools at this stage to evaluate, model, and predict possible fracture effects in this reservoir. The aims of this study are both fundamental and practical: to improve the prediction and diagnosis of natural-fracture attributes in the N. Gialo hydrocarbon reservoirs and accurately simulate their influence on production. Moreover, the production of this field comes from a two-phase plan: self-depletion of oil and then a gas injection period for pressure maintenance and increasing the ultimate recovery factor. Therefore, a good understanding of the fracture network is essential before proceeding with the targeted plan. New analytical methods will lead to a more realistic characterization of fractured and faulted reservoir rocks. These methods will produce data that can enhance well test and seismic interpretations, and that can readily be used in reservoir simulators.

Keywords: natural fracture, sandstone reservoir, geological, geophysical, and engineering data

Procedia PDF Downloads 92
655 Studying Language of Immediacy and Language of Distance from a Corpus Linguistic Perspective: A Pilot Study of Evaluation Markers in French Television Weather Reports

Authors: Vince Liégeois

Abstract:

Language of immediacy and distance: Within their discourse theory, Koch & Oesterreicher establish a distinction between a language of immediacy and a language of distance. The former refers to those discourses which are oriented more towards a spoken norm, whereas the latter entails discourses oriented towards a written norm, regardless of whether they are realised phonically or graphically. This means that an utterance can be realised phonically but oriented more towards the written language norm (e.g., a scientific presentation or eulogy) or realised graphically but oriented towards a spoken norm (e.g., a scribble or chat messages). Research desiderata: The methodological approach of Koch & Oesterreicher has often been criticised for not providing a corpus-linguistic methodology, which makes it difficult to work with quantitative data or address large text collections within this research paradigm. Consequently, the Koch & Oesterreicher approach has difficulties gaining ground in those research areas which rely more on corpus-linguistic research models, like text linguistics and LSP research. A combinatory approach: Accordingly, we want to establish a combinatory approach with corpus-based linguistic methodology. To this end, we propose to (i) include data about the context of an utterance (e.g., monologicity/dialogicity, familiarity with the speaker), which were called “conditions of communication” in the original work of Koch & Oesterreicher, and (ii) correlate the linguistic phenomenon at the centre of the inquiry (e.g., evaluation markers) with a group of linguistic phenomena deemed typical for either distance- or immediacy-language. Based on these two parameters, linguistic phenomena and texts could then be mapped on an immediacy-distance continuum. Pilot study: To illustrate the benefits of this approach, we will conduct a pilot study on evaluation phenomena in French television weather reports, a form of domain-sensitive discourse which has often been cited as an example of a “text genre”. Within this text genre, we will look at so-called “evaluation markers,” e.g., fixed strings like bad weather, stifling hot, and “no luck today!”. These evaluation markers help to communicate the coming weather situation to the lay audience but have not yet been studied within the Koch & Oesterreicher research paradigm. Accordingly, we want to figure out whether said evaluation markers are more typical for those weather reports which tend more towards immediacy or those which tend more towards distance. To this aim, we collected a corpus with different kinds of television weather reports, e.g., as part of the news broadcast, including dialogue. The evaluation markers themselves will be studied according to the methodology explained above, by correlating them with (i) metadata about the context and (ii) linguistic phenomena characterising immediacy-language: repetition, deixis (personal, spatial, and temporal), a freer choice of tense, and right-/left-dislocation. Results: Our results indicate that evaluation markers are more prominently present in those weather reports tending towards immediacy-language. Based on the methodology established above, we have gained more insight into the workings of evaluation markers in the domain-sensitive text genre of (television) weather reports. For future research, it will be interesting to determine whether said evaluation markers are also typical for immediacy-oriented language in other domain-sensitive discourses.

Keywords: corpus-based linguistics, evaluation markers, language of immediacy and distance, weather reports

Procedia PDF Downloads 217
654 Inflation and Deflation of Aircraft Tires with an Intelligent Tire Pressure Regulation System

Authors: Masoud Mirzaee, Ghobad Behzadi Pour

Abstract:

An aircraft tire is designed to tolerate extremely heavy loads for a short duration. The number of tires increases with the weight of the aircraft, as the load needs to be distributed more evenly. Generally, aircraft tires work at high pressure, up to 200 psi (14 bar; 1,400 kPa) for airliners and higher for business jets. Tire assemblies for most aircraft categories are specified to be inflated with compressed nitrogen, which supports the aircraft’s weight on the ground and provides a mechanism for controlling the aircraft during taxi, takeoff, and landing, as well as traction for braking. Accurate tire pressure is a key factor that enables tire assemblies to perform reliably under high static and dynamic loads. Concerning ambient temperature change, when the temperature differs between the origin and destination airports, tire pressure should be adjusted and inflated to the specified operating pressure at the colder airport. This adjustment, which supersedes the normal tire over-inflation limit of 5 percent at constant ambient temperature, is required so that the inflation pressure remains sufficient to support the load of a specified aircraft configuration. On the other hand, without this adjustment, a tire assembly would be significantly under- or over-inflated at the destination. Due to an increase in human errors in the aviation industry, exorbitant costs are imposed on the airlines for providing consumable parts such as aircraft tires. The existence of an intelligent system to adjust the aircraft tire pressure based on weight, load, temperature, and weather conditions of the origin and destination airports could have a significant effect on reducing aircraft maintenance costs and fuel consumption and on improving the environmental issues related to air pollution. An intelligent tire pressure regulation system (ITPRS) contains a processing computer, a nitrogen bottle at 1,800 psi, and distribution lines. The nitrogen bottle’s inlet and outlet valves are installed in the main landing gear area and are connected through nitrogen lines to the main-wheel and nose-wheel assemblies. Controlling and monitoring of the nitrogen is performed by the computer, which makes adjustments according to calculations from the received parameters, including the temperature of the origin and destination airports, the weight of cargo loads and passengers, fuel quantity, and wind direction. Correct tire inflation and deflation are essential in assuring that tires can withstand the centrifugal forces and heat of normal operations, with an adequate margin of safety for unusual operating conditions such as rejected takeoffs and hard landings. ITPRS will increase the performance of the aircraft in all phases of takeoff, landing, and taxi. Moreover, this system will reduce human errors, consumable materials, and stresses imposed on the aircraft body.
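
The core calculation behind adjusting tire pressure for a temperature difference between airports follows from the gas law at (approximately) constant volume: absolute pressure scales with absolute temperature. A minimal sketch of that adjustment, with illustrative numbers only (an operational system would also account for load, gauge versus absolute pressure conventions, and manufacturer limits), is:

```python
# Minimal sketch: how tire pressure changes with ambient temperature at constant volume.
# Gay-Lussac's law: P_abs / T_abs = constant.

ATM_PSI = 14.7  # standard atmosphere, for gauge <-> absolute conversion

def reading_at_new_temperature(p_gauge_psi: float, t_c: float, t_new_c: float) -> float:
    """Gauge pressure a sealed tire shows at t_new_c if it reads p_gauge_psi at t_c."""
    p_abs = p_gauge_psi + ATM_PSI
    return p_abs * (t_new_c + 273.15) / (t_c + 273.15) - ATM_PSI

# Example (illustrative numbers): tire serviced to 200 psi at a 5 C origin airport,
# destination ambient temperature 30 C.
p_origin, t_origin, t_destination = 200.0, 5.0, 30.0
p_destination = reading_at_new_temperature(p_origin, t_origin, t_destination)
change_pct = 100 * (p_destination - p_origin) / p_origin
print(f"expected reading at destination: {p_destination:.1f} psi ({change_pct:+.1f}%)")
# A change of this size exceeds a 5% tolerance, so ITPRS would adjust the pressure.
```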

Keywords: avionic system, improve efficiency, ITPRS, human error, reduced cost, tire pressure

Procedia PDF Downloads 247
653 Evaluation of the Effect of Milk Recording Intervals on the Accuracy of an Empirical Model Fitted to Dairy Sheep Lactations

Authors: L. Guevara, Glória L. S., Corea E. E, A. Ramírez-Zamora M., Salinas-Martinez J. A., Angeles-Hernandez J. C.

Abstract:

Mathematical models are useful for identifying the characteristics of sheep lactation curves in order to develop and implement improved strategies. However, the accuracy of these models is influenced by factors such as the recording regime, mainly the intervals between test-day records (TDR). The current study aimed to evaluate the effect of different TDR intervals on the goodness of fit of the Wood model (WM) applied to dairy sheep lactations. A total of 4,494 weekly TDRs from 156 lactations of dairy crossbred sheep were analyzed. Three new databases were generated from the original weekly TDR data (7D), comprising intervals of 14 (14D), 21 (21D), and 28 (28D) days. The parameters of the WM were estimated using the “minpack.lm” package in the R software. The shape of the lactation curve (typical or atypical) was defined based on the WM parameters. The goodness of fit was evaluated using the mean square of prediction error (MSPE), the root of the MSPE (RMSPE), Akaike's Information Criterion (AIC), the Bayesian Information Criterion (BIC), and the coefficient of correlation (r) between the actual and estimated total milk yield (TMY). The WM showed an adequate estimate of TMY regardless of the TDR interval (P=0.21) and the shape of the lactation curve (P=0.42). However, we found higher values of r for typical curves compared to atypical curves (0.9 vs. 0.74), with the highest values for the 28D interval (r=0.95). In the same way, we observed an overestimated peak yield (0.92 vs. 6.6 l) and an underestimated time of peak yield (21.5 vs. 1.46) in atypical curves. The best values of RMSPE were observed for the 28D interval in both lactation curve shapes. The significantly lowest values of AIC (P=0.001) and BIC (P=0.001) were shown by the 7D interval for typical and atypical curves. These results represent a first approach to defining an adequate recording-interval regime for dairy sheep in Latin America, and they showed a better fit of the Wood model using a 7D interval. However, it is possible to obtain good estimates of TMY using a 28D interval, which reduces the sampling frequency and would save additional costs for dairy sheep producers.
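
The Wood model referred to above is the incomplete gamma form y(t) = a * t^b * e^(-c*t), where y(t) is the test-day yield at t days in milk. The study fitted it with R's minpack.lm; an equivalent sketch in Python with scipy, using synthetic test-day records in place of the real data, is:

```python
# Minimal sketch: fitting the Wood lactation model y(t) = a * t**b * exp(-c * t).
import numpy as np
from scipy.optimize import curve_fit

def wood(t, a, b, c):
    return a * t**b * np.exp(-c * t)

# Synthetic weekly test-day records (days in milk, daily yield in litres) for one ewe.
rng = np.random.default_rng(5)
t_days = np.arange(7, 150, 7, dtype=float)
y_obs = wood(t_days, a=0.9, b=0.35, c=0.015) * rng.normal(1.0, 0.05, t_days.size)

params, _ = curve_fit(wood, t_days, y_obs, p0=[1.0, 0.3, 0.01], maxfev=10_000)
a, b, c = params

peak_time = b / c                         # day of peak yield for the Wood model
peak_yield = wood(peak_time, a, b, c)
total_yield = wood(np.arange(1, 151, dtype=float), a, b, c).sum()   # approx. TMY, 150-day lactation
print(f"a={a:.3f}, b={b:.3f}, c={c:.4f}")
print(f"peak at day {peak_time:.1f}, peak yield {peak_yield:.2f} L, TMY ~ {total_yield:.0f} L")
```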

Keywords: gamma incomplete, ewes, shape curves, modeling

Procedia PDF Downloads 75
652 The Relationship between Personal, Psycho-Social and Occupational Risk Factors and Low Back Pain Severity in Industrial Workers

Authors: Omid Giahi, Ebrahim Darvishi, Mahdi Akbarzadeh

Abstract:

Introduction: Occupational low back pain (LBP) is one of the most prevalent work-related musculoskeletal disorders, and many risk factors are involved in it. The present study focuses on the relation between personal, psycho-social and occupational risk factors and LBP severity in industrial workers. Materials and Methods: This research was a case-control study conducted in Kurdistan province. 100 workers (mean age ± SD of 39.9 ± 10.45) with LBP were selected as the case group, and 100 workers (mean age ± SD of 37.2 ± 8.5) without LBP were assigned to the control group. All participants were selected from various industrial units, and they had similar occupational conditions. The required data, including demographic information (BMI, smoking, alcohol, and family history), occupational factors (posture, mental workload (MWL), force, vibration and repetition), and psychosocial factors (stress, occupational satisfaction and security), were collected via consultation with occupational medicine specialists, interviews, and the related questionnaires, as well as the NASA-TLX software and the REBA worksheet. The chi-square test, logistic regression and structural equation modeling (SEM) were used to analyze the data, using IBM SPSS Statistics 24 and Mplus 6 software. Results: 114 (77%) of the individuals were male and 86 (23%) were female. The mean career lengths of the case group and control group were 10.90 ± 5.92 and 9.22 ± 4.24, respectively. The statistical analysis of the data revealed that there was a significant correlation between posture, smoking, stress, satisfaction, and MWL and occupational LBP. The odds ratios (95% confidence intervals) derived from a logistic regression model were 2.7 (1.27-2.24), 2.5 (2.26-5.17), and 3.22 (2.47-3.24) for stress, MWL, and posture, respectively. Also, the SEM analysis of the personal, psycho-social and occupational factors with LBP revealed that there was a significant correlation. Conclusion: All three broad categories of risk factors simultaneously increase the risk of occupational LBP in the workplace. However, posture, stress, and MWL play a major role in LBP severity. Therefore, prevention strategies for persons in jobs with high risks for LBP are required to decrease the risk of occupational LBP.

Keywords: industrial workers, occupational low back pain, occupational risk factors, psychosocial factors

Procedia PDF Downloads 257
651 Practical Software for Optimum Bore Hole Cleaning Using Drilling Hydraulics Techniques

Authors: Abdulaziz F. Ettir, Ghait Bashir, Tarek S. Duzan

Abstract:

Proper well planning is vital to any successful drilling program, as it helps prevent and overcome drilling problems and minimize operating costs. The hydraulic system plays an active role during drilling operations: a well-designed system accelerates the drilling effort and lowers the overall well cost, whereas an improperly designed hydraulic system can slow the drill rate, fail to clean the hole of cuttings, and cause kicks. In most cases, common sense and commercially available computer programs are the only elements required to design the hydraulic system. Drilling optimization is the logical process of analyzing the effects and interactions of drilling variables through applied drilling and hydraulic equations and mathematical modeling to achieve maximum drilling efficiency at minimum drilling cost. In this paper, practical software is adopted to define drilling optimization models based on four different optimum keys, namely Opti-flow, Opti-clean, Opti-slip, and Opti-nozzle, which help achieve high drilling efficiency at lower cost. The data used in this research are from vertical and horizontal wells recently drilled in Waha Oil Company fields. The input data are: formation type, geopressures, hole geometry, bottom hole assembly, and mud rheology. The analysis shows that, for all wells, the proposed program provides higher accuracy than the company's current approach in terms of hole cleaning efficiency and cost breakdown, taking the actual field data as the reference base. Finally, it is recommended to use the established optimization software during drilling design to obtain correct drilling parameters that provide high drilling efficiency, good borehole cleaning, and all other hydraulic parameters that help minimize hole problems and control drilling operation costs.
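
The sketch below is NOT the paper's Opti-flow/Opti-clean/Opti-slip/Opti-nozzle models; it only illustrates the kind of hole-cleaning check such hydraulics software performs, using the common oilfield annular-velocity formula and an assumed cuttings slip velocity. The well geometry, flow rate, and slip velocity are hypothetical values.

```python
def annular_velocity_ft_min(flow_gpm: float, hole_d_in: float, pipe_d_in: float) -> float:
    """Annular velocity (ft/min) for flow in gpm and diameters in inches: 24.5*Q/(Dh^2 - Dp^2)."""
    return 24.5 * flow_gpm / (hole_d_in**2 - pipe_d_in**2)

def transport_ratio(annular_vel: float, slip_vel: float) -> float:
    """Cuttings transport ratio = 1 - Vslip/Vann; values near 1 indicate good hole cleaning."""
    return 1.0 - slip_vel / annular_vel

# Hypothetical 12-1/4" hole drilled with 5" drill pipe at 800 gpm
v_ann = annular_velocity_ft_min(800.0, 12.25, 5.0)
v_slip = 60.0   # assumed cuttings slip velocity (ft/min), e.g. from a Moore-type correlation
print(f"annular velocity = {v_ann:.0f} ft/min, "
      f"transport ratio = {transport_ratio(v_ann, v_slip):.2f}")
```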

Keywords: optimum keys, opti-flow, opti-clean, opti-slip, opti-nozzle

Procedia PDF Downloads 318
650 Design and Construction of a Home-Based, Patient-Led, Therapeutic, Post-Stroke Recovery System Using Iterative Learning Control

Authors: Marco Frieslaar, Bing Chu, Eric Rogers

Abstract:

Stroke is a devastating illness that is the second biggest cause of death in the world (after heart disease). Where it does not kill, it leaves survivors with debilitating sensory and physical impairments that not only seriously harm their quality of life but also cause a high incidence of severe depression. It is widely accepted that early intervention is essential for recovery, but current rehabilitation techniques largely favor hospital-based therapies, which have restricted access, require expensive and specialist equipment, and tend to side-step the emotional challenges. In addition, there is insufficient funding available to provide the long-term assistance that is required. As a consequence, recovery rates are poor. The relatively unexplored solution is to develop therapies that can be harnessed in the home and are formulated from technologies that already exist in everyday life. This would empower individuals to take control of their own improvement and provide choice in terms of when and where they feel best able to undertake their own healing. This research seeks to identify how effective post-stroke rehabilitation therapy can be applied to upper limb mobility within the physical context of a home rather than a hospital. This is being achieved through the design and construction of an automation scheme, based on iterative learning control and the Riener muscle model, that can adapt to the user, react to their level of fatigue, and provide tangible physical recovery. It utilizes a SMART phone and laptop to construct an iterative learning control (ILC) system that monitors upper arm movement in three dimensions as a series of exercises is undertaken. The equipment generates functional electrical stimulation to assist in muscle activation and thus improve directional accuracy. In addition, it monitors speed, accuracy, areas of motion weakness, and similar parameters to create a performance index that can be compared over time and extrapolated to establish an independent and objective assessment scheme, plus an approximate estimate of the predicted final outcome. To further extend its assessment capabilities, nerve conduction velocity readings are taken by the software between the shoulder and hand muscles. This is used to measure the speed of neuron signal transfer along the arm, and over time an online indication of regeneration levels can be obtained. This will show whether or not sufficient training intensity is being achieved even before perceivable movement dexterity is observed. The device also provides the option to connect to other users via the internet, so that the patient can avoid feelings of isolation and can undertake movement exercises together with others in a similar position. This should create benefits not only for the encouragement of rehabilitation participation but also for a potential emotional support network. It is intended that this approach will extend the availability of stroke recovery options, enable ease of access at a low cost, reduce susceptibility to depression, and, through these endeavors, enhance the overall recovery success rate.
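
A minimal sketch of the iterative learning control (ILC) idea behind such a system is given below: the input applied on trial k+1 is the previous input plus a learning gain times the previous tracking error. The toy first-order "arm" response and the gain are illustrative placeholders, not the Riener muscle model or the stimulation controller used in the paper.

```python
import numpy as np

dt, T = 0.01, 2.0
t = np.arange(0, T, dt)
reference = np.sin(np.pi * t / T)        # desired reaching trajectory (normalized)

def simulate_arm(u):
    """Toy discrete first-order response of the limb to the stimulation input u."""
    y = np.zeros_like(u)
    for i in range(1, len(u)):
        y[i] = 0.6 * y[i - 1] + 0.4 * u[i - 1]
    return y

L_gain = 0.8                             # ILC learning gain (illustrative)
u = np.zeros_like(t)                     # first-trial input
for trial in range(10):
    y = simulate_arm(u)
    error = reference - y
    # P-type ILC update with a one-sample phase lead: u_{k+1}(t) = u_k(t) + L * e_k(t+1)
    u[:-1] = u[:-1] + L_gain * error[1:]
    print(f"trial {trial}: RMS tracking error = {np.sqrt(np.mean(error**2)):.4f}")
```

Trial after trial the tracking error shrinks, which is the mechanism the system exploits to keep adapting the stimulation to the individual user.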

Keywords: home-based therapy, iterative learning control, Riener muscle model, SMART phone, stroke rehabilitation

Procedia PDF Downloads 264
649 Material Concepts and Processing Methods for Electrical Insulation

Authors: R. Sekula

Abstract:

Epoxy composites are broadly used as electrical insulation for high-voltage applications, since only such materials can fulfill the particular mechanical, thermal, and dielectric requirements. However, the properties of the final product strongly depend on a proper manufacturing process with minimized material failures, such as excessive shrinkage, voids, and cracks. Therefore, the application of proper materials (epoxy, hardener, and filler) and process parameters (mold temperature, filling time, filling velocity, initial temperature of internal parts, gelation time), as well as design and geometric parameters, is essential for the final quality of the produced components. In this paper, an approach to three-dimensional modeling of all molding stages, namely filling, curing, and post-curing, is presented. The reactive molding simulation tool is based on a commercial CFD package and includes dedicated models describing viscosity and reaction kinetics, which have been successfully implemented to simulate the reactive nature of the system with its exothermic effect. A dedicated simulation procedure for stress and shrinkage calculations, as well as simulation results, is also presented. The second part of the paper is dedicated to recent developments in formulations of functional composites for electrical insulation applications, focusing on thermally conductive materials. Concepts based on filler modifications for epoxy electrical composites are presented, including the resulting properties. Finally, bearing in mind tough environmental regulations, in addition to the current process and design aspects, an approach for product re-design is presented, focusing on the replacement of the epoxy material with a thermoplastic one. Such a “design-for-recycling” method is one of the new directions in the development of new material and processing concepts for electrical products and brings many additional research challenges. One successful product is presented to illustrate this methodology.
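
As a minimal sketch of the kind of cure-kinetics sub-model embedded in reactive molding simulations, the snippet below integrates an autocatalytic (Kamal-type) rate law with Arrhenius rate constants at a fixed mold temperature. All parameter values, the gel-conversion threshold, and the isothermal assumption are hypothetical placeholders, not the calibrated data of the commercial tool described in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

R = 8.314  # gas constant, J/(mol*K)

def kamal_rate(t, alpha, T):
    """d(alpha)/dt = (k1 + k2*alpha^m) * (1 - alpha)^n at mold temperature T [K]."""
    a = np.clip(alpha, 0.0, 1.0)                   # keep conversion in [0, 1]
    k1 = 1.0e5 * np.exp(-60000.0 / (R * T))        # assumed Arrhenius constants
    k2 = 5.0e5 * np.exp(-55000.0 / (R * T))
    m, n = 0.5, 1.5
    return (k1 + k2 * a**m) * (1.0 - a)**n

T_mold = 413.0   # assumed 140 degC mold temperature
sol = solve_ivp(kamal_rate, (0.0, 3600.0), [1e-4], args=(T_mold,), max_step=5.0)
alpha = sol.y[0]

# Gelation time estimated as the time to reach an assumed gel conversion of 0.6
t_gel = sol.t[np.argmax(alpha >= 0.6)] if alpha.max() >= 0.6 else None
print(f"final conversion = {alpha[-1]:.2f}, estimated gel time = {t_gel} s")
```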

Keywords: curing, epoxy insulation, numerical simulations, recycling

Procedia PDF Downloads 277
648 Simulation of the FDA Centrifugal Blood Pump Using High Performance Computing

Authors: Mehdi Behbahani, Sebastian Rible, Charles Moulinec, Yvan Fournier, Mike Nicolai, Paolo Crosetto

Abstract:

Computational Fluid Dynamics blood-flow simulations are increasingly used to develop and validate blood-contacting medical devices. This study shows that numerical simulations can provide additional and accurate estimates of relevant hemodynamic indicators (e.g., recirculation zones or wall shear stresses), which may be difficult and expensive to obtain from in-vivo or in-vitro experiments. The most recent FDA (Food and Drug Administration) benchmark consists of a simplified centrifugal blood pump model that contains fluid flow features commonly found in these devices, with a clear focus on highly turbulent phenomena. The FDA centrifugal blood pump study is composed of six test cases with different volumetric flow rates, ranging from 2.5 to 7.0 liters per minute, pump speeds, and Reynolds numbers ranging from 210,000 to 293,000. Within the frame of this study, different turbulence models were tested, including RANS models, e.g., k-omega, k-epsilon, and a Reynolds Stress Model (RSM), as well as LES. The partitioners Hilbert, METIS, ParMETIS, and SCOTCH were used to partition an unstructured mesh of 76 million elements and were compared in terms of efficiency. Computations were performed on the JUQUEEN BG/Q architecture using the highly parallel flow solver Code SATURNE, typically on 32,768 or more processors in parallel. Visualisations were performed by means of PARAVIEW. All six flow situations could be successfully analysed with the different turbulence models and validated against analytical considerations and comparisons with other databases. The results showed that an RSM is an appropriate choice for modeling high-Reynolds-number flow cases; in particular, the Rij-SSG (Speziale, Sarkar, Gatski) variant turned out to be a good approach. Visualisation of complex flow features could be obtained, and the flow situation inside the pump could be characterized.
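
A small back-of-the-envelope check of the quoted Reynolds-number range is sketched below, using the rotational definition Re = ρωD²/μ with assumed, commonly quoted benchmark values for the blood analog and rotor size and assumed pump speeds of 2500 and 3500 rpm; these figures are not taken from the abstract and are stated only to illustrate how the range arises.

```python
import math

rho = 1035.0      # blood-analog density, kg/m^3 (assumed)
mu = 3.5e-3       # blood-analog dynamic viscosity, Pa.s (assumed)
D = 0.052         # rotor diameter, m (assumed)

for rpm in (2500, 3500):                       # assumed pump speeds
    omega = 2.0 * math.pi * rpm / 60.0         # angular velocity, rad/s
    re = rho * omega * D**2 / mu               # rotational Reynolds number
    print(f"{rpm} rpm -> Re ~ {re:,.0f}")
# With these assumed values the result lands near 210,000 and 293,000.
```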

Keywords: blood flow, centrifugal blood pump, high performance computing, scalability, turbulence

Procedia PDF Downloads 381
647 Geospatial Analysis for Predicting Sinkhole Susceptibility in Greene County, Missouri

Authors: Shishay Kidanu, Abdullah Alhaj

Abstract:

Sinkholes in the karst terrain of Greene County, Missouri, pose significant geohazards, imposing challenges on construction and infrastructure development, with potential threats to lives and property. To address these issues, understanding the influencing factors and modeling sinkhole susceptibility is crucial for effective mitigation through strategic changes in land use planning and practices. This study utilizes geographic information system (GIS) software to collect and process diverse data, including topographic, geologic, hydrogeologic, and anthropogenic information. Nine key sinkhole influencing factors, ranging from slope characteristics to proximity to geological structures, were carefully analyzed. The Frequency Ratio method establishes relationships between attribute classes of these factors and sinkhole events, deriving class weights to indicate their relative importance. Weighted integration of these factors is accomplished using the Analytic Hierarchy Process (AHP) and the Weighted Linear Combination (WLC) method in a GIS environment, resulting in a comprehensive sinkhole susceptibility index (SSI) model for the study area. Employing the Jenks natural breaks classification method, the SSI values are categorized into five distinct sinkhole susceptibility zones: very low, low, moderate, high, and very high. Validation of the model, conducted through the area under the curve (AUC) and Sinkhole Density Index (SDI) methods, demonstrates a robust correlation with sinkhole inventory data. The prediction rate curve yields an AUC value of 74%, indicating a 74% validation accuracy. The SDI result further supports the success of the sinkhole susceptibility model. This model offers reliable predictions for the future distribution of sinkholes, providing valuable insights for planners and engineers in the formulation of development plans and land-use strategies. Its application extends to enhancing preparedness and minimizing the impact of sinkhole-related geohazards on both infrastructure and the community.
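
A minimal sketch of the Frequency Ratio (FR) weighting and Weighted Linear Combination (WLC) steps is shown below with hypothetical classes, counts, and AHP weights; the real workflow operates on GIS raster layers rather than the toy numbers used here.

```python
def frequency_ratio(sinkholes_in_class, cells_in_class, total_sinkholes, total_cells):
    """FR = (sinkhole % in class) / (area % of class); FR > 1 means the class is over-represented."""
    return (sinkholes_in_class / total_sinkholes) / (cells_in_class / total_cells)

# Hypothetical slope-class statistics: (sinkhole count, raster cell count) per class
slope_classes = {"0-2 deg": (60, 40000), "2-5 deg": (30, 30000), ">5 deg": (10, 30000)}
total_sk = sum(v[0] for v in slope_classes.values())
total_cells = sum(v[1] for v in slope_classes.values())
fr_slope = {k: frequency_ratio(s, c, total_sk, total_cells)
            for k, (s, c) in slope_classes.items()}

# WLC: susceptibility index = sum over factors of (AHP weight * FR of the local class).
# Two hypothetical factors with assumed AHP-derived weights, evaluated for a single cell:
ahp_weights = {"slope": 0.6, "depth_to_bedrock": 0.4}
cell_fr = {"slope": fr_slope["0-2 deg"], "depth_to_bedrock": 1.8}   # assumed FR values
ssi = sum(ahp_weights[f] * cell_fr[f] for f in ahp_weights)
print(f"FR by slope class: {fr_slope}")
print(f"SSI for the example cell: {ssi:.2f}")
```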

Keywords: sinkhole, GIS, analytical hierarchy process, frequency ratio, susceptibility, Missouri

Procedia PDF Downloads 73
646 Distant Speech Recognition Using Laser Doppler Vibrometer

Authors: Yunbin Deng

Abstract:

Most existing applications of automatic speech recognition rely on cooperative subjects at a short distance from a microphone. Standoff speech recognition using microphone arrays can extend the subject-to-sensor distance somewhat, but it is still limited to only a few feet. As such, most deployed applications of standoff speech recognition are limited to indoor use at short range. Moreover, these applications require an air passageway between the subject and the sensor to achieve a reasonable signal-to-noise ratio. This study reports long range (50 feet) automatic speech recognition experiments using a Laser Doppler Vibrometer (LDV) sensor. This study shows that the LDV sensor modality can extend the speech acquisition standoff distance far beyond microphone arrays, to hundreds of feet. In addition, LDV enables 'listening' through windows for uncooperative subjects. This enables new capabilities in automatic audio and speech intelligence, surveillance, and reconnaissance (ISR) for law enforcement, homeland security, and counterterrorism applications. The Polytec LDV model OFV-505 is used in this study. To investigate the impact of different vibrating materials, five parallel LDV speech corpora, each consisting of 630 speakers, were collected from the vibrations of a glass window, a metal plate, a plastic box, a wood slate, and a concrete wall. These are common materials the application could encounter in daily life. These data were compared with their microphone counterparts to manifest the impact of the various materials on the spectrum of the LDV speech signal. State-of-the-art deep neural network modeling approaches are used to conduct continuous speaker-independent speech recognition on these LDV speech datasets. Preliminary phoneme recognition results using time-delay neural networks, bidirectional long short-term memory, and model fusion show great promise for using LDV for long range speech recognition. To the author's best knowledge, this is the first time an LDV has been reported for a long distance speech recognition application.
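
The following is a minimal sketch (not the study's actual acoustic model) of a time-delay neural network (TDNN) phoneme classifier operating on frame-level features such as MFCCs extracted from an LDV signal; the layer sizes, temporal contexts, and feature/phoneme dimensions are illustrative assumptions only.

```python
import torch
import torch.nn as nn

class TinyTDNN(nn.Module):
    def __init__(self, n_feats=40, n_phones=40):
        super().__init__()
        # Each Conv1d layer splices a short temporal context (kernel size x dilation).
        self.net = nn.Sequential(
            nn.Conv1d(n_feats, 256, kernel_size=5, dilation=1), nn.ReLU(),
            nn.Conv1d(256, 256, kernel_size=3, dilation=2), nn.ReLU(),
            nn.Conv1d(256, 256, kernel_size=3, dilation=3), nn.ReLU(),
            nn.Conv1d(256, n_phones, kernel_size=1),   # per-frame phoneme logits
        )

    def forward(self, x):            # x: (batch, n_feats, n_frames)
        return self.net(x)

model = TinyTDNN()
dummy = torch.randn(2, 40, 200)      # 2 utterances, 40-dim features, 200 frames
logits = model(dummy)
print(logits.shape)                  # (2, n_phones, frames remaining after context trimming)
```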

Keywords: covert speech acquisition, distant speech recognition, DSR, laser Doppler vibrometer, LDV, speech intelligence surveillance and reconnaissance, ISR

Procedia PDF Downloads 177
645 The Predictive Power of Successful Scientific Theories: An Explanatory Study on Their Substantive Ontologies through Theoretical Change

Authors: Damian Islas

Abstract:

Debates on realism in science concern two different questions: (I) whether the unobservable entities posited by theories can be known; and (II) whether any knowledge we have of them is objective or not. Question (I) arises from the doubt that, since observation is the basis of all our factual knowledge, unobservable entities cannot be known. Question (II) arises from the doubt that, since scientific representations are inextricably laden with the subjective, idiosyncratic, and a priori features of human cognition and scientific practice, they cannot convey any reliable information on how their objects are in themselves. One way of understanding scientific realism (SR) is through three lines of inquiry: ontological, semantic, and epistemological. Ontologically, scientific realism asserts the existence of a world independent of the human mind. Semantically, scientific realism assumes that theoretical claims about reality have truth values and, thus, should be construed literally. Epistemologically, scientific realism holds that theoretical claims offer us knowledge of the world. Nowadays, the literature on scientific realism has proceeded rather far beyond the realism versus antirealism debate. Structural realism represents a middle-ground position between the two, according to which science can attain justified true beliefs concerning relational facts about the unobservable realm but cannot attain justified true beliefs concerning the intrinsic nature of any objects occupying that realm. That is, the structural content of scientific theories about the unobservable can be known, but facts about the intrinsic nature of the entities that figure as place-holders in those structures cannot be known. There are two possible versions of structural realism: Epistemological Structural Realism (ESR) and Ontic Structural Realism (OSR). On ESR, an agnostic stance is preserved with respect to the natures of unobservable entities, but the possibility of knowing the relations obtaining between those entities is affirmed. OSR includes the rather striking claim that when it comes to the unobservables theorized about within fundamental physics, relations exist, but objects do not. Focusing on ESR, questions arise concerning its ability to explain the empirical success of a theory. Empirical success certainly involves predictive success, and predictive success implies a theory's power to make accurate predictions. But a theory's power to make any predictions at all seems to derive precisely from its core axioms or laws concerning unobservable entities and mechanisms, and not simply from the sort of structural relations often expressed in equations. The specific challenge to ESR concerns its ability to explain the explanatory and predictive power of successful theories without appealing to their substantive ontologies, which are often not preserved by their successors. The response to this challenge will depend on the various and subtly different versions of the ESR and OSR stances, which show a progression, from eliminativist OSR to moderate OSR, of gradual increase in the ontological status accorded to objects. Knowing the relations between unobserved entities is methodologically identical to asserting that those relations exist.

Keywords: eliminativist ontic structural realism, epistemological structuralism, moderate ontic structural realism, ontic structuralism

Procedia PDF Downloads 117
644 Production and Characterization of Biochars from Torrefaction of Biomass

Authors: Serdar Yaman, Hanzade Haykiri-Acma

Abstract:

Biomass is a CO₂-neutral fuel that is renewable and sustainable and has a very large global potential. Efficient use of biomass in power generation and in the production of biomass-based biofuels can mitigate greenhouse gas (GHG) emissions and reduce dependency on fossil fuels. Biomass energy use also has other beneficial effects, such as employment creation and pollutant reduction. However, most biomass materials are not capable of competing with fossil fuels in terms of energy content. The high moisture content and high volatile matter yield of biomass make it a low-calorific fuel, which is a significant drawback compared with fossil fuels. Besides, the density of biomass is generally low, which makes transportation and storage difficult. These negative aspects of biomass can be overcome by thermal pretreatments that upgrade its fuel properties. Torrefaction is one such thermal process, in which biomass is heated up to 300ºC under non-oxidizing conditions to avoid burning of the material. The treated biomass is called biochar and has considerably lower moisture, volatile matter, and oxygen contents than the parent biomass. Accordingly, the carbon content and calorific value of biochar increase to a level comparable with that of coal. Moreover, the hydrophilic nature of untreated biomass, which leads to decay in the structure, is mostly eliminated, and the surface properties of the biochar become hydrophobic upon torrefaction. In order to investigate the effectiveness of the torrefaction process on biomass properties, several biomass species, namely olive milling residue (OMR), Rhododendron (a small shrubby tree with bell-shaped flowers), and ash tree (a timber tree), were chosen. The fuel properties of these biomasses were analyzed through proximate and ultimate analyses as well as higher heating value (HHV) determination. For this, samples were first chopped and ground to a particle size lower than 250 µm. Then, samples were subjected to torrefaction in a horizontal tube furnace by heating from ambient temperature up to 200, 250, and 300ºC at a heating rate of 10ºC/min. The biochars obtained from this process were also tested by the methods applied to the parent biomass species, and the improvement in fuel properties was interpreted. Increasing torrefaction temperature led to steady increases in the HHV of OMR, and the highest HHV (6065 kcal/kg) was obtained at 300ºC, whereas torrefaction at 250ºC was found to be optimum for Rhododendron and ash tree, since torrefaction at 300ºC had a detrimental effect on their HHV. Increases in carbon content and reductions in oxygen content were also determined. The burning characteristics of the biochars were studied using thermal analysis; for this purpose, a TA Instruments SDT Q600 thermal analyzer was used, and the thermogravimetric analysis (TGA), derivative thermogravimetry (DTG), differential scanning calorimetry (DSC), and differential thermal analysis (DTA) curves were compared and interpreted. It was concluded that torrefaction is an efficient method to upgrade the fuel properties of biomass and that the resulting biochars have superior characteristics compared to the parent biomasses.

Keywords: biochar, biomass, fuel upgrade, torrefaction

Procedia PDF Downloads 373
643 Waveguiding in an InAs Quantum Dots Nanomaterial for Scintillation Applications

Authors: Katherine Dropiewski, Michael Yakimov, Vadim Tokranov, Allan Minns, Pavel Murat, Serge Oktyabrsky

Abstract:

InAs quantum dots (QDs) in a GaAs matrix are a well-documented luminescent material with high light yield, as well as thermal and ionizing radiation tolerance due to quantum confinement. These benefits can be leveraged for high-efficiency, room temperature scintillation detectors. The proposed scintillator is composed of InAs QDs acting as luminescence centers in a GaAs stopping medium, which also acts as a waveguide. This system has appealing potential properties, including high light yield (~240,000 photons/MeV) and fast capture of photoelectrons (2-5 ps), orders of magnitude better than currently used inorganic scintillators such as LYSO or BaF2. The high refractive index of the GaAs matrix (n=3.4) ensures that light emitted by the QDs is waveguided and can be collected by an integrated photodiode (PD). Scintillation structures were grown using Molecular Beam Epitaxy (MBE) and consist of thick GaAs waveguiding layers with embedded sheets of modulation p-type doped InAs QDs. An AlAs sacrificial layer is grown between the waveguide and the GaAs substrate for epitaxial lift-off, allowing the scintillator film to be separated and transferred to a low-index substrate for waveguiding measurements. One consideration when using a low-density material like GaAs (~5.32 g/cm³) as a stopping medium is the matrix thickness in the dimension of radiation collection. Therefore, the luminescence properties of very thick (4-20 micron) waveguides with up to 100 QD layers were studied. The optimization of the medium included QD shape, density, doping, and AlGaAs barriers at the waveguide surfaces to prevent non-radiative recombination. To characterize the efficiency of QD luminescence, temperature-dependent photoluminescence (PL) (77-450 K) was measured and fitted using a kinetic model. The PL intensity degrades by only 40% at room temperature, with an activation energy for electron escape from the QDs to the barrier of ~60 meV. Attenuation within the waveguide (WG) is a limiting factor for the lateral size of a scintillation detector, so PL spectroscopy in the waveguiding configuration was studied. Spectra were measured while the laser (630 nm) excitation point was scanned away from the collecting fiber coupled to the edge of the WG. The QD ground-state PL peak at 1.04 eV (1190 nm) was inhomogeneously broadened with a FWHM of 28 meV (33 nm) and showed a distinct red-shift due to self-absorption in the QDs. Attenuation stabilized after the light traveled over 1 mm through the WG, at about 3 cm⁻¹. Finally, a scintillator sample was used to test detection and evaluate timing characteristics using 5.5 MeV alpha particles. With a 2D waveguide and a small-area integrated PD, the collected charge averaged 8.4 × 10⁴ electrons, corresponding to a collection efficiency of about 7%. The scintillation response had 80 ps noise-limited time resolution and a QD decay time of 0.6 ns. The data confirm the unique properties of this scintillation detector, which can potentially be much faster than any currently used inorganic scintillator.
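
Below is a sketch of one common kinetic form used to fit temperature-dependent PL quenching, I(T) = I0 / (1 + C·exp(-Ea/(kB·T))), with an activation energy parameter analogous to the electron-escape energy discussed above; it is illustrative only and not necessarily the exact rate-equation model used by the authors, and the "measured" intensities are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

kB = 8.617e-5  # Boltzmann constant, eV/K

def pl_quenching(T, I0, C, Ea):
    """Thermally activated PL quenching: I(T) = I0 / (1 + C*exp(-Ea/(kB*T)))."""
    return I0 / (1.0 + C * np.exp(-Ea / (kB * T)))

# Synthetic "measurements" over the 77-450 K range (assumed parameter values plus noise)
T = np.linspace(77, 450, 30)
rng = np.random.default_rng(1)
I_meas = pl_quenching(T, 1.0, 8.0, 0.060) * (1 + 0.02 * rng.normal(size=T.size))

popt, _ = curve_fit(pl_quenching, T, I_meas, p0=(1.0, 5.0, 0.05))
I0, C, Ea = popt
print(f"fitted activation energy Ea ~ {Ea * 1000:.0f} meV")
print(f"intensity at 300 K relative to the low-temperature limit: {pl_quenching(300, *popt) / I0:.2f}")
```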

Keywords: GaAs, InAs, molecular beam epitaxy, quantum dots, III-V semiconductor

Procedia PDF Downloads 254