Search results for: modeling accuracy
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7096

916 An Analysis of the Performances of Various Buoys as the Floats of Wave Energy Converters

Authors: İlkay Özer Erselcan, Abdi Kükner, Gökhan Ceylan

Abstract:

The power generated by eight point-absorber type wave energy converters, each having a different buoy, is calculated in this study in order to investigate the performance of the buoys. The calculations are carried out by modeling three different sea states observed at two different locations in the Black Sea. The floats analyzed in this study have two basic geometries and four different draft/radius (d/r) ratios. The buoys possess the shapes of a semi-ellipsoid and a semi-elliptic paraboloid, and the draft/radius ratios range from 0.25 to 1 in increments of 0.25. The radiation forces acting on the buoys due to the oscillatory motions of these bodies are evaluated by employing a 3D panel method along with a distribution of 3D pulsating sources in the frequency domain. The wave forces acting on the buoys, taken as the sum of the Froude-Krylov forces and the diffraction forces, are calculated by using linear wave theory. Furthermore, the wave energy converters are assumed to be taut-moored to the seabed so that the secondary body, which houses the power take-off system, oscillates with much smaller amplitudes compared to the buoy. It is therefore assumed that there is no significant contribution to the power generation from the motions of the housing body and that the only contribution comes from the buoy. The power take-off systems of the wave energy converters are high-pressure oil hydraulic systems which are identical in terms of their characteristic parameters. The results show that the power generated by the wave energy converters with semi-ellipsoid floats is higher than that of those with semi-elliptic paraboloid floats in both locations and in all sea states. It is also determined that the generated power does not vary monotonically with the draft/radius ratio of the floats. Although the highest power level is obtained with a semi-ellipsoid float with a draft/radius ratio equal to 1, floats with a draft/radius ratio of 0.25 delivered higher power than the floats with a draft/radius ratio equal to 1 in some cases.
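
As a rough illustration of the linear-theory power estimate described above (not the study's actual panel-method computation), a heave-only sketch with a linear PTO damper might look as follows; all numerical coefficients are placeholder values, not results from the paper.

```python
import numpy as np

# Illustrative linear-theory estimate of the time-averaged power absorbed by a
# heaving point absorber with a linear PTO damper (all coefficients are placeholders).
rho, g = 1025.0, 9.81          # sea water density [kg/m^3], gravity [m/s^2]
omega = 0.8                    # wave angular frequency [rad/s]
m, A = 2.0e4, 1.2e4            # buoy mass and added mass at omega [kg]
B_rad, B_pto = 5.0e3, 1.5e4    # radiation and PTO damping [N.s/m]
Aw = 12.0                      # waterplane area of the buoy [m^2]
F_exc = 4.0e4                  # excitation (Froude-Krylov + diffraction) force amplitude [N]

C = rho * g * Aw                                            # hydrostatic restoring stiffness [N/m]
Z = -(m + A) * omega**2 + C                                 # reactive part of the impedance
X = F_exc / np.sqrt(Z**2 + ((B_rad + B_pto) * omega)**2)    # heave amplitude [m]
P_avg = 0.5 * B_pto * (omega * X)**2                        # mean absorbed power [W]
print(f"heave amplitude ~ {X:.2f} m, mean power ~ {P_avg/1e3:.2f} kW")
```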

Keywords: Black Sea, buoys, hydraulic power take-off system, wave energy converters

Procedia PDF Downloads 338
915 A Non-Invasive Neonatal Jaundice Screening Device Measuring Bilirubin on Eyes

Authors: Li Shihao, Dieter Trau

Abstract:

Bilirubin is a yellow substance that is made when the body breaks down old red blood cells. High levels of bilirubin can cause jaundice, a condition that makes the newborn's skin and the white part of the eyes look yellow. Jaundice is a serial killer in developing countries in Southeast Asia, such as Myanmar, and in most parts of Africa, where jaundice screening is largely unavailable. Worldwide, 60% of newborns experience infant jaundice, and one in ten will require therapy to prevent serious complications and lifelong neurologic sequelae. Limitations of current solutions: - Blood test: blood tests are painful, may be largely unavailable in poor areas of developing countries, and can also be costly and unsafe due to insufficient investment and lack of access to health care systems. - Transcutaneous jaundice-meter: 1) it can only provide reliable results for Caucasian newborns, because current technologies measure bilirubin by the color of the skin and skin pigmentation interferes; basically, the darker the skin, the harder the measurement, 2) current jaundice meters are not affordable for most underdeveloped areas in Africa, such as Kenya and Togo, 3) fat tissue under the skin also influences the accuracy, giving overestimated results, and 4) current jaundice meters are not reliable after treatment (phototherapy) because bilirubin levels underneath the skin are reduced first, while overall levels may remain quite high. Thus, there is an urgent need for a low-cost non-invasive device that is effective not only for Caucasian babies but also for Asian and African newborns, to save lives at the most vulnerable time and prevent complications such as brain damage. Instead of measuring bilirubin on the skin, we propose a new method that performs the measurement on the sclera, which avoids differences in skin pigmentation and ethnicity, since the sclera is white regardless of racial background. This is a novel approach for measuring bilirubin by an optical method of light reflection off the white part of the eye. Moreover, the device is connected to a smart device, which provides a user-friendly interface and the ability to record the clinical data continuously. A disposable eye cap is provided to avoid contamination and to fix the distance to the eye.

Keywords: jaundice, bilirubin, non-invasive, sclera

Procedia PDF Downloads 223
914 Physical Activity Self-Efficacy among Pregnant Women with High Risk for Gestational Diabetes Mellitus: A Cross-Sectional Study

Authors: Xiao Yang, Ji Zhang, Yingli Song, Hui Huang, Jing Zhang, Yan Wang, Rongrong Han, Zhixuan Xiang, Lu Chen, Lingling Gao

Abstract:

Aim and Objectives: To examine physical activity self-efficacy, identify its predictors, and further explore the mechanism of action among the predictors in mainland Chinese pregnant women at high risk for gestational diabetes mellitus (GDM). Background: Physical activity can protect pregnant women from developing GDM, and physical activity self-efficacy is the key predictor of physical activity. Design: A cross-sectional study was conducted from October 2021 to May 2022 in Zhengzhou, China. Methods: 252 eligible pregnant women completed the Pregnancy Physical Activity Self-efficacy Scale, the Social Support for Physical Activity Scale, the Knowledge on Physical Activity Questionnaire, the 7-item Generalized Anxiety Disorder scale, the Edinburgh Postnatal Depression Scale, and a socio-demographic data sheet. Multiple linear regression was applied to explore the predictors of physical activity self-efficacy, and structural equation modeling was used to explore the mechanism of action among the predictors. Results: Chinese pregnant women at high risk for GDM reported a moderate level of physical activity self-efficacy. The best-fit regression analysis revealed that four variables explained 17.5% of the variance in physical activity self-efficacy. Social support for physical activity was the strongest predictor, followed by knowledge of physical activity, intention to do physical activity, and anxiety symptoms. The model analysis indicated that knowledge of physical activity could relieve anxiety and depressive symptoms and thereby increase physical activity self-efficacy. Conclusion: The present study revealed a moderate level of physical activity self-efficacy. Interventions targeting pregnant women at high risk for GDM need to address the predictors of physical activity self-efficacy. Relevance to clinical practice: To help pregnant women at high risk for GDM engage in physical activity, healthcare professionals may find it useful to assess physical activity self-efficacy and intervene as early as possible, starting at the first antenatal visit. Physical activity intervention programs focused on self-efficacy may be conducted in further research.
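
A hypothetical sketch of the regression step described above, assuming the scale scores are available per participant; the file and column names below are illustrative, not the study's actual variable names.

```python
import pandas as pd
import statsmodels.api as sm

# Predicting physical activity self-efficacy from social support, knowledge,
# intention and anxiety scores (column names are assumptions).
df = pd.read_csv("pa_self_efficacy.csv")           # one row per participant
X = sm.add_constant(df[["social_support", "pa_knowledge", "intention", "anxiety"]])
model = sm.OLS(df["pa_self_efficacy"], X).fit()
print(model.summary())                              # coefficients and R^2 (~17.5% reported above)
```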

Keywords: physical activity, gestational diabetes, self-efficacy, predictors

Procedia PDF Downloads 80
913 Optimizing 3D Shape Parameters of Sports Bra Pads in Motion by Finite Element Dynamic Modelling with Inverse Problem Solution

Authors: Jiazhen Chen, Yue Sun, Joanne Yip, Kit-Lun Yick

Abstract:

The design of sports bras poses a considerable challenge due to the difficulty in accurately predicting the wearing result after computer-aided design (CAD). Repeated physical or virtual try-ons are needed to obtain a comfortable pressure range during motion. Specifically, in the context of running, the exact support area and the force exerted on the breasts remain unclear. Consequently, obtaining an effective method to design the shape of sports bra pads becomes particularly challenging. This predicament hinders the successful creation and production of sports bras that cater to women's health needs. The purpose of this study is to propose an effective method to obtain the 3D shape of sports bra pads and to understand the relationship between the supporting force and the 3D shape parameters of the pads. Firstly, the static 3D shape of the sports bra pad and human motion data (running) are obtained by using a 3D scanner and advanced 4D scanning technology. The 3D shape of the sports bra pad is parameterised and simplified by free-form deformation (FFD). Then the sub-models of the sports bra and the human body are constructed by segmenting and meshing them with MSC Apex software. The material coefficients of the sports bra are obtained by material testing. The Marc software is then utilised to establish a dynamic contact model between the human breast and the sports bra pad. To realise the reverse design of the sports bra pad, this contact model serves as a forward model for calculating the inverse problem. Based on the forward contact model, the inverse problem of the 3D shape parameters of the sports bra pad is solved with the target bra-wearing pressure range as the boundary condition. Finally, the credibility and accuracy of the simulation are validated by comparing the experimentally measured pressure distribution with that predicted by the FE model. On the one hand, this research allows for a more accurate understanding of the support area and force distribution on the breasts during running. On the other hand, this study can contribute to the customization of sports bra pads for different individuals and help to obtain sports bra pads with a comfortable dynamic pressure.
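
A conceptual sketch of the inverse step: the forward contact simulation is treated as a black box, and an optimizer searches the FFD shape parameters so that the predicted pressures fall inside a target comfort band. The forward model, the number of parameters, and the pressure range below are placeholders, not the study's actual values.

```python
import numpy as np
from scipy.optimize import minimize

P_MIN, P_MAX = 1.0, 3.0            # assumed comfortable pressure range [kPa]

def run_forward_fe(shape_params):
    """Placeholder: would launch the FE contact model and return nodal pressures."""
    return np.abs(np.sin(shape_params)).cumsum()   # dummy pressures for illustration

def cost(shape_params):
    p = run_forward_fe(shape_params)
    # penalise pressures that fall outside the target comfort band
    return np.sum(np.clip(P_MIN - p, 0, None)**2 + np.clip(p - P_MAX, 0, None)**2)

x0 = np.zeros(6)                    # six FFD control parameters (assumed)
res = minimize(cost, x0, method="Nelder-Mead")
print("optimised shape parameters:", res.x)
```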

Keywords: sports bra design, breast motion, running, inverse problem, finite element dynamic model

Procedia PDF Downloads 38
912 Exceptional Cost and Time Optimization with Successful Leak Repair and Restoration of Oil Production: West Kuwait Case Study

Authors: Nasser Al-Azmi, Al-Sabea Salem, Abu-Eida Abdullah, Milan Patra, Mohamed Elyas, Daniel Freile, Larisa Tagarieva

Abstract:

Well intervention was done along with Production Logging Tools (PLT) to detect sources of water and to check well integrity for two West Kuwait oil wells that had started to produce 100% water. For the first well, PLT was performed to check the perforations: no production was observed from the bottom two perforation intervals, and an intake of water was observed at the topmost perforation. A decision was then taken to extend the PLT survey from tag depth to the Y-tool. For the second well, the aim was to detect the source of water and to check whether there was a leak in the 7'' liner in front of the upper zones. Data could not be recorded in flowing conditions due to casing deformation at almost 8300 ft. For the first well, the interpretation of the PLT and well integrity data showed a hole in the 9 5/8'' casing from 8468 ft to 8494 ft producing the majority of the water, 2478 bbl/d, while the upper perforation from 10812 ft to 10854 ft was taking 534 stb/d. For the second well, there was a hole in the 7'' liner from 8303 ft MD to 8324 ft MD producing 8334.0 stb/d of water, with an intake zone from 10322.9 to 10380.8 ft MD taking the whole fluid. To restore the oil production, a W/O rig was mobilized to prevent dump flooding, and during the W/O the leaking interval was confirmed for both wells. The leakage was cement squeezed and tested at 900-psi positive pressure and 500-psi drawdown pressure; the cement squeeze job was successful. After the W/O, the wells kept producing for cleaning, and eventually the water cut (WC) dropped to 0%. Regular PLT and well integrity logs are required to study well performance and well integrity issues, proper cement behind the casing is essential for well longevity and integrity, and the presence of the Y-tool is essential for monitoring well parameters and the ESP and for facilitating well intervention tasks. Cost and time optimization in oil and gas, and especially during rig operations, is crucial. The PLT data quality and the accuracy of the interpretations contributed greatly to identifying the leakage intervals accurately and, in turn, saved a lot of time and reduced the repair cost by almost 35 to 45%. The added value here was mainly the cost reduction and the effective, quick, and proper decision-making based on the economic environment.

Keywords: leak, water shut-off, cement, water leak

Procedia PDF Downloads 108
911 Evaluation of Different Liquid Scintillation Counting Methods for 222Rn Determination in Waters

Authors: Jovana Nikolov, Natasa Todorovic, Ivana Stojkovic

Abstract:

Monitoring of 222Rn in drinking, surface, and ground waters has been performed in connection with geological, hydrogeological and hydrological surveys and health hazard studies. Liquid scintillation counting (LSC) is often the preferred analytical method for 222Rn measurements in waters because it allows automatic analysis of multiple samples. The LSC method implies mixing water samples with an organic scintillation cocktail, which triggers radon diffusion from the aqueous into the organic phase, for which it has a much greater affinity, thereby eliminating the possibility of radon emanation. Two direct LSC methods that assume different sample compositions are presented, optimized and evaluated in this study. The one-phase method assumes direct mixing of a 10 ml sample with 10 ml of an emulsifying cocktail (Ultima Gold AB scintillation cocktail is used). The two-phase method involves the use of water-immiscible cocktails (in this study, High Efficiency Mineral Oil Scintillator, Opti-Fluor O and Ultima Gold F are used). Calibration samples were prepared with an aqueous 226Ra standard in 20 ml glass vials and counted on the ultra-low background spectrometer Quantulus 1220TM, equipped with a PSA (Pulse Shape Analysis) circuit which discriminates alpha/beta spectra. Since the calibration procedure is carried out with a 226Ra standard, which has both alpha- and beta-emitting progenies, the PSA discriminator is of vital importance for reliable and precise spectra separation. Consequently, the calibration procedure investigated the influence of the PSA discriminator level on the 222Rn detection efficiency, using the 226Ra calibration standard over a wide range of activity concentrations. The evaluation of the presented methods was based on the obtained detection efficiencies and the achieved Minimal Detectable Activity (MDA). The accuracy and precision of the presented methods, as well as the performance of the different scintillation cocktails, were assessed from measurements of 226Ra-spiked water samples with known activity and of environmental samples.
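
As an illustration of the MDA criterion mentioned above, a common Currie-type calculation for a radon-in-water LSC measurement is sketched below; the background rate, counting time, efficiency, and sample volume are placeholder values, not the study's calibration results.

```python
import math

background_cpm = 1.2        # background count rate [counts per minute]
count_time_min = 300.0      # counting time [min]
efficiency = 0.9            # detection efficiency from the 226Ra calibration
volume_l = 0.010            # water sample volume [L]

B = background_cpm * count_time_min                    # background counts
L_D = 2.71 + 4.65 * math.sqrt(B)                       # Currie detection limit [counts]
mda_bq_per_l = L_D / (efficiency * count_time_min * 60.0 * volume_l)
print(f"MDA ~ {mda_bq_per_l:.2f} Bq/L")
```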

Keywords: 222Rn in water, Quantulus 1220TM, scintillation cocktail, PSA parameter

Procedia PDF Downloads 189
910 Surprise Fraudsters Before They Surprise You: A South African Telecommunications Case Study

Authors: Ansoné Human, Nantes Kirsten, Tanja Verster, Willem D. Schutte

Abstract:

Every year the telecommunications industry suffers huge losses due to fraud. Mobile fraud, or more generally telecommunications fraud, is the utilisation of telecommunication products or services to acquire money illegally from, or to avoid paying, a telecommunication company. A South African telecommunication operator developed two internal fraud scorecards to mitigate future risks of application fraud events. The scorecards aim to predict the likelihood of an application being fraudulent and to surprise fraudsters before they surprise the telecommunication operator by identifying fraud at the time of application. The scorecards are utilised in the vetting process to evaluate the applicant in terms of the fraud risk the applicant would present to the telecommunication operator. Telecommunication providers can utilise these scorecards to profile customers, as well as to isolate fraudulent and/or high-risk applicants. We provide the complete methodology utilised in the development of the scorecards. Furthermore, a Determination and Discrimination (DD) ratio is provided in the methodology to select the most influential variables from a group of related variables. Throughout the development of these scorecards, the following was revealed regarding fraudulent cases and fraudster behaviour within the telecommunications industry: fraudsters typically target high-value handsets; debit order dates scheduled for the end of the month have the highest fraud probability; fraudsters target specific stores; applicants who acquire an expensive package and receive a medium income, as well as applicants who obtain an expensive package and receive a high income, have higher fraud percentages; if, one month prior to application, the status of an account is already in arrears (two months or more), the applicant has a high probability of fraud; applicants with the highest average spend on calls have a higher probability of fraud; if the amount collected changes from month to month, the likelihood of fraud is higher; and, lastly, young and middle-aged applicants have a higher probability of being targeted by fraudsters than applicants of other ages.
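
A minimal sketch of an application-fraud scorecard in the spirit described above: a logistic regression on binned application attributes whose log-odds are rescaled to points. The input file, column names, and the points-to-double-odds scaling constants are assumptions, not the operator's actual methodology or DD-ratio variable selection.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

df = pd.read_csv("applications.csv")
X = pd.get_dummies(df[["handset_value_band", "debit_order_day_band",
                       "store_id", "package_income_band"]].astype(str))
y = df["is_fraud"]

clf = LogisticRegression(max_iter=1000).fit(X, y)
log_odds = clf.decision_function(X)                    # log-odds of fraud per application

pdo, base_score, base_odds = 20.0, 600.0, 50.0         # assumed points-to-double-odds scaling
factor = pdo / np.log(2)
offset = base_score - factor * np.log(base_odds)
score = offset - factor * log_odds                     # higher score = lower fraud risk
print(df.assign(score=score.round()).head())
```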

Keywords: application fraud scorecard, predictive modeling, regression, telecommunications

Procedia PDF Downloads 109
909 Comparing the Apparent Error Rate of Gender Specifying from Human Skeletal Remains by Using Classification and Cluster Methods

Authors: Jularat Chumnaul

Abstract:

In forensic science, corpses from various homicides differ; they may be complete or incomplete, depending on the cause of death or the form of homicide. For example, some corpses are cut into pieces, some are camouflaged by dumping into a river, some are buried, and some are burned to destroy the evidence. If the corpse is incomplete, personal identification becomes difficult because some tissues and bones are destroyed. To specify the gender of a corpse from skeletal remains, the most precise method is DNA identification. However, this method is costly and takes longer, so other identification techniques are used instead. The first widely used technique is consideration of the features of the bones. In general, evidence from the corpse such as pieces of bone, especially the skull and pelvis, can be used to identify gender. To use this technique, forensic scientists need observation skills in order to classify the differences between male and female bones. Although this technique is uncomplicated, saves time and cost, and allows gender to be determined fairly accurately (apparently an accuracy rate of 90% or more), its crucial disadvantage is that only some positions of the skeleton can be used to specify gender, such as the supraorbital ridge, nuchal crest, temporal lobe, mandible, and chin. Therefore, the skeletal remains to be used have to be complete. The other technique widely used for gender specification in forensic science and archeology is skeletal measurement. The advantage of this method is that it can be used at several positions on one piece of bone, and it can be used even if the bones are not complete. In this study, classification and cluster analysis are applied to this technique, including the Kth nearest neighbor classification, classification tree, Ward linkage cluster, K-means cluster, and two-step cluster methods. The data contain 507 individuals and 9 skeletal (diameter) measurements, and the performance of the five methods is investigated by considering the apparent error rate (APER). The results from this study indicate that the two-step cluster and Kth nearest neighbor methods seem to be suitable for specifying gender from human skeletal remains because both yield small apparent error rates of 0.20% and 4.14%, respectively. On the other hand, the classification tree, Ward linkage cluster, and K-means cluster methods are not appropriate since they yield large apparent error rates of 10.65%, 10.65%, and 16.37%, respectively. However, there are other ways to evaluate classification performance, such as estimating the error rate using a holdout procedure or misclassification costs, and different methods can lead to different conclusions.
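
A short sketch of how the apparent (resubstitution) error rate can be computed for one of the compared methods, k-nearest neighbours; the file name and column layout are illustrative placeholders for the 507-individual, 9-measurement dataset.

```python
import pandas as pd
from sklearn.neighbors import KNeighborsClassifier

df = pd.read_csv("skeletal_measurements.csv")   # 507 individuals, 9 diameters + a gender column
X, y = df.drop(columns="gender"), df["gender"]

knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)
resubstitution_pred = knn.predict(X)            # predictions on the training data itself
aper = (resubstitution_pred != y).mean()        # apparent error rate
print(f"APER = {aper:.2%}")
```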

Keywords: skeletal measurements, classification, cluster, apparent error rate

Procedia PDF Downloads 240
908 Predicting Subsurface Abnormalities Growth Using Physics-Informed Neural Networks

Authors: Mehrdad Shafiei Dizaji, Hoda Azari

Abstract:

The research explores the pioneering integration of Physics-Informed Neural Networks (PINNs) into the domain of Ground-Penetrating Radar (GPR) data prediction, akin to advancements in medical imaging for tracking tumor progression in the human body. This research presents a detailed development framework for a specialized PINN model proficient at interpreting and forecasting GPR data, much like how medical imaging models predict tumor behavior. By harnessing the synergy between deep learning algorithms and the physical laws governing subsurface structures—or, in medical terms, human tissues—the model effectively embeds the physics of electromagnetic wave propagation into its architecture. This ensures that predictions not only align with fundamental physical principles but also mirror the precision needed in medical diagnostics for detecting and monitoring tumors. The suggested deep learning structure comprises three components: a CNN, a spatial feature channel attention (SFCA) mechanism, and ConvLSTM, along with temporal feature frame attention (TFFA) modules. The attention mechanism computes channel attention and temporal attention weights using self-adaptation, thereby fine-tuning the visual and temporal feature responses to extract the most pertinent and significant visual and temporal features. By integrating physics directly into the neural network, our model has shown enhanced accuracy in forecasting GPR data. This improvement is vital for conducting effective assessments of bridge deck conditions and other evaluations related to civil infrastructure. The use of Physics-Informed Neural Networks (PINNs) has demonstrated the potential to transform the field of Non-Destructive Evaluation (NDE) by enhancing the precision of infrastructure deterioration predictions. Moreover, it offers a deeper insight into the fundamental mechanisms of deterioration, viewed through the prism of physics-based models.
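
A schematic example of the physics-informed idea described above: a data term on predicted traces plus a residual of a wave equation evaluated with automatic differentiation. The tiny network, normalized wave speed, and loss weighting below are placeholders, not the paper's CNN/ConvLSTM architecture.

```python
import torch

c = 1.0                                     # normalized wave speed (placeholder)
net = torch.nn.Sequential(torch.nn.Linear(2, 64), torch.nn.Tanh(), torch.nn.Linear(64, 1))

def physics_residual(x, t):
    # residual of u_tt = c^2 * u_xx computed by automatic differentiation
    x.requires_grad_(True); t.requires_grad_(True)
    u = net(torch.cat([x, t], dim=1))
    u_t = torch.autograd.grad(u.sum(), t, create_graph=True)[0]
    u_tt = torch.autograd.grad(u_t.sum(), t, create_graph=True)[0]
    u_x = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x.sum(), x, create_graph=True)[0]
    return u_tt - c**2 * u_xx

x_obs, t_obs, u_obs = torch.rand(32, 1), torch.rand(32, 1), torch.rand(32, 1)   # dummy GPR samples
x_col, t_col = torch.rand(128, 1), torch.rand(128, 1)                           # collocation points
loss = torch.mean((net(torch.cat([x_obs, t_obs], 1)) - u_obs) ** 2) \
     + 1e-2 * torch.mean(physics_residual(x_col, t_col) ** 2)
loss.backward()                              # gradients combine the data and physics terms
```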

Keywords: physics-informed neural networks, deep learning, ground-penetrating radar (GPR), NDE, ConvLSTM, physics, data driven

Procedia PDF Downloads 17
907 Mathematics Bridging Theory and Applications for a Data-Driven World

Authors: Zahid Ullah, Atlas Khan

Abstract:

In today's data-driven world, the role of mathematics in bridging the gap between theory and applications is becoming increasingly vital. This abstract highlights the significance of mathematics as a powerful tool for analyzing, interpreting, and extracting meaningful insights from vast amounts of data. By integrating mathematical principles with real-world applications, researchers can unlock the full potential of data-driven decision-making processes. This abstract delves into the various ways mathematics acts as a bridge connecting theoretical frameworks to practical applications. It explores the utilization of mathematical models, algorithms, and statistical techniques to uncover hidden patterns, trends, and correlations within complex datasets. Furthermore, it investigates the role of mathematics in enhancing predictive modeling, optimization, and risk assessment methodologies for improved decision-making in diverse fields such as finance, healthcare, engineering, and social sciences. The abstract also emphasizes the need for interdisciplinary collaboration between mathematicians, statisticians, computer scientists, and domain experts to tackle the challenges posed by the data-driven landscape. By fostering synergies between these disciplines, novel approaches can be developed to address complex problems and make data-driven insights accessible and actionable. Moreover, this abstract underscores the importance of robust mathematical foundations for ensuring the reliability and validity of data analysis. Rigorous mathematical frameworks not only provide a solid basis for understanding and interpreting results but also contribute to the development of innovative methodologies and techniques. In summary, this abstract advocates for the pivotal role of mathematics in bridging theory and applications in a data-driven world. By harnessing mathematical principles, researchers can unlock the transformative potential of data analysis, paving the way for evidence-based decision-making, optimized processes, and innovative solutions to the challenges of our rapidly evolving society.

Keywords: mathematics, bridging theory and applications, data-driven world, mathematical models

Procedia PDF Downloads 62
906 From Text to Data: Sentiment Analysis of Presidential Election Political Forums

Authors: Sergio V Davalos, Alison L. Watkins

Abstract:

User-generated content (UGC), such as website posts, has data associated with it: time of the post, gender, location, type of device, and number of words. The text entered in UGC can provide a valuable dimension for analysis. In this research, each user post is treated as a collection of terms (words). In addition to the number of words per post, the frequency of each term is determined per post and as the sum of occurrences in all posts. This research focuses on one specific aspect of UGC: sentiment. Sentiment analysis (SA) was applied to the content (user posts) of two sets of political forums related to the US presidential elections of 2012 and 2016. Sentiment analysis derives data from the text, which enables the subsequent application of data analytic methods. The SASA (SAIL/SAI Sentiment Analyzer) model was used for sentiment analysis, and its application resulted in a sentiment score for each post. Based on the sentiment scores for the posts, there are significant differences between the content and sentiment of the 2012 and 2016 presidential election forums. In the 2012 forums, 38% of the forums started with positive sentiment and 16% with negative sentiment; in the 2016 forums, 29% started with positive sentiment and 15% with negative sentiment. There were also changes in sentiment over time: for both elections, as the election drew closer, the cumulative sentiment score became negative. The candidate who won each election appeared in more posts than the losing candidates. In the case of Trump, there were more negative posts than Clinton's highest number of posts, which were positive. KNIME topic modeling was used to derive topics from the posts. There were also changes in topics and keyword emphasis over time: initially, the political parties were the most referenced, and as the election drew closer, the emphasis shifted to the candidates. The SASA method predicted sentiment better than four other methods in SentiBench. The research resulted in deriving sentiment data from text; in combination with other data, the sentiment data provided insight and discovery about user sentiment in the US presidential elections of 2012 and 2016.
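
An illustrative aggregation of per-post sentiment scores over time, mirroring the cumulative-sentiment trend described above; the file name, column names, and score thresholds are assumptions, not the SASA output format.

```python
import pandas as pd

posts = pd.read_csv("forum_posts.csv", parse_dates=["posted_at"])
posts["label"] = pd.cut(posts["sentiment_score"], [-1.0, -0.05, 0.05, 1.0],
                        labels=["negative", "neutral", "positive"])
weekly = (posts.set_index("posted_at")
               .resample("W")["sentiment_score"]
               .agg(["count", "mean", "sum"]))
weekly["cumulative_sentiment"] = weekly["sum"].cumsum()   # drifts negative as the election nears
print(weekly.tail())
```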

Keywords: sentiment analysis, text mining, user generated content, US presidential elections

Procedia PDF Downloads 176
905 Subsidiary Entrepreneurial Orientation, Trust in Headquarters and Performance: The Mediating Role of Autonomy

Authors: Zhang Qingzhong

Abstract:

Although there is an increasing number of research studies on the headquarters-subsidiary relationship, and within this context a focus on subsidiaries' contributory role to multinational corporations (MNCs), subsidiary autonomy and the conditions under which autonomy exerts an effect on subsidiary performance still constitute a subject of debate in the literature. The objective of this research is to study the relationship between MNC subsidiary autonomy and performance, and the effect of subsidiary entrepreneurial orientation and trust on subsidiary autonomy, in the China environment, a phenomenon that has not yet been studied. The research addresses the following three questions: (i) Is subsidiary autonomy associated with MNC subsidiary performance in the China environment? (ii) How do subsidiary entrepreneurship and its trust in headquarters affect the level of subsidiary autonomy and its relationship with subsidiary performance? (iii) Does subsidiary autonomy mediate the effect of the subsidiary's entrepreneurship and trust in headquarters on subsidiary performance? In the present study, we reviewed the literature and conducted semi-structured interviews with senior executives of multinational corporation (MNC) subsidiaries in China. Building on our insights from the interviews and taking perspectives from four theories, namely the resource-based view (RBV), resource dependency theory, the integration-responsiveness framework, and social exchange theory, as well as the extant articles on subsidiary autonomy, entrepreneurial orientation, trust, and subsidiary performance, we developed a model and explored the direct and mediating effects of subsidiary autonomy on subsidiary performance within the framework of the MNC. To test the model, we collected and analyzed data from two waves of a cross-industry online survey of 102 subsidiaries of MNCs in China. We used structural equation modeling to test the measurement model, the direct effect model, and the conceptual framework with hypotheses. Our findings confirm that (a) subsidiary autonomy is positively related to subsidiary performance; (b) subsidiary entrepreneurial orientation is positively related to subsidiary autonomy; (c) the subsidiary's trust in headquarters has a positive effect on subsidiary autonomy; (d) subsidiary autonomy mediates the relationship between entrepreneurial orientation and subsidiary performance; and (e) subsidiary autonomy mediates the relationship between trust and subsidiary performance. Our study highlights the important role of subsidiary autonomy in leveraging the resource of subsidiary entrepreneurial orientation and its trust relationship with headquarters to achieve high performance. We discuss the theoretical and managerial implications of the findings and propose directions for future research.
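
A rough sketch of the mediation logic tested with SEM in the study, shown here with two regressions in the Baron-and-Kenny spirit rather than a full SEM; the file and variable names are assumptions.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("subsidiary_survey.csv")       # 102 subsidiaries, two survey waves (assumed layout)

# (1) entrepreneurial orientation and trust -> autonomy (the mediator)
med = smf.ols("autonomy ~ entrepreneurial_orientation + trust_in_hq", data=df).fit()
# (2) autonomy plus the antecedents -> subsidiary performance
out = smf.ols("performance ~ autonomy + entrepreneurial_orientation + trust_in_hq",
              data=df).fit()
print(med.params, out.params, sep="\n")         # indirect effect ~ a-path coefficient * b-path coefficient
```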

Keywords: subsidiary entrepreneurial orientation, trust, subsidiary autonomy, subsidiary performance

Procedia PDF Downloads 176
904 Enhancing Signal Reception in a Mobile Radio Network Using Adaptive Beamforming Antenna Arrays Technology

Authors: Ugwu O. C., Mamah R. O., Awudu W. S.

Abstract:

This work aims at enhancing signal reception in a mobile radio network and minimizing outage probability using adaptive beamforming antenna arrays. An empirical real-time drive measurement was carried out in a cellular network of Globalcom Nigeria Limited located at Ikeja, the capital of Lagos State, Nigeria, with reference base station number KJA 004. The empirical measurement included Received Signal Strength and Bit Error Rate, which were recorded for exact prediction of the signal strength of the network at the time of carrying out this research. The Received Signal Strength and Bit Error Rate were measured with a spectrum-monitoring van with the help of a ray tracer at intervals of 100 meters up to 700 meters from the transmitting base station. The distance and angular location measurements from the reference network were made with the help of a Global Positioning System (GPS). The other equipment used included transmitting equipment measurement software (Temsoftware), laptops, and log files, which showed the received signal strength with distance from the base station. An outage of about 11% was obtained from the real-time experiment, which showed that mobile radio networks are prone to signal failure; this can be minimized using an adaptive beamforming antenna array, in terms of a significant reduction in Bit Error Rate, which implies improved performance of the mobile radio network. In addition, this work did not only include experiments done through empirical measurement but also enhanced mathematical models that were developed and implemented as a reference model for accurate prediction. The proposed signal models were based on the analysis of continuous time and discrete space, among other assumptions. These developed (proposed) enhanced models were validated using MATLAB (version 7.6.3.35) and compared with the conventional antenna for accuracy. The outage models were used to manage the blocked-call experience in the mobile radio network. A 20% improvement was obtained when the adaptive beamforming antenna arrays were implemented on the wireless mobile radio network.
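
A toy example of an adaptive beamformer of the kind referred to above: a least-mean-squares (LMS) weight update for a uniform linear array steered toward a desired signal. All array and signal parameters are illustrative, not the drive-test configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
N, d, theta_sig = 8, 0.5, 30.0                    # elements, spacing [wavelengths], signal DOA [deg]
steer = np.exp(-2j * np.pi * d * np.arange(N) * np.sin(np.radians(theta_sig)))

snapshots = 2000
s = np.exp(1j * 2 * np.pi * 0.01 * np.arange(snapshots))        # desired narrowband reference signal
x = np.outer(steer, s) + 0.3 * (rng.standard_normal((N, snapshots))
                                + 1j * rng.standard_normal((N, snapshots)))

w = np.zeros(N, dtype=complex)
mu = 1e-3                                          # LMS step size
for k in range(snapshots):
    y = np.vdot(w, x[:, k])                        # array output w^H x
    e = s[k] - y                                   # error vs. reference signal
    w += mu * np.conj(e) * x[:, k]                 # LMS weight update
print("final beamforming weights:", np.round(w, 3))
```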

Keywords: beamforming algorithm, adaptive beamforming, simulink, reception

Procedia PDF Downloads 22
903 Comparison of Existing Predictor and Development of Computational Method for S- Palmitoylation Site Identification in Arabidopsis Thaliana

Authors: Ayesha Sanjana Kawser Parsha

Abstract:

S-acylation is an irreversible bond in which cysteine residues are linked to the fatty acids palmitate (74%) or stearate (22%), either at the COOH or NH2 terminus, via a thioester linkage. There are several experimental methods that can be used to identify S-palmitoylation sites; however, since they require a lot of time, computational methods are becoming increasingly necessary. There are not many predictors, however, that can locate S-palmitoylation sites in Arabidopsis thaliana with sufficient accuracy. This research is based on the importance of building a better prediction tool. To identify the type of machine learning algorithm that predicts this site more accurately for the experimental dataset, several prediction tools were examined in this research, including GPS PALM 6.0, pCysMod, GPS LIPID 1.0, CSS PALM 4.0, and NBA PALM. These analyses were conducted by constructing receiver operating characteristic plots and computing the area under the curve score. An AI-driven, deep learning-based prediction tool was then developed utilizing this analysis and three types of sequence-based input data: the amino acid composition, the binary encoding profile, and autocorrelation features. The model was developed using five layers, two activation functions, and the associated parameters and hyperparameters. The model was built using various combinations of features, and after training and validation it performed better when all the features were present, using the experimental dataset for 8- and 10-fold cross-validation. When testing the model with unseen and new data, such as the GPS PALM 6.0 plant and pCysMod mouse datasets, the model performed better, and the area under the curve score was near 1. By comparing the area under the curve score of the 10-fold cross-validation of the new model with the established tools' area under the curve scores on their respective training sets, it can be demonstrated that this model outperforms the prior tools in predicting S-palmitoylation sites in the experimental dataset. The objective of this study is to develop a prediction tool for Arabidopsis thaliana that is more accurate than current tools, as measured by the area under the curve score. Both plant food production and immunological treatment targets can be managed by utilizing this method to forecast S-palmitoylation sites.
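
A minimal sketch of the cross-validated AUC benchmarking described above; the feature and label files are placeholders, and the random-forest classifier stands in for the paper's deep model built on amino acid composition, binary encoding and autocorrelation features.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X = np.load("features.npy")          # rows: cysteine-centred peptide windows (assumed encoding)
y = np.load("labels.npy")            # 1 = experimentally verified palmitoylation site

clf = RandomForestClassifier(n_estimators=300, random_state=0)
auc_10fold = cross_val_score(clf, X, y, cv=10, scoring="roc_auc")
print(f"10-fold ROC AUC: {auc_10fold.mean():.3f} +/- {auc_10fold.std():.3f}")
```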

Keywords: S-palmitoylation, ROC plot, area under the curve, cross-validation score

Procedia PDF Downloads 61
902 Diagnostic Contribution of the MMSE-2:EV in the Detection and Monitoring of the Cognitive Impairment: Case Studies

Authors: Cornelia-Eugenia Munteanu

Abstract:

The goal of this paper is to present the diagnostic contribution that the screening instrument Mini-Mental State Examination-2: Expanded Version (MMSE-2:EV) brings to detecting cognitive impairment and to monitoring the progress of degenerative disorders. The diagnostic significance is underlined by the interpretation of the MMSE-2:EV scores obtained by applying the test to patients with mild and major neurocognitive disorders. The original MMSE is one of the most widely used screening tools for detecting cognitive impairment, in clinical settings but also in neurocognitive research. Practitioners and researchers are now turning their attention to the MMSE-2. To enhance its clinical utility, the new instrument was enriched and reorganized into three versions (MMSE-2:BV, MMSE-2:SV and MMSE-2:EV), each with two forms: blue and red. The MMSE-2 has been adapted and used successfully in Romania since 2013. The cases were selected from current practice in order to cover a broad and significant range of neurocognitive pathology: mild cognitive impairment, Alzheimer's disease, vascular dementia, mixed dementia, Parkinson's disease, and conversion of mild cognitive impairment into Alzheimer's disease. The MMSE-2:EV version was used: it was applied one month after the initial assessment, three months after the first reevaluation, and then every six months, alternating the blue and red forms. Taking age and educational level into account, the raw scores were converted into T scores and then, with the mean and the standard deviation, the z scores were calculated. The differences in raw scores between the evaluations were analyzed for statistical significance in order to establish the progression of the disease over time. The results indicated that the psycho-diagnostic approach for evaluating cognitive impairment with the MMSE-2:EV is safe and that the application interval is optimal. Alternating the forms prevents the learning phenomenon. Diagnostic accuracy and efficient therapeutic conduct derive from the use of the national test norms. In clinical settings with a large flux of patients, the application of the MMSE-2:EV is a safe and fast psycho-diagnostic solution. Clinicians can make objective decisions, and for the patients the examination does not take too much time and energy, does not bother them, and does not force them to travel frequently.
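
As a simple illustration of the score conversions mentioned above, the sketch below turns a raw score into z and T scores against an age- and education-specific norm; the normative mean and standard deviation are placeholder values, not the Romanian national norms.

```python
# Raw MMSE-2:EV scores are referenced to normative data, then expressed as z and T scores.
norm_mean, norm_sd = 76.4, 8.1        # placeholder normative values for one age/education cell

def to_z(raw: float) -> float:
    return (raw - norm_mean) / norm_sd

def to_t(raw: float) -> float:
    return 50.0 + 10.0 * to_z(raw)    # T scores have mean 50 and SD 10

baseline, follow_up = 68, 63
print(f"baseline z={to_z(baseline):.2f}, T={to_t(baseline):.1f}; "
      f"follow-up z={to_z(follow_up):.2f}, T={to_t(follow_up):.1f}")
```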

Keywords: MMSE-2, dementia, cognitive impairment, neuropsychology

Procedia PDF Downloads 506
901 Optimizing the Location of Parking Areas Adapted for Dangerous Goods in the European Road Transport Network

Authors: María Dolores Caro, Eugenio M. Fedriani, Ángel F. Tenorio

Abstract:

The transportation of dangerous goods by lorries throughout Europe must use the roads that form the European Road Transport Network. In this network, there are several parking areas where lorry drivers can park to rest according to the regulations. According to the European Agreement concerning the International Carriage of Dangerous Goods by Road, parking areas where lorries transporting dangerous goods can park to rest must follow several security stipulations to keep the rest of the road users safe. In this respect, these lorries must be parked in adapted areas with strict and permanent surveillance measures. Moreover, drivers must satisfy several restrictions on resting and driving time. Under these facts, one might expect that there exist enough parking areas for the transport of this type of goods in order to obey the regulations prescribed by the European Union and its member countries. However, the already-existing parking areas are not sufficient to cover all the stops required by drivers transporting dangerous goods. Our main goal is, starting from the already-existing parking areas and the loading-and-unloading locations, to provide an optimal answer to the following question: how many additional parking areas must be built, and where must they be located, to assure that lorry drivers can transport dangerous goods following all the stipulations about security and safety for their stops? The sense of the word "optimal" is that we give a global solution for the location of parking areas throughout the whole European Road Transport Network, keeping the number of additional areas as low as possible. To do so, we have modeled the problem using graph theory, since we are working with a road network. As nodes, we have considered the locations of each already-existing parking area, each loading-and-unloading area, and each road bifurcation. Each road connecting two nodes is considered an edge in the graph, whose weight corresponds to the distance between the two nodes. By applying a new efficient algorithm, we have found the additional nodes for the network representing the new parking areas adapted for dangerous goods, under the constraint that the distance between two parking areas must be less than or equal to 400 km.
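
A toy version of the graph model described above, assuming nodes for loading points (L), bifurcations (B) and existing parking areas (P), with edge weights in kilometres; the check flags routes that cannot be driven with stops at most 400 km apart and would therefore need an additional parking area. The network and distances are invented, not the European data.

```python
import networkx as nx

MAX_LEG = 400
G = nx.Graph()
G.add_weighted_edges_from([("L1", "B1", 180), ("B1", "P1", 150),
                           ("P1", "B2", 320), ("B2", "P2", 260),
                           ("B2", "L2", 90)])
parking = {"P1", "P2"}

for src, dst in [("L1", "L2")]:
    path = nx.shortest_path(G, src, dst, weight="weight")
    leg, ok = 0, True
    for u, v in zip(path, path[1:]):
        leg += G[u][v]["weight"]          # distance driven since the last rest stop
        if v in parking:
            leg = 0                        # resting resets the allowed leg length
        elif leg > MAX_LEG:
            ok = False
    print(src, "->", dst, "feasible" if ok else "needs an additional parking area")
```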

Keywords: trans-european transport network, dangerous goods, parking areas, graph-based modeling

Procedia PDF Downloads 270
900 NanoFrazor Lithography for advanced 2D and 3D Nanodevices

Authors: Zhengming Wu

Abstract:

NanoFrazor lithography systems were developed as a first true alternative or extension to standard mask-less nanolithography methods like electron beam lithography (EBL). In contrast to EBL, they are based on thermal scanning probe lithography (t-SPL). Here, a heatable ultra-sharp probe tip with an apex of a few nm is used for patterning and simultaneously inspecting complex nanostructures. The heat impact from the probe on a thermally responsive resist generates those high-resolution nanostructures. The patterning depth of each individual pixel can be controlled with better than 1 nm precision using an integrated in-situ metrology method. Furthermore, the inherent imaging capability of the NanoFrazor technology allows for markerless overlay, which has been achieved with sub-5 nm accuracy, and it supports stitching layout sections together with < 10 nm error. Pattern transfer from such resist features at below 10 nm resolution has been demonstrated. The technology has proven its value as an enabler of new kinds of ultra-high resolution nanodevices as well as for improving the performance of existing device concepts. The application range of this new nanolithography technique is very broad, spanning from ultra-high resolution 2D and 3D patterning to chemical and physical modification of matter at the nanoscale. Nanometer-precise markerless overlay and non-invasiveness to sensitive materials are among the key strengths of the technology. However, while patterning at below 10 nm resolution is achieved, significantly increasing the patterning speed at the expense of resolution is not feasible using the heated tip alone. Towards this end, an integrated laser write head for direct laser sublimation (DLS) of the thermal resist has been introduced for significantly faster patterning of micrometer- to millimeter-scale features. Remarkably, the areas patterned by the tip and the laser are seamlessly stitched together, and both processes work on the very same resist material, enabling a true mix-and-match process with no developing or any other processing steps in between. The presentation will include examples of (i) high-quality metal contacting of 2D materials, (ii) tuning photonic molecules, (iii) generating nanofluidic devices, and (iv) generating spintronic circuits. Some of these applications have been enabled only by the various unique capabilities of NanoFrazor lithography, such as the absence of damage from a charged particle beam.

Keywords: nanofabrication, grayscale lithography, 2D materials device, nano-optics, photonics, spintronic circuits

Procedia PDF Downloads 64
899 Detecting Natural Fractures and Modeling Them to Optimize Field Development Plan in Libyan Deep Sandstone Reservoir (Case Study)

Authors: Tarek Duzan

Abstract:

Fractures are a fundamental property of most reservoirs. Despite their abundance, they remain difficult to detect and quantify. The most effective characterization of fractured reservoirs is accomplished by integrating geological, geophysical, and engineering data. Detecting fractures and defining their relative contribution is crucial in the early stages of exploration and later in the production of any field, because fractures can completely change our thoughts, efforts, and planning for producing a specific field properly. From the structural point of view, all reservoirs are fractured to some extent. The North Gialo field is thought to be a naturally fractured reservoir to some extent. Historically, naturally fractured reservoirs are more complicated in terms of their exploration and production efforts, and most geologists tend to deny the presence of fractures as an effective variable. Our aim in this paper is to determine the degree of fracturing so that our evaluation and planning can be done properly and efficiently from day one. The challenging part in this field is that there are not enough data and no straightforward well tests that can make us completely comfortable with the idea of fracturing; however, we cannot ignore the fractures completely. Logging images, the available well testing, and limited core studies are our tools at this stage to evaluate, model, and predict possible fracture effects in this reservoir. The aims of this study are both fundamental and practical: to improve the prediction and diagnosis of natural-fracture attributes in the N. Gialo hydrocarbon reservoirs and to accurately simulate their influence on production. Moreover, the production of this field follows a two-phase plan: self-depletion of oil and then a gas injection period for pressure maintenance and increasing the ultimate recovery factor. Therefore, a good understanding of the fracture network is essential before proceeding with the targeted plan. New analytical methods will lead to a more realistic characterization of fractured and faulted reservoir rocks. These methods will produce data that can enhance well test and seismic interpretations, and that can readily be used in reservoir simulators.

Keywords: natural fracture, sandstone reservoir, geological, geophysical, and engineering data

Procedia PDF Downloads 87
898 Automatic Detection of Sugarcane Diseases: A Computer Vision-Based Approach

Authors: Himanshu Sharma, Karthik Kumar, Harish Kumar

Abstract:

The major problem in crop cultivation is the occurrence of multiple crop diseases. During the growth stage, timely identification of crop diseases is paramount to ensure high crop yield, lower production costs, and minimal pesticide usage. In most cases, crop diseases produce observable characteristics and symptoms. Surveyors usually diagnose crop diseases as they walk through the fields. However, surveyor inspections tend to be biased and error-prone due to the monotonous nature of the task and the subjectivity of individuals. In addition, visual inspection of each leaf or plant is costly, time-consuming, and labour-intensive. Furthermore, the plant pathologists and experts who can often identify a disease from its symptoms at an early stage are not readily available in remote regions. Therefore, this study specifically addresses early detection of the leaf scald, red rot, and eyespot diseases in sugarcane plants. The study proposes a computer vision-based approach using a convolutional neural network (CNN) for automatic identification of crop diseases. To facilitate this, images of sugarcane diseases were first taken from Google, without modifying the scene or background or controlling the illumination, to build the training dataset. The testing dataset was then developed from images collected in real time from sugarcane fields in India. The image dataset was pre-processed for feature extraction and selection. Finally, the CNN-based Visual Geometry Group (VGG) model was deployed on the training and testing datasets to classify the images into diseased and healthy sugarcane plants, and the model's performance was measured using various metrics, i.e., accuracy, sensitivity, specificity, and F1-score. The promising results of the proposed model lay the groundwork for automatic early detection of sugarcane disease. The proposed research directly supports an increase in crop yield.
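
A minimal transfer-learning sketch in the spirit of the VGG-based classifier described above, assuming a TensorFlow/Keras setup; the directory layout, image size, class count, and training settings are assumptions, not the paper's actual configuration.

```python
import tensorflow as tf

base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                   input_shape=(224, 224, 3))
base.trainable = False                                 # keep the pretrained convolutional features

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(4, activation="softmax"),    # healthy, leaf scald, red rot, eyespot
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

train_ds = tf.keras.utils.image_dataset_from_directory("sugarcane/train", image_size=(224, 224))
test_ds = tf.keras.utils.image_dataset_from_directory("sugarcane/test", image_size=(224, 224))
model.fit(train_ds, validation_data=test_ds, epochs=10)
```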

Keywords: automatic classification, computer vision, convolutional neural network, image processing, sugarcane disease, visual geometry group

Procedia PDF Downloads 109
897 The Relationship between Personal, Psycho-Social and Occupational Risk Factors with Low Back Pain Severity in Industrial Workers

Authors: Omid Giahi, Ebrahim Darvishi, Mahdi Akbarzadeh

Abstract:

Introduction: Occupational low back pain (LBP) is one of the most prevalent work-related musculoskeletal disorders, and many risk factors are involved in it. The present study focuses on the relation between personal, psycho-social and occupational risk factors and LBP severity in industrial workers. Materials and Methods: This research was a case-control study conducted in Kurdistan province. 100 workers (mean age ± SD of 39.9 ± 10.45) with LBP were selected as the case group, and 100 workers (mean age ± SD of 37.2 ± 8.5) without LBP were assigned to the control group. All participants were selected from various industrial units, and they had similar occupational conditions. The required data, including demographic information (BMI, smoking, alcohol, and family history), occupational factors (posture, mental workload (MWL), force, vibration and repetition), and psychosocial factors (stress, occupational satisfaction and security), were collected via consultation with occupational medicine specialists, interviews, the related questionnaires, the NASA-TLX software, and the REBA worksheet. The chi-square test, logistic regression and structural equation modeling (SEM) were used to analyze the data, with IBM SPSS Statistics 24 and Mplus 6 software. Results: 114 (77%) of the individuals were male and 86 (23%) were female. The mean career lengths of the case group and the control group were 10.90 ± 5.92 and 9.22 ± 4.24 years, respectively. The statistical analysis of the data revealed a significant correlation between posture, smoking, stress, satisfaction, and MWL and occupational LBP. The odds ratios (95% confidence intervals) derived from a logistic regression model were 2.7 (1.27-2.24), 2.5 (2.26-5.17), and 3.22 (2.47-3.24) for stress, MWL, and posture, respectively. The SEM analysis of the personal, psycho-social and occupational factors with LBP also revealed a significant correlation. Conclusion: All three broad categories of risk factors simultaneously increase the risk of occupational LBP in the workplace, but posture, stress, and MWL play a major role in LBP severity. Therefore, prevention strategies for persons in jobs with high risk for LBP are required to decrease the risk of occupational LBP.
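
A sketch of the odds-ratio calculation for a case-control design like the one above; the file and variable names are illustrative placeholders for the collected questionnaire and ergonomic scores.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("lbp_case_control.csv")      # 100 cases, 100 controls (assumed layout)
fit = smf.logit("lbp ~ stress + mental_workload + posture_reba + smoking", data=df).fit()

odds_ratios = pd.DataFrame({
    "OR": np.exp(fit.params),
    "CI 2.5%": np.exp(fit.conf_int()[0]),
    "CI 97.5%": np.exp(fit.conf_int()[1]),
})
print(odds_ratios.drop(index="Intercept"))
```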

Keywords: industrial workers, occupational low back pain, occupational risk factors, psychosocial factors

Procedia PDF Downloads 248
896 Material Concepts and Processing Methods for Electrical Insulation

Authors: R. Sekula

Abstract:

Epoxy composites are broadly used as electrical insulation for high-voltage applications since only such materials can fulfill the particular mechanical, thermal, and dielectric requirements. However, the properties of the final product are strongly dependent on a proper manufacturing process with minimized material failures, such as excessive shrinkage, voids and cracks. Therefore, application of proper materials (epoxy, hardener, and filler) and process parameters (mold temperature, filling time, filling velocity, initial temperature of internal parts, gelation time), as well as design and geometric parameters, are essential for the final quality of the produced components. In this paper, an approach for three-dimensional modeling of all molding stages, namely filling, curing and post-curing, is presented. The reactive molding simulation tool is based on a commercial CFD package and includes dedicated models describing viscosity and reaction kinetics that have been successfully implemented to simulate the reactive nature of the system with its exothermic effect. A dedicated simulation procedure for stress and shrinkage calculations, as well as simulation results, are also presented in the paper. The second part of the paper is dedicated to recent developments in formulations of functional composites for electrical insulation applications, focusing on thermally conductive materials. Concepts based on filler modifications for epoxy electrical composites are presented, including the resulting properties. Finally, having in mind tough environmental regulations, in addition to the current process and design aspects, an approach for product re-design is presented, focusing on replacement of the epoxy material with a thermoplastic one. Such a "design-for-recycling" method is one of the new directions associated with the development of new material and processing concepts for electrical products and brings a lot of additional research challenges. One successful product is presented to illustrate the methodology.
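
As an illustration of the reaction-kinetics part of such a reactive-molding simulation, the sketch below integrates a standard autocatalytic (Kamal-type) cure model for one material point at constant temperature; the kinetic constants are placeholders, not the epoxy system or the dedicated models used in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

def kamal(t, alpha, k1, k2, m, n):
    # d(alpha)/dt = (k1 + k2 * alpha^m) * (1 - alpha)^n
    a = np.clip(alpha, 0.0, 1.0)
    return (k1 + k2 * a**m) * (1.0 - a)**n

k1, k2, m, n = 1e-4, 5e-3, 0.5, 1.5           # assumed rate constants at the mold temperature
sol = solve_ivp(kamal, (0.0, 7200.0), [0.0], args=(k1, k2, m, n), max_step=10.0)
print(f"degree of cure after 2 h: {sol.y[0, -1]:.2f}")
```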

Keywords: curing, epoxy insulation, numerical simulations, recycling

Procedia PDF Downloads 266
895 Analysis and the Fair Distribution Modeling of Urban Facilities in Kabul City

Authors: Ansari Mohammad Reza, Hiroko Ono, Fakhrullah Sarwari

Abstract:

Our world is fast heading toward being a predominantly urban planet. This can be a double-edged reality, as frightening as it is interesting. Current predictions, together with the fact that about 90 percent of the coming urbanization is going to be absorbed by the towns and cities of the developing countries of Asia and Africa, suggest a much more tragic ending to this story than a happy one. In a situation in which most of these countries are still struggling to answer their very first questions of urbanization, e.g., how to provide the essential structure for their cities, define the regulations, or even design the proper pattern by which the cities should expand, it is not unreasonable to claim that most of the coming urbanization of the world is going to happen informally. This reality not only puts the shape, landscape and overall picture of the cities of the future in doubt but also raises other essential questions: how will facilities be distributed in these cities, and how fair will this pattern of distribution be? Kabul, the capital of Afghanistan, is a city in the developing world whose urbanization process has been underway since 2001 and which is currently the fifth fastest growing city in the world, with a considerable slum ratio of 0.7, meaning that about 70 percent of its population lives in informal areas. It is therefore a very good case study for researching these questions and finding out how the informal development of a city can lead to an unfair and unbalanced distribution of its facilities. In this study, we first propose an ideal model for the fair distribution of facilities in Kabul city, in which all citizens have the same equal chance of access to the facilities, and then evaluate how fairly the facilities are currently distributed in the city. We did this by comparing the existing facility rates in the formal and informal areas of the city with the rate proposed by the fair ideal model.
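
A toy comparison in the spirit of the analysis above: facilities per 10,000 residents in formal and informal districts set against an "ideal" city-wide rate in which every citizen has the same chance of access. All numbers are invented placeholders, not the Kabul data.

```python
import pandas as pd

df = pd.DataFrame({
    "district_type": ["formal", "informal"],
    "population": [1_500_000, 3_500_000],
    "facilities": [450, 300],
})
df["rate_per_10k"] = df["facilities"] / df["population"] * 10_000
ideal_rate = df["facilities"].sum() / df["population"].sum() * 10_000   # equal-access benchmark
df["gap_vs_ideal"] = df["rate_per_10k"] - ideal_rate
print(df)
print(f"ideal city-wide rate: {ideal_rate:.2f} facilities per 10,000 residents")
```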

Keywords: Afghanistan, facility distribution, formal settlements, informal settlements, Kabul

Procedia PDF Downloads 107
894 Mg Doped CuCrO₂ Thin Oxides Films for Thermoelectric Properties

Authors: I. Sinnarasa, Y. Thimont, L. Presmanes, A. Barnabé

Abstract:

Thermoelectricity is a promising technique for recovering waste heat as electricity without using moving parts. The thermoelectric (TE) effect is defined as the conversion of a temperature gradient directly into electricity and vice versa. To optimize TE materials, the power factor (PF = σS², where σ is the electrical conductivity and S is the Seebeck coefficient) must be increased by adjusting the carrier concentration, and/or the lattice thermal conductivity Kₜₕ must be reduced by introducing scattering centers with point defects, interfaces, and nanostructuration. The PF does not show the advantages of thin films because it does not take into account the thermal conductivity. In general, the thermal conductivity of a thin film is lower than that of the bulk material due to its microstructure and the increasing scattering effects with decreasing thickness. Delafossite-type oxides CuᴵMᴵᴵᴵO₂ have received much attention for their optoelectronic properties as p-type semiconductors; they also exhibit interesting thermoelectric (TE) properties due to their high electrical conductivity and their stability in room atmosphere. As there are few proper studies on the TE properties of Mg-doped CuCrO₂ thin films, we have investigated the influence of the annealing temperature on the electrical conductivity and the Seebeck coefficient of Mg-doped CuCrO₂ thin films and calculated the PF in the temperature range from 40 °C to 220 °C. To this end, we have deposited Mg-doped CuCrO₂ thin films on fused silica substrates by RF magnetron sputtering. This study was carried out on 300 nm thin films. The as-deposited Mg-doped CuCrO₂ thin films were annealed at different temperatures (from 450 to 650 °C) under primary vacuum. The electrical conductivity and Seebeck coefficient of the thin films were measured from 40 to 220 °C. The highest electrical conductivity of 0.60 S.cm⁻¹, with a Seebeck coefficient of +329 µV.K⁻¹ at 40 °C, was obtained for the sample annealed at 550 °C. The calculated power factor of the optimized CuCrO₂:Mg thin film was 6 µW.m⁻¹K⁻² at 40 °C. Due to the constant Seebeck coefficient and the electrical conductivity increasing with temperature, it reached 38 µW.m⁻¹K⁻² at 220 °C, which is quite a good result for an oxide thin film. Moreover, the degenerate behavior and the hopping mechanism of the CuCrO₂:Mg thin film were elucidated. The high and temperature-independent Seebeck coefficient and the stability in room atmosphere could be a great advantage for applying this material in high-accuracy temperature measurement devices.
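
A quick numerical check of the power factor formula quoted above, PF = σS², using the reported room-temperature values and converting the conductivity from S/cm to S/m:

```python
# PF = sigma * S^2 with sigma in S/m and S in V/K
sigma = 0.60 * 100.0          # 0.60 S/cm -> 60 S/m
seebeck = 329e-6              # +329 microvolt/K -> V/K
pf = sigma * seebeck**2       # W/(m.K^2)
print(f"PF ~ {pf * 1e6:.1f} microW/(m.K^2)")   # ~6.5, consistent with the ~6 reported at 40 C
```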

Keywords: thermoelectric, oxides, delafossite, thin film, power factor, degenerated semiconductor, hopping mode

Procedia PDF Downloads 190
893 Optimal Placement of the Unified Power Controller to Improve the Power System Restoration

Authors: Mohammad Reza Esmaili

Abstract:

One of the most important parts of the restoration process of a power network is the synchronization of its subsystems. In this situation, the biggest concern of the system operators is the reduction of the standing phase angle (SPA) between the endpoints of the two islands. To this end, the system operators perform various actions and maneuvers so that the synchronization of the subsystems is carried out successfully and the system finally reaches acceptable stability. The most common of these actions include load control, generation control and, in some cases, changing the network topology. Although these maneuvers are simple and common, the weak network and extreme load changes make the resulting restoration slow. One of the best ways to control the SPA is to use FACTS devices. By applying a soft control signal, these devices can reduce the SPA between two subsystems with greater speed and accuracy, so that the synchronization process can be completed in less time. The unified power flow controller (UPFC), a series-parallel compensator that changes the transmission line power and properly adjusts the phase angle, is the device proposed in this research. With the optimal placement of the UPFC in a power system, in addition to improving the normal operating conditions of the system, it is expected to be effective in reducing the SPA during power system restoration. The paper therefore provides an optimal structure that coordinates three problems, namely improving the partitioning into subsystems, reducing the SPA, and optimal power flow, with the aim of determining the optimal location of the UPFC and the optimal subsystems. The objective functions proposed in this paper are maximizing the quality of the subsystems, reducing the SPA at the endpoints of the subsystems, and reducing the losses of the power system. Since the simultaneous optimization of these objective functions may lead to conflicts, the optimization problem is formulated as a non-linear multi-objective problem, and the Pareto optimization method is used to solve it. The technique proposed to carry out the optimization is the water cycle algorithm (WCA). To evaluate the proposed method, the IEEE 39-bus power system is used.
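
As an illustration of the Pareto idea invoked above, the sketch below filters a few hypothetical UPFC placement candidates, scored on SPA, losses, and (negated) subsystem quality, down to their non-dominated set; it is not the paper's implementation, and the objective values are invented.

```python
# A minimal sketch of Pareto filtering for candidate UPFC placements.
# Each candidate is scored on three objectives to be minimized:
# (SPA at subsystem endpoints in degrees, losses in MW, negated subsystem quality).
# All values are hypothetical.

from typing import Dict, Tuple

candidates: Dict[str, Tuple[float, float, float]] = {
    "bus 6":  (18.0, 42.5, -0.81),
    "bus 14": (12.5, 45.1, -0.78),
    "bus 21": (19.0, 44.0, -0.70),   # dominated by "bus 6"
}

def dominates(a, b):
    """True if solution a is at least as good as b in every objective and better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

pareto_front = {
    name: objs for name, objs in candidates.items()
    if not any(dominates(other, objs) for o, other in candidates.items() if o != name)
}
print(pareto_front)   # keeps 'bus 6' and 'bus 14'; 'bus 21' is dominated
```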

Keywords: UPFC, SPA, water cycle algorithm, multi-objective problem, Pareto

Procedia PDF Downloads 52
892 Simulation of the FDA Centrifugal Blood Pump Using High Performance Computing

Authors: Mehdi Behbahani, Sebastian Rible, Charles Moulinec, Yvan Fournier, Mike Nicolai, Paolo Crosetto

Abstract:

Computational Fluid Dynamics blood-flow simulations are increasingly used to develop and validate blood-contacting medical devices. This study shows that numerical simulations can provide additional and accurate estimates of relevant hemodynamic indicators (e.g., recirculation zones or wall shear stresses), which may be difficult and expensive to obtain from in-vivo or in-vitro experiments. The most recent FDA (Food and Drug Administration) benchmark consists of a simplified centrifugal blood pump model that contains fluid flow features commonly found in these devices, with a clear focus on highly turbulent phenomena. The FDA centrifugal blood pump study is composed of six test cases with different volumetric flow rates ranging from 2.5 to 7.0 liters per minute, different pump speeds, and Reynolds numbers ranging from 210,000 to 293,000. Within the frame of this study, different turbulence models were tested, including RANS models (e.g., k-omega, k-epsilon, and a Reynolds Stress Model, RSM) as well as LES. The partitioners Hilbert, METIS, ParMETIS, and SCOTCH were compared in terms of efficiency when partitioning an unstructured mesh of 76 million elements. Computations were performed on the JUQUEEN BG/Q architecture with the highly parallel flow solver Code_Saturne, typically using 32,768 or more processors in parallel. Visualizations were produced with ParaView. All six flow cases could be successfully analyzed with the different turbulence models and validated against analytical considerations and comparisons with other databases. The results show that an RSM is an appropriate choice for modeling high-Reynolds-number flow cases; in particular, the Rij-SSG (Speziale, Sarkar, Gatski) variant turned out to be a good approach. Complex flow features could be visualized, and the flow situation inside the pump could be characterized.
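
For orientation, a rotor-based Reynolds number of the form Re = ρωD²/μ reaches the quoted range with typical blood-analog properties; the sketch below uses assumed geometry and fluid values for illustration, not figures taken from the benchmark specification.

```python
# A minimal sketch of a rotor-based Reynolds number Re = rho * omega * D^2 / mu
# for a centrifugal pump. The rotor diameter and fluid properties below are
# illustrative assumptions (typical blood-analog values), not benchmark data.

import math

rho = 1035.0            # blood-analog density, kg/m^3 (assumed)
mu = 3.5e-3             # blood-analog dynamic viscosity, Pa·s (assumed)
rotor_diameter = 0.052  # rotor diameter, m (assumed)

for rpm in (2500, 3500):
    omega = 2 * math.pi * rpm / 60               # angular velocity, rad/s
    re = rho * omega * rotor_diameter**2 / mu    # rotational Reynolds number
    print(f"{rpm} rpm -> Re ≈ {re:,.0f}")        # ≈ 210,000 and ≈ 293,000, matching the quoted range
```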

Keywords: blood flow, centrifugal blood pump, high performance computing, scalability, turbulence

Procedia PDF Downloads 374
891 DEEPMOTILE: Motility Analysis of Human Spermatozoa Using Deep Learning in Sri Lankan Population

Authors: Chamika Chiran Perera, Dananjaya Perera, Chirath Dasanayake, Banuka Athuraliya

Abstract:

Male infertility is a major problem worldwide, and it is a neglected and sensitive health issue in Sri Lanka. It can be assessed by analyzing human semen samples, and sperm motility is one of the factors used to evaluate a male's fertility potential. In Sri Lanka, this analysis is performed manually. Manual methods are time-consuming and operator-dependent, although they can be reliable in the hands of an expert. Machine learning and deep learning technologies are currently being investigated to automate spermatozoa motility analysis, but existing automatic methods are unreliable: they tend to produce false positives and missed detections. Current automatic methods rely on a variety of techniques, and some of them are very expensive. Owing to geographical variation in spermatozoa characteristics, current automatic methods are also not reliable for motility analysis in Sri Lanka. The proposed system, DeepMotile, explores a method to analyze the motility of human spermatozoa automatically and to make it available to andrology laboratories to overcome the current issues. DeepMotile is a novel deep learning method for analyzing spermatozoa motility parameters in the Sri Lankan population. To implement the current approach, Sri Lankan patient data were collected anonymously as a dataset, and glass slides were used as a low-cost technique for analyzing semen samples. The problem was framed as microscopic object detection and tracking. YOLOv5 was customized and used as the object detector, achieving 94% mAP (mean average precision), 86% precision, and 90% recall on the gathered dataset. StrongSORT was used as the object tracker and was validated with andrology experts because annotated ground-truth data were unavailable. Furthermore, this research has identified many potential directions for further investigation, and andrology experts can use this system to analyze motility parameters with realistic accuracy.
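
As an illustration of what such motility parameters can look like downstream of the tracker, the sketch below computes standard CASA-style quantities (curvilinear velocity, straight-line velocity, linearity) from a hypothetical per-frame centroid track; the parameter set, frame rate, and pixel scale are assumptions, not details reported by the authors.

```python
# A minimal sketch (not the DeepMotile implementation) of deriving CASA-style
# motility parameters from a tracker's per-frame centroids: curvilinear velocity
# (VCL), straight-line velocity (VSL) and linearity (LIN). All values are hypothetical.

import math

fps = 50.0              # camera frame rate, frames/s (assumed)
um_per_px = 0.5         # pixel-to-micrometre scale (assumed)

# (x, y) centroid of one tracked spermatozoon in pixels, one entry per frame
track = [(10, 10), (14, 11), (17, 15), (22, 16), (26, 20), (31, 21)]

def dist(a, b):
    return math.hypot(b[0] - a[0], b[1] - a[1])

path_len = sum(dist(p, q) for p, q in zip(track, track[1:])) * um_per_px   # total path, µm
straight = dist(track[0], track[-1]) * um_per_px                           # start-to-end distance, µm
duration = (len(track) - 1) / fps                                          # elapsed time, s

vcl = path_len / duration      # curvilinear velocity, µm/s
vsl = straight / duration      # straight-line velocity, µm/s
lin = vsl / vcl                # linearity, 0..1
print(f"VCL={vcl:.1f} µm/s  VSL={vsl:.1f} µm/s  LIN={lin:.2f}")
```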

Keywords: computer vision, deep learning, convolutional neural networks, multi-target tracking, microscopic object detection and tracking, male infertility detection, motility analysis of human spermatozoa

Procedia PDF Downloads 95
890 Shedding Light on Colorism: Exploring Stereotypes, Influential Factors, and Consequences in African American Communities

Authors: India Sanders, Jeffrey Sherman

Abstract:

Colorism has been a persistent and ingrained issue in the history of the United States, with far-reaching consequences that continue to affect various aspects of daily life, institutional policies, public spaces, economic structures, and social norms. This complex problem has had a particularly profound impact on the African-American community, shaping how its members are perceived and treated within society at large. The prevalence of negative stereotypes surrounding African Americans can lead to severe repercussions such as discrimination and mental health disparities, and the effects of such biases can materialize in diverse forms, affecting the well-being and livelihoods of individuals within this community. Current research has examined how people from different racial groups perceive different skin tones of Black people, looking at the cognitive processes that manifest through categorization and stereotypes. Studies have also observed consequences related to colorism and how it directly affects those with darker versus lighter skin tones. However, little research has been conducted on the influence of the stereotypes associated with various skin tones. In the present study, it is hypothesized that participants in Group A will rate positive stereotypes associated with lighter skin tones significantly higher than positive stereotypes associated with darker skin tones, and that participants in Group B will rate negative stereotypes associated with darker skin tones significantly higher than negative stereotypes associated with lighter skin tones. A quantitative study of stereotypes in skin tone representation within the African-American community will be conducted: participants will rate, on a Likert scale, the accuracy of various visual representations in mass media of African Americans with light and dark skin tones, and will complete a questionnaire further examining their perception of stereotypes and how this affects their interactions with African Americans with lighter versus darker skin tones. The purpose of this study is to investigate the impact of skin tone portrayals on African Americans, including associated stereotypes and societal perceptions. It is expected that participants will be more likely to associate negative stereotypes with African Americans who have darker skin tones, as this is a common and reinforced viewpoint in the cultural and social system.

Keywords: colorism, discrimination, racism, stereotype

Procedia PDF Downloads 53
889 Development of an Intervention Program for Moral Education of Undergraduate Students of Sport Sciences and Physical Education

Authors: Najia Zulfiqar

Abstract:

Imparting moral education is a need of the time, considering the evident moral decline in society; recent research shows a decline in moral competence among university students. The main objective of the present study was to develop moral development intervention strategies for undergraduate students of Sports and Physical Education. Using an interpretative phenomenological approach, insight into field-specific moral issues was gained through interviews with 7 subject experts and a focus-group discussion session with 8 students. Two research assistants trained in qualitative interviewing collected and transcribed the data and analyzed it in the MAXQDA software using content and discourse analyses. The moral issues identified in Sports and Physical Education were sports gambling and betting, pay-for-play, doping, coach misconduct, tampering, cultural bias, gender equity/nepotism, bullying/discrimination, and harassment. Next, intervention modules were developed for each moral issue based on hypothetical situations followed by guided reflection and dilemma discussion questions. The third moral development strategy was community service, which included posture screening, diet plans for different age groups, open fitness ground training, exercise camps for physical fitness, balanced-diet awareness camps, gymnastics camps, shoe assessment against health standards, and volunteering for public awareness at playgrounds, gymnasiums, stadiums, parks, etc. The intervention modules were given for expert validation to four subject specialists from different backgrounds within Sport Sciences. Upon refinement and finalization, four students were presented with these intervention modules and questioned about their accuracy, relevance, comprehensibility, and content organization. Iterative changes were made to the content of the intervention modules to tailor them to the moral development needs of undergraduate students. This intervention will strengthen positive moral values and foster mature decision-making about right and wrong acts. As the intervention is easy to apply as a remedial tool, academicians and policymakers can use it to promote students' moral development.

Keywords: community service, dilemma discussion, morality, physical education, university students

Procedia PDF Downloads 63
888 Distant Speech Recognition Using Laser Doppler Vibrometer

Authors: Yunbin Deng

Abstract:

Most existing applications of automatic speech recognition rely on cooperative subjects at a short distance from a microphone. Standoff speech recognition using microphone arrays can extend the subject-to-sensor distance somewhat, but it is still limited to a few feet. As such, most deployed applications of standoff speech recognition are limited to indoor use at short range; moreover, these applications require an air path between the subject and the sensor to achieve a reasonable signal-to-noise ratio. This study reports long-range (50 feet) automatic speech recognition experiments using a Laser Doppler Vibrometer (LDV) sensor and shows that the LDV sensor modality can extend the speech acquisition standoff distance far beyond microphone arrays, to hundreds of feet. In addition, LDV enables 'listening' through windows for uncooperative subjects. This enables new capabilities in automatic audio and speech intelligence, surveillance, and reconnaissance (ISR) for law enforcement, homeland security, and counter-terrorism applications. The Polytec LDV model OFV-505 is used in this study. To investigate the impact of different vibrating materials, five parallel LDV speech corpora, each consisting of 630 speakers, were collected from the vibrations of a glass window, a metal plate, a plastic box, a wood slat, and a concrete wall; these are common materials the application could encounter in daily life. These data were compared with their microphone counterparts to show the impact of the various materials on the spectrum of the LDV speech signal. State-of-the-art deep neural network modeling approaches are used to conduct continuous, speaker-independent speech recognition on these LDV speech datasets. Preliminary phoneme recognition results using a time-delay neural network, a bidirectional long short-term memory network, and model fusion show great promise for using LDV for long-range speech recognition. To the author's best knowledge, this is the first time an LDV has been reported for a long-distance speech recognition application.
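
A time-delay neural network of the kind mentioned above can be expressed as a stack of dilated 1-D convolutions over acoustic feature frames; the minimal PyTorch sketch below is illustrative only, with assumed feature and layer sizes, and is not the model used in the study.

```python
# A minimal sketch (not the paper's model) of a time-delay neural network (TDNN)
# built from dilated 1-D convolutions over acoustic feature frames.
# Feature dimension, channel widths, and phoneme count are illustrative assumptions.

import torch
import torch.nn as nn

class TinyTDNN(nn.Module):
    def __init__(self, n_feats=40, n_phones=42):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_feats, 256, kernel_size=5, dilation=1), nn.ReLU(),  # frame context [-2, +2]
            nn.Conv1d(256, 256, kernel_size=3, dilation=2), nn.ReLU(),      # context {-2, 0, +2}
            nn.Conv1d(256, 256, kernel_size=3, dilation=3), nn.ReLU(),      # context {-3, 0, +3}
            nn.Conv1d(256, n_phones, kernel_size=1),                        # per-frame phoneme scores
        )

    def forward(self, x):          # x: (batch, n_feats, n_frames)
        return self.net(x)         # frames shrink by the total context lost at the edges

frames = torch.randn(1, 40, 200)   # e.g. 2 s of 40-dim filterbank features at 100 frames/s
print(TinyTDNN()(frames).shape)    # torch.Size([1, 42, 186])
```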

Keywords: covert speech acquisition, distant speech recognition, DSR, laser Doppler vibrometer, LDV, speech intelligence surveillance and reconnaissance, ISR

Procedia PDF Downloads 168
887 Monitoring Large-Coverage Forest Canopy Height by Integrating LiDAR and Sentinel-2 Images

Authors: Xiaobo Liu, Rakesh Mishra, Yun Zhang

Abstract:

Continuous monitoring of forest canopy height over large areas is essential for obtaining forest carbon stocks and emissions, quantifying biomass, analyzing vegetation coverage, and determining biodiversity. LiDAR can be used to collect accurate woody vegetation structure such as canopy height. However, LiDAR coverage is usually limited because of its high cost and limited maneuverability, which constrains its use for dynamic and large-area forest canopy monitoring. Optical satellite images, such as Sentinel-2, can cover large forest areas with a high repeat rate, but they do not carry height information. Hence, integrating LiDAR data and Sentinel-2 images to enlarge the coverage of forest canopy height prediction and increase the prediction repeat rate has been an active research topic in the environmental remote sensing community. In this study, we explore the potential of training a Random Forest Regression (RFR) model and a Convolutional Neural Network (CNN) model to predict and validate the forest canopy height of the Acadia Forest in New Brunswick, Canada, at a 10 m ground sampling distance (GSD) for the years 2018 and 2021. Two 10 m airborne LiDAR-derived canopy height models, one for 2018 and one for 2021, are used as ground truth to train and validate the RFR and CNN predictive models. To evaluate the prediction performance of the trained models, two new predicted canopy height maps (CHMs), one for 2018 and one for 2021, are generated using the trained RFR and CNN models and the 10 m Sentinel-2 images of 2018 and 2021, respectively. The two 10 m predicted CHMs from Sentinel-2 images are then compared with the corresponding 10 m airborne LiDAR-derived canopy height models for accuracy assessment. The validation results show that the mean absolute error (MAE) for 2018 is 2.93 m for the RFR model and 1.71 m for the CNN model, while the MAE for 2021 is 3.35 m for the RFR model and 3.78 m for the CNN model. These results demonstrate the feasibility of using the RFR and CNN models developed in this research to predict large-coverage forest canopy height at 10 m spatial resolution and a high revisit rate.
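
The RFR workflow described above can be sketched with scikit-learn; the example below substitutes synthetic arrays for the real Sentinel-2 and LiDAR rasters and uses assumed band counts, so it shows only the shape of the training and MAE evaluation steps.

```python
# A minimal sketch (with synthetic arrays in place of real rasters) of the RFR
# workflow described above: Sentinel-2 band values per 10 m pixel are the
# predictors, the LiDAR-derived canopy height is the target, and MAE is the
# accuracy metric. Band count and array sizes are illustrative assumptions.

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_pixels, n_bands = 20_000, 10               # e.g. 10 Sentinel-2 bands resampled to 10 m
X = rng.random((n_pixels, n_bands))          # stand-in for per-pixel reflectance
y = 30 * X[:, 3] - 10 * X[:, 7] + rng.normal(0, 1.5, n_pixels)  # stand-in for LiDAR CHM (m)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

rfr = RandomForestRegressor(n_estimators=200, n_jobs=-1, random_state=0)
rfr.fit(X_train, y_train)

mae = mean_absolute_error(y_test, rfr.predict(X_test))
print(f"MAE = {mae:.2f} m")                  # the paper reports 2.93 m (2018) and 3.35 m (2021) for RFR
```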

Keywords: remote sensing, forest canopy height, LiDAR, Sentinel-2, artificial intelligence, random forest regression, convolutional neural network

Procedia PDF Downloads 76