Search results for: churn prediction
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2244

324 Identification of the Expression of Top Deregulated MiRNAs in Rheumatoid Arthritis and Osteoarthritis

Authors: Hala Raslan, Noha Eltaweel, Hanaa Rasmi, Solaf Kamel, May Magdy, Sherif Ismail, Khalda Amr

Abstract:

Introduction: Rheumatoid arthritis (RA) is an inflammatory, autoimmune disorder with progressive joint damage. Osteoarthritis (OA) is a degenerative disease of the articular cartilage that shows multiple clinical manifestations or symptoms resembling those of RA. Genetic predisposition is believed to be a principal etiological factor for both RA and OA. In this study, we aimed to measure the expression of the top deregulated miRNAs that might contribute to the pathogenesis of both diseases, according to our latest NGS analysis. Six of the deregulated miRNAs were selected because they have multiple target genes in the RA pathway and are therefore more likely to affect RA pathogenesis. Methods: Eighty cases were recruited in this study: 45 rheumatoid arthritis (RA) and 30 osteoarthritis (OA) patients, as well as 20 healthy controls. The miRNAs from our latest NGS study were selected using miRWalk according to the number of their target genes that are members of the KEGG RA pathway. Total RNA was isolated from the plasma of all recruited cases. cDNA was generated with the miRCURY RT Kit and then used as a template for real-time PCR with miRCURY Primer Assays and the miRCURY SYBR Green PCR Kit. Fold changes were calculated from CT values using the ΔΔCT method of relative quantification. Results were compared for RA vs. controls and for OA vs. controls. Target gene prediction and functional annotation of the deregulated miRNAs were done using MIENTURNET. Results: Six miRNAs were selected: miR-15b-3p, -128-3p, -194-3p, -328-3p, -542-3p, and -3180-5p. In RA samples, three of the measured miRNAs were upregulated (miR-194, -542, and -3180; mean Rq = 2.6, 3.8, and 8.05; P-value = 0.07, 0.05, and 0.01, respectively), while the remaining three were downregulated (miR-15b, -128, and -328; mean Rq = 0.21, 0.39, and 0.6; P-value < 0.0001, < 0.0001, and 0.02, respectively), all with high statistical significance except miR-194.
In OA samples, two of the measured miRNAs were upregulated (miR-194 and -3180; mean Rq = 2.6 and 7.7; P-value = 0.1 and 0.03, respectively), while the remaining four were downregulated (miR-15b, -128, -328, and -542; mean Rq = 0.5, 0.03, 0.08, and 0.5; P-value = 0.0008, 0.003, 0.006, and 0.4, respectively), with statistical significance compared to controls except for miR-194 and miR-542. Functional enrichment of the selected top deregulated miRNAs revealed the most highly enriched KEGG pathways and GO terms. Conclusion: Five of the studied miRNAs were markedly deregulated in RA and OA; they might be highly involved in disease pathogenesis and so might be future therapeutic targets. Further functional studies are crucial to assess their roles and actual target genes.
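The ΔΔCT relative quantification named above reduces to a short formula; a minimal sketch with hypothetical CT values (not the study's data, and `fold_change` is an illustrative name, not part of any kit's software):

```python
def fold_change(ct_target_case, ct_ref_case, ct_target_ctrl, ct_ref_ctrl):
    """Relative quantification (Rq) by the 2^-ddCT method."""
    d_ct_case = ct_target_case - ct_ref_case   # normalize to reference miRNA/gene
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl
    dd_ct = d_ct_case - d_ct_ctrl              # calibrate against the control group
    return 2.0 ** (-dd_ct)

# Hypothetical CT values: the target amplifies two cycles earlier in cases,
# relative to controls, which corresponds to ~4-fold upregulation
rq = fold_change(24.0, 18.0, 26.0, 18.0)
```

An Rq above 1 indicates upregulation versus controls, below 1 downregulation, matching the mean Rq values reported in the abstract.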

Keywords: MiRNAs, expression, rheumatoid arthritis, osteoarthritis

Procedia PDF Downloads 79
323 Importance of Prostate Volume, Prostate Specific Antigen Density and Free/Total Prostate Specific Antigen Ratio for Prediction of Prostate Cancer

Authors: Aliseydi Bozkurt

Abstract:

Objectives: Benign prostatic hyperplasia (BPH) is the most common benign disease, and prostate cancer (PC) the most common malignant disease, of the prostate gland. Transrectal ultrasound-guided biopsy (TRUS-bx) is one of the most important diagnostic tools in PC diagnosis. Identifying men at increased risk of a biopsy-detectable prostate cancer should consider prostate-specific antigen density (PSAD), the f/t PSA ratio, and an estimate of prostate volume. Method: We retrospectively studied 269 patients who had a prostate-specific antigen (PSA) value of 4 or above, or a suspicious rectal examination at any PSA level, and underwent TRUS-bx between January 2015 and June 2018 in our clinic. TRUS-bx was performed by experienced urologists using a 12-quadrant scheme. Prostate volume was calculated prior to biopsy together with TRUS. At the end of pathology, patients were classified as malignant or benign. Age, PSA value, prostate volume on transrectal ultrasonography, core biopsy findings, biopsy pathology result, the number of cancer-positive cores, and Gleason score were evaluated in the study. The success rates of PV, PSAD, and f/t PSA in predicting prostate cancer were compared in all patients and in those with PSA 2.5-10 ng/mL and 10.1-30 ng/mL. Results: In patients with PSA 2.5-10 ng/mL, the prostate volume (PV) cut-off value was 43.5 mL (n=42 < 43.5 mL and n=102 > 43.5 mL), while in those with PSA 10.1-30 ng/mL the PV cut-off value was 61.5 mL (n=31 < 61.5 mL and n=36 > 61.5 mL). In the group with PSA 2.5-10 ng/mL, total PSA values were lower in patients with PV < 43.5 mL than in those with PV > 43.5 mL (6.0 ± 1.3 vs. 6.7 ± 1.7), a difference of borderline significance (p = 0.043). In the group with PSA 10.1-30 ng/mL, no significant difference in total PSA values was found between patients with PV < 61.5 mL and those with PV > 61.5 mL (p = 0.117).
In the group with PSA 2.5-10 ng/mL, the f/t PSA value in patients with PV < 43.5 mL was significantly lower than in the group with PV > 43.5 mL (0.21 ± 0.09 vs. 0.26 ± 0.09, p < 0.001). Similarly, in the group with PSA 10.1-30 ng/mL, the f/t PSA value was significantly lower in patients with PV < 61.5 mL (0.16 ± 0.08 vs. 0.23 ± 0.10, p = 0.003). In the group with PSA 2.5-10 ng/mL, the PSAD value in patients with PV < 43.5 mL was significantly higher than in those with PV > 43.5 mL (0.17 ± 0.06 vs. 0.10 ± 0.03, p < 0.001). Similarly, in the group with PSA 10.1-30 ng/mL, the PSAD value was significantly higher in patients with PV < 61.5 mL (0.47 ± 0.23 vs. 0.17 ± 0.08, p < 0.001). The biopsy results show that, in the group with PSA 2.5-10 ng/mL, cancer was detected in 29 of the patients with PV < 43.5 mL (69%) and not detected in 13 patients (31%), while cancer was found in 19 patients with PV > 43.5 mL (18.6%) and not detected in 83 patients (81.4%) (p < 0.001). In the group with PSA 10.1-30 ng/mL, cancer was observed in 21 patients with PV < 61.5 mL (67.7%) and not observed in 10 patients (32.3%), while cancer was found in 5 patients with PV > 61.5 mL (13.9%) and not observed in 31 patients (86.1%) (p < 0.001). Conclusions: Identifying men at increased risk of a biopsy-detectable prostate cancer should consider PSA, the f/t PSA ratio, and an estimate of prostate volume. Prostate volume was found to be lower in patients with PC.
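The two derived markers compared above are simple quotients of measured quantities; a minimal sketch with hypothetical inputs chosen near the reported group means (function names are illustrative only):

```python
def psa_density(total_psa_ng_ml, prostate_volume_ml):
    """PSAD: total PSA divided by TRUS-measured prostate volume (ng/mL/cc)."""
    return total_psa_ng_ml / prostate_volume_ml

def free_total_ratio(free_psa_ng_ml, total_psa_ng_ml):
    """f/t PSA ratio; lower values are associated with higher cancer risk."""
    return free_psa_ng_ml / total_psa_ng_ml

# Hypothetical patient: total PSA 6.8 ng/mL, free PSA 1.4 ng/mL, PV 40 mL
psad = psa_density(6.8, 40.0)        # 0.17, near the mean reported for PV < 43.5 mL
ft = free_total_ratio(1.4, 6.8)      # ~0.21, near the mean for the same group
```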

Keywords: prostate cancer, prostate volume, prostate specific antigen, free/total PSA ratio

Procedia PDF Downloads 150
322 Geospatial Analysis for Predicting Sinkhole Susceptibility in Greene County, Missouri

Authors: Shishay Kidanu, Abdullah Alhaj

Abstract:

Sinkholes in the karst terrain of Greene County, Missouri, pose significant geohazards, imposing challenges on construction and infrastructure development, with potential threats to lives and property. To address these issues, understanding the influencing factors and modeling sinkhole susceptibility is crucial for effective mitigation through strategic changes in land use planning and practices. This study utilizes geographic information system (GIS) software to collect and process diverse data, including topographic, geologic, hydrogeologic, and anthropogenic information. Nine key sinkhole-influencing factors, ranging from slope characteristics to proximity to geological structures, were carefully analyzed. The Frequency Ratio method establishes relationships between the attribute classes of these factors and sinkhole events, deriving class weights that indicate their relative importance. Weighted integration of these factors is accomplished using the Analytic Hierarchy Process (AHP) and the Weighted Linear Combination (WLC) method in a GIS environment, resulting in a comprehensive sinkhole susceptibility index (SSI) model for the study area. Employing the Jenks natural breaks classifier method, the SSI values are categorized into five distinct sinkhole susceptibility zones: very low, low, moderate, high, and very high. Validation of the model, conducted through the Area Under Curve (AUC) and Sinkhole Density Index (SDI) methods, demonstrates a robust correlation with sinkhole inventory data. The prediction rate curve yields an AUC value of 74%, indicating 74% validation accuracy. The SDI result further supports the success of the sinkhole susceptibility model. This model offers reliable predictions of the future distribution of sinkholes, providing valuable insights for planners and engineers in the formulation of development plans and land-use strategies.
Its application extends to enhancing preparedness and minimizing the impact of sinkhole-related geohazards on both infrastructure and the community.
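The Frequency Ratio weighting and Weighted Linear Combination steps described above can be sketched as follows. The class counts, areas, and AHP weights below are hypothetical placeholders, not the study's values:

```python
import numpy as np

def frequency_ratio(sinkhole_counts, class_areas):
    """FR per attribute class: (class share of sinkholes) / (class share of area)."""
    s = np.asarray(sinkhole_counts, dtype=float)
    a = np.asarray(class_areas, dtype=float)
    return (s / s.sum()) / (a / a.sum())

# Hypothetical slope factor with two classes (e.g. gentle vs. steep):
# 30 of 40 sinkholes fall in 40% of the area -> FR > 1 (susceptible class)
fr_slope = frequency_ratio([30, 10], [40, 60])

# Weighted Linear Combination at one grid cell: AHP weights (summing to 1)
# applied to the FR scores of each factor's class at that cell
weights = np.array([0.5, 0.3, 0.2])          # hypothetical AHP-derived weights
factor_scores = np.array([1.2, 0.8, 1.5])    # hypothetical FR values at the cell
ssi = float(weights @ factor_scores)         # sinkhole susceptibility index
```

In the real workflow this combination is evaluated for every raster cell, and the resulting SSI surface is then classified into the five susceptibility zones.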

Keywords: sinkhole, GIS, analytical hierarchy process, frequency ratio, susceptibility, Missouri

Procedia PDF Downloads 74
321 Prediction of Springback in U-bending of W-Temper AA6082 Aluminum Alloy

Authors: Jemal Ebrahim Dessie, Lukács Zsolt

Abstract:

High-strength aluminum alloys have drawn a lot of attention because of the expanding demand for lightweight vehicle design in the automotive sector. Due to poor formability at room temperature, warm and hot forming have been advised. However, warm and hot forming methods need more steps in the production process and an advanced tooling system. In contrast, forming sheets at room temperature in the W-temper condition is advantageous, since ordinary tools can be used. However, springback of the supersaturated sheets and their thinning are critical challenges that must be resolved when using this technique. In this study, AA6082-T6 aluminum alloy was solution heat treated at different oven temperatures and times, using a specially designed and developed furnace, in order to optimize the W-temper heat treatment temperature. A U-shaped bending test was carried out at different time intervals between W-temper heat treatment and the forming operation. Finite element analysis (FEA) of U-bending was conducted using AutoForm to validate the experimental results. A uniaxial tensile and unload test was performed to determine the kinematic hardening behavior of the material, which was then optimized in the finite element code using systematic process improvement (SPI). The simulation considered the effects of the friction coefficient and blank holder force. Springback parameters were evaluated on the geometry adopted from the NUMISHEET '93 benchmark problem. The change of shape was greater at longer time intervals between W-temper heat treatment and the forming operation. Die radius was the most influential parameter for flange springback. On the sidewall, however, the change of shape shows an overall increasing tendency as the punch radius increases relative to the die radius.
The springback angles on the flange and sidewall appear to be more strongly influenced by the coefficient of friction than by the blank holding force, and this effect grows as the blank holding force increases.

Keywords: aluminum alloy, FEA, springback, SPI, U-bending, W-temper

Procedia PDF Downloads 100
320 Study on the Prediction of Serviceability of Garments Based on the Seam Efficiency and Selection of the Right Seam to Ensure Better Serviceability of Garments

Authors: Md Azizul Islam

Abstract:

A seam is the line along which two separate fabric layers are joined for functional or aesthetic purposes. Different kinds of seams are used for assembling the different areas or parts of a garment to increase serviceability. To empirically support the importance of seam efficiency for garment serviceability, this study focuses on choosing the right type of seam for particular sewn parts of a garment, based on seam efficiency, to ensure better serviceability. Seam efficiency is the ratio of seam strength to fabric strength. Single-jersey knitted finished fabrics of four different GSMs (grams per square meter) were used to make the test garment, a T-shirt. Three distinct types of seam, superimposed, lapped, and flat, were applied to the side seams of the T-shirts and sewn with lockstitch (stitch class 301) on a flat-bed plain sewing machine (maximum sewing speed: 5000 rpm) to make 12 (3 x 4) T-shirts. For experimental purposes, the needle thread count (50/3 Ne), bobbin thread count (50/2 Ne), stitch density (8-9 stitches per inch), needle size (16 in the Singer system), stitch length (31 cm), and seam allowance (2.5 cm) were kept the same for all specimens. The grab test (ASTM D5034-08) was performed on a universal tensile tester to measure seam strength and fabric strength. The produced T-shirts were given to 12 soccer players, who wore them for 20 soccer matches (each of 90 minutes' duration). Serviceability of the shirts was assessed by visual inspection on a 5-point scale based on seam condition. The study found that T-shirts produced with the lapped seam show better serviceability, while T-shirts made with flat seams score lowest. From the calculated seam efficiency (seam strength / fabric strength), it was evident that the performance (in terms of strength) of the lapped and bound seams is higher than that of the superimposed seam, and the performance of the superimposed seam is far better than that of the flat seam.
So it can be predicted that, to obtain a garment of high serviceability, lapped seams could be used instead of superimposed or other seam types. In addition, less-stressed garments can be assembled with other seams, such as superimposed or flat seams.
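The seam-efficiency definition used above (seam strength divided by fabric strength) can be sketched directly; the strength values below are hypothetical, not the measured grab-test results:

```python
def seam_efficiency(seam_strength_n, fabric_strength_n):
    """Seam efficiency: seam strength as a fraction of fabric strength."""
    return seam_strength_n / fabric_strength_n

# Hypothetical grab-test results (in newtons) for one fabric GSM
eff_lapped = seam_efficiency(310.0, 400.0)   # stronger joint relative to fabric
eff_flat = seam_efficiency(220.0, 400.0)     # weaker joint relative to fabric
better = "lapped" if eff_lapped > eff_flat else "flat"
```

Ranking seams by this ratio is exactly how the abstract links seam choice to predicted serviceability: the seam type with the higher efficiency is expected to survive more wear cycles.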

Keywords: seam, seam efficiency, serviceability, T-shirt

Procedia PDF Downloads 202
319 Development and Validation of Cylindrical Linear Oscillating Generator

Authors: Sungin Jeong

Abstract:

This paper presents a cylindrical linear oscillating generator for hybrid electric vehicle applications. The focus of the study is the suggestion of an optimal model and a design rule for the cylindrical linear oscillating generator with a permanent magnet in the back-iron translator. The cylindrical topology is modeled initially with an equivalent magnetic circuit that accounts for leakage elements. This topology with a permanent magnet in the back-iron translator is described by the number of phases and the displacement of the stroke. For a more accurate analysis of an oscillating machine, the thrust of the single-phase and three-phase systems is compared while moving the translator one pole pitch forward and backward. Through this analysis and comparison, a single-phase system of cylindrical topology is selected as the optimal topology. Finally, the detailed design of the optimal topology takes magnetic saturation effects into account by finite element analysis. The losses are also examined to obtain more accurate results: copper loss in the conductors of the machine windings, eddy-current loss in the permanent magnet, and iron loss in the specific electrical steel. Consideration of thermal performance and mechanical robustness is essential, because the losses generate high temperatures in each region of the generator, affecting the overall efficiency and the insulation of the machine. In addition, an electric machine with linear oscillating movement requires a support system that can withstand dynamic forces and moving masses. Accordingly, a fatigue analysis of the shaft is carried out using the kinetic equations, and the thermal characteristics are analyzed at the operating frequency in each region. The results of this study provide an important design rule for linear oscillating machines, enabling more accurate machine design and more accurate prediction of machine performance.

Keywords: equivalent magnetic circuit, finite element analysis, hybrid electric vehicle, linear oscillating generator

Procedia PDF Downloads 195
318 Game Structure and Spatio-Temporal Action Detection in Soccer Using Graphs and 3D Convolutional Networks

Authors: Jérémie Ochin

Abstract:

Soccer analytics are built on two data sources: the frame-by-frame position of each player on the pitch and the sequence of events, such as ball drive, pass, cross, shot, throw-in... With more than 2000 ball-events per soccer game, their precise and exhaustive annotation, based on a monocular video stream such as a TV broadcast, remains a tedious and costly manual task. State-of-the-art methods for spatio-temporal action detection from a monocular video stream, often based on 3D convolutional neural networks, are close to reaching levels of performance, in mean Average Precision (mAP), compatible with the automation of this task. Nevertheless, to meet the expectation of exhaustiveness in the context of data analytics, such methods must be applied in a high-recall, low-precision regime, using low confidence-score thresholds. This setting unavoidably leads to false-positive detections that are the product of the well-documented overconfidence behaviour of neural networks and, in this case, of their limited access to contextual information and understanding of the game: their predictions are highly unstructured. Based on the assumption that professional soccer players' behaviour, pose, positions, and velocity are highly interrelated and locally driven by the player performing a ball-action, it is hypothesized that adding information about the surrounding players' appearance, positions, and velocity to the prediction methods can improve their metrics. Several methods are compared for building a proper representation of the game surrounding a player, from handcrafted features of the local graph, based on domain knowledge, to Graph Neural Networks trained end-to-end with existing state-of-the-art 3D convolutional neural networks. It is shown that including information about surrounding players helps reach higher metrics.
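One plausible form of the handcrafted local-graph features mentioned above is a vector of relative positions and velocities of the k nearest surrounding players; this is a hedged sketch of the general idea, not the authors' exact feature set, and all coordinates below are invented:

```python
import numpy as np

def local_graph_features(anchor_xy, anchor_v, others_xy, others_v, k=3):
    """Relative position/velocity features of the k nearest surrounding players."""
    rel_pos = others_xy - anchor_xy
    dists = np.linalg.norm(rel_pos, axis=1)
    nearest = np.argsort(dists)[:k]              # indices of the k closest players
    rel_vel = others_v[nearest] - anchor_v
    # One row [dx, dy, dvx, dvy] per neighbour, flattened into a context vector
    return np.hstack([rel_pos[nearest], rel_vel]).ravel()

# Hypothetical pitch coordinates (metres) and velocities for one frame
anchor = np.array([50.0, 30.0])
anchor_v = np.array([1.0, 0.0])
others = np.array([[52.0, 30.0], [60.0, 40.0], [51.0, 29.0], [10.0, 5.0]])
others_v = np.zeros((4, 2))
feat = local_graph_features(anchor, anchor_v, others, others_v, k=2)
```

Such a vector can be concatenated with the 3D CNN clip features before classification; the GNN variant instead learns the aggregation over neighbours end-to-end.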

Keywords: fine-grained action recognition, human action recognition, convolutional neural networks, graph neural networks, spatio-temporal action recognition

Procedia PDF Downloads 24
317 Prediction of Ionic Liquid Densities Using a Corresponding State Correlation

Authors: Khashayar Nasrifar

Abstract:

Ionic liquids (ILs) exhibit particular properties, exemplified by extremely low vapor pressure and high thermal stability. The properties of ILs can be tailored by proper selection of cations and anions. As such, ILs are appealing as potential solvents to substitute for traditional solvents with high vapor pressure. One IL property required in chemical and process design is density. In developing corresponding-states liquid density correlations, the scaling hypothesis is often used. The hypothesis expresses the temperature dependence of saturated liquid densities near the vapor-liquid critical point as a function of reduced temperature. By extending this temperature dependence, several successful correlations were developed to accurately correlate the densities of normal liquids from the triple point to the critical point. Applying mixing rules, the liquid density correlations extend to liquid mixtures as well. ILs are not molecular liquids, and they are not classified among normal liquids either. Also, ILs are often used far from equilibrium conditions. Nevertheless, in calculating the properties of ILs, corresponding-states correlations are useful when no experimental data are available. With well-known generalized saturated liquid density correlations, the accuracy in predicting the density of ILs is not very good: an average error of 4-5% should be expected. In this work, a data bank was compiled. A simplified and concise corresponding-states saturated liquid density correlation is proposed by phenomenologically modifying the reduced temperature using the temperature dependence of the interaction parameter of the Soave-Redlich-Kwong equation of state. This modification improves the temperature dependence of the developed correlation. Parametrization was then performed to optimize the three global parameters of the correlation. The correlation was then applied to the ILs in our data bank with satisfactory predictions.
The correlation was applied to IL densities at 0.1 MPa and tested with an average uncertainty of around 2%. No adjustable parameter was used; only the critical temperature, critical volume, and acentric factor were required. Methods to extend the predictions to higher pressures (up to 200 MPa) were also devised. Compared to other methods, this correlation was found to be more accurate. This work also presents the chronological order of development of such correlations for ILs, together with their pros and cons.
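The paper's three-parameter correlation is not reproduced in the abstract; as an illustration of the corresponding-states family it builds on, here is the classical Rackett form for saturated liquid molar volume. The critical constants below are placeholders for demonstration, not fitted IL parameters:

```python
R = 8.314462618  # universal gas constant, J/(mol*K)

def rackett_molar_volume(t, tc, pc, z_ra):
    """Rackett-type saturated liquid molar volume (m^3/mol) from Tc, Pc, Z_RA."""
    tr = t / tc  # reduced temperature, the corresponding-states variable
    return (R * tc / pc) * z_ra ** (1.0 + (1.0 - tr) ** (2.0 / 7.0))

def density_kg_m3(t, tc, pc, z_ra, molar_mass_kg_mol):
    """Mass density from the molar volume."""
    return molar_mass_kg_mol / rackett_molar_volume(t, tc, pc, z_ra)

# Hypothetical fluid: Tc = 600 K, Pc = 3 MPa, Z_RA = 0.26
v = rackett_molar_volume(300.0, 600.0, 3.0e6, 0.26)
rho = density_kg_m3(300.0, 600.0, 3.0e6, 0.26, 0.200)  # 200 g/mol
```

The proposed correlation differs in how the reduced temperature enters (via an SRK-style temperature function), but the inputs are the same kind of critical-point data.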

Keywords: correlation, corresponding state principle, ionic liquid, density

Procedia PDF Downloads 127
316 Sibling Relationship of Adults with Intellectual Disability in China

Authors: Luyin Liang

Abstract:

Although the sibling relationship is viewed as one of the most important family relationships, significantly impacting the quality of life of both adults with Intellectual Disability (AWID) and their brothers/sisters, very little research has been done to investigate this relationship in China. This study investigated the relational motivations of Chinese siblings of AWID in the sibling relationship and their determining factors. A quantitative research method was adopted, and 284 participants were recruited. Two types of relational motivation among siblings of AWID, obligatory motivations and discretionary motivations, were examined. Their emotional closeness, sense of responsibility, experiences of ID stigma, and expectancy of self-reward in the sibling relationship were measured by validated scales. Personal and familial-social demographic characteristics were also investigated. Linear correlation tests and standard multiple regression analysis were the major statistical methods used to analyze the data. The findings showed that all the measured factors, including siblings' emotional closeness, sense of responsibility, experiences of ID stigma, and self-reward expectations, had significant relationships with both types of motivation. However, when these factors were grouped together to predict each type of motivation, the results varied. The order of factors that best predicted siblings' obligatory motivations was: sense of responsibility, emotional closeness, experiences of ID stigma, and expectancy of self-reward, whereas the order that best predicted their discretionary motivations was: self-reward expectations, experiences of ID stigma, sense of responsibility, and emotional closeness.
Among the demographic characteristics, the AWID's disability condition, the sibling's age, gender, marital status, and number of children, both siblings' living arrangements, and family financial status were found to have significant impacts on both types of motivation in the sibling relationship. The results of this study could enhance social work practitioners' understanding of the needs and challenges of siblings of AWID. Suggestions for policy advocacy and service improvements for these siblings are also discussed.
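The standard multiple regression used above can be sketched with ordinary least squares; the predictors and coefficients below are toy values for illustration, not the study's sample or results:

```python
import numpy as np

def ols_fit(x, y):
    """Ordinary least squares: coefficients for y ~ intercept + predictors."""
    design = np.column_stack([np.ones(len(x)), x])  # prepend intercept column
    coef, *_ = np.linalg.lstsq(design, y, rcond=None)
    return coef

# Toy predictors, e.g. columns = (sense of responsibility, emotional closeness)
rng = np.random.default_rng(0)
x = rng.normal(size=(50, 2))
y = 1.0 + 2.0 * x[:, 0] - 0.5 * x[:, 1] + 0.01 * rng.normal(size=50)
coef = ols_fit(x, y)  # approximately [intercept, 2.0, -0.5]
```

Ranking predictors by their (standardized) coefficients is what produces the "order of factors that best predict" each motivation type reported above.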

Keywords: sibling relationship, intellectual disability, adults, China

Procedia PDF Downloads 410
315 Estimation of Snow and Ice Melt Contributions to Discharge from the Glacierized Hunza River Basin, Karakoram, Pakistan

Authors: Syed Hammad Ali, Rijan Bhakta Kayastha, Danial Hashmi, Richard Armstrong, Ahuti Shrestha, Iram Bano, Javed Hassan

Abstract:

This paper presents the results of a semi-distributed modified positive degree-day model (MPDDM) for estimating snow and ice melt contributions to discharge from the glacierized Hunza River basin, Pakistan. The model uses daily temperature data, daily precipitation data, and positive degree-day factors for snow and ice melt. The model is calibrated for the period 1995-2001 and validated for 2002-2013, and demonstrates close agreement between observed and simulated discharge, with Nash–Sutcliffe Efficiencies of 0.90 and 0.88, respectively. Furthermore, temperature and precipitation data projected by the Weather Research and Forecasting model for 2016-2050 are used for representative concentration pathways RCP4.5 and RCP8.5, with bias correction performed using a statistical approach for future discharge estimation. No drastic changes in future discharge are predicted for these emissions scenarios. The aggregate snow-ice melt contribution is 39% of total discharge in the period 1993-2013. The snow-ice melt contribution ranges from 35% to 63% during the high-flow period (May to October), which constitutes 89% of annual discharge; in the low-flow period (November to April) it ranges from 0.02% to 17%, constituting 11% of annual discharge. The snow-ice melt contribution to total discharge will increase gradually in the future, reaching up to 45% in 2041-2050. A sensitivity analysis shows that the combination of a 2°C temperature rise and a 20% increase in precipitation yields a 10% increase in discharge. The study allows us to evaluate the impact of climate change in such basins and is also useful for future prediction of discharge to define hydropower potential, to inform other water resource management in the area, to understand future changes in the snow-ice melt contribution to discharge, and to offer a possible evaluation of future water quantity and availability.
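The positive degree-day principle at the core of the MPDDM can be sketched in its simplest lumped form. The actual model is semi-distributed with separate degree-day factors for snow and ice; the factor and temperatures below are hypothetical:

```python
def degree_day_melt(daily_temps_c, ddf_mm_per_c_day, threshold_c=0.0):
    """Melt (mm water equivalent) = degree-day factor x sum of positive degree-days."""
    pdd = sum(max(t - threshold_c, 0.0) for t in daily_temps_c)  # positive degree-days
    return ddf_mm_per_c_day * pdd

# Hypothetical week of daily mean temperatures (degC) at one elevation band,
# with a snow degree-day factor of 6 mm w.e. per degC per day
melt = degree_day_melt([-2.0, 1.0, 3.0, 5.0, 0.0, -1.0, 4.0], 6.0)
```

In a semi-distributed model this calculation is repeated per elevation band, with temperature lapsed to each band and a larger degree-day factor applied over bare ice than over snow.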

Keywords: climate variability, future discharge projection, positive degree day, regional climate model, water resource management

Procedia PDF Downloads 290
314 Predicting and Optimizing the Mechanical Behavior of a Flax Reinforced Composite

Authors: Georgios Koronis, Arlindo Silva

Abstract:

This study seeks to understand the mechanical behavior of a natural fiber reinforced composite (epoxy/flax) in more depth, utilizing both experimental and numerical methods. It attempts to identify relationships between the design parameters and product performance, understand the effect of noise factors, and reduce process variations. Optimization of the mechanical performance of manufactured goods has recently been addressed by numerous studies of green composites. However, these studies are limited and have principally explored mass-production processes. This work instead aims to generate knowledge about composite manufacturing that can be used to design low-batch artifacts tailored to niche markets. The goal is to reach greater consistency in performance and to further understand which factors play significant roles in obtaining the best mechanical performance. A prediction of the process response function (under various operating conditions) is modeled by design of experiments (DoE). Normally, a full factorial designed experiment is required, consisting of all possible combinations of levels for all factors; an analytical assessment is possible, though, with just a fraction of the full factorial experiment. The research approach comprises evaluating the influence these variables have and how they affect the composite's mechanical behavior. The coupons will be fabricated by the vacuum infusion process, defined by three process parameters: flow rate, injection point position, and fiber treatment. Each process parameter is studied at two levels, along with their interactions. Moreover, the tensile and flexural properties will be obtained through mechanical testing to discover the key process parameters. In this setting, an experimental phase will follow in which a number of fabricated coupons will be tested to allow validation of the design-of-experiments setup.
Finally, the results are validated by running the optimum parameter set, as indicated by the DoE, in a final set of experiments. It is expected that, after good agreement between the predicted and verification experimental values, the optimal processing parameters of the biocomposite lamina will be effectively determined.
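The two-level, three-factor design described above can be enumerated directly. Levels are coded -1/+1; the half-fraction shown uses one standard defining relation (I = ABC) and is illustrative, not necessarily the authors' choice:

```python
from itertools import product

factors = {
    "flow_rate": [-1, +1],          # low / high
    "injection_point": [-1, +1],
    "fiber_treatment": [-1, +1],
}

# Full 2^3 factorial: every combination of the coded factor levels
runs = [dict(zip(factors, levels)) for levels in product(*factors.values())]

# Half-fraction: keep runs satisfying the defining relation I = ABC,
# i.e. the product of the three coded levels equals +1
half_fraction = [
    r for r in runs
    if r["flow_rate"] * r["injection_point"] * r["fiber_treatment"] == 1
]
```

The full design needs 8 coupon batches; the half-fraction needs only 4, at the cost of confounding main effects with two-factor interactions.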

Keywords: design of experiments, flax fabrics, mechanical performance, natural fiber reinforced composites

Procedia PDF Downloads 204
313 Career Guidance System Using Machine Learning

Authors: Mane Darbinyan, Lusine Hayrapetyan, Elen Matevosyan

Abstract:

Artificial Intelligence in Education (AIED) was created to help students get ready for the workforce, and over the past 25 years it has grown significantly, offering a variety of technologies to support academic, institutional, and administrative services. However, this is still challenging, especially considering the labor market's rapid change. While choosing a career, people face various obstacles because they do not take their own preferences into consideration, which can lead to many other problems, such as job shifting, work stress, occupational infirmity, reduced productivity, and manual error. Besides preferences, people should properly evaluate their technical and non-technical skills, as well as their personalities. Professional counseling has become a difficult undertaking for counselors due to the wide range of career choices brought on by changing technological trends. It is necessary to close this gap by utilizing technology that makes sophisticated predictions about a person's career goals based on their personality. Hence, there is a need to create an automated model that helps in decision-making based on user inputs. Improving career guidance can be achieved by embedding machine learning into the career consulting ecosystem. Various career guidance systems work on the same logic: classifying applicants, matching applications with appropriate departments or jobs, making predictions, and providing suitable recommendations. Methodologies like KNN, neural networks, K-means clustering, decision trees, and many other advanced algorithms are applied to the collected data to help predict suitable careers. Besides helping users with their career choice, these systems provide numerous opportunities that are very useful when making this hard decision.
They help candidates recognize where they specifically lack sufficient skills so that they can improve them. They are also capable of offering an e-learning platform that takes the user's knowledge gaps into account. Furthermore, users can be provided with details about a particular job, such as the abilities required to excel in that industry.
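As a sketch of the KNN approach named above, here is a minimal nearest-neighbour career classifier over hypothetical skill profiles. The feature names, labels, and data are invented for illustration and are not the paper's model:

```python
import numpy as np
from collections import Counter

def knn_predict(train_x, train_y, query, k=3):
    """Majority vote among the k nearest training profiles (Euclidean distance)."""
    dists = np.linalg.norm(train_x - query, axis=1)
    nearest = np.argsort(dists)[:k]
    votes = Counter(train_y[i] for i in nearest)
    return votes.most_common(1)[0][0]

# Hypothetical profiles: columns = [coding, communication, math], scored 0-10
x = np.array([[9, 3, 8], [8, 2, 9], [2, 9, 3],
              [3, 8, 2], [9, 2, 7], [2, 8, 4]], dtype=float)
y = ["engineer", "engineer", "counselor", "counselor", "engineer", "counselor"]

career = knn_predict(x, y, np.array([8.0, 3.0, 8.0]))
```

A production system would add feature scaling, personality and preference features, and cross-validated choice of k, but the voting logic stays the same.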

Keywords: career guidance system, machine learning, career prediction, predictive decision, data mining, technical and non-technical skills

Procedia PDF Downloads 80
312 Career Guidance System Using Machine Learning

Authors: Mane Darbinyan, Lusine Hayrapetyan, Elen Matevosyan

Abstract:

Artificial Intelligence in Education (AIED) has been created to help students get ready for the workforce, and over the past 25 years, it has grown significantly, offering a variety of technologies to support academic, institutional, and administrative services. However, this is still challenging, especially considering the labor market's rapid change. While choosing a career, people face various obstacles because they do not take into consideration their own preferences, which might lead to many other problems like shifting jobs, work stress, occupational infirmity, reduced productivity, and manual error. Besides preferences, people should evaluate properly their technical and non-technical skills, as well as their personalities. Professional counseling has become a difficult undertaking for counselors due to the wide range of career choices brought on by changing technological trends. It is necessary to close this gap by utilizing technology that makes sophisticated predictions about a person's career goals based on their personality. Hence, there is a need to create an automated model that would help in decision-making based on user inputs. Improving career guidance can be achieved by embedding machine learning into the career consulting ecosystem. There are various systems of career guidance that work based on the same logic, such as the classification of applicants, matching applications with appropriate departments or jobs, making predictions, and providing suitable recommendations. Methodologies like KNN, neural networks, K-means clustering, D-Tree, and many other advanced algorithms are applied in the fields of data and compute some data, which is helpful to predict the right careers. Besides helping users with their career choice, these systems provide numerous opportunities which are very useful while making this hard decision. 
They help candidates recognize exactly where their skills fall short so that those skills can be improved. They can also offer an e-learning platform tailored to the user's knowledge gaps. Furthermore, users can be given details on a particular job, such as the abilities required to excel in that industry.
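The classification logic these systems share can be illustrated with a minimal KNN sketch. The skill features, score scale, and career labels below are illustrative assumptions, not taken from the abstract.

```python
# Minimal sketch: KNN-based career recommendation on synthetic skill scores.
# Feature columns, skill names, and career labels are illustrative assumptions.
from sklearn.neighbors import KNeighborsClassifier

# Each row: [math, programming, communication, design] self-assessed scores (0-10).
X_train = [
    [9, 9, 4, 3],   # software engineer
    [8, 8, 5, 2],   # software engineer
    [4, 2, 9, 8],   # marketing
    [3, 3, 8, 9],   # marketing
    [7, 3, 6, 9],   # UX designer
    [6, 4, 7, 8],   # UX designer
]
y_train = ["software engineer", "software engineer",
           "marketing", "marketing",
           "UX designer", "UX designer"]

model = KNeighborsClassifier(n_neighbors=3)
model.fit(X_train, y_train)

# A new user's skill profile -> recommended career (majority vote of 3 neighbors).
print(model.predict([[8, 7, 5, 3]])[0])
```

A real system would add many more features (personality scores, academic record) and return a ranked list of careers rather than a single label.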

Keywords: career guidance system, machine learning, career prediction, predictive decision, data mining, technical and non-technical skills

Procedia PDF Downloads 70
311 An Advanced Automated Brain Tumor Diagnostics Approach

Authors: Berkan Ural, Arif Eser, Sinan Apaydin

Abstract:

Medical image processing has become a challenging task, and processing brain MRI images is one of the more difficult parts of this area. This study proposes a well-defined hybrid approach consisting of tumor detection, extraction, and analysis steps. The approach is built around a computer-aided diagnostics system for identifying and detecting tumor formation in any region of the brain, aimed at early prediction of brain tumors using advanced image processing and probabilistic neural network methods. Advanced noise-removal functions and image processing methods such as automatic segmentation and morphological operations are used to detect the brain tumor boundaries and to obtain the important feature parameters of the tumor region. All stages of the approach are implemented in MATLAB. First, the tumor is detected and its area is contoured with a colored circle by the computer-aided diagnostics program. The tumor is then segmented, and morphological operations are applied to increase the visibility of the tumor area; at the same time, the tumor area and important shape-based features are calculated. Finally, using the probabilistic neural network method and a series of classification steps, the tumor area and the tumor type are obtained. A future aim of this study is to grade lesion severity across brain tumor classes through multi-class classification and additional neural network stages, and to create a user-friendly GUI environment in MATLAB. In the experimental part of the study, 100 images were used to train the diagnostics system, and a further 100 out-of-sample images were used to test and check the overall results.
The preliminary results demonstrate high classification accuracy for the neural network structure. These results also motivate extending the framework to detect and localize tumors in other organs.
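The segmentation-plus-morphology stage can be sketched in a few lines. This is an illustrative Python/SciPy analogue of the MATLAB pipeline described above, on a synthetic image with an assumed threshold; it is not the authors' implementation.

```python
# Minimal sketch of the detection pipeline: intensity thresholding followed by
# morphological operations to clean up the candidate tumor region. The synthetic
# "MRI slice" and the threshold value are illustrative assumptions.
import numpy as np
from scipy import ndimage

# Synthetic 2D slice: dim background with one bright blob.
rng = np.random.default_rng(0)
img = rng.normal(0.2, 0.05, size=(64, 64))
img[20:30, 25:35] += 0.7           # bright "tumor" region

mask = img > 0.5                    # crude automatic segmentation by threshold
mask = ndimage.binary_closing(mask, structure=np.ones((3, 3)))  # fill small gaps
mask = ndimage.binary_opening(mask, structure=np.ones((3, 3)))  # drop speckles

labels, n = ndimage.label(mask)     # connected components
sizes = ndimage.sum(mask, labels, range(1, n + 1))
tumor = labels == (np.argmax(sizes) + 1)   # keep the largest component

print("regions found:", n, "tumor area (px):", int(tumor.sum()))
```

Shape-based features (area, perimeter, eccentricity) computed from `tumor` would then feed the classifier stage.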

Keywords: image processing algorithms, magnetic resonance imaging, neural network, pattern recognition

Procedia PDF Downloads 418
310 Simulations to Predict Solar Energy Potential by ERA5 Application at North Africa

Authors: U. Ali Rahoma, Nabil Esawy, Fawzia Ibrahim Moursy, A. H. Hassan, Samy A. Khalil, Ashraf S. Khamees

Abstract:

The design of any solar energy conversion system requires knowledge of solar radiation data collected over a long period. Satellite data have been widely used to estimate solar energy where no ground observations of solar radiation are available, yet satellite data have limited temporal coverage. Reanalysis is a "retrospective analysis" of atmospheric parameters generated by assimilating observational data from various sources, including ground observations, satellites, ships, and aircraft, with the output of NWP (Numerical Weather Prediction) models, to develop an exhaustive record of weather and climate parameters. The performance of the ERA-5 reanalysis dataset for North Africa was evaluated against high-quality surface measurements using statistical analysis, estimating the global solar radiation (GSR) distribution over six selected locations in North Africa during the ten-year period 2011 to 2020. The root mean square error (RMSE), mean bias error (MBE), and mean absolute error (MAE) of the reanalysis solar radiation data range from 0.079 to 0.222, 0.0145 to 0.198, and 0.055 to 0.178, respectively. A seasonal statistical analysis was performed to study the seasonal variation in dataset performance, revealing significant variation of errors across seasons; the performance of the dataset also changes with the temporal resolution of the data used for comparison. Monthly mean values show better agreement, but at the cost of temporal detail. The ERA-5 solar radiation data are used for preliminary solar resource assessment and power estimation. The coefficient of determination (R²) varies from 0.93 to 0.99 for the different selected sites in the present research.
The goal of this research is to provide a good representation of global solar radiation to support solar energy applications in all fields, using gridded data from the European Centre for Medium-Range Weather Forecasts (ECMWF) and producing a new model that gives good results.
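The three validation metrics used above are standard and can be written out explicitly. The sample values are illustrative, not the paper's data.

```python
# RMSE, MBE, and MAE as used to validate ERA-5 against ground measurements.
import numpy as np

def rmse(pred, obs):
    return float(np.sqrt(np.mean((pred - obs) ** 2)))

def mbe(pred, obs):
    return float(np.mean(pred - obs))  # sign shows systematic over/underestimation

def mae(pred, obs):
    return float(np.mean(np.abs(pred - obs)))

obs = np.array([5.1, 6.3, 7.0, 6.5])   # measured GSR (illustrative units)
era5 = np.array([5.0, 6.5, 7.2, 6.4])  # reanalysis estimate

print(rmse(era5, obs), mbe(era5, obs), mae(era5, obs))
```

Note that MBE keeps the sign of the error (bias direction), while RMSE and MAE are magnitude-only measures.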

Keywords: solar energy, solar radiation, ERA-5, potential energy

Procedia PDF Downloads 211
309 Thermal Behaviour of a Low-Cost Passive Solar House in Somerset East, South Africa

Authors: Ochuko K. Overen, Golden Makaka, Edson L. Meyer, Sampson Mamphweli

Abstract:

Low-cost housing provided for low-income households in South Africa is characterized by poor thermal performance, due to inferior craftsmanship and building processes that disregard energy-efficient design. On average, South African households spend 14% of their total monthly income on energy needs, in particular space heating, which is higher than the international energy-poverty benchmark of 10%. Adopting energy-efficient passive solar design strategies and building materials with superior thermal properties can create a stable, comfortable indoor thermal environment, thereby reducing energy consumption for space heating. The aim of this study is to analyse the thermal behaviour of a low-cost house integrating passive solar design features. A low-cost passive solar house with fly ash brick superstructure walls was designed and constructed in Somerset East, South Africa. Indoor and outdoor meteorological parameters of the house were monitored for a period of one year, and the ASTM E741-11 standard was adopted for the ventilation test. In summer, the house was thermally comfortable for 66% of the monitored period; in winter, for about 79%. The ventilation heat flow rates through the windows and doors were 140 J/s and 68 J/s, respectively. Air leakage through cracks and openings in the building envelope was 0.16 m³/m²h, with a corresponding ventilation heat flow rate of 24 J/s. The indoor carbon dioxide concentration monitored overnight was 0.248%, below the maximum limit of 0.500%. The predicted percentage dissatisfied for the house indicates that 86% of occupants would express thermal satisfaction with the indoor environment. Operated properly, the house can provide a well-ventilated, thermally comfortable, and naturally lit indoor environment for its occupants.
Incorporating passive solar design in low-cost housing can be one of the immediate and long-term responses to the energy crisis facing South Africa.

Keywords: energy efficiency, low-cost housing, passive solar design, rural development, thermal comfort

Procedia PDF Downloads 261
308 Entrepreneurial Intention and Social Entrepreneurship among Students in Malaysian Higher Education

Authors: Radin Siti Aishah Radin A Rahman, Norasmah Othman, Zaidatol Akmaliah Lope Pihie, Hariyaty Ab. Wahid

Abstract:

The recent economic instability has affected Malaysia both directly and indirectly. In response, the government urgently needs the best approach to balancing its citizens' socio-economic strata, and education is among the platforms planned and acted upon for this purpose, through exposing youth, especially those at higher institutions, to social entrepreneurial activity. Armed with the knowledge and skills they gain, and supported by an entrepreneurial culture and environment on campus, students will lean more towards social entrepreneurship as a career option when they graduate. Given the increasingly dire marketability and employability of current graduates, it is relevant to investigate how willing students are to create social innovations that contribute to society rather than focusing solely on personal gain. This research was therefore conducted to identify the level of entrepreneurial intention and social entrepreneurship among higher institution students in Malaysia. Stratified random sampling of 355 undergraduate students from five public universities provided the respondents, and data were collected through surveys and analyzed descriptively using mean scores and standard deviations. The study found that the entrepreneurial intention of higher education students is at a moderate level, whereas social entrepreneurship activity is at a high level. This means that while students have only a moderate willingness to become social entrepreneurs, they are strongly committed to creating social innovation through the social entrepreneurship activities conducted.
The implications of this study can help higher institution authorities predict students' tendency to become social entrepreneurs. The opportunities and facilities for running courses related to social entrepreneurship should therefore be expanded so that the vision of creating as many social entrepreneurs as possible can be achieved.

Keywords: entrepreneurial intention, higher education institutions (HEIs), social entrepreneurship, social entrepreneurial activity, gender

Procedia PDF Downloads 262
307 Modeling Biomass and Biodiversity across Environmental and Management Gradients in Temperate Grasslands with Deep Learning and Sentinel-1 and -2

Authors: Javier Muro, Anja Linstadter, Florian Manner, Lisa Schwarz, Stephan Wollauer, Paul Magdon, Gohar Ghazaryan, Olena Dubovyk

Abstract:

Monitoring the trade-off between biomass production and biodiversity in grasslands is critical for evaluating the effects of management practices across environmental gradients. New generations of remote sensing sensors and machine learning approaches can model grasslands' characteristics with varying accuracies. However, studies often fail to cover a sufficiently broad range of environmental conditions, and evidence suggests that prediction models may be case-specific. In this study, biomass production and biodiversity indices (species richness and Fisher's α) are modeled in 150 grassland plots at three sites across Germany. These sites represent a north-south gradient and are characterized by distinct soil types, topographic properties, climatic conditions, and management intensities. The predictors are derived from Sentinel-1 and -2 and a set of topoedaphic variables. The transferability of the models is tested by training and validating at different sites, and the performance of feed-forward deep neural networks (DNNs) is compared to a random forest algorithm. While biomass predictions across gradients and sites were acceptable (r² = 0.5), predictions of biodiversity indices were poor (r² = 0.14). DNNs showed higher generalization capacity than random forest when predicting biomass across gradients and sites (relative root mean squared error of 0.5 for the DNN vs. 0.85 for random forest). The DNN also achieved high performance when using Sentinel-2 surface reflectance data rather than different combinations of spectral indices, Sentinel-1 data, or topoedaphic variables, simplifying dimensionality. This study demonstrates the necessity of training biomass and biodiversity models over a broad range of environmental conditions and of ensuring spatial independence, in order to obtain realistic, transferable models in which plot-level information can be upscaled to the landscape scale.
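The cross-site transferability test described above can be sketched as follows: train on one site, validate on another whose predictor distribution is shifted, and compare models via relative RMSE. Data generation, model sizes, and the shift magnitude are toy assumptions; this merely illustrates why tree ensembles can extrapolate worse than neural networks outside the training range.

```python
# Illustrative sketch of a train-on-one-site, validate-on-another comparison.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(42)

def make_site(n, shift):
    X = rng.normal(shift, 1.0, size=(n, 4))            # e.g. reflectance + topoedaphic vars
    y = 2.0 * X[:, 0] + X[:, 1] + rng.normal(0, 0.3, n)  # simple "biomass" response
    return X, y

X_a, y_a = make_site(300, shift=0.0)   # training site
X_b, y_b = make_site(100, shift=1.5)   # validation site outside the training range

def rel_rmse(model):
    model.fit(X_a, y_a)
    err = np.sqrt(np.mean((model.predict(X_b) - y_b) ** 2))
    return err / np.std(y_b)           # RMSE relative to the spread of observations

r_rf = rel_rmse(RandomForestRegressor(n_estimators=200, random_state=0))
r_nn = rel_rmse(MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0))
print("RF:", round(r_rf, 2), "NN:", round(r_nn, 2))
```

Trees predict constants per leaf and so clamp outside the training range, while a ReLU network fitted to a near-linear response extrapolates more gracefully, mirroring the generalization gap the study reports.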

Keywords: ecosystem services, grassland management, machine learning, remote sensing

Procedia PDF Downloads 218
306 Constructing a Semi-Supervised Model for Network Intrusion Detection

Authors: Tigabu Dagne Akal

Abstract:

While advances in computer and communications technology have made the network ubiquitous, they have also rendered networked systems vulnerable to malicious attacks devised from a distance. These attacks or intrusions start with attackers infiltrating a network through a vulnerable host and then launching further attacks on the local network or intranet. Nowadays, system administrators and network professionals can attempt to prevent such attacks by developing intrusion detection tools and systems using data mining technology. In this study, the experiments were conducted following the Knowledge Discovery in Databases process model, which starts with the selection of the datasets. The dataset used in this study was taken from the Massachusetts Institute of Technology Lincoln Laboratory and was pre-processed after collection. The major pre-processing activities included filling in missing values, removing outliers, resolving inconsistencies, integrating data containing both labelled and unlabelled records, dimensionality reduction, size reduction, and data transformation tasks such as discretization. A total of 21,533 intrusion records were used for training the models, and a separate set of 3,397 records was used for testing to validate the performance of the selected model. For building a predictive model for intrusion detection, the J48 decision tree and Naïve Bayes algorithms were tested as classification approaches, both with and without feature selection. The model created using 10-fold cross-validation with the J48 decision tree algorithm and default parameter values showed the best classification accuracy, with a prediction accuracy of 96.11% on the training dataset and 93.2% on the test dataset for classifying new instances into the normal, DOS, U2R, R2L, and probe classes.
The findings of this study show that data mining methods generate interesting rules that are crucial for intrusion detection and prevention in the networking industry. Future research directions are suggested towards an applicable system in this area.
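The evaluation protocol above (a decision tree scored by 10-fold cross-validation) can be sketched compactly. Weka's J48 is a C4.5 implementation; scikit-learn's `DecisionTreeClassifier` is used here as a rough stand-in, and the tiny synthetic "connection records" and labeling rule are illustrative assumptions, not the MIT Lincoln Laboratory data.

```python
# Decision tree with 10-fold cross-validation, as in the study's model selection.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
n = 500
# Toy features: duration, src_bytes, dst_bytes, failed_logins (scaled to [0, 1]).
X = rng.uniform(0, 1, size=(n, 4))
# Toy rule: large byte counts combined with many failed logins -> attack.
y = np.where((X[:, 1] + X[:, 3]) > 1.2, "attack", "normal")

clf = DecisionTreeClassifier(random_state=0)
scores = cross_val_score(clf, X, y, cv=10)   # 10-fold CV accuracy per fold
print("10-fold accuracy: %.3f" % scores.mean())
```

The study's extra steps (feature selection, comparing Naïve Bayes) would slot in as alternative estimators passed to the same `cross_val_score` call.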

Keywords: intrusion detection, data mining, computer science

Procedia PDF Downloads 296
305 The Study of the Correlation of Future-Oriented Thinking and Retirement Planning: The Analysis of Two Professions

Authors: Ya-Hui Lee, Ching-Yi Lu, Chien-Hung Hsieh

Abstract:

The purpose of this study is to explore the differences between state-owned-enterprise employees and civil servants in future-oriented thinking and retirement planning. The researchers surveyed 687 middle-aged and older adults (345 state-owned-enterprise employees and 342 civil servants) to understand the relationship between, and the predictive power of, future-oriented thinking and retirement planning. The findings are: 1. There are significant differences between the two professions in future-oriented thinking but not in retirement planning; civil servants scored higher overall on future-oriented thinking than state-owned-enterprise employees. 2. Among civil servants, there are significant age differences in both future-oriented thinking and retirement planning: scores at age 55 and above are significantly higher than at age 45 or under. Among state-owned-enterprise employees, no significant age difference was found in future-oriented thinking, but there was one in retirement planning, which was higher at age 55 and above than at other ages. 3. With regard to education, there is no correlation with future-oriented thinking or retirement planning for civil servants. For state-owned-enterprise employees, however, education level affects future-oriented thinking: those with a master's degree or above scored higher than those with other degrees, while retirement planning showed no correlation with education. 4. Self-assessed economic status significantly affects the future-oriented thinking and retirement planning of both groups: those who assess themselves as more affluent are more inclined towards both. 5. For civil servants, monthly income is significantly related to retirement planning but not to future-oriented thinking. For state-owned-enterprise employees, monthly income is significantly related to both: those with higher monthly incomes (1,960 euros and above) scored significantly higher in future-oriented thinking and retirement planning than those with lower monthly incomes (1,469 euros and below). 6. In both professions, future-oriented thinking and retirement planning are positively correlated among middle-aged and older adults, and stepwise multiple regression analysis indicates that future-oriented thinking positively predicts retirement planning. The authors present these findings as references for state-owned enterprises, public authorities, and the design of older adult education programs in Taiwan.

Keywords: state-owned-enterprise employees, civil servants, future-oriented thinking, retirement planning

Procedia PDF Downloads 366
304 Downregulation of Epidermal Growth Factor Receptor in Advanced Stage Laryngeal Squamous Cell Carcinoma

Authors: Sarocha Vivatvakin, Thanaporn Ratchataswan, Thiratest Leesutipornchai, Komkrit Ruangritchankul, Somboon Keelawat, Virachai Kerekhanjanarong, Patnarin Mahattanasakul, Saknan Bongsebandhu-Phubhakdi

Abstract:

Much attention has been drawn to molecular biomarkers with the potential to predict cancer progression. Epidermal growth factor receptor (EGFR) is the classic member of the ErbB family of membrane-associated intrinsic tyrosine kinase receptors. EGFR expression is found in several organs throughout the body, reflecting its roles in regulating cell proliferation, survival, and differentiation under normal physiological conditions. Anomalous expression, however, whether over- or under-expression, is believed to underlie pathological conditions, including carcinogenesis. Although EGFR has been widely discussed as a prognostic tool in head and neck cancer, no consensus has been reached. The aims of the present study are to assess the correlation between the level of EGFR expression and demographic data as well as clinicopathological features, to evaluate EGFR as a prognostic marker, and to investigate the pathophysiology that might explain the findings. This retrospective study included 30 squamous cell laryngeal carcinoma patients treated at King Chulalongkorn Memorial Hospital from January 1, 2000, to December 31, 2004. EGFR expression was significantly downregulated with progression of the laryngeal cancer stage (one-way ANOVA, p = 0.001), and expression was significantly lower in the late stage of the disease than in the early stage (unpaired t-test, p = 0.041). EGFR overexpression also showed a tendency towards increased cancer recurrence (unpaired t-test, p = 0.128). In summary, a significant downregulation of EGFR expression was documented in advanced-stage laryngeal cancer, indicating that EGFR level correlates with prognosis in terms of stage progression.
Thus, EGFR expression might serve as a biomarker for prognostic prediction in laryngeal squamous cell carcinoma.

Keywords: downregulation, epidermal growth factor receptor, immunohistochemistry, laryngeal squamous cell carcinoma

Procedia PDF Downloads 111
303 Predictive Relationship between Motivation Strategies and Musical Creativity of Secondary School Music Students

Authors: Lucy Lugo Mawang

Abstract:

Educational psychologists have highlighted the significance of creativity in education, and a fundamental objective of music education concerns the development of students' musical creativity potential. The purpose of this study was to determine the relationship between motivation strategies and musical creativity and to establish a prediction equation for musical creativity. The study used purposive sampling and a census to select 201 fourth-form music students (139 females / 62 males), mainly from public secondary schools in Kenya; the mean age of participants was 17.24 years (SD = .78). Framed upon self-determination theory and the dichotomous model of achievement motivation, the study adopted an ex post facto research design. A self-report measure, the Achievement Goal Questionnaire-Revised (AGQ-R), was used to collect data on the independent variable, while musical creativity was based on a creative music composition task and measured by the Consensual Musical Creativity Assessment Scale (CMCAS). Data were collected in two separate sessions one month apart: the questionnaire was administered in the first session, lasting approximately 20 minutes, and the second session was used for notation of participants' creative compositions. The results indicated a positive correlation, r(199) = .39, p ˂ .01, between musical creativity and intrinsic music motivation, and conversely a negative correlation, r(199) = -.19, p < .01, between musical creativity and extrinsic music motivation. The equation for predicting musical creativity from music motivation strategies was significant, F(2, 198) = 20.8, p < .01, with R² = .17; motivation strategies accounted for approximately 17% of the variance in participants' musical creativity. Intrinsic music motivation had the highest significant predictive value (β = .38, p ˂ .01) on musical creativity.
In the exploratory analysis, a significant mean difference in musical creativity, t(118) = 4.59, p ˂ .01, was observed between intrinsic and extrinsic music motivation, in favour of intrinsically motivated participants. Further, a significant gender difference, t(93.47) = 4.31, p ˂ .01, was observed, with male participants scoring higher than females; there was no significant difference in musical creativity based on age. The study recommends that music educators strive to enhance intrinsic music motivation among students. Specifically, schools should create conducive environments and interventions for developing intrinsic music motivation, since it is the motivation strategy most predictive of musical creativity.
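The prediction-equation step can be sketched as an ordinary least-squares regression of a creativity score on the two motivation scores. The data below are synthetic, generated only to mimic the reported signs (positive intrinsic weight, negative extrinsic weight); the coefficients and R² are not the study's values.

```python
# Sketch of the prediction equation: creativity ~ intrinsic + extrinsic motivation.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
n = 201  # same sample size as the study, for flavour
intrinsic = rng.normal(0, 1, n)
extrinsic = rng.normal(0, 1, n)
# Synthetic outcome with a positive intrinsic and negative extrinsic effect.
creativity = 0.38 * intrinsic - 0.15 * extrinsic + rng.normal(0, 0.9, n)

X = np.column_stack([intrinsic, extrinsic])
model = LinearRegression().fit(X, creativity)
print("coefficients:", model.coef_.round(2),
      "R^2:", round(model.score(X, creativity), 2))
```

`model.score` returns R², the share of variance in the outcome explained by the two predictors, which is the quantity the abstract reports as .17.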

Keywords: extrinsic music motivation, intrinsic music motivation, musical creativity, music composition

Procedia PDF Downloads 154
302 Physics-Based Earthquake Source Models for Seismic Engineering: Analysis and Validation for Dip-Slip Faults

Authors: Percy Galvez, Anatoly Petukhin, Paul Somerville, Ken Miyakoshi, Kojiro Irikura, Daniel Peter

Abstract:

Physics-based dynamic rupture modelling is necessary for estimating parameters such as rupture velocity and slip rate function that are important for ground motion simulation but poorly resolved by observations, e.g. by seismic source inversion. In order to generate a large number of physically self-consistent rupture models, whose rupture process is consistent with the spatio-temporal heterogeneity of past earthquakes, we use multicycle simulations under the heterogeneous rate-and-state (RS) friction law for a 45° dip-slip fault. We performed a parametrization study by fully dynamic rupture modeling, and then a set of spontaneous source models was generated over a large magnitude range (Mw > 7.0). To validate the rupture models, we compare the scaling of the modeled rupture area S, the average slip Dave, and the slip asperity area Sa with seismic moment Mo against similar scaling relations from source inversions. Ground motions were also computed from our models; their peak ground velocities (PGVs) agree well with GMPE values, and the permanent surface offsets agree well with empirical relations. From the heterogeneous rupture models, we analyzed parameters that are critical for ground motion simulations, i.e. distributions of slip, slip rate, rupture initiation points, rupture velocities, and source time functions. We studied cross-correlations between them and with the friction weakening distance Dc, the only initial heterogeneity parameter in our modeling. The main findings are: (1) high slip-rate areas coincide with, or are located on the outer edge of, the large-slip areas; (2) ruptures tend to initiate in small-Dc areas; and (3) high slip-rate areas correlate with areas of small Dc, large rupture velocity, and short rise time.

Keywords: earthquake dynamics, strong ground motion prediction, seismic engineering, source characterization

Procedia PDF Downloads 144
301 Predictive Analysis of the Stock Price Market Trends with Deep Learning

Authors: Suraj Mehrotra

Abstract:

The stock market is a volatile, bustling marketplace and a cornerstone of economics; it signals whether companies are thriving or in a downward spiral. A thorough understanding of it is important: many companies have whole divisions dedicated to the analysis of their own stock and that of rival companies. Linking the world of finance with artificial intelligence (AI), especially in the stock market, is a relatively recent development. Predicting how stocks will perform, considering all external factors and historical data, has traditionally been a human task; with machine learning models, more complete predictions of financial trends become possible. In the stock market specifically, predicting the next day's open, close, high, and low prices is very hard, and machine learning makes this task much easier: a model that builds upon itself and takes in external factors as weights can predict trends far into the future. Used effectively, this opens new doors in the business and finance world and lets companies make better, more complete decisions. This paper explores the techniques used in stock price prediction, from traditional statistical methods to deep learning and neural-network-based approaches, among others, providing a detailed analysis of the techniques and of the challenges in predictive analysis. Comparing the test-set accuracy of four models (linear regression, neural network, decision tree, and naïve Bayes) on the stocks of Apple, Google, Tesla, Amazon, United Healthcare, Exxon Mobil, JPMorgan Chase, and Johnson & Johnson, the naïve Bayes and linear regression models worked best: on the testing set, the naïve Bayes model had the highest accuracy along with the linear regression model, followed by the neural network model and then the decision tree model.
The training set showed similar results, except that the decision tree model was perfect, with complete accuracy in its predictions. This indicates that the decision tree model likely overfitted the training set, which hurt its performance on the testing set.
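The overfitting observation can be reproduced on synthetic data: an unpruned decision tree reaches perfect training accuracy yet generalizes worse. The features and the noisy up/down labeling rule below are illustrative assumptions, not the paper's data.

```python
# Sketch of the train/test comparison: perfect training fit, lower test accuracy.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(7)
n = 400
X = rng.normal(0, 1, size=(n, 5))                           # e.g. lagged returns / indicators
y = (X[:, 0] + 0.5 * rng.normal(0, 1, n) > 0).astype(int)   # noisy up/down label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
tree = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)

train_acc = tree.score(X_tr, y_tr)   # unpruned tree memorizes the training set
test_acc = tree.score(X_te, y_te)    # noise in the labels caps generalization
print("train: %.2f  test: %.2f" % (train_acc, test_acc))
```

Because the labels contain irreducible noise, memorizing the training set (accuracy 1.0) cannot carry over to held-out data, which is exactly the train/test gap the paper describes.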

Keywords: machine learning, testing set, artificial intelligence, stock analysis

Procedia PDF Downloads 95
300 Streamflow Modeling Using the PyTOPKAPI Model with Remotely Sensed Rainfall Data: A Case Study of Gilgel Ghibe Catchment, Ethiopia

Authors: Zeinu Ahmed Rabba, Derek D Stretch

Abstract:

Remote sensing contributes valuable information to streamflow estimation. Streamflow is usually measured directly at ground-based hydrological monitoring stations; however, in many developing countries such as Ethiopia, ground-based hydrological monitoring networks are sparse or nonexistent, which limits water resource management and hampers early flood-warning systems. In such cases, satellite remote sensing is an alternative means of acquiring this information. This paper discusses the application of remotely sensed rainfall data for streamflow modeling in the Gilgel Ghibe basin in Ethiopia. Ten years (2001-2010) of two satellite-based precipitation products (SBPPs), TRMM and WaterBase, were used. These products were combined with the PyTOPKAPI hydrological model to generate daily streamflows, and the results were compared with streamflow observations at the Gilgel Ghibe Nr. Assendabo gauging station using four statistical measures (bias, R², NS, and RMSE). The statistical analysis indicates that the bias-adjusted SBPPs agree well with gauged rainfall compared to the unadjusted ones. Without bias adjustment, the SBPPs tend to overestimate (high bias and high RMSE) extreme precipitation events and the corresponding simulated streamflow, particularly during the wet months (June-September), and to underestimate streamflow over a few dry months (January and February). Bias adjustment is therefore important for improving the performance of SBPPs in streamflow forecasting. We further conclude that the general streamflow patterns were well captured at daily time scales when using bias-adjusted SBPPs. Overall, however, the streamflow simulated from gauged rainfall remains superior to that obtained from remotely sensed rainfall products, including bias-adjusted ones.
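One simple, common form of bias adjustment is multiplicative scaling so that the satellite series' long-term mean matches the gauge mean. The paper does not specify its adjustment method, so this is an illustrative sketch with made-up daily rainfall values, not the study's procedure or data.

```python
# Minimal multiplicative bias-adjustment sketch for satellite rainfall.
import numpy as np

gauge = np.array([3.0, 0.0, 12.5, 7.2, 1.1, 0.0, 5.4])   # observed daily rain (mm)
sat = np.array([4.1, 0.3, 15.0, 9.8, 1.9, 0.2, 7.0])     # satellite estimate (mm)

factor = gauge.sum() / sat.sum()   # multiplicative bias factor
sat_adj = sat * factor             # bias-adjusted satellite rainfall

bias_before = float(np.mean(sat - gauge))   # positive -> systematic overestimation
bias_after = float(np.mean(sat_adj - gauge))  # zero mean bias by construction
print("factor=%.3f  bias before=%.2f  after=%.2f" % (factor, bias_before, bias_after))
```

Scaling removes the mean bias but not the random error, which is why the gauged-rainfall simulations still outperform even the bias-adjusted satellite products.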

Keywords: Ethiopia, PyTOPKAPI model, remote sensing, streamflow, Tropical Rainfall Measuring Mission (TRMM), WaterBase

Procedia PDF Downloads 286
299 Study on Control Techniques for Adaptive Impact Mitigation

Authors: Rami Faraj, Cezary Graczykowski, Błażej Popławski, Grzegorz Mikułowski, Rafał Wiszowaty

Abstract:

Progress in the field of sensors, electronics, and computing has led to increasingly frequent applications of adaptive techniques for dynamic response mitigation. For systems excited by mechanical impacts, the control system has to take into account the significant limitations of the actuators responsible for system adaptation. The paper provides a comprehensive discussion of the appropriate design and implementation of adaptation techniques and mechanisms. Two case studies are presented in order to compare completely different adaptation schemes. The first example concerns a double-chamber pneumatic shock absorber with a fast piezoelectric valve and parameters corresponding to the suspension of a small unmanned aerial vehicle, whereas the second system is a safety air cushion used to evacuate people from heights during a fire. For both systems it is possible to ensure adaptive performance, but the realization of the adaptation is completely different, owing to the technical limitations of the specific shock-absorbing devices and their parameters. Impact mitigation using a pneumatic shock absorber involves much higher pressures and small mass flow rates, which can be achieved with minimal changes of valve opening. In turn, mass flow rates in safety air cushions correspond to gas release areas counted in thousands of square centimetres. Because of this, the two shock-absorbing systems are controlled with completely different approaches: the pneumatic shock absorber takes advantage of real-time control, with the valve opening recalculated at least every millisecond, while the safety air cushion is controlled using a semi-passive technique, where adaptation is based on a prediction of the entire impact mitigation process. Similarities between the two approaches, including the applied models, algorithms, and equipment, are discussed.
The entire study is supported by numerical simulations and experimental tests, which prove the effectiveness of both adaptive impact mitigation techniques.
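As a rough illustration of the real-time scheme described above, the sketch below recomputes a valve opening every millisecond with a simple proportional rule so that deceleration tracks a target value; the model, parameters and control gain are illustrative assumptions, not the paper’s actual controller.

```python
# Toy semi-active pneumatic absorber: deceleration depends on valve
# opening; the opening is recalculated every millisecond (P-control).
# All numbers are illustrative assumptions, not values from the paper.
def simulate_absorber(mass=2.0, v0=4.0, a_target=40.0,
                      dt=1e-3, steps=60, k_p=0.001):
    v = v0               # impact velocity [m/s]
    opening = 0.5        # valve opening fraction, 0 (closed) .. 1 (open)
    c_max = 400.0        # damping coefficient at fully closed valve
    trace = []
    while len(trace) < steps and v > 0.0:
        c = c_max * (1.0 - opening)   # smaller opening -> more damping
        a = c * v / mass              # deceleration magnitude [m/s^2]
        # recalculate the valve opening from the tracking error each step
        opening = min(1.0, max(0.0, opening + k_p * (a - a_target)))
        v = max(0.0, v - a * dt)
        trace.append((v, opening, a))
    return trace
```

A real controller would also have to respect the valve response time and the chamber pressure dynamics; the point here is only the millisecond-scale recalculation loop.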

Keywords: adaptive control, adaptive system, impact mitigation, pneumatic system, shock-absorber

Procedia PDF Downloads 91
298 Arterial Compliance Measurement Using Split Cylinder Sensor/Actuator

Authors: Swati Swati, Yuhang Chen, Robert Reuben

Abstract:

Coronary stents are tube-shaped devices placed in coronary arteries to keep the arteries open in the treatment of coronary arterial diseases. Coronary stents are routinely deployed to clear atheromatous plaque. Because its structure is cylindrically symmetrical, the stent essentially applies a uniform internal pressure to the artery, and this may introduce some abnormalities in the final arterial shape. The goal of the project is to develop segmented circumferential arterial compliance measuring devices which can (eventually) be deployed in vivo. The segmentation of the device will allow the mechanical asymmetry of any stenosis to be assessed. The purpose will be to assess the quality of arterial tissue for applications in tailored stents and in the assessment of aortic aneurysm. Arterial distensibility measurement is of utmost importance for diagnosing cardiovascular diseases and for the prediction of future cardiac events or coronary artery diseases. In order to arrive at some generic outcomes, a preliminary experimental set-up has been devised to establish the measurement principles for the device at macro-scale. The measurement methodology consists of a strain gauge system monitored in real time by LABVIEW software. This virtual instrument employs a balloon within a gelatine model contained in a split cylinder with strain gauges fixed on it. The instrument allows automated measurement of the effect of air pressure on the gelatine and measurement of strain with respect to time and pressure during inflation. A simple creep compliance model has been applied to the results for the purpose of extracting some measures of arterial compliance. The results obtained from the experiments have been used to study the effect of air pressure on strain at varying time intervals. The results clearly demonstrate that as arterial volume decreases and arterial pressure increases, arterial strain increases, thereby decreasing the arterial compliance.
The measurement system could lead to development of portable, inexpensive and small equipment and could prove to be an efficient automated compliance measurement device.
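A minimal sketch of the compliance-extraction step, assuming the linearized definition compliance ≈ Δstrain/Δpressure estimated as a least-squares slope over the logged samples (the creep model applied in the study is more elaborate; the helper and data here are hypothetical):

```python
# Estimate compliance as the least-squares slope of strain vs. pressure
# from logged (pressure, strain) samples; a hypothetical helper, not the
# LABVIEW instrument's actual processing.
def compliance_slope(pressures, strains):
    n = len(pressures)
    mean_p = sum(pressures) / n
    mean_s = sum(strains) / n
    num = sum((p - mean_p) * (s - mean_s)
              for p, s in zip(pressures, strains))
    den = sum((p - mean_p) ** 2 for p in pressures)
    return num / den  # slope: strain change per unit pressure
```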

Keywords: arterial compliance, atheromatous plaque, mechanical symmetry, strain measurement

Procedia PDF Downloads 279
297 Derivation of Bathymetry from High-Resolution Satellite Images: Comparison of Empirical Methods through Geographical Error Analysis

Authors: Anusha P. Wijesundara, Dulap I. Rathnayake, Nihal D. Perera

Abstract:

Bathymetric information is of fundamental importance to coastal and marine planning and management, nautical navigation, and scientific studies of marine environments. Satellite-derived bathymetry provides detailed information in areas where conventional sounding data are lacking and which are inaccessible to conventional surveys. Two empirical approaches, a log-linear bathymetric inversion model and a non-linear bathymetric inversion model, are applied to derive bathymetry from high-resolution multispectral satellite imagery. This study compares the two approaches by means of geographical error analysis for the site Kankesanturai using WorldView-2 satellite imagery. The parameters of the non-linear inversion model were calibrated using the Levenberg-Marquardt method, while multiple linear regression was applied to calibrate the log-linear inversion model. To calibrate both models, Single Beam Echo Sounding (SBES) data in the study area were used as reference points. Residuals were calculated as the difference between the derived depth values and the validation echo-sounder bathymetry data, and the geographical distribution of the model residuals was mapped. Spatial autocorrelation of the residuals was calculated to compare the performance of the bathymetric models, with the results showing the geographic errors for both models. A spatial error model was constructed from the initial bathymetry estimates and the estimates of autocorrelation. This spatial error model is used to generate more reliable estimates of bathymetry by quantifying the autocorrelation of model error and incorporating it into an improved regression model. The log-linear model (R²=0.846) performs better than the non-linear model (R²=0.692). Finally, the spatial error models improved the bathymetric estimates derived from the linear and non-linear models up to R²=0.854 and R²=0.704, respectively. The Root Mean Square Error (RMSE) was calculated for all reference points in various depth ranges.
The magnitude of the prediction error increases with depth for both the log-linear and the non-linear inversion models. Overall RMSE for log-linear and the non-linear inversion models were ±1.532 m and ±2.089 m, respectively.
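The log-linear calibration and RMSE bookkeeping can be sketched as below. The two-band form z = a₀ + a₁·ln(R₁) + a₂·ln(R₂) and the synthetic data are assumptions for illustration, not the paper’s exact band combination.

```python
# Calibrate a log-linear bathymetric inversion model by multiple linear
# regression against echo-sounder reference depths, and report the RMSE
# of the residuals. Band choice and data are illustrative assumptions.
import numpy as np

def fit_log_linear(reflectances, depths):
    """reflectances: (n, 2) band values; depths: (n,) SBES reference depths."""
    X = np.column_stack([np.ones(len(depths)),        # intercept a0
                         np.log(reflectances[:, 0]),  # a1 * ln(R1)
                         np.log(reflectances[:, 1])]) # a2 * ln(R2)
    coeffs, *_ = np.linalg.lstsq(X, depths, rcond=None)
    residuals = depths - X @ coeffs
    rmse = float(np.sqrt(np.mean(residuals ** 2)))
    return coeffs, rmse
```

The non-linear model would instead be fitted with a Levenberg-Marquardt optimizer, as the abstract describes; the residual and RMSE bookkeeping is the same for both.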

Keywords: log-linear model, multi spectral, residuals, spatial error model

Procedia PDF Downloads 297
296 Bioinformatic Design of a Non-toxic Modified Adjuvant from the Native A1 Structure of Cholera Toxin with Membrane Synthetic Peptide of Naegleria fowleri

Authors: Frida Carrillo Morales, Maria Maricela Carrasco Yépez, Saúl Rojas Hernández

Abstract:

Naegleria fowleri is the causative agent of primary amebic meningoencephalitis, an acute and fulminant disease that affects humans. It has been reported that, despite the existence of therapeutic options, the mortality rate of this disease is 97%. Therefore, the need arises for vaccines that confer protection against this disease and for adjuvants that enhance the immune response. In this regard, our work group previously obtained a peptide, called Smp145, designed from the membrane protein MP2CL5 of Naegleria fowleri, which was shown to be immunogenic; however, it would be of great importance to enhance its immunological response by co-administering it with a non-toxic adjuvant. Therefore, the objective of this work was to carry out the bioinformatic design of a peptide of the Naegleria fowleri membrane protein MP2CL5 conjugated with a non-toxic adjuvant modified from the native A1 structure of Cholera Toxin. Different bioinformatics tools were used to obtain a model with a modification at amino acid 61 of the A1 subunit of the CT (CTA1), to which the Smp145 peptide was added; the two molecules were joined with a 13-glycine linker. Regarding the results, in silico experiments show that the modification in CTA1 bound to the peptide reduces the toxicity of the molecule; likewise, the predicted binding of Smp145 to the B-cell receptor suggests that the molecule is directed specifically to the BCR, while its native enzymatic activity is decreased. The stereochemical evaluation showed that the generated model has a high number of adequately predicted residues. The ERRAT test, which evaluates the confidence with which regions exceeding the error values can be rejected, gave the generated model a high score, indicating that the model has good structural resolution.
Therefore, the design of the conjugated peptide in this work will allow us to proceed with its chemical synthesis and subsequently be able to use it in the mouse meningitis protection model caused by N. fowleri.
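The construct layout described above (modified adjuvant, 13-glycine linker, peptide) can be sketched as simple sequence assembly; the placeholder sequences below are hypothetical stand-ins, not the real CTA1 or Smp145 sequences.

```python
# Join a (hypothetical) modified CTA1 sequence and the Smp145 peptide
# with the 13-glycine linker described in the abstract.
GLYCINE_LINKER = "G" * 13  # 13-glycine flexible linker

def build_construct(cta1_seq, smp145_seq):
    """Return the fusion construct: adjuvant + linker + peptide."""
    return cta1_seq + GLYCINE_LINKER + smp145_seq
```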

Keywords: immunology, vaccines, pathogens, infectious disease

Procedia PDF Downloads 92
295 Factors Impacting Geostatistical Modeling Accuracy and Modeling Strategy of Fluvial Facies Models

Authors: Benbiao Song, Yan Gao, Zhuo Liu

Abstract:

Geostatistical modeling is the key technique for reservoir characterization, and the quality of geological models greatly influences the prediction of reservoir performance, yet few studies have been done to quantify the factors impacting geostatistical reservoir modeling accuracy. In this study, 16 fluvial prototype models were established to represent different degrees of geological complexity, and six cases ranging from 16 to 361 wells were defined to reproduce all 16 prototype models by different methodologies, including SIS, object-based and MPFS algorithms, combined with different constraint parameters. A modeling accuracy ratio was defined to quantify the influence of each factor, and ten realizations were averaged to represent each accuracy ratio under the same modeling condition and parameter association. In total, 5760 simulations were run to quantify the relative contribution of each factor to the simulation accuracy, and the results can be used as a strategy guide for facies modeling under similar conditions. It is found that data density, geological trend and geological complexity have a great impact on modeling accuracy. Modeling accuracy may reach up to 90% when channel sand width reaches 1.5 times the well spacing, under any condition, for the SIS and MPFS methods. When well density is low, the contribution of a geological trend may increase the modeling accuracy from 40% to 70%, while the use of a proper variogram may make only a very limited contribution for the SIS method. This implies that when well data are dense enough to cover simple geobodies, little effort is needed to construct an acceptable model; when geobodies are complex and the data are insufficient, it is better to construct a set of robust geological trends than to rely on a reliable variogram function. For the object-based method, the modeling accuracy does not increase with data density as obviously as for the SIS method, but the models keep a rational appearance when data density is low.
MPFS methods show a trend similar to the SIS method, but the SIS method with a proper geological trend and a rational variogram may achieve better modeling accuracy than the MPFS method. This implies that the geological modeling strategy for a real reservoir case needs to be optimized by evaluating the dataset, the geological complexity, the geological constraint information and the modeling objective.
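A plausible sketch of the accuracy-ratio bookkeeping described above: each realization is compared cell-by-cell against the prototype facies grid, and the match fractions are averaged over the realizations (the study’s exact metric may differ; the grids here are synthetic).

```python
# Average cell-by-cell match fraction between a prototype facies grid
# and a set of simulated realizations; a hypothetical reconstruction of
# the "modeling accuracy ratio", not the paper's exact definition.
import numpy as np

def accuracy_ratio(prototype, realizations):
    proto = np.asarray(prototype)
    matches = [np.mean(np.asarray(r) == proto) for r in realizations]
    return float(np.mean(matches))  # averaged over all realizations
```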

Keywords: fluvial facies, geostatistics, geological trend, modeling strategy, modeling accuracy, variogram

Procedia PDF Downloads 264