Search results for: adaptive random testing
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5988

4788 Lung ICAMs and VCAM-1 in Innate and Adaptive Immunity to Influenza Infections: Implications for Vaccination Strategies

Authors: S. Kozlovski, S.W. Feigelson, R. Alon

Abstract:

The β2 integrin ligands ICAM-1 and ICAM-2 and the endothelial VLA-4 integrin ligand VCAM-1 are constitutively expressed on different lung vessels and on high endothelial venules (HEVs), the main portal for lymphocyte entry from the blood into lung-draining lymph nodes. ICAMs are also ubiquitously expressed by many antigen-presenting leukocytes and have traditionally been suggested as critical for the various antigen-specific immune synapses generated between these distinct leukocytes and specific naïve and effector T cells. Loss of both ICAM-1 and ICAM-2 on the lung vasculature reduces the ability of patrolling monocytes and Tregs to survey the lung vasculature at steady state. Our new findings suggest, however, that in terms of innate leukocyte trafficking into the lung lamina propria, both constitutively expressed and virus-induced vascular VCAM-1 can functionally compensate for the loss of these ICAMs. In a mouse model of influenza infection, neutrophil and NK cell recruitment and clearance of influenza remained normal in mice deficient in both ICAMs. Strikingly, mice deficient in both ICAMs also mounted normal influenza-specific CD8 proliferation and differentiation. In addition, these mice normally combated secondary influenza infection, indicating that the presence of ICAMs on conventional dendritic cells (cDCs) that present viral antigens is not required for immune synapse formation between these APCs and naïve CD8 T cells, as previously suggested. Furthermore, long-lasting humoral responses critical for protection from a secondary homosubtypic influenza infection were also normal in mice deficient in both ICAM-1 and ICAM-2. Collectively, our results suggest that the expression of ICAM-1 and ICAM-2 on lung endothelial and epithelial cells, as well as on DCs and B cells, is not critical for the generation of innate or adaptive antiviral immunity in the lungs.
Our findings also suggest that endothelial VCAM-1 can substitute for the functions of vascular ICAMs in leukocyte trafficking into various lung compartments.

Keywords: emigration, ICAM-1, lymph nodes, VCAM-1

Procedia PDF Downloads 128
4787 The Development of Online-Class Scheduling Management System Conducted by the Case Study of Department of Social Science: Faculty of Humanities and Social Sciences Suan Sunandha Rajabhat University

Authors: Wipada Chaiwchan, Patcharee Klinhom

Abstract:

This research aimed to develop an online class-scheduling management system. Class scheduling is a complex problem whose solution must take various conditions and factors into consideration: in addition to the number of courses, the number of students, and the study timetable, the physical characteristics of each classroom and the regulations governing class scheduling must also be considered. The system was developed to assist management in class scheduling for convenience and efficiency, and it allows several instructors to schedule simultaneously. Both lecturers and students can check and publish a timetable and other documents associated with the system online immediately. The system was developed as a web-based application, with PHP as the development tool and MySQL as the database management system. A questionnaire was used to test the efficiency of the system, and the system itself was evaluated using black-box testing. The sample was composed of two groups: 5 experts and 100 general users. The average and standard deviation of the results from the experts were 3.50 and 0.67; for the general users, they were 3.54 and 0.54. In summary, the results indicated that user satisfaction was at a good level. Therefore, this system could be implemented in an actual workplace and satisfy users' requirements effectively.
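The reported averages and standard deviations can be reproduced with a short script; a minimal sketch using Python's standard library (the ratings below are hypothetical placeholders, not the study's raw questionnaire data):

```python
import statistics

def summarize_ratings(ratings):
    """Mean and sample standard deviation of Likert-scale ratings."""
    return statistics.mean(ratings), statistics.stdev(ratings)

# Hypothetical ratings from five expert evaluators on a 5-point Likert scale.
expert_ratings = [4, 3, 3, 4, 4]
mean, sd = summarize_ratings(expert_ratings)
print(f"mean={mean:.2f}, sd={sd:.2f}")
```

In the study's setting, the same function would be applied separately to the ratings of the 5 experts and the 100 general users.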

Keywords: timetable, schedule, management system, online

Procedia PDF Downloads 237
4786 Intrusion Detection in Cloud Computing Using Machine Learning

Authors: Faiza Babur Khan, Sohail Asghar

Abstract:

With the emergence of distributed environments, cloud computing is proving to be the most stimulating paradigm shift in computer technology, resulting in spectacular expansion in the IT industry. Many companies have augmented their technical infrastructure by adopting a cloud resource-sharing architecture. Cloud computing has opened doors to unlimited opportunities, from application and platform availability to expandable storage and on-demand computing environments. From a security viewpoint, however, clouds introduce an added level of risk, weakening protection mechanisms and complicating the assurance of privacy, data security, and on-demand service. Issues of trust, confidentiality, and integrity are elevated due to the multitenant resource-sharing architecture of the cloud. Trust, or reliability, of the cloud refers to its capability to provide the needed services precisely and unfailingly. Confidentiality is the ability of the architecture to ensure that only authorized parties access its private data; integrity protects the data from being fabricated by an unauthorized user. In order to assure the provision of a secured cloud, a roadmap or model is therefore necessary to analyze the security problem, design mitigation strategies, and evaluate solutions. The aim of this paper is twofold: first, to highlight the factors that make cloud security critical, along with alleviation strategies, and second, to propose an intrusion detection model that identifies attackers in a preventive way using a machine learning Random Forest classifier, achieving an accuracy of 99.8%. This model uses a smaller number of features, and a comparison with other classifiers is also presented.
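The abstract names a Random Forest classifier as the detection model. A minimal, self-contained sketch of the bagging-and-majority-vote idea behind such a classifier, using one-feature decision stumps and toy flow features (`packets_per_sec` and `failed_logins` are invented for illustration, not the paper's feature set):

```python
import random

def train_forest(data, labels, n_trees=25, seed=0):
    """Bagging with one-feature decision stumps: each stump thresholds a randomly
    chosen feature at the midpoint between the class means of its bootstrap sample."""
    rng = random.Random(seed)
    forest, n = [], len(data)
    while len(forest) < n_trees:
        idx = [rng.randrange(n) for _ in range(n)]   # bootstrap sample (with replacement)
        f = rng.randrange(len(data[0]))              # random feature for this stump
        attack = [data[i][f] for i in idx if labels[i] == 1]
        benign = [data[i][f] for i in idx if labels[i] == 0]
        if not attack or not benign:                 # degenerate sample: redraw
            continue
        threshold = (sum(attack) / len(attack) + sum(benign) / len(benign)) / 2
        forest.append((f, threshold))
    return forest

def predict(forest, x):
    """Majority vote over the stumps; 1 flags a suspected intrusion."""
    votes = sum(1 for f, t in forest if x[f] > t)
    return 1 if 2 * votes > len(forest) else 0

# Toy flow records: [packets_per_sec, failed_logins] (invented for illustration).
data = [[5, 0], [7, 1], [6, 0], [80, 9], [95, 8], [70, 7]]
labels = [0, 0, 0, 1, 1, 1]                          # 0 = benign, 1 = attack
forest = train_forest(data, labels)
```

A production Random Forest additionally grows full depth-limited trees and searches thresholds by impurity, but the bootstrap-plus-vote structure is the same.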

Keywords: cloud security, threats, machine learning, random forest, classification

Procedia PDF Downloads 320
4785 Heat Stress Adaptive Urban Design Intervention for Planned Residential Areas of Khulna City: Case Study of Sonadanga

Authors: Tanjil Sowgat, Shamim Kobir

Abstract:

The world is now experiencing the consequences of climate change, such as increased heat stress due to rising temperatures. In this context, this study intends to identify the planning interventions necessary to adapt to current heat stress in the planned residential areas of Khulna city. The Sonadanga residential area (phase I) of Khulna city was taken as the study site. This residential neighbourhood, covering an area of 30 acres, has 206 residential plots. The study area comprises twelve access roads, one park, one playfield, one water body, and two items of street furniture. The study combines a visual analysis covering greenery, open space, water bodies, footpaths, drainage, and street trees and furniture with a questionnaire survey addressing socio-economic conditions, housing tenancy, experience of heat stress, and urban design interventions. It identifies several current conditions that accelerate heat stress: a lack of street trees and inadequate shading, maximum uses not within ten minutes' walking distance, no footpath for pedestrians, and a lack of well-maintained street furniture. It proposes that, to adapt to heat stress, pedestrian facilities, buffer sidewalks with landscaping, street trees and open spaces, softscape, natural and man-made water bodies, and green roofing could be effective urban design interventions. Evidence of heat-stress-adaptive planned residential areas remains limited. Since current subdivision planning practice focuses on rigid land use allocation, it only partly addresses climatic concerns through the creation of open space and street trees. To better adapt to heat stress, urban design considerations in the context of subdivision practice would bring greater benefits.

Keywords: climate change, urban design, adaptation, heat stress, water-logging

Procedia PDF Downloads 296
4784 The Formation of the Diminutive in Colloquial Jordanian Arabic

Authors: Yousef Barahmeh

Abstract:

This paper is a linguistic and pragmatic analysis of the use of the diminutive in Colloquial Jordanian Arabic (CJA). It demonstrates a peculiar form of the diminutive in CJA inflected by means of the feminine plural suffix -aat. The analysis shows that the pragmatic function(s) of the diminutive in CJA refer primarily to 'littleness', while the morphological inflection conveys the message of plethora. Examples of this linguistic phenomenon are intelligible and often include a large number of words that are culture-specific to the rural dialect of northern Jordan. In both cases, the diminutive in CJA is an adaptive strategy relative to its pragmatic and social contexts.

Keywords: Colloquial Jordanian Arabic, diminutive, morphology, pragmatics

Procedia PDF Downloads 266
4783 Customer Churn Prediction by Using Four Machine Learning Algorithms Integrating Features Selection and Normalization in the Telecom Sector

Authors: Alanoud Moraya Aldalan, Abdulaziz Almaleh

Abstract:

A crucial component of maintaining a customer-oriented business, as in the telecom industry, is understanding the reasons and factors that lead to customer churn. Competition between telecom companies has greatly increased in recent years, and it has become more important to understand the needs of customers, especially those looking to switch service providers. Churn prediction is therefore a mandatory requirement for retaining such customers, and machine learning can be utilized to accomplish it. Churn prediction has become a very important topic in machine learning classification for the telecommunications industry, and understanding the factors of customer churn and how customers behave is essential to building an effective churn prediction model. This paper aims to predict churn and identify factors of customers' churn based on their past service usage history. To this end, the study makes use of feature selection, normalization, and feature engineering, and then compares the performance of four machine learning algorithms on the Orange dataset: Logistic Regression, Random Forest, Decision Tree, and Gradient Boosting. Performance was evaluated using the F1 score and ROC-AUC. Comparing the results of this study with existing models shows improved results: Gradient Boosting with the feature selection technique outperformed the others, achieving a 99% F1-score and 99% AUC, and all other experiments achieved good results as well.
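The two evaluation metrics used above, F1 score and ROC-AUC, can be computed from first principles; a minimal sketch (the labels and scores below are illustrative, not the Orange dataset):

```python
def f1_score(y_true, y_pred):
    """F1 = harmonic mean of precision and recall for the positive (churn) class."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

def roc_auc(y_true, scores):
    """AUC = probability that a random positive is scored above a random
    negative (Mann-Whitney formulation; ties count half)."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Illustrative churn labels, hard predictions, and predicted probabilities.
y_true = [1, 0, 1, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1]
scores = [0.9, 0.2, 0.8, 0.4, 0.3, 0.6]
```

F1 judges the hard predictions, while AUC judges the ranking induced by the scores, which is why both are commonly reported together for imbalanced problems such as churn.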

Keywords: machine learning, gradient boosting, logistic regression, churn, random forest, decision tree, ROC, AUC, F1-score

Procedia PDF Downloads 134
4782 Environmental Modeling of Storm Water Channels

Authors: L. Grinis

Abstract:

Turbulent flow in complex geometries receives considerable attention due to its importance in many engineering applications and has been the subject of interest for many researchers. One such application is the design of storm water channels, which requires testing through physical models. The main practical limitation of physical models is the so-called "scale effect": in many cases only primary physical mechanisms can be correctly represented, while secondary mechanisms are often distorted. These observations form the basis of our study, which centered on problems associated with the design of storm water channels near the Dead Sea in Israel. To help reach a final design decision, we used different physical models. Our research showed good agreement between the results of laboratory tests and theoretical calculations, and allowed us to study different effects of fluid flow in an open channel. We determined that problems of this nature cannot be solved by theoretical calculation and computer simulation alone. This study demonstrates the use of physical models to help resolve very complicated problems of fluid flow through baffles and similar structures. It applies these models and observations to different constructions and multiphase water flows, among them flows that include sand and stone particles, in a significant attempt to bring laboratory testing into closer association with reality.

Keywords: open channel, physical modeling, baffles, turbulent flow

Procedia PDF Downloads 284
4781 Modeling Optimal Lipophilicity and Drug Performance in Ligand-Receptor Interactions: A Machine Learning Approach to Drug Discovery

Authors: Jay Ananth

Abstract:

The drug discovery process currently requires many years of clinical testing and substantial financial investment for a single drug to earn FDA approval, and even drugs that make it this far in the process have a very slim chance of approval, creating detrimental hurdles to drug accessibility. To minimize these inefficiencies, numerous studies have implemented computational methods, although few computational investigations have focused on a crucial feature of drugs: lipophilicity. Lipophilicity is a physical attribute of a compound that measures its solubility in lipids and is a determinant of drug efficacy. This project leverages artificial intelligence to predict the impact of a drug's lipophilicity on its performance by accounting for factors such as binding affinity and toxicity. The model predicted lipophilicity and binding affinity in the validation set with high R² scores of 0.921 and 0.788, respectively, while also being applicable to a variety of target receptors. The results expressed a strong positive correlation between lipophilicity and both binding affinity and toxicity. The model helps in both drug development and discovery, providing pharmaceutical companies with recommended lipophilicity levels for drug candidates as well as a rapid assessment of early-stage drugs prior to any testing, eliminating significant amounts of the time and resources currently restricting drug accessibility.
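The R² scores reported above measure the fraction of variance in the target explained by the model's predictions; a minimal sketch of the computation (the logP values below are invented for illustration, not the study's data):

```python
def r2_score(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

# Hypothetical measured vs. predicted logP (lipophilicity) values.
measured = [1.2, 2.5, 3.1, 0.8, 2.0]
predicted = [1.0, 2.6, 3.0, 1.0, 1.9]
```

An R² of 1 means perfect prediction, 0 means no better than predicting the mean, and the reported 0.921 for lipophilicity sits close to the upper end of that scale.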

Keywords: drug discovery, lipophilicity, ligand-receptor interactions, machine learning, drug development

Procedia PDF Downloads 111
4780 Model-Driven and Data-Driven Approaches for Crop Yield Prediction: Analysis and Comparison

Authors: Xiangtuo Chen, Paul-Henry Cournéde

Abstract:

Crop yield prediction is a paramount issue in agriculture. The main idea of this paper is to find an efficient way to predict corn yield based on meteorological records. The prediction models used can be classified into model-driven and data-driven approaches, according to their modeling methodologies. Model-driven approaches are based on crop mechanistic modeling: they describe crop growth in interaction with the environment as dynamical systems. However, the calibration of such a dynamical system is challenging, because it turns out to be a multidimensional non-convex optimization problem. An original contribution of this paper is to propose a statistical methodology, Multi-Scenarios Parameters Estimation (MSPE), for the parametrization of potentially complex mechanistic models from a new type of dataset (climatic data and final yield in many situations). It is tested with CORNFLO, a crop model for maize growth. The data-driven approach to yield prediction, on the other hand, is free of the complex biophysical process but has strict requirements on the dataset. A second contribution of the paper is the comparison of the model-driven method with classical data-driven methods. For this purpose, we consider two classes of regression methods: methods derived from linear regression (Ridge and Lasso regression, principal components regression, and partial least squares regression) and machine learning methods (Random Forest, k-nearest neighbors, artificial neural networks, and SVM regression). The dataset consists of 720 records of corn yield at county scale, provided by the United States Department of Agriculture (USDA), and the associated climatic data. A 5-fold cross-validation process and two accuracy metrics, root mean square error of prediction (RMSEP) and mean absolute error of prediction (MAEP), were used to evaluate prediction capacity.
The results show that among the data-driven approaches, Random Forest is the most robust and generally achieves the best prediction error (MAEP 4.27%). It also outperforms our model-driven approach (MAEP 6.11%). However, the ability to calibrate the mechanistic model from easily accessible datasets offers several side perspectives: the mechanistic model can potentially help to underline the stresses suffered by the crop or to identify biological parameters of interest for breeding purposes. For this reason, an interesting perspective is to combine the two types of approaches.
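The evaluation protocol above (5-fold cross-validation scored with RMSEP and MAEP) can be sketched end to end; here a simple k-nearest-neighbor regressor on synthetic one-dimensional data stands in for the paper's models and the USDA dataset:

```python
import math
import random

def knn_predict(train_x, train_y, x, k=3):
    """Average the targets of the k nearest training points (1-D feature)."""
    nearest = sorted(range(len(train_x)), key=lambda i: abs(train_x[i] - x))[:k]
    return sum(train_y[i] for i in nearest) / k

def cross_validate(xs, ys, folds=5):
    """k-fold CV returning (RMSEP, MAEP) over the pooled out-of-fold errors."""
    idx = list(range(len(xs)))
    random.Random(0).shuffle(idx)
    sq_errs, abs_errs = [], []
    for f in range(folds):
        test = idx[f::folds]                 # every folds-th shuffled index
        held = set(test)
        tx = [xs[i] for i in idx if i not in held]
        ty = [ys[i] for i in idx if i not in held]
        for i in test:
            err = knn_predict(tx, ty, xs[i]) - ys[i]
            sq_errs.append(err ** 2)
            abs_errs.append(abs(err))
    rmsep = math.sqrt(sum(sq_errs) / len(sq_errs))
    maep = sum(abs_errs) / len(abs_errs)
    return rmsep, maep

# Synthetic "weather covariate -> yield" relationship with mild noise.
random.seed(1)
xs = [i / 10 for i in range(50)]
ys = [2 * x + random.gauss(0, 0.1) for x in xs]
rmsep, maep = cross_validate(xs, ys)
```

RMSEP is never smaller than MAEP (squaring penalizes large errors more), which is why papers often report both.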

Keywords: crop yield prediction, crop model, sensitivity analysis, parameter estimation, particle swarm optimization, random forest

Procedia PDF Downloads 231
4779 Multi-Criteria Test Case Selection Using Ant Colony Optimization

Authors: Niranjana Devi N.

Abstract:

Test case selection means selecting the subset of fit test cases and removing the unfit, ambiguous, redundant, and unnecessary ones, which in turn improves the quality and reduces the cost of software testing. Test case optimization is the problem of finding the best subset of test cases from a pool of test cases to be audited, so as to meet all the objectives of testing concurrently. Most research, however, has evaluated the fitness of test cases on only a single parameter, fault-detecting capability, and optimized the test cases using a single objective. In the proposed approach, nine parameters are considered for test case selection, and the best subset of parameters is obtained using an interval type-2 fuzzy rough set. Test case selection is done in two stages. The first stage is a fuzzy entropy-based filtration technique, used for estimating and reducing the ambiguity in test case fitness evaluation and selection. The second stage is an ant colony optimization-based wrapper technique with a forward search strategy, employed to select test cases from the reduced test suite of the first stage. The results are evaluated using coverage parameters, precision, recall, F-measure, APSC, APDC, and SSR. The experimental evaluation demonstrates that this approach avoids considerable computational effort.
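The first-stage filtration can be illustrated with ordinary (type-1) fuzzy entropy: test cases whose fitness membership is most ambiguous (entropy near its maximum at membership 0.5) are removed before the ACO wrapper runs. This is a simplified, crisp sketch of the idea, not the paper's interval type-2 fuzzy rough set formulation:

```python
import math

def fuzzy_entropy(m):
    """Fuzzy entropy of a membership value in [0, 1] (De Luca-Termini form):
    0 for crisp memberships (0 or 1), maximal (1.0) at the most ambiguous 0.5."""
    if m in (0.0, 1.0):
        return 0.0
    return -(m * math.log2(m) + (1 - m) * math.log2(1 - m))

def filter_ambiguous(test_cases, entropy_threshold=0.9):
    """Keep test cases whose fitness membership is unambiguous enough."""
    return [name for name, m in test_cases if fuzzy_entropy(m) < entropy_threshold]

# Hypothetical suite: (test case, fitness membership from the fuzzy evaluation).
suite = [("tc1", 0.95), ("tc2", 0.50), ("tc3", 0.10), ("tc4", 0.55)]
kept = filter_ambiguous(suite)
```

The clearly-fit tc1 and clearly-unfit tc3 survive, while the ambiguous tc2 and tc4 are pruned so the second-stage ACO search works on a smaller, cleaner suite.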

Keywords: ant colony optimization, fuzzy entropy, interval type-2 fuzzy rough set, test case selection

Procedia PDF Downloads 668
4778 Supervised Machine Learning Approach for Studying the Effect of Different Joint Sets on Stability of Mine Pit Slopes Under the Presence of Different External Factors

Authors: Sudhir Kumar Singh, Debashish Chakravarty

Abstract:

Slope stability analysis is an important aspect of geotechnical engineering. It is also important from a safety and economic point of view, as any slope failure leads to loss of valuable lives and damage to property worth millions. This paper aims at mitigating the risk of slope failure by studying the effect of different joint sets on the stability of mine pit slopes under the influence of various external factors, namely degree of saturation, rainfall intensity, and seismic coefficients. A supervised machine learning approach has been utilized for making accurate and reliable predictions regarding slope stability based on the value of the Factor of Safety. Numerous cases were studied by analyzing slope stability using the popular finite element method, and the data thus obtained were used as training data for the supervised machine learning models. The input data were trained on different supervised machine learning models, namely Random Forest, Decision Tree, Support Vector Machine, and XGBoost. Distinct test data not present in the training data were used for measuring the performance and accuracy of the different models. Although all models performed well on the test dataset, Random Forest stands out from the others due to its high accuracy of greater than 95%, providing a valuable tool that is neither computationally expensive nor time-consuming and is in good accordance with the numerical analysis results.
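The workflow above can be sketched in miniature: each FEM run yields a Factor of Safety, cases are labeled stable or unstable against the conventional threshold of 1.0, and a classifier is trained on the external-factor features. The feature tuples and FoS values below are invented, and a 1-nearest-neighbor classifier stands in for the Random Forest:

```python
def label_stability(fos, threshold=1.0):
    """Label a slope from its FEM-computed Factor of Safety."""
    return "stable" if fos >= threshold else "unstable"

def nearest_neighbor(train, query):
    """1-NN over (features, label) pairs using squared Euclidean distance."""
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(train, key=lambda item: sqdist(item[0], query))[1]

# Hypothetical FEM results: (saturation %, rainfall intensity, seismic coeff.) -> FoS.
cases = [((10, 0.2, 0.05), 1.6), ((80, 0.9, 0.15), 0.8),
         ((30, 0.4, 0.05), 1.3), ((90, 0.8, 0.20), 0.7)]
train = [(feat, label_stability(fos)) for feat, fos in cases]
```

Once trained, such a surrogate answers new what-if queries (e.g., higher saturation during heavy rainfall) without re-running the expensive FEM analysis.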

Keywords: finite element method, geotechnical engineering, machine learning, slope stability

Procedia PDF Downloads 101
4777 Robustness of Steel Beam to Column Moment Resisting Joints

Authors: G. Culache, M. P. Byfield, N. S. Ferguson, A. Tyas

Abstract:

Steel joints in building structures represent a weak link in the case of accidental transient loading. This type of loading can occur due to blast effects or impact from moving vehicles and will result in large deformations in the material as well as large rotations. This paper addresses the lack of experimental investigation into the response of moment-resisting connections subjected to such loading. The current design philosophy was used to create test specimens with flush and extended end plates. The specimens were tested in a specially designed rig capable of delivering sustained loading even beyond the point of failure. The types of failure the authors attempted to obtain were bolt fracture, flange crushing, and end plate fracture. Experimental data are presented, described, and analyzed. The tests show that strength and ductility can be significantly improved by replacing ordinary mild-steel bolts with their stainless steel equivalents. This minor modification is demonstrated to significantly improve robustness under loading that results in high deformations and rotations, where loading is maintained during failure. Conclusions are drawn about the wider implications of this research, and recommendations are made on the direction of future research in this field.

Keywords: steel moment connections, high strain rates, dynamic loading, experimental testing

Procedia PDF Downloads 323
4776 Using Econometric Methods to Explore Obesity Stigma and Avoidance of Breast and Cervical Cancer Screening

Authors: Stephanie A. Schauder, Gosia Sylwestrzak

Abstract:

Overweight and obese women report avoiding preventive care due to fear of weight-related bias from medical professionals. Gynecological exams, due to their sensitive and personally invasive nature, are especially susceptible to avoidance. This research investigates the association between body mass index (BMI) and screening rates for breast and cervical cancer using claims data from 1.3 million members of a large health insurance company. Because obesity is associated with increased cancer risk, screenings for these cancers should increase as BMI increases. However, this paper finds that the distribution of cancer screening rates by BMI takes an inverted U-shape, with underweight and obese members having the lowest screening rates. For cervical cancer screening, members of the target population with a BMI of 23 have the highest screening rate at 68%, while Obese Class III members have a screening rate of 50%; those in the underweight category have a screening rate of 58%. This relationship persists even after controlling for health and demographic covariates in regression analysis. Interestingly, there is no association between BMI and BRCA (BReast CAncer gene) genetic testing. This is consistent with the narrative that stigma causes avoidance, because genetic testing does not involve any assessment of a person's body. More work must be done to determine how to increase cancer screening rates among those who may feel stigmatized due to their weight.

Keywords: cancer screening, cervical cancer, breast cancer, weight stigma, avoidance of care

Procedia PDF Downloads 202
4775 Modular Harmonic Cancellation in a Multiplier High Voltage Direct Current Generator

Authors: Ahmad Zahran, Ahmed Herzallah, Ahmad Ahmad, Mahran Quraan

Abstract:

Generation of high DC voltages is necessary for testing the insulation material of long high-voltage AC transmission lines. The harmonic and ripple content of the output DC voltage supplied by high-voltage DC circuits requires the use of costly capacitors to smooth the output voltage after rectification. This paper proposes a new modular multiplier high voltage DC generator with embedded Cockcroft-Walton circuits that achieves negligible harmonic and ripple content in the output DC voltage without the need for costly filters, producing a nearly constant output voltage. In this new topology, Cockcroft-Walton modules are connected in series to produce a high DC output voltage. The modules are supplied by low-voltage AC input sources that have the same magnitude and frequency and are shifted from each other by a certain angle to eliminate the harmonics from the output voltage. The small ripple factor is provided by the smoothing column capacitors and the phase-shifted input voltages of the cascaded modules. The constituent harmonics within each module are determined using Fourier analysis. The viability of the proposed DC generator for testing purposes and the effectiveness of the cascaded connection are confirmed by numerical simulations using MATLAB/Simulink.
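The harmonic-elimination mechanism can be illustrated numerically: when N identical module ripple waveforms are shifted by 2π/N from each other, every harmonic whose order is not a multiple of N sums to zero. This is a simplified sketch of the cancellation principle only, not the paper's circuit-level Fourier analysis:

```python
import math

def summed_harmonic_peak(order, n_modules, samples=360):
    """Peak magnitude of harmonic `order` after summing the outputs of
    n_modules identical modules phase-shifted by 2*pi/n_modules each.
    Harmonics whose order is not a multiple of n_modules cancel exactly;
    multiples of n_modules add coherently to a peak of n_modules."""
    peak = 0.0
    for s in range(samples):
        t = 2 * math.pi * s / samples
        total = sum(math.cos(order * (t - 2 * math.pi * k / n_modules))
                    for k in range(n_modules))
        peak = max(peak, abs(total))
    return peak
```

With six modules shifted by 60°, for example, the 1st through 5th ripple harmonics vanish and the first surviving harmonic is the 6th, which is why far less smoothing capacitance is needed at the output.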

Keywords: Cockcroft-Walton circuit, harmonics, ripple factor, HVDC generator

Procedia PDF Downloads 367
4774 Prediction of Live Birth in a Matched Cohort of Elective Single Embryo Transfers

Authors: Mohsen Bahrami, Banafsheh Nikmehr, Yueqiang Song, Anuradha Koduru, Ayse K. Vuruskan, Hongkun Lu, Tamer M. Yalcinkaya

Abstract:

In recent years, we have witnessed an explosion of studies aimed at using a combination of artificial intelligence (AI) and time-lapse imaging data on embryos to improve IVF outcomes. However, despite promising results, no study has used a matched cohort of transferred embryos that differ only in pregnancy outcome, i.e., embryos from a single clinic that are similar in parameters such as morphokinetic condition, patient age, and overall clinic and lab performance. Here, we used time-lapse data on embryos with known pregnancy outcomes to see if the rich spatiotemporal information embedded in these data would allow prediction of the pregnancy outcome regardless of such critical parameters. Methodology: We performed a retrospective analysis of time-lapse data from our IVF clinic, which uses the EmbryoScope exclusively for embryo culture to the blastocyst stage, with known clinical outcomes of live birth vs. nonpregnant (embryos with spontaneous abortion outcomes were excluded). We used time-lapse data from 200 elective single-transfer embryos randomly selected from January 2019 to June 2021. Our sample included 100 embryos in each group, with no significant difference in patient age (P=0.9550) or morphokinetic scores (P=0.4032). Data from all patients were combined into a 4th-order tensor, and feature extraction was subsequently carried out by a tensor decomposition methodology. The features were then used in a machine learning classifier to classify the two groups. Major findings: The performance of the model was evaluated using 100 random subsampling cross-validations (train 80%, test 20%). The prediction accuracy, averaged across the 100 permutations, exceeded 80%. We also performed a random grouping analysis, in which labels (live birth, nonpregnant) were randomly assigned to embryos, which yielded 50% accuracy.
Conclusion: The high accuracy in the main analysis and the chance-level accuracy in the random grouping analysis suggest a consistent spatiotemporal pattern associated with pregnancy outcomes, regardless of patient age and embryo morphokinetic condition, and beyond already known parameters such as early cleavage or early blastulation. Despite the small sample size, this ongoing analysis is the first to show the potential of AI methods in capturing the complex morphokinetic changes embedded in embryo time-lapse data that contribute to successful pregnancy outcomes, regardless of already known parameters. Results on a larger sample size, with complementary analysis on the prediction of other key outcomes such as embryo euploidy and aneuploidy, will be presented at the meeting.
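The evaluation scheme above (100 random 80/20 subsampling splits averaged into one accuracy figure) works with any classifier; in this sketch a trivial nearest-centroid model on synthetic one-dimensional features stands in for the tensor-feature classifier:

```python
import random

def split_accuracy(features, labels, rng, test_frac=0.2):
    """One random subsampling split: train a nearest-centroid classifier on 80%
    of the data and return its accuracy on the held-out 20%."""
    idx = list(range(len(features)))
    rng.shuffle(idx)
    cut = int(len(idx) * (1 - test_frac))
    train, test = idx[:cut], idx[cut:]
    centroids = {}
    for c in set(labels):
        pts = [features[i] for i in train if labels[i] == c]
        centroids[c] = sum(pts) / len(pts)
    hits = sum(min(centroids, key=lambda c: abs(features[i] - centroids[c])) == labels[i]
               for i in test)
    return hits / len(test)

# Synthetic 1-D "feature": nonpregnant (0) centered at 0.0, live birth (1) at 2.0.
labels = [0, 1] * 50
features = [random.Random(i).gauss(2 * c, 0.3) for i, c in enumerate(labels)]

rng = random.Random(0)
mean_acc = sum(split_accuracy(features, labels, rng) for _ in range(100)) / 100
```

Rerunning the same loop after shuffling the labels reproduces the paper's random-grouping control, which should hover near 50% accuracy.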

Keywords: IVF, embryo, machine learning, time-lapse imaging data

Procedia PDF Downloads 92
4773 Methods of Variance Estimation in Two-Phase Sampling

Authors: Raghunath Arnab

Abstract:

Two-phase sampling, also known as double sampling, was introduced in 1938. In two-phase sampling, samples are selected in phases. In the first phase, a relatively large sample is selected by some suitable sampling design, and only information on the auxiliary variable is collected. During the second phase, a sample is selected either from the first-phase sample or from the entire population by using a suitable sampling design, and information regarding the study and auxiliary variables is collected. Evidently, two-phase sampling is useful if the auxiliary information is relatively easy and cheap to collect compared with the study variable, and if the strength of the relationship between the study and auxiliary variables is high. If the sample is selected in more than two phases, the resulting sampling design is called multi-phase sampling. In this article we consider how one can use data collected in the first phase at the stages of parameter estimation, stratification, sample selection, and their combinations in the second phase, in a unified setup applicable to any sampling design and wider classes of estimators. The problem of variance estimation is also considered. The variance of an estimator is essential for estimating the precision of survey estimates, calculating confidence intervals, determining optimal sample sizes, and testing hypotheses, among other uses. Although the variance is a non-negative quantity, its estimators may not be non-negative. If an estimator of variance is negative, it cannot be used for estimation of confidence intervals, testing of hypotheses, or as a measure of sampling error. The non-negativity properties of the variance estimators are also studied in detail.
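A textbook instance of the design is the double-sampling ratio estimator, where the cheap first-phase auxiliary mean adjusts the second-phase study-variable mean; a minimal numerical sketch under simple random sampling (the x and y values are invented, and the formula is the classical one rather than the paper's unified setup):

```python
# First phase: a large, cheap sample records only the auxiliary variable x.
x_phase1 = [12, 15, 9, 14, 11, 13, 10, 16, 12, 14]

# Second phase: a small subsample also measures the study variable y.
pairs = [(12, 30), (14, 36), (10, 24), (16, 41)]   # (x, y)

xbar1 = sum(x_phase1) / len(x_phase1)              # first-phase auxiliary mean
xbar2 = sum(x for x, _ in pairs) / len(pairs)      # second-phase auxiliary mean
ybar2 = sum(y for _, y in pairs) / len(pairs)      # second-phase study mean

# Double-sampling ratio estimator of the population mean of y.
y_ratio = ybar2 * xbar1 / xbar2
```

Here y_ratio ≈ 31.74, pulled below the raw second-phase mean of 32.75 because the subsample happened to over-represent large x values (x̄₂ = 13 versus x̄₁ = 12.6).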

Keywords: auxiliary information, two-phase sampling, varying probability sampling, unbiased estimators

Procedia PDF Downloads 588
4772 Building Social Capital for Social Inclusion: The Use of Social Networks in Government

Authors: Suha Alawadhi, Malak Alrasheed

Abstract:

In the recent past, public participation in government has declined to a great extent, as citizens have become isolated from community life and their ability to articulate demands for good government has noticeably decreased. However, the Internet has introduced new forms of interaction that could enhance different types of relationships, including the government-public relationship. In fact, technology-enabled government has become a catalyst for enabling social inclusion. This exploratory study seeks to investigate public perceptions in Kuwait regarding the use of social media networks in government, where social capital is built to achieve social inclusion. Social capital has been defined as the social networks and connections amongst individuals, based on shared trust, ideas, and norms, that enable participants in a network to act effectively in pursuit of a shared objective. A quantitative method was used to generate empirical evidence. A questionnaire was designed to address the research objective and reflect the identified constructs: the social capital dimensions (bridging, bonding, and maintaining social capital), social inclusion, and social equality. In this pilot study, data were collected from a random sample of 61 subjects. The results indicate that all participants have a positive attitude towards the dimensions of social capital (bridging, bonding, and maintaining), social inclusion, and social equality. Tests of the identified constructs against demographic characteristics indicate significant differences between males and females in their perceptions of bonding and maintaining social capital, social inclusion, and social equality, whereas no difference was identified in their perceptions of bridging social capital. Also, those aged 26-30 perceived bonding and maintaining social capital, social inclusion, and social equality negatively compared to those aged 20-25, 31-35, and 40 and above, whose perceptions were positive.
With regard to education, the results also show that those holding a high school certificate, university degree, or diploma perceived maintaining social capital more positively than those holding graduate degrees. Moreover, a regression model is proposed to study the effect of bridging, bonding, and maintaining social capital on social inclusion, with social equality as a mediator. This exploratory study is necessary for testing the validity and reliability of the questionnaire, which will be used in the main study investigating individuals' perceptions of building social capital to achieve social inclusion.
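The proposed regression-with-mediator structure can be sketched with ordinary least squares; everything below (data, path coefficients, variable names) is a synthetic illustration of the model's shape, not the study's data or results.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 61  # pilot sample size reported in the abstract

# Synthetic illustration of the proposed mediation structure:
# social capital dimensions -> social equality (mediator) -> social inclusion
X = rng.normal(size=(n, 3))                       # bridging, bonding, maintaining
equality = X @ np.array([0.4, 0.3, 0.2]) + rng.normal(scale=0.5, size=n)
inclusion = 0.6 * equality + rng.normal(scale=0.5, size=n)

def ols(X, y):
    """Least-squares coefficients with an intercept column prepended."""
    A = np.column_stack([np.ones(len(X)), X])
    return np.linalg.lstsq(A, y, rcond=None)[0]

a = ols(X, equality)[1:]                  # paths: each dimension -> mediator
b = ols(equality[:, None], inclusion)[1]  # path: mediator -> outcome
indirect = a * b                          # indirect effect of each dimension
```

The product-of-coefficients (`a * b`) form is one common way to quantify an indirect effect through a mediator.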

Keywords: government, social capital, social inclusion, social networks

Procedia PDF Downloads 326
4771 Adaptive Process Monitoring for Time-Varying Situations Using Statistical Learning Algorithms

Authors: Seulki Lee, Seoung Bum Kim

Abstract:

Statistical process control (SPC) is a practical and effective method for quality control. The most important and widely used technique in SPC is the control chart, whose main goal is to detect any assignable changes that affect the quality of the output. Most conventional control charts, such as Hotelling’s T2 chart, are based on the assumption that the quality characteristics follow a multivariate normal distribution. However, modern complicated manufacturing systems require control chart techniques that can efficiently handle nonnormal processes. To overcome the shortcomings of conventional control charts for nonnormal processes, several methods have been proposed that combine statistical learning algorithms with multivariate control charts. Statistical learning-based control charts, such as support vector data description (SVDD)-based charts and k-nearest neighbors-based charts, have proven to outperform the T2 chart in nonnormal situations. Besides nonnormality, time-varying operation is also quite common in real manufacturing fields because of factors such as product and set-point changes, seasonal variations, catalyst degradation, and sensor drift. Traditional control charts, however, cannot accommodate future changes in process conditions because they are formulated from data recorded in the early stage of the process. In the present paper, we propose an SVDD-based control chart capable of adaptively monitoring time-varying and nonnormal processes. We reformulate the SVDD algorithm into a time-adaptive SVDD algorithm by adding a weighting factor that reflects time-varying situations, and we define an updating region for an efficient model-updating structure. The proposed control chart simultaneously allows efficient model updates and timely detection of out-of-control signals. 
The effectiveness and applicability of the proposed chart were demonstrated through experiments with simulated data and real data from a metal frame process in mobile device manufacturing.
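A reduced sketch of the time-adaptive idea: exponential forgetting weights make recent observations dominate a kernel "centre", and the distance of a new point from that centre is charted. This is a simplification of SVDD (the boundary optimization is dropped in favour of a weighted kernel centroid), and all numbers are illustrative, not the authors' formulation.

```python
import numpy as np

def rbf(A, B, gamma=0.5):
    """RBF kernel matrix between row sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_weighted_centre(X, lam=0.98):
    """Time-adaptive weights w_i = lam**age, so recent points dominate;
    the weighted kernel centroid stands in for the SVDD centre."""
    n = len(X)
    w = lam ** np.arange(n)[::-1]
    w = w / w.sum()
    const = w @ rbf(X, X) @ w          # <centre, centre> term
    return w, const

def distance2(x, X, w, const):
    """Squared kernel distance from x to the weighted centre; charting
    this against a control limit mimics the SVDD monitoring statistic."""
    x = np.atleast_2d(x)
    return 1.0 - 2.0 * (rbf(x, X) @ w) + const   # k(x, x) = 1 for RBF

rng = np.random.default_rng(0)
X = rng.lognormal(0.0, 0.3, size=(150, 2))       # nonnormal in-control data
w, const = fit_weighted_centre(X)
d_in = distance2(X[-1], X, w, const)             # recent in-control point
d_out = distance2(np.array([8.0, 8.0]), X, w, const)  # shifted point
```

A shifted point lands far from the weighted centre (large `d_out`), while a recent in-control point stays close; the "updating region" described in the abstract would correspond to a band of distances just inside the control limit.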

Keywords: multivariate control chart, nonparametric method, support vector data description, time-varying process

Procedia PDF Downloads 299
4770 Exploring Factors That May Contribute to the Underdiagnosis of Hereditary Transthyretin Amyloidosis in African American Patients

Authors: Kelsi Hagerty, Ami Rosen, Aaliyah Heyward, Nadia Ali, Emily Brown, Erin Demo, Yue Guan, Modele Ogunniyi, Brianna McDaniels, Alanna Morris, Kunal Bhatt

Abstract:

Hereditary transthyretin amyloidosis (hATTR) is a progressive, multi-systemic, and life-threatening disease caused by disruption of transthyretin (TTR), a liver-produced transport protein for thyroxine and retinol. The disruption causes the protein to misfold into amyloid fibrils, which accumulate in the heart, nerves, and GI tract. Over 130 variants in the TTR gene are known to cause hATTR. The Val122Ile variant is the most common in the United States and is seen almost exclusively in people of African descent. TTR variants are inherited in an autosomal dominant fashion with incomplete penetrance and variable expressivity. Individuals with hATTR may exhibit symptoms from as early as 30 to as late as 80 years of age. hATTR is characterized by a wide range of clinical symptoms, such as cardiomyopathy, neuropathy, carpal tunnel syndrome, and GI complications; without treatment, the disease progresses and can ultimately lead to heart failure. hATTR disproportionately affects individuals of African descent, with an estimated prevalence among Black individuals in the US of 3.4%. Unfortunately, hATTR is often underdiagnosed and misdiagnosed because many of its symptoms overlap with those of other cardiac conditions. Given the progressive nature of the disease, its multi-systemic manifestations that can shorten lifespan, and the availability of free genetic testing and promising FDA-approved therapies, early identification of individuals with a pathogenic hATTR variant is important, as it can significantly impact medical management for patients and their relatives. Furthermore, recent literature suggests that TTR genetic testing should be performed in all patients with suspected TTR-related cardiomyopathy, regardless of age, with follow-up genetic counseling recommended. 
Relatives of patients with hATTR benefit from genetic testing because it can identify carriers early and allow relatives to receive regular screening and management. Despite the striking prevalence of hATTR among Black individuals, the condition remains underdiagnosed in this patient population, and germline genetic testing for hATTR appears underutilized among Black individuals, though the reasons for this have not yet been brought to light. Historically, Black patients have experienced a number of barriers to seeking healthcare that have been hypothesized to perpetuate the underdiagnosis of hATTR, such as lack of access and mistrust of healthcare professionals. Prior research has described a myriad of factors that shape an individual’s decision about whether to pursue presymptomatic genetic testing for a familial pathogenic variant, such as family closeness and communication, family dynamics, and a desire to inform other family members about potential health risks. This study explores these factors through 10 in-depth interviews with patients with hATTR about what may be contributing to the underdiagnosis of hATTR in the Black population. Participants were selected from the Emory University Amyloidosis clinic based on having a molecular diagnosis of hATTR. Interviews were recorded and transcribed verbatim, then coded using MAXQDA software, and thematic analysis was completed to draw out commonalities among participants. Upon preliminary analysis, several themes emerged. Barriers identified include i) misdiagnosis and a prolonged diagnostic odyssey, ii) family communication and dynamics surrounding health issues, iii) perceptions of healthcare and one’s own health risks, and iv) the need for closer provider-patient relationships and communication. Overall, this study gleaned valuable insight from members of the Black community about possible factors contributing to the underdiagnosis of hATTR, as well as potential solutions for resolving this issue.

Keywords: cardiac amyloidosis, heart failure, TTR, genetic testing

Procedia PDF Downloads 97
4769 A Prospective Study of a Clinically Significant Anatomical Change in Head and Neck Intensity-Modulated Radiation Therapy Using Transit Electronic Portal Imaging Device Images

Authors: Wilai Masanga, Chirapha Tannanonta, Sangutid Thongsawad, Sasikarn Chamchod, Todsaporn Fuangrod

Abstract:

The major factors in radiotherapy for head and neck (HN) cancers include the patient’s anatomical changes and tumour shrinkage. These changes can significantly affect the planned dose distribution and cause the treatment plan to deteriorate. Comparing measured transit EPID images to predicted EPID images using gamma analysis has been clinically implemented to verify dose accuracy as part of an adaptive radiotherapy protocol. However, a global gamma analysis is not sensitive to some critical organ changes because the entire treatment field is compared. The objective of this feasibility study is to evaluate the dosimetric response to patient anatomical changes during the treatment course in HN intensity-modulated radiation therapy (IMRT) using a novel comparison method, organ-of-interest gamma analysis, which is more sensitive to changes in specific organs. Five randomly selected replanned HN IMRT patients, whose tumour shrinkage and weight loss critically affected parotid size, were evaluated with transit dosimetry. A comprehensive physics-based model was used to generate a series of predicted transit EPID images for each gantry angle from the original computed tomography (CT) and replan CT datasets. The patient structures, including the left and right parotids, spinal cord, and planning target volume (PTV56), were projected to the EPID level. The agreement between the transit images generated from the original CT and the replan CT was quantified using gamma analysis with 3%/3 mm criteria, and the gamma pass-rate was calculated within each projected structure. The gamma pass-rates in the right parotid and PTV56 between the predicted transit images of the original CT and the replan CT were 42.8% (±17.2%) and 54.7% (±21.5%), respectively, while the gamma pass-rate for the other projected organs was greater than 80%. 
Additionally, the results of the organ-of-interest gamma analysis were compared with 3-dimensional cone-beam computed tomography (3D-CBCT) and with the radiation oncologists’ rationale for replanning. Registration of 3D-CBCT to the original CT alone does not reveal the dosimetric impact of anatomical changes, whereas transit EPID images with organ-of-interest gamma analysis can provide additional information for assessing treatment plan suitability.
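The organ-of-interest idea can be sketched as an ordinary 3%/3 mm gamma computation restricted to a projected-structure mask. This brute-force 2D version is illustrative only, not the physics-based model used in the study.

```python
import numpy as np

def gamma_pass_rate(ref, meas, mask, dd=0.03, dta=3.0, pixel_mm=1.0):
    """Global gamma (dose tolerance = dd * max(ref)) evaluated only at
    pixels inside `mask`, brute-forcing a search radius of `dta` mm."""
    dose_tol = dd * ref.max()
    r = int(np.ceil(dta / pixel_mm))
    ny, nx = ref.shape
    passed, total = 0, 0
    for y, x in zip(*np.nonzero(mask)):
        best = np.inf
        for j in range(max(0, y - r), min(ny, y + r + 1)):
            for i in range(max(0, x - r), min(nx, x + r + 1)):
                dist = np.hypot((j - y) * pixel_mm, (i - x) * pixel_mm)
                if dist > dta:
                    continue
                g2 = ((meas[j, i] - ref[y, x]) / dose_tol) ** 2 + (dist / dta) ** 2
                best = min(best, g2)
        total += 1
        passed += best <= 1.0
    return passed / total

# Sanity check: identical images inside the ROI should pass everywhere
ref = np.random.default_rng(1).random((20, 20)) + 1.0
mask = np.zeros_like(ref, dtype=bool)
mask[5:15, 5:15] = True      # hypothetical projected organ (e.g. a parotid)
rate = gamma_pass_rate(ref, ref.copy(), mask)
```

Reporting `rate` per structure, rather than over the whole field, is what makes the comparison sensitive to a single organ's change.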

Keywords: re-plan, anatomical change, transit electronic portal imaging device, EPID, head and neck

Procedia PDF Downloads 216
4768 Improving the Flow Capacity (Cv) of the Valves

Authors: Pradeep A. G, Gorantla Giridhar, Vijay Turaga, Vinod Srinivasa

Abstract:

The major problem with a flow control valve is a low flow coefficient (Cv), which reduces the overall efficiency of the flow circuit. Designers continuously work to improve valve Cv, but they need to validate their design ideas, and the traditional prototype-and-test method takes a lot of time. CFD offers quick and accurate validation along with flow visualization, which is not possible with traditional testing. We developed a method to predict the Cv value using CFD analysis by iterating on boundary conditions and solver settings and by carrying out grid convergence studies to establish a correlation between the CFD model and test data. The present study investigates three design ideas put forward for improving the flow capacity of the valves: reducing the cage thickness, changing the port position, and using a parabolic plug to guide the flow. Using the established CFD methodology, we analyzed all the design changes and evaluated their effect on the valve Cv. We further optimized the wetted surface of the valve by modifying its lower part to make the flow more streamlined. We found that changing the cage thickness and the port position has little impact on the valve Cv, while the combination of an optimized wetted surface and the parabolic plug improved the flow capacity (Cv) of the valve significantly.
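Once CFD yields a flow rate at a fixed pressure drop, Cv follows from the standard liquid valve-sizing relation; the sketch below uses the metric Kv form with an assumed, hypothetical operating point, not data from the study.

```python
import math

def cv_from_cfd(q_m3_per_h, dp_bar, specific_gravity=1.0):
    """Flow coefficient from a CFD-predicted flow rate at a fixed
    pressure drop, via the standard liquid sizing relation
    Kv = Q * sqrt(SG / dP)  (Q in m^3/h, dP in bar), then Cv = 1.156 * Kv."""
    kv = q_m3_per_h * math.sqrt(specific_gravity / dp_bar)
    return 1.156 * kv

# Hypothetical CFD result: 40 m^3/h of water at a 1 bar pressure drop
cv = cv_from_cfd(40.0, 1.0)
```

Comparing `cv` across geometry variants (cage thickness, port position, plug shape) at the same boundary conditions is what ranks the design ideas.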

Keywords: flow control valves, flow capacity (Cv), CFD simulations, design validation

Procedia PDF Downloads 164
4767 Creep Compliance Characteristics of Cement Dust Asphalt Concrete Mixtures

Authors: Ayman Othman, Tallat Abd el Wahed

Abstract:

The current research studies the creep compliance characteristics of asphalt concrete mixtures modified with cement dust, which can aid in assessing the permanent deformation potential of asphalt concrete mixtures. Cement dust was added to the mixture as a mineral filler and compared with regular limestone filler. A power law model was used to characterize the creep compliance behavior of the studied mixtures. Creep testing revealed that the creep compliance power law parameters have a strong relationship with mixture type. As indicated by the creep compliance parameters, the cement dust mixtures showed enhanced creep resistance, Marshall stability, indirect tensile strength, and compressive strength compared with mixtures containing traditional limestone filler. It is concluded that cement dust can be successfully used to decrease the potential of asphalt concrete mixtures for permanent deformation and to improve their mechanical properties, in addition to the environmental benefits gained when using cement dust in asphalt paving technology.
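The power law model named above, D(t) = D1·t^m, is conventionally fitted as a straight line in log-log space; the compliance values below are invented for illustration, not the study's measurements.

```python
import numpy as np

def fit_power_law(t, D):
    """Fit D(t) = D1 * t**m by linear regression on log(t) vs. log(D)."""
    m, logD1 = np.polyfit(np.log(t), np.log(D), 1)
    return np.exp(logD1), m

# Synthetic compliance data for illustration (D1 and m are made up)
t = np.array([1.0, 2.0, 5.0, 10.0, 30.0, 60.0])  # loading time, s
D = 2e-4 * t ** 0.35                              # compliance, 1/MPa
D1, m = fit_power_law(t, D)
```

A larger exponent `m` indicates faster compliance growth under sustained load, i.e. greater permanent-deformation potential.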

Keywords: cement dust, asphalt concrete mixtures, creep compliance, Marshall stability, indirect tensile strength, compressive strength

Procedia PDF Downloads 427
4766 Application of Bioreactors in Regenerative Dentistry: Literature Review

Authors: Neeraj Malhotra

Abstract:

Background: Bioreactors in tissue engineering are devices that apply mechanical means to influence biological processes. They are commonly employed for stem cell culturing, growth, and expansion, as well as in 3D tissue culture. Their use is now well established and has been tested extensively in the medical sciences for tissue regeneration and tissue engineering of organs such as bone, cartilage, blood vessels, skin grafts, and cardiac muscle. Methodology: A literature search, both electronic and by hand, was conducted using the following MeSH terms and keywords: bioreactors; bioreactors and dentistry; bioreactors and dental tissue engineering; bioreactors and regenerative dentistry. Only articles published in English were included in the review. Results: Bioreactors such as spinner flask, rotating wall, flow perfusion, and micro-bioreactors, as well as the in-vivo bioreactor, have been employed and tested for the regeneration of dental and related tissues, including gingival tissue, periodontal ligament, alveolar bone, mucosa, cementum, and blood vessels. Based on their working dynamics, they could be customized in the future for pulp tissue and whole-tooth regeneration. They have also been used successfully to test the clinical efficacy and biological safety of dental biomaterials. Conclusion: Bioreactors have potential use in testing dental biomaterials and in tissue engineering approaches aimed at regenerative dentistry.

Keywords: bioreactors, biological process, mechanical stimulation, regenerative dentistry, stem cells

Procedia PDF Downloads 209
4765 Utility of Cardiac Biomarkers in Combination with Exercise Stress Testing in Patients with Suspected Ischemic Heart Disease

Authors: Rawa Delshad, Sanaa G. Hama, Rastee D. Koyee

Abstract:

Eighty patients with suspected ischemic heart disease were enrolled in the present study and classified into two groups: patients with positive exercise stress test results (n=40) and a control group with negative results (n=40). Serum concentrations of troponin I, heart-type fatty acid binding protein (H-FABP), and ischemia modified albumin (IMA) were measured one hour after the stress test. An enzyme-linked immunosorbent assay was used to measure troponin I and H-FABP levels, while IMA levels were measured by the albumin cobalt binding test. There was no statistically significant difference in the mean troponin I concentration between the two groups (0.75±0.55 ng/ml for the positive test group vs. 0.71±0.55 ng/ml for the negative test group, P>0.05). Contrary to our expectation, the mean IMA level was slightly higher in the control group (70.88±39.76 U/ml) than in the positive test group (62.7±51.9 U/ml), but again without statistical significance (P>0.05). The median H-FABP level was also higher in the negative stress test group than in the positive one (2 ng/ml vs. 1.9 ng/ml, respectively) but failed to reach statistical significance (P>0.05). When a quartiles model was used to explore possible associations among the study biomarkers, serum H-FABP was lowest (1.7 ng/ml) in the highest quartile of IMA and lowest (1.8 ng/ml) in the highest quartile of troponin I, but with no statistically significant association (P>0.05). Myocardial ischemia, which more likely occurred after the exercise stress test, is not capable of causing troponin I release. Furthermore, increases in H-FABP and IMA levels after the stress test do not reflect myocardial ischemia. Moreover, combining post-exercise levels of troponin I, H-FABP, and IMA does not substantially improve the diagnostic utility of exercise stress testing.
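A group comparison like the troponin I one can be sketched with a Welch t-test on synthetic values drawn to match the reported means and SDs (0.75±0.55 vs. 0.71±0.55 ng/ml); these are not the study's raw data, so the resulting p-value is illustrative only.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Illustrative troponin I values (ng/ml); parameters mimic the reported
# group means and SDs, 40 subjects per group as in the study design
positive = rng.normal(0.75, 0.55, 40)   # positive stress test group
negative = rng.normal(0.71, 0.55, 40)   # negative stress test (control) group

# Welch's t-test: no equal-variance assumption between the groups
t_stat, p_value = stats.ttest_ind(positive, negative, equal_var=False)
significant = p_value < 0.05
```

With group means only 0.04 ng/ml apart against SDs of 0.55, the test would need a far larger sample to detect such a difference, which is consistent with the non-significant results reported.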

Keywords: cardiac biomarkers, ischemic heart disease, troponin I, ischemia modified albumin, heart-type fatty acid binding protein, exercise stress testing

Procedia PDF Downloads 248
4764 Regenerating Historic Buildings: Policy Gaps

Authors: Joseph Falzon, Margaret Nelson

Abstract:

Background: Policy makers at the European Union (EU) and national levels address the re-use of historic buildings, calling for sustainable practices and approaches. The implementation stages of policy are crucial if EU and national strategic objectives for historic building sustainability are to be achieved, and governance remains one of the key conditions for resource sustainability. Objective: The aim of the research was to critically examine policies for the regeneration and adaptive re-use of historic buildings at the EU and national levels, and to analyse gaps between EU and national legislation and policies, taking Malta as a case study. The impact of policies on the regeneration and re-use of historic buildings was also studied. Research Design: Six semi-structured interviews with stakeholders, including architects, investors, and community representatives, informed the research. All interviews were audio recorded and transcribed in English. Thematic analysis of the interviews was conducted using Atlas.ti, and all phases of the study were governed by research ethics. Findings: Findings were grouped into three main themes: resources, experiences, and governance. Other key issues included the identification of policy gaps, key lessons, and the quality of regeneration. The abandonment of heritage buildings was attributed mainly to governance-related issues, both from the policy-making perspective and from the attitudes of certain officials representing the authorities. The role of the authorities, co-ordination between government entities, fairness in decision making, enforcement, and management drew strong criticism from stakeholders, along with the lengthy procedures of the authorities. Perspectives on policy varied, even within the same stakeholder groups; it is the interpretation of policy, rather than the policy itself, that presents certain gaps. 
Interpretations depend highly on the stakeholders putting forward particular arguments, yet all stakeholders acknowledged the value of heritage in regeneration. Conclusion: Active stakeholder involvement is essential in developing the policy framework. Research-informed policies and the streamlining of policies are necessary, and national authorities need to shift from a segmented approach to a holistic one.

Keywords: adaptive re-use, historic buildings, policy, sustainable

Procedia PDF Downloads 393
4763 A Resilience-Based Approach for Assessing Social Vulnerability in New Zealand's Coastal Areas

Authors: Javad Jozaei, Rob G. Bell, Paula Blackett, Scott A. Stephens

Abstract:

In the last few decades, social vulnerability assessment (SVA) has been a favoured means of evaluating the susceptibility of social systems to drivers of change, including climate change and natural disasters. However, applying SVA to inform responsive and practical strategies for uncertain climate change impacts has always been challenging, and agencies typically resort to conventional risk/vulnerability assessment. The challenges include the complex nature of social vulnerability concepts, which limits their applicability; complications in identifying and measuring the determinants of social vulnerability; transitory social dynamics in a changing environment; and the unpredictability of the scenarios of change that shape the vulnerability regime (including contention over when these impacts might emerge). Research suggests that conventional quantitative approaches to SVA cannot appropriately address these problems; hence, the outcomes can be misleading and unfit for addressing the ongoing, uncertain rise in risk. The second phase of New Zealand’s Resilience to Nature’s Challenges (RNC2) is developing a forward-looking vulnerability assessment framework and methodology that informs decision-making and policy development for changing coastal systems and accounts for the complex dynamics of New Zealand’s coastal systems (socio-economic, environmental, and cultural). RNC2 also requires the new methodology to consider plausible drivers of incremental and unknowable change, to create mechanisms that enhance social and community resilience, and to fit New Zealand’s multi-layer governance system. This paper analyses conventional SVA approaches and methodologies and offers recommendations for more responsive approaches that inform adaptive decision-making and policy development in practice. 
The research adopts a qualitative design to examine different aspects of conventional SVA processes; the methods include a systematic literature review and case studies. We found that the conventional quantitative, reductionist, and deterministic mindset of SVA processes, with its focus on the impacts of rapid stressors (e.g., tsunamis and floods), is deficient in accounting for the complex dynamics of social-ecological systems (SES) and for the uncertain, long-term impacts of incremental drivers. The paper addresses the links between resilience and vulnerability and suggests how resilience theory and its underpinning notions, such as the adaptive cycle, panarchy, and system transformability, could address these issues and thereby influence the perception of the vulnerability regime and its assessment processes. In this regard, it is argued that a paradigm shift from ‘specific resilience’, which focuses on the adaptive capacity associated with ‘bouncing back’, to ‘general resilience’, which accounts for system transformability, regime shift, and ‘bouncing forward’, can deliver more effective strategies in an era characterised by ongoing change and deep uncertainty.

Keywords: complexity, social vulnerability, resilience, transformation, uncertain risks

Procedia PDF Downloads 101
4762 Aerodynamic and Aeroelastic Studies of a Hanger Bridge with H-Beam Profile Using a Wind Tunnel

Authors: Matza Gusto Andika, Malinda Sabrina, Syarie Fatunnisa

Abstract:

Aerodynamic and aeroelastic studies of the hanger bridge profile are important for analyzing aerodynamic phenomena and the aeroelastic stability of the hanger. Wind tunnel tests were conducted on a model of the H-beam profile from a hanger bridge. The purpose of this study is to investigate steady aerodynamic characteristics such as the lift coefficient (Cl), drag coefficient (Cd), and moment coefficient (Cm) at different angles of attack for a preliminary prediction of aeroelastic stability problems. After investigating the steady aerodynamic characteristics of the model, dynamic testing was also conducted in the wind tunnel to identify the aeroelastic phenomena occurring in the H-beam hanger bridge profile. The studies show that torsional vortex-induced vibration occurs at wind speeds from 7.32 m/s to 9.19 m/s, with the maximum amplitude at 8.41 m/s. The wind tunnel results match the hanger vibration observed in the field, so the wind tunnel study successfully modelled the problem. The H-beam profile is therefore not adequate for the hanger bridge and needs to be modified, either structurally (dynamics) or aerodynamically, to minimize the aeroelastic problem.
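A rough plausibility check on the reported lock-in range uses the Strouhal relation f = St·U/D; the Strouhal number and section depth below are assumed values for illustration, not quantities reported by the study.

```python
# Vortex-shedding frequency estimate, f = St * U / D
St = 0.12   # Strouhal number for a bluff H-section (assumed)
D = 0.30    # hanger section depth in metres (assumed)

# Shedding frequencies at the lock-in wind speeds reported in the test
shedding_hz = {U: St * U / D for U in (7.32, 8.41, 9.19)}
```

Vortex-induced vibration locks in when the shedding frequency crosses a natural (here torsional) frequency of the hanger, which is why the response peaks at one wind speed inside the range and dies away outside it.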

Keywords: aerodynamics, aeroelastic, hanger bridge, h-beam profile, vortex induced vibration, wind tunnel

Procedia PDF Downloads 350
4761 Asia Pacific University of Technology and Innovation

Authors: Esther O. Adebitan, Florence Oyelade

Abstract:

The Millennium Development Goals (MDGs) were initiated by the UN member nations’ aspiration for the betterment of human life, expressed as a set of numerical ‎and time-bound targets. More recently, the aspiration has shifted from merely achieving the MDGs to sustaining their achievement beyond the 2015 target. The main objective of this study was to assess how much the hotel industry within the Nigerian Federal Capital Territory (FCT), as a member of the global community, is involved in the achievement of sustainable MDGs within the FCT. The study had two population groups, consisting of 160 hotels and the communities where these are located. A stratified random sampling technique was adopted to select 60 hotels based on a large, medium, and small hotel categorisation, while a simple random sampling technique was used to elicit information from 30 residents of three of the hotels’ host communities. The study was guided by three research questions and two hypotheses aimed at ascertaining whether hotels see the need to be involved in, and have policies in pursuit of, achieving sustained MDGs, and at determining public opinion regarding hotels’ contribution towards the achievement of the MDGs in their communities. A 22-item questionnaire was designed and administered to hotel managers, and an 11-item questionnaire to the hotels’ host communities. Frequency distributions, percentages, and chi-square tests were used to analyse the data. Results showed no significant involvement of the hotel industry in achieving sustained MDGs in the FCT and a disconnect between the hotels and their immediate communities. The study recommended that hotels should, as part of their corporate social responsibility, pick at least one of the goals to work on in order to contribute to the attainment of enduring Millennium Development Goals.
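The chi-square analysis mentioned above can be sketched on a hypothetical contingency table of hotel size versus reported MDG involvement; the counts below are invented for illustration and are not the study's data.

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x3 table: rows = involved / not involved,
# columns = small / medium / large hotels (counts are invented)
table = [[4, 10, 6],
         [8, 12, 20]]

chi2, p, dof, expected = chi2_contingency(table)
independent = p >= 0.05   # fail to reject independence at the 0.05 level
```

A non-significant result here would mirror the study's finding of no significant hotel-industry involvement across categories.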

Keywords: MDGs, hotels, FCT, host communities, corporate social responsibility

Procedia PDF Downloads 417
4760 RT-PCR Negative COVID-19 Infection in a Bodybuilding Competitor Using Anabolic Steroids: A Case Report

Authors: Mariana Branco, Nahida Sobrino, Cristina Neves, Márcia Santos, Afonso Granja, João Rosa Oliveira, Joana Costa, Luísa Castro Leite

Abstract:

Introduction: This case reports a COVID-19 infection in an unvaccinated 28-year-old Caucasian man with no history of COVID-19 and no relevant clinical history besides anabolic steroid use, undergoing weaning with tamoxifen after a bodybuilding competition. Objectives: The goals of this paper are to raise a potential link between anabolic steroid use and atypical COVID-19 onset and to report an uncommon case of COVID-19 infection with consecutively negative gold-standard tests; although the association between COVID-19 and lymphadenopathy is well established, no case with this presentation and consistently negative reverse transcription polymerase chain reaction (RT-PCR) tests has been reported. Methodology: The authors used the CARE guidelines for case report writing. Case presentation: The patient visited his primary care physician because of a large (4 cm) cervical lump, present for 3 days prior to the consultation. There was a positive family history of COVID-19 infection 3 weeks prior to the visit, during which the patient cohabited with the infected family members. 
The patient never had any previous clinical manifestation of COVID-19 infection and, despite multiple consecutive RT-PCR tests, never tested positive. He was treated with an NSAID and a broad-spectrum antibiotic, with little to no effect. Imaging was performed via cervical ultrasound, followed by a needle biopsy for histologic analysis. Serologic testing for COVID-19 immunity revealed a positive anti-SARS-CoV-2 IgG (Spike S1) antibody, suggesting previous infection given the patient’s unvaccinated status. Conclusion: In patients with a positive epidemiologic context and cervical lymphadenopathy, physicians should still consider COVID-19 infection as a differential diagnosis despite negative PCR testing. This case also raises a potential link between anabolic steroid use and atypical COVID-19 onset, not previously reported in the scientific literature.

Keywords: COVID-19, cervical lymphadenopathy, anabolic steroids, primary care

Procedia PDF Downloads 116
4759 The Causes and Effects of Delinquent Behaviour among Students in Juvenile Home: A Case Study of Osun State

Authors: Baleeqs O. Adegoke, Adeola O. Aburime

Abstract:

Juvenile delinquency is fast becoming one of the largest problems facing many societies, due to factors ranging from parental influences to bullying at school, all of which have led to different theoretical notions by different scholars. Delinquency is illegal or immoral behaviour, especially by a young person who behaves in a way that is illegal or that society does not approve of. The purpose of this study was to investigate the causes and effects of delinquent behaviour among adolescents in a juvenile home in Osun State. A descriptive survey research design was employed. A random sampling technique was used to select 100 adolescents in the juvenile home in Osun State, and questionnaires were developed and administered to them. The data collected were analyzed using frequency counts and percentages for the demographic data in section A, while the two research hypotheses postulated for the study were tested using t-test statistics at the 0.05 significance level. Findings revealed that the greatest school-related effect of delinquent behaviour reported by respondents was aggressive behaviour. Findings also revealed a significant difference in the causes and effects of delinquent behaviour among adolescents in the juvenile home in Osun State, but no significant difference among secondary school students in Osun based on gender. The following recommendations were made to address these findings: more teachers should be appointed in the observation home so that teaching can be provided to the different age groups of delinquents; developing the infrastructure of short-stay homes and the observation home is a top priority; and regular counseling sessions are essential for these juveniles.

Keywords: behaviour, delinquency, juvenile, random sampling, statistical techniques, survey

Procedia PDF Downloads 191