Search results for: critical Rayleigh number

11167 The Impact of the New Head Injury Pathway on the Number of CTs Performed in a Paediatric Population

Authors: Amel M. A. Osman, Roy Mahony, Lisa Dann, McKenna S.

Abstract:

Background: Computed Tomography (CT) is a significant source of radiation in the paediatric population. A new head injury (HI) pathway was introduced in 2021, which replaced the previous process, whereby HI patients were jointly admitted under general paediatrics and surgery, with admission under the Emergency Medicine Team. Admitted patients included those with positive CT findings not requiring immediate neurosurgical intervention and those who did not meet the current criteria for urgent CT brain as per NICE guidelines but were still symptomatic and required prolonged observation. This approach aims to decrease the number of CT scans performed. The main aim is to assess the variation in CT scanning rates since the change in the admitting process. Methods: A retrospective review of patients presenting to CHI PECU with HI over a 6-month period (01/01/19-31/05/19) compared to a 6-month period after the introduction of the new pathway (01/06/2022-31/12/2022). Data were collected from the electronic record databases, Symphony and PACS. Results: In 2019, there were 869 presentations of HI, among which 32 (3.68%) had CT scans performed; 2 (6.25%) of those scanned had positive findings. In 2022, there were 1122 HI presentations, with 47 (4.19%) CT scans performed and positive findings in 5 (10.6%) cases. 57 patients were admitted under the new pathway for observation, with 1 having a CT scan following admission. Conclusion: Quantitative lifetime radiation risks for children are not negligible. While there was no statistically significant reduction in CTs performed among HI presentations to our department, a significant group met the criteria for admission under the PECU consultant for prolonged monitoring. There was also a greater proportion of abnormalities on the CT scans performed in 2022, demonstrating improved patient selection for imaging. Further data analysis is ongoing to determine whether those who were admitted would have been scanned under the old pathway.

Keywords: head injury, CT, admission, guideline

Procedia PDF Downloads 48
11166 Estimation of Enantioresolution of Multiple Stereogenic Drugs Using Mobilized and/or Immobilized Polysaccharide-Based HPLC Chiral Stationary Phases

Authors: Mohamed Hefnawy, Abdulrahman Al-Majed, Aymen Al-Suwailem

Abstract:

Enantioseparation of drugs with multiple stereogenic centers is challenging. This study aims to evaluate the efficiency of different mobilized and/or immobilized polysaccharide-based chiral stationary phases in separating the enantiomers of some drugs containing multiple stereogenic centers, namely indenolol, nadolol, and labetalol. The critical mobile phase variables (composition of organic solvents, acid/base ratios) were carefully studied to compare the retention time and elution order of all isomers. Different chromatographic parameters, such as the capacity factor (k), selectivity (α), and resolution (Rs), were calculated. The experimental conditions and the possible chiral recognition mechanisms are discussed.

Keywords: HPLC, polysaccharide columns, enantio-resolution, indenolol, nadolol, labetalol

Procedia PDF Downloads 444
11165 Basins of Attraction for Quartic-Order Methods

Authors: Young Hee Geum

Abstract:

We compare optimal quartic-order methods for the multiple zeros of nonlinear equations by illustrating their basins of attraction. To construct the basins of attraction effectively, we take a 600×600 uniform grid centered at the origin of the complex plane and paint each initial value in the basins of attraction with a different color according to the iteration number required for convergence.
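As an illustration of the grid-coloring procedure described in the abstract, the sketch below paints basins on a uniform grid centered at the origin, shading by iteration count. Newton's method for z³ − 1 is used as a stand-in for the authors' quartic-order iteration, and the grid bounds and tolerances are assumed values:

```python
import numpy as np
import matplotlib.pyplot as plt

# Roots of z^3 - 1, used as a stand-in for the multiple-zero problem.
roots = np.array([1, -0.5 + 0.866025j, -0.5 - 0.866025j])

n, max_iter, tol = 600, 40, 1e-6
x = np.linspace(-2, 2, n)
z = x[None, :] + 1j * x[:, None]       # 600x600 grid centered at the origin
iters = np.full(z.shape, max_iter)     # iteration count at convergence
basin = np.zeros(z.shape, dtype=int)   # index of the root reached

for k in range(max_iter):
    dist = np.abs(z[..., None] - roots)   # distance to each root
    conv = (dist.min(axis=-1) < tol) & (iters == max_iter)
    iters[conv] = k
    basin[conv] = dist.argmin(axis=-1)[conv]
    z = z - (z**3 - 1) / (3 * z**2)       # Newton step (a quartic method would swap this line)

plt.imshow(basin + 0.5 * iters / max_iter, extent=[-2, 2, -2, 2])
plt.title("Basins of attraction, shaded by iterations to converge")
plt.show()
```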

Keywords: basins of attraction, convergence, multiple-root, nonlinear equation

Procedia PDF Downloads 247
11164 The Study on Mechanical Properties of Graphene Using Molecular Mechanics

Authors: I-Ling Chang, Jer-An Chen

Abstract:

The elastic properties and fracture of two-dimensional graphene were calculated purely from the atomic bonding (stretching and bending) based on the molecular mechanics method. Considering the representative unit cell of graphene under various loading conditions, the deformations of carbon bonds and the variations of the interlayer distance could be determined numerically under the geometry constraints and the minimum-energy assumption. In the elastic region, graphene was found to be in-plane isotropic. Meanwhile, the in-plane deformation of the representative unit cell is not uniform along the armchair direction due to the discrete and non-uniform distribution of the atoms. The fracture of graphene could be predicted using a fracture criterion based on the critical bond length, beyond which the bond breaks. The fracture behavior was found to be direction dependent, which is consistent with molecular dynamics simulation results.
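A minimal sketch of the molecular mechanics idea, bond-stretch plus angle-bend energy minimized under the minimum-energy assumption, is given below for a single carbon hexagon in 2D; the harmonic constants are generic placeholder values, not the force field used in the study:

```python
import numpy as np
from scipy.optimize import minimize

# Placeholder harmonic constants (not the paper's force field).
kr, r0 = 469.0, 1.421             # bond stretch: kcal/mol/A^2, equilibrium C-C length (A)
kt, t0 = 63.0, np.deg2rad(120.0)  # angle bend: kcal/mol/rad^2, sp2 equilibrium angle

bonds = [(i, (i + 1) % 6) for i in range(6)]                # hexagonal ring
angles = [((i - 1) % 6, i, (i + 1) % 6) for i in range(6)]  # middle index is the vertex atom

def energy(flat):
    p = flat.reshape(6, 2)
    e = 0.0
    for i, j in bonds:                       # harmonic bond stretching
        e += kr * (np.linalg.norm(p[j] - p[i]) - r0) ** 2
    for i, j, k in angles:                   # harmonic angle bending
        a, b = p[i] - p[j], p[k] - p[j]
        cos_t = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
        e += kt * (np.arccos(np.clip(cos_t, -1, 1)) - t0) ** 2
    return e

# Start from a distorted hexagon and relax under the minimum-energy assumption.
rng = np.random.default_rng(0)
theta = np.linspace(0, 2 * np.pi, 6, endpoint=False)
start = 1.6 * np.c_[np.cos(theta), np.sin(theta)] + rng.normal(0, 0.05, (6, 2))
res = minimize(energy, start.ravel(), method="BFGS")
print("relaxed energy:", res.fun)
```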

Keywords: energy minimization, fracture, graphene, molecular mechanics

Procedia PDF Downloads 398
11163 The Origin and Development of Entrepreneurial Cognition: The Impact of Entrepreneurship Education on Cognitive Style and Subsequent Entrepreneurial Intention

Authors: Salma Hussein, Hadia Aziz

Abstract:

Entrepreneurship plays a significant and imperative role in economic and social growth and is therefore stimulated and encouraged by governments and academics as a means of creating job opportunities, innovation, and wealth. Given its importance, it is essential to identify the factors that encourage and promote entrepreneurial behavior. This is particularly true for developing countries, where the need for entrepreneurial development is high and resources are scarce; thus, there is a need to maximize the outcomes of investing in entrepreneurial development. Entrepreneurial education has been the center of attention and interest among researchers, as it is believed to be one of the most critical factors in promoting entrepreneurship over the long run. Accordingly, the urgency to encourage entrepreneurship education and develop an enterprise culture is now a main concern in Egypt. Researchers have postulated that cognition has the potential to make a significant contribution to the study of entrepreneurship. One such contribution that future studies need to consider in entrepreneurship research is the cognitive processes that occur within the individual, such as cognitive style. During the past decade, there has been increasing interest in cognitive style among researchers and practitioners, specifically in the innovation and entrepreneurship field. Few studies pay attention to the antecedent dynamics that fuel entrepreneurial cognition, which would allow a better understanding of its role in entrepreneurship. Moreover, while many studies have been conducted on entrepreneurship education, scholars are still hesitant regarding the teachability of entrepreneurship due to the lack of clear evidence of its impact. Furthermore, the relation between cognitive style and entrepreneurial intentions has yet to be discovered. Hence, this research aims to test the impact of entrepreneurship education on cognitive style and subsequent intention, in order to evaluate whether students' and potential entrepreneurs' cognitive styles are affected by entrepreneurial education and in turn affect their intentions. Understanding the impact of entrepreneurship education on ways of thinking and intention is critical for the development of effective education and training in the entrepreneurship field. It is proposed that students who are exposed to entrepreneurship education programs will have a more balanced thinking style compared to those who are not. Moreover, it is hypothesized that students having a balanced cognitive style will exhibit higher levels of entrepreneurial intentions than students having an intuitive or analytical cognitive style. Finally, it is proposed that non-formal entrepreneurship education will be more positively associated with entrepreneurial intentions than formal entrepreneurship education. The proposed methodology is a pre- and post-test experimental design. The sample will include young adults aged 18 to 35, including both students enrolled in formal entrepreneurship education programs in private universities and young adults willing to participate in non-formal entrepreneurship education programs in Egypt. Attention is now given to how far individuals are analytical or intuitive in their cognitive style, to what extent it is possible to have a balanced thinking style, and whether or not this can be aided by training or education. Therefore, there is an urgent need for further research on entrepreneurial cognition in educational contexts.

Keywords: cognitive style, entrepreneurial intention, entrepreneurship education, experimental design

Procedia PDF Downloads 196
11162 Building an Opinion Dynamics Model from Experimental Data

Authors: Dino Carpentras, Paul J. Maher, Caoimhe O'Reilly, Michael Quayle

Abstract:

Opinion dynamics is a sub-field of agent-based modeling that focuses on people's opinions and their evolution over time. Despite the rapid increase in the number of publications in this field, it is still not clear how to apply these models to real-world scenarios. Indeed, there is no agreement on how people update their opinions while interacting. Furthermore, it is not clear whether different topics show the same dynamics (e.g., more polarized topics may behave differently). These problems are mostly due to the lack of experimental validation of the models. Some previous studies started bridging this gap in the literature by directly measuring people's opinions before and after an interaction. However, these experiments force people to express their opinion as a number instead of using natural language (with the answer eventually encoded as a number). This is not the way people normally interact, and it may strongly alter the measured dynamics. Another limitation of these studies is that they usually average all topics together, without checking whether different topics show different dynamics. In our work, we collected data from 200 participants on 5 unpolarized topics. Participants expressed their opinions in natural language ("agree" or "disagree"). We also measured the certainty of their answer, expressed as a number between 1 and 10. However, this value was not shown to other participants, to keep the interaction based on natural language. We then showed the opinion (and not the certainty) of another participant and, after a distraction task, repeated the measurement. To make the data compatible with opinion dynamics models, we multiplied opinion and certainty to obtain a new parameter (here called "continuous opinion") ranging from -10 to +10 (using agree = 1 and disagree = -1). We first checked the 5 topics individually, finding that all of them behaved in a similar way despite having different initial opinion distributions. This suggests that the same model can be applied to different unpolarized topics. We also observed that people tend to maintain similar levels of certainty, even when they change their opinion. This is a strong violation of what is suggested by common models, where people starting at, for example, +8 will first move towards 0 instead of directly jumping to -8. We also observed social influence, meaning that people exposed to "agree" were more likely to move to higher levels of continuous opinion, while people exposed to "disagree" were more likely to move to lower levels. However, we also observed that the effect of influence was smaller than the effect of random fluctuations. This configuration, too, differs from standard models, where noise, when present, is usually much smaller than the effect of social influence. Starting from this, we built an opinion dynamics model that explains more than 80% of the data variance. This model was also able to show the natural emergence of polarization from unpolarized states. This experimental approach offers a new way to build models grounded in experimental data. Furthermore, the model offers new insight into the fundamental terms of opinion dynamics models.
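A minimal sketch of the encoding used above (continuous opinion = opinion × certainty, in [-10, +10]) and of an update in which random fluctuation outweighs a weak social pull is shown below; the pull and noise magnitudes are illustrative assumptions, not the fitted model:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
opinion = rng.choice([-1, 1], size=n)    # "disagree" = -1, "agree" = +1
certainty = rng.integers(1, 11, size=n)  # 1..10, never shown to partners
continuous = opinion * certainty         # ranges over [-10, +10]

# Illustrative update after seeing a partner's stated opinion (+1 or -1):
# a weak pull toward the partner plus a larger random fluctuation,
# mirroring the finding that noise outweighed social influence.
partner = rng.choice([-1, 1], size=n)
pull, noise = 0.4, 1.5                   # assumed magnitudes, not fitted values
updated = np.clip(continuous + pull * partner + noise * rng.standard_normal(n), -10, 10)

print("mean shift for 'agree' partners:   ", (updated - continuous)[partner == 1].mean())
print("mean shift for 'disagree' partners:", (updated - continuous)[partner == -1].mean())
```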

Keywords: experimental validation, micro-dynamics rule, opinion dynamics, update rule

Procedia PDF Downloads 107
11161 Application of Particle Swarm Optimization to Thermal Sensor Placement for Smart Grid

Authors: Hung-Shuo Wu, Huan-Chieh Chiu, Xiang-Yao Zheng, Yu-Cheng Yang, Chien-Hao Wang, Jen-Cheng Wang, Chwan-Lu Tseng, Joe-Air Jiang

Abstract:

Dynamic Thermal Rating (DTR) provides crucial information by estimating the ampacity of transmission lines to improve power dispatching efficiency. To perform DTR, it is necessary to install on-line thermal sensors to monitor conductor temperature and weather variables. A simple and intuitive strategy is to allocate a thermal sensor to every span of the transmission lines, but the cost of the sensors might be too high to bear. To deal with the cost issue, a thermal sensor placement problem must be solved. This research proposes and implements a hybrid algorithm that combines proper orthogonal decomposition (POD) with particle swarm optimization (PSO). The proposed hybrid algorithm solves a multi-objective optimization problem that targets both the minimum number of sensors and the minimum error in conductor temperature, and the optimal sensor placement is determined simultaneously. Data from 345 kV transmission lines and hourly weather data, from the Taiwan Power Company and the Central Weather Bureau (CWB), respectively, are used by the proposed method. The simulation results indicate that the number of sensors can be reduced using the proposed optimal placement method while achieving an acceptable error in conductor temperature. This study provides power companies with a reliable reference for efficiently monitoring and managing their power grids.
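The sketch below shows the PSO skeleton for such a placement problem, with a binary decoding of particle positions; the POD-based temperature-reconstruction error is replaced by a stand-in cost that decays with sensor count, so the weights and the cost function itself are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n_spans, n_particles, iters = 30, 40, 200
w, c1, c2 = 0.7, 1.5, 1.5    # standard PSO inertia and acceleration coefficients
alpha = 0.05                 # weight trading sensor count against error

def cost(mask):
    # Stand-in for the POD-based temperature-reconstruction error:
    # error decays with the number of sensors selected.
    k = mask.sum()
    return alpha * k + 1.0 / (1.0 + k)

def decode(x):               # position component > 0.5 means "place a sensor"
    return x > 0.5

x = rng.random((n_particles, n_spans))   # continuous particle positions
v = np.zeros_like(x)
pbest = x.copy()
pbest_cost = np.array([cost(m) for m in decode(x)])
g = pbest[pbest_cost.argmin()].copy()    # global best

for _ in range(iters):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
    x = np.clip(x + v, 0, 1)
    c = np.array([cost(m) for m in decode(x)])
    better = c < pbest_cost
    pbest[better], pbest_cost[better] = x[better], c[better]
    g = pbest[pbest_cost.argmin()].copy()

print("sensors placed:", decode(g).sum(), "of", n_spans, "spans")
```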

Keywords: dynamic thermal rating, proper orthogonal decomposition, particle swarm optimization, sensor placement, smart grid

Procedia PDF Downloads 426
11160 Cutting Plane Methods for Integer Programming: NAZ Cut and Its Variations

Authors: A. Bari

Abstract:

Integer programming is a branch of mathematical programming in operations research in which some or all of the variables are required to be integer-valued. Various cuts have been used to solve such problems. We have developed cuts known as the NAZ cut and the A-T cut to solve integer programming problems. These cuts are used to reduce the feasible region and then reach the optimal solution in a minimum number of steps.
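Since the abstract does not specify the NAZ or A-T cut construction, the sketch below illustrates the general cutting-plane idea on a toy integer program, using a hand-derived Chvátal-Gomory cut to shrink the feasible region until the LP relaxation turns integral:

```python
import numpy as np
from scipy.optimize import linprog

# Toy IP: maximize x + y  s.t.  2x + 2y <= 3,  x, y >= 0 integer.
c = [-1, -1]                      # linprog minimizes, so negate the objective
A, b = [[2, 2]], [3]

relax = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2)
print("LP relaxation:", relax.x)  # a fractional vertex, e.g. (1.5, 0)

# Chvatal-Gomory cut: scale 2x + 2y <= 3 by 1/2 to get x + y <= 1.5;
# the left side is integral for any integer point, so round down: x + y <= 1.
A.append([1, 1]); b.append(1)
cut = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2)
print("after cut:", cut.x)        # integer optimum, objective value 1
```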

Keywords: integer programming, NAZ cut, A-T cut, cutting plane method

Procedia PDF Downloads 359
11159 Sociology Perspective on Emotional Maltreatment: Retrospective Case Study in a Japanese Elementary School

Authors: Nozomi Fujisaka

Abstract:

This sociological case study analyzes a sequence of student maltreatment in an elementary school in Japan, based on narratives from former students. Among various forms of student maltreatment, emotional maltreatment has received less attention. One reason for this is that emotional maltreatment is often considered part of education and is difficult to capture in surveys. To discuss the challenge of recognizing emotional maltreatment, it is necessary to consider the social background in which student maltreatment occurs. Therefore, from the perspective of the sociology of education, this study aims to clarify the process through which emotional maltreatment came to be accepted by students within a Japanese classroom. The focus of this study is a series of educational interactions by a homeroom teacher with 11- or 12-year-old students at a small public elementary school approximately 10 years ago. The research employs retrospective narrative data collected through interviews and autoethnography. The semi-structured interviews, lasting one to three hours each, were conducted with 11 young people who were enrolled in the same class as the researcher during their time in elementary school. Autoethnography, as a critical research method, contributes to existing theories and studies by providing a critical representation of the researcher's own experiences. Autoethnography enables researchers to collect detailed data that are often difficult to verbalize in interviews. These research methods are well suited for this study, which aims to shift the focus from teachers' educational intentions to students' perspectives and gain a deeper understanding of student maltreatment. The results imply a pattern of emotional maltreatment that is challenging to differentiate from education. In this study's case, the teacher displayed calm and kind behavior toward students after a threat or an explosion of anger. Former students frequently mentioned this behavior and perceived emotional maltreatment as part of education. It was not uncommon for former students to offer positive evaluations of the teacher despite experiencing emotional distress. These findings are analyzed and discussed in conjunction with deschooling theory and the cycle of violence theory. Deschooling theory provides a sociological explanation for how emotional maltreatment can be overlooked in society. The cycle of violence theory, originally developed within the context of domestic violence, explains how violence between romantic partners can be tolerated due to prevailing social norms. Analyzing the case in association with these two theories highlights the characteristics of teachers' behaviors that rationalize maltreatment as education and hinder students from escaping emotional maltreatment. This study deepens our understanding of the causes of student maltreatment and provides a new perspective for future qualitative and quantitative research. Furthermore, since this research is based on the sociology of education, it has the potential to expand research in the fields of pedagogy and sociology, in addition to psychology and social welfare.

Keywords: emotional maltreatment, education, student maltreatment, Japan

Procedia PDF Downloads 74
11158 A Two-Phase Flow Interface Tracking Algorithm Using a Fully Coupled Pressure-Based Finite Volume Method

Authors: Shidvash Vakilipour, Scott Ormiston, Masoud Mohammadi, Rouzbeh Riazi, Kimia Amiri, Sahar Barati

Abstract:

Two-phase and multi-phase flows are common flow types in fluid mechanics engineering. Among the basic and applied problems of these flow types, two-phase parallel flow is one in which two immiscible fluids flow in the vicinity of each other. In this type of flow, fluid properties (e.g., density, viscosity, and temperature) differ on the two sides of the interface between the fluids. The most challenging part of the numerical simulation of two-phase flow is determining the location of the interface accurately. In the present work, a coupled interface tracking algorithm is developed based on the Arbitrary Lagrangian-Eulerian (ALE) approach using a cell-centered, pressure-based, coupled solver. To validate this algorithm, an analytical solution for fully developed two-phase flow in the presence of gravity is derived, and the results of the numerical simulation of this flow are compared with the analytical solution at various flow conditions. The simulation results show good accuracy of the algorithm despite the use of a relatively coarse, uniform grid. Temporal variations of the interface profile toward the steady-state solution show that a greater difference between the fluids' properties (especially dynamic viscosity) results in larger traveling waves. Gravity-effect studies also show that favorable gravity reduces the thickness of the heavier fluid, while adverse gravity increases it relative to the zero-gravity condition. However, the magnitude of the variation under favorable gravity is much greater than under adverse gravity.
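A reference solution of the kind used for validation can be sketched numerically: the code below solves fully developed two-layer channel flow, d/dy(mu du/dy) = dp/dx - rho*g_x, with piecewise fluid properties and harmonic-mean viscosity at the interface. Layer position, property values, and the pressure gradient are assumed, and this is a simple physics check rather than the paper's ALE solver:

```python
import numpy as np

n, H = 201, 1.0                      # grid points, channel height
y = np.linspace(0.0, H, n)
dy = y[1] - y[0]

# Two immiscible layers: a heavier, more viscous fluid below y = 0.4 (assumed).
mu = np.where(y < 0.4, 10.0, 1.0)    # dynamic viscosity per node
rho = np.where(y < 0.4, 1000.0, 800.0)
dpdx, gx = -1.0, 0.0                 # driving pressure gradient; gx adds favorable/adverse gravity

# Assemble a tridiagonal system for d/dy(mu du/dy) = dpdx - rho*gx,
# with harmonic-mean viscosity at cell faces and no-slip walls.
mu_face = 2 * mu[:-1] * mu[1:] / (mu[:-1] + mu[1:])
A = np.zeros((n, n))
rhs = np.full(n, dpdx) - rho * gx
A[0, 0] = A[-1, -1] = 1.0
rhs[0] = rhs[-1] = 0.0               # no-slip boundary conditions
for i in range(1, n - 1):
    A[i, i - 1] = mu_face[i - 1] / dy**2
    A[i, i + 1] = mu_face[i] / dy**2
    A[i, i] = -(mu_face[i - 1] + mu_face[i]) / dy**2

u = np.linalg.solve(A, rhs)
print("peak velocity %.4f at y = %.3f" % (u.max(), y[u.argmax()]))
```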

Keywords: coupled solver, gravitational force, interface tracking, Reynolds number to Froude number, two-phase flow

Procedia PDF Downloads 308
11157 Applying Systems Thinking and a System of Systems Approach to Facilitate Sustainable Grid Integration of Variable Renewable Energy

Authors: Edward B. Ssekulima, Amir Etemadi

Abstract:

This paper presents a Systems Thinking and System of Systems (SoS) viewpoint for managing requirements complexity in the grid integration of Variable Renewable Energy (VRE). To achieve a SoS approach, it is often necessary to inculcate a Systems Thinking (ST) perspective in the planning and design of the attendant system. We show how this approach can support the enhanced integration of VRE (wind, solar, small hydro), for which intermittency is a key factor inhibiting sustainable grid integration. The results indicate that an ST and SoS approach is a critical tool for decision makers in the planning, design, and deployment of VRE sources for their sustainable grid integration in accordance with relevant techno-economic, social, and environmental requirements.

Keywords: sustainable grid-integration, system of systems, systems thinking, variable energy resources

Procedia PDF Downloads 114
11156 An Adaptive Oversampling Technique for Imbalanced Datasets

Authors: Shaukat Ali Shahee, Usha Ananthakumar

Abstract:

A data set exhibits the class imbalance problem when one class has very few examples compared to the other class; this is also referred to as between-class imbalance. Traditional classifiers fail to classify minority class examples correctly due to their bias towards the majority class. Apart from between-class imbalance, imbalance within classes, where a class is composed of several sub-clusters containing different numbers of examples, also deteriorates the performance of a classifier. Many methods have previously been proposed for handling the imbalanced dataset problem. These methods can be classified into four categories: data preprocessing, algorithmic-based methods, cost-based methods, and ensembles of classifiers. Data preprocessing techniques have shown great potential as they attempt to improve the data distribution rather than the classifier. A data preprocessing technique handles class imbalance either by increasing the minority class examples or by decreasing the majority class examples. Decreasing the majority class examples leads to loss of information, and when the minority class has an absolute rarity, removing majority class examples is generally not recommended. Existing methods for handling class imbalance do not address both between-class imbalance and within-class imbalance simultaneously. In this paper, we propose a method that handles between-class imbalance and within-class imbalance simultaneously for binary classification problems. Removing both imbalances simultaneously eliminates the bias of the classifier towards bigger sub-clusters by minimizing the error domination of bigger sub-clusters in the total error. The proposed method uses model-based clustering to find the presence of sub-clusters or sub-concepts in the dataset. The number of examples oversampled among the sub-clusters is determined based on the complexity of the sub-clusters. The method also takes into consideration the scatter of the data in the feature space and adaptively copes with unseen test data using a Lowner-John ellipsoid to increase the accuracy of the classifier. In this study, a neural network is used, as it is a classifier in which the total error is minimized, and removing between-class and within-class imbalance simultaneously helps the classifier give equal weight to all the sub-clusters irrespective of class. The proposed method is validated on 9 publicly available data sets and compared with three existing oversampling techniques that rely on the spatial location of minority class examples in the Euclidean feature space. The experimental results show the proposed method to be statistically significantly superior to the other methods in terms of various accuracy measures. Thus, the proposed method can serve as a good alternative for handling various problem domains, such as credit scoring, customer churn prediction, and financial distress, that typically involve imbalanced data sets.
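A simplified sketch of the core idea, model-based clustering of the minority class followed by per-sub-cluster oversampling, is shown below; the Gaussian mixture with BIC model selection and the interpolation scheme are stand-ins for the paper's method, and the complexity-based allocation and Lowner-John ellipsoid step are omitted:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(42)
X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], n_features=5,
                           n_informative=3, n_clusters_per_class=2, random_state=42)

X_min = X[y == 1]
# Model-based clustering to expose minority sub-clusters (BIC picks the count).
best = min((GaussianMixture(k, random_state=0).fit(X_min) for k in range(1, 5)),
           key=lambda g: g.bic(X_min))
labels = best.predict(X_min)

# Oversample each sub-cluster toward an equal share of the majority count,
# addressing between-class and within-class imbalance at the same time.
n_maj = (y == 0).sum()
target = n_maj // best.n_components
synth = []
for c in np.unique(labels):
    pts = X_min[labels == c]
    need = target - len(pts)
    if need > 0:
        i, j = rng.integers(0, len(pts), need), rng.integers(0, len(pts), need)
        lam = rng.random((need, 1))
        synth.append(pts[i] + lam * (pts[j] - pts[i]))   # interpolate within the cluster

X_bal = np.vstack([X] + synth)
y_bal = np.concatenate([y, np.ones(sum(len(s) for s in synth), dtype=int)])
print("minority before/after:", (y == 1).sum(), (y_bal == 1).sum())
```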

Keywords: classification, imbalanced dataset, Lowner-John ellipsoid, model based clustering, oversampling

Procedia PDF Downloads 411
11155 The Impact of Artificial Intelligence on Pharmacy and Pharmacology

Authors: Mamdouh Milad Adly Morkos

Abstract:

Despite having the greatest rates of mortality and morbidity in the world, low- and middle-income (LMIC) nations trail high-income nations in terms of the number of clinical trials, the number of qualified researchers, and the amount of research information specific to their people. Health inequities and the use of precision medicine may be hampered by a lack of local genomic data, clinical pharmacology and pharmacometrics competence, and training opportunities. These issues can be solved by carrying out healthcare infrastructure development, which includes data gathering and well-designed clinical pharmacology training in LMICs. International cooperation focused on enhancing education and infrastructure and promoting locally motivated clinical trials and research will be advantageous. This paper outlines various instances where clinical pharmacology knowledge could be put to use, including pharmacogenomic opportunities that could lead to better clinical guideline recommendations. Examples of how clinical pharmacology training can be successfully implemented in LMICs are also provided, including clinical pharmacology and pharmacometrics training programmes in Africa and a Tanzanian researcher's personal experience while on a training sabbatical in the United States. These training initiatives will profit from advocacy for clinical pharmacologists' employment prospects and career development pathways, which are gradually becoming acknowledged and established in LMICs. The advancement of training and research infrastructure to increase clinical pharmacologists' knowledge in LMICs would be extremely beneficial because they have a significant role to play in global health.

Keywords: electromagnetic solar system, nano-material, nano pharmacology, pharmacovigilance, quantum theory, clinical simulation, education, pharmacology, simulation, virtual learning, low- and middle-income, clinical pharmacology, pharmacometrics, career development pathways

Procedia PDF Downloads 74
11154 Behavior of Reinforced Concrete Structures Subjected to Multiple Floor Fire Loads

Authors: Suresh Narayana, Chaitanya Akkannavar

Abstract:

An assessment of the behavior of reinforced concrete structures subjected to fire loads, including multi-floor fires, is presented in this paper. This research is part of a study to evaluate the performance of a ten-storey RC structure subjected to fire loads at multiple floors and to evaluate the post-fire effects on the structure, such as deflections and stresses occurring due to the combined effect of static and thermal loading. Thermal loading was assigned to different floor levels to estimate the critical floors that initiate the collapse of the structure. The structure was modeled and analyzed in SolidWorks and the commercially available finite element software ABAQUS. The results are analyzed, and a particular design solution is suggested.

Keywords: collapse mechanism, fire analysis, RC structure, stress vs temperature

Procedia PDF Downloads 467
11153 A Gauge Repeatability and Reproducibility Study for Multivariate Measurement Systems

Authors: Jeh-Nan Pan, Chung-I Li

Abstract:

Measurement system analysis (MSA) plays an important role in helping organizations improve their product quality. Generally speaking, the gauge repeatability and reproducibility (GRR) study is performed according to the MSA handbook stated in the QS9000 standards. Usually, a GRR study for assessing the adequacy of gauge variation needs to be conducted prior to process capability analysis. Traditional MSA considers only a single quality characteristic. With the advent of modern technology, industrial products have become very sophisticated, with more than one quality characteristic. Thus, it becomes necessary to perform multivariate GRR analysis for a measurement system when collecting data with multiple responses. In this paper, we take the correlation coefficients among tolerances into account to revise the multivariate precision-to-tolerance (P/T) ratio proposed by Majeske (2008). We then compare the performance of our revised P/T ratio with that of the existing ratios. The simulation results show that our revised P/T ratio outperforms the others in terms of robustness and proximity to the actual value. Moreover, the optimal allocation of several parameters, such as the number of quality characteristics (v), sample size of parts (p), number of operators (o), and replicate measurements (r), is discussed using the confidence interval of the revised P/T ratio. Finally, a standard operating procedure (S.O.P.) for performing the GRR study for multivariate measurement systems is proposed based on the research results. Hopefully, it can serve as a useful reference for quality practitioners when conducting such studies in industry.
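For orientation, the sketch below computes the classical univariate P/T ratio that the revised multivariate ratio generalizes, from a simulated p × o × r gauge study; the variance components use simplified moment estimates rather than the full ANOVA expected-mean-squares algebra, and all study constants are assumed:

```python
import numpy as np

# Gauge study layout: p parts, o operators, r replicate measurements.
rng = np.random.default_rng(7)
p, o, r = 10, 3, 3
part_eff = rng.normal(0, 2.0, size=(p, 1, 1))  # true part-to-part variation
oper_eff = rng.normal(0, 0.3, size=(1, o, 1))  # reproducibility (operator)
noise = rng.normal(0, 0.2, size=(p, o, r))     # repeatability (equipment)
x = 50 + part_eff + oper_eff + noise

# Simplified moment estimates of the gauge variance components
# (a real study would take these from the two-way ANOVA expected mean squares).
repeatability = x.var(axis=2, ddof=1).mean()
reproducibility = x.mean(axis=2).mean(axis=0).var(ddof=1)
sigma_grr = np.sqrt(repeatability + reproducibility)

USL, LSL = 60.0, 40.0
pt_ratio = 6 * sigma_grr / (USL - LSL)   # common univariate form; smaller is better
print(f"P/T = {pt_ratio:.3f}")
```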

Keywords: gauge repeatability and reproducibility, multivariate measurement system analysis, precision-to-tolerance ratio

Procedia PDF Downloads 256
11152 Working Women and Leave in India

Authors: Ankita Verma

Abstract:

Women transform a group of people into a family and a house into a home. When a woman embraces motherhood, she undergoes several stresses, both physical and mental. Therefore, supporting women during this critical stage is a societal responsibility. India is in the league of many developed nations in formulating women-friendly policies. One such initiative is the Maternity Benefits Act, first passed in 1961 and amended from time to time, most recently in 2017. This review paper critically analyzes the provisions of the Act, its implementation, and the legal issues arising out of its implementation. The review suggests that the Act has made a positive impact and that the judiciary has also played its role in streamlining the process of implementation of the Act. However, at the same time, it is also felt that employers often hesitate in hiring a mother or an expectant mother.

Keywords: maternity benefits, maternity benefits act 1961 & 2017, motherhood, maternity and paternity leave, medical bonus, work environment

Procedia PDF Downloads 168
11151 Capital Accumulation and Unemployment in Namibia, Nigeria and South Africa

Authors: Abubakar Dikko

Abstract:

The research investigates the causes of unemployment in Namibia, Nigeria, and South Africa, and the role of capital accumulation in reducing the unemployment profile of these economies as proposed by post-Keynesian economics. This is conducted through an extensive review of the literature on NAIRU models, focusing on the post-Keynesian view of unemployment within the NAIRU framework. The NAIRU (non-accelerating inflation rate of unemployment) model has become a dominant framework used in the macroeconomic analysis of unemployment. The study adopts the post-Keynesian argument that capital accumulation is a major determinant of unemployment. Unemployment remains the fundamental socio-economic challenge facing African economies and has been a burden to the citizens of those economies. Namibia, Nigeria, and South Africa are large African nations battling high unemployment rates. In 2013, the countries recorded high unemployment rates of 16.9%, 23.9%, and 24.9%, respectively. Most of the unemployed in these economies are youth. Roughly 40% of working-age South Africans have jobs, and the share is even lower in Nigeria and Namibia. Unemployment in Africa has wide implications for households, leading to extensive poverty and inequality and creating rampant criminality. Recently, South Africa experienced xenophobic attacks driven by unemployment: the high unemployment rate led citizens to chase away foreigners, claiming that they had taken their jobs. The study proposes that there is a strong relationship between capital accumulation and unemployment in Namibia, Nigeria, and South Africa, and that low capital accumulation is responsible for the high unemployment rates in these countries. For these economies to achieve a steady-state level of employment and a satisfactory level of economic growth and development, capital accumulation needs to take place. The countries in the study were selected after critical research and investigation, based on the following criteria: African economies with high unemployment rates above 15% and with about 40% of their workforce unemployed, this being the critical level of unemployment in Africa as expressed by the International Labour Organization (ILO), and African countries with low levels of capital accumulation. Adequate statistical measures have been employed using time-series analysis, and the results reveal that capital accumulation is the main driver of unemployment performance in the chosen African countries. An increase in the accumulation of capital causes unemployment to fall significantly. The results of the research will be useful and relevant to the federal governments and ministries, departments, and agencies (MDAs) of Namibia, Nigeria, and South Africa in resolving the issue of high and persistent unemployment rates, which are a great burden that slows the growth and development of developing economies. The results can also be useful to the World Bank, the African Development Bank, and the International Labour Organization (ILO) in their further research on how to tackle unemployment in developing and emerging economies.

Keywords: capital accumulation, unemployment, NAIRU, Post-Keynesian economics

Procedia PDF Downloads 257
11150 Design of Experiment for Optimizing Immunoassay Microarray Printing

Authors: Alex J. Summers, Jasmine P. Devadhasan, Douglas Montgomery, Brittany Fischer, Jian Gu, Frederic Zenhausern

Abstract:

Immunoassays have been utilized for several applications, including the detection of pathogens. Our laboratory is developing a tier 1 biothreat panel utilizing Vertical Flow Assay (VFA) technology for the simultaneous detection of pathogens and toxins. One method of manufacturing VFA membranes is non-contact piezoelectric dispensing, which provides advantages such as low-volume and rapid dispensing without compromising the structural integrity of antibody or substrate. Challenges of this process include premature discontinuation of dispensing and misaligned spotting. Preliminary data revealed that the Yp 11C7 mAb (11C7) reagent exhibits a large angle of failure during printing, which may have contributed to variable printing outputs. A Design of Experiment (DOE) was executed using this reagent to investigate the effects of hydrostatic pressure and reagent concentration on microarray printing outputs. A Nano-plotter 2.1 (GeSIM, Germany) was used for printing antibody reagents onto nitrocellulose membrane sheets in a clean room environment. A spotting plan was executed using Spot-Front-End software to dispense volumes of 11C7 reagent (20-50 droplets; 1.5-5 mg/mL) in a 6-test spot array at 50 target membrane locations. Hydrostatic pressure was controlled by raising the Pressure Compensation Vessel (PCV) above or lowering it below our current working level. It was hypothesized that raising or lowering the PCV 6 inches would be sufficient to cause either liquid accumulation at the tip or discontinuation of droplet formation. After aspirating 11C7 reagent, we tested this hypothesis under a stroboscope. 75% of the effective raised PCV height and of our hypothesized lowered PCV height were used. Humidity (55%) was maintained using an Airwin BO-CT1 humidifier. The number and quality of membranes were assessed after staining printed membranes with dye. The droplet angle of failure was recorded before and after printing to determine a "stroboscope score" for each run. The DOE set was analyzed using JMP software. Hydrostatic pressure and reagent concentration had a significant effect on the membrane output. As hydrostatic pressure was increased by raising the PCV 3.75 inches or decreased by lowering it 4.5 inches, membrane output decreased. However, with the hydrostatic pressure closest to equilibrium, our current working level, membrane output reached the 50-membrane target. As the reagent concentration increased from 1.5 to 5 mg/mL, the membrane output also increased. Reagent concentration likely affected membrane output due to the dispensing volume needed to saturate the membranes. However, only hydrostatic pressure had a significant effect on the stroboscope score, which could be due to discontinuation of dispensing, in which case the stroboscope check could not find a droplet to record. Our JMP predictive model had a high degree of agreement with our observed results. The JMP model predicted that dispensing the highest concentration of 11C7 at our current PCV working level would yield the highest number of quality membranes, which correlated with our results. Acknowledgements: This work was supported by the Chemical Biological Technologies Directorate (Contract # HDTRA1-16-C-0026) and the Advanced Technology International (Contract # MCDC-18-04-09-002) from the Department of Defense Chemical and Biological Defense program through the Defense Threat Reduction Agency (DTRA).
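A DOE analysis of this shape can be sketched with an ordinary least squares fit in place of JMP; the data below are synthetic stand-ins (the response is assumed to fall off with distance from the PCV working level and rise with concentration), not the study's measurements:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
# Synthetic stand-in for the study's runs: PCV offset (inches from the
# working level) and 11C7 concentration (mg/mL) vs. membrane output.
pcv = np.repeat([-4.5, 0.0, 3.75], 6)
conc = np.tile([1.5, 3.0, 5.0], 6)
membranes = 50 - 2.5 * np.abs(pcv) + 3.0 * conc + rng.normal(0, 2, pcv.size)

df = pd.DataFrame({"pcv_abs": np.abs(pcv), "conc": conc, "membranes": membranes})
model = smf.ols("membranes ~ pcv_abs + conc", data=df).fit()
print(model.summary().tables[1])   # effect estimates and p-values per factor
```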

Keywords: immunoassay, microarray, design of experiment, piezoelectric dispensing

Procedia PDF Downloads 176
11149 Revolutionizing Healthcare Communication: The Transformative Role of Natural Language Processing and Artificial Intelligence

Authors: Halimat M. Ajose-Adeogun, Zaynab A. Bello

Abstract:

Artificial Intelligence (AI) and Natural Language Processing (NLP) have transformed computer language comprehension, allowing computers to comprehend spoken and written language with human-like cognition. NLP, a multidisciplinary area that combines rule-based linguistics, machine learning, and deep learning, enables computers to analyze and comprehend human language. NLP applications in medicine range from tackling issues in electronic health records (EHR) and psychiatry to improving diagnostic precision in orthopedic surgery and optimizing clinical procedures with novel technologies like chatbots. The technology shows promise in a variety of medical sectors, including quicker access to medical records, faster decision-making for healthcare personnel, diagnosing dysplasia in Barrett's esophagus, boosting radiology report quality, and so on. However, successful adoption requires training for healthcare workers, fostering a deep understanding of NLP components, and highlighting the significance of validation before actual application. Despite prevailing challenges, continuous multidisciplinary research and collaboration are critical for overcoming restrictions and paving the way for the revolutionary integration of NLP into medical practice. This integration has the potential to improve patient care, research outcomes, and administrative efficiency. The research methodology includes using NLP techniques for Sentiment Analysis and Emotion Recognition, such as evaluating text or audio data to determine the sentiment and emotional nuances communicated by users, which is essential for designing a responsive and sympathetic chatbot. Furthermore, the project includes the adoption of a Personalized Intervention strategy, in which chatbots are designed to personalize responses by merging NLP algorithms with specific user profiles, treatment history, and emotional states. The synergy between NLP and personalized medicine principles is critical for tailoring chatbot interactions to each user's demands and conditions, hence increasing the efficacy of mental health care. A detailed survey corroborated this synergy, revealing a remarkable 20% increase in patient satisfaction levels and a 30% reduction in workloads for healthcare practitioners. The poll, which focused on health outcomes and was administered to both patients and healthcare professionals, highlights the improved efficiency and favorable influence on the broader healthcare ecosystem.

Keywords: natural language processing, artificial intelligence, healthcare communication, electronic health records, patient care

Procedia PDF Downloads 70
11148 Software Development for Both Small Wind Performance Optimization and Structural Compliance Analysis with International Safety Regulations

Authors: K. M. Yoo, M. H. Kang

Abstract:

Conventional commercial wind turbine design software is limited to large wind turbines because it does not incorporate the low Reynolds number aerodynamic characteristics typical of small wind turbines. To extract the maximum annual energy product from an intermediately designed small wind turbine matched to measured wind data, numerous simulations are highly recommended in order to obtain a best-fitting planform design with a proper airfoil configuration. The optimal planform design changes with the wind distribution and average wind speed. While theoretically straightforward, finalizing the conceptual layout of a desired small wind turbine is an inconveniently time-consuming design procedure. Thus, to make simulations easier and faster, GUI software has been developed to conveniently iterate over and change airfoil types, wind data, and geometric blade data. Given a magnetic generator torque curve, a peak power tracking simulation is also available to better match the rotor to the magnetic generator. Small wind turbines often lack starting torque as a side effect of blade optimization, so a starting-torque simulation is also embedded, along with yaw design. The software provides various blade cross-section details at the user's convenience, such as skin thickness control with a fiber direction option, spar shape, and material properties. Since small wind turbines fall under international safety regulations covering fatigue damage during normal operation and safety load analyses under ultimate excessive loads, load analyses are provided for each category mandated in the safety regulations.

Keywords: GUI software, low Reynolds number aerodynamics, peak power tracking, safety regulations, wind turbine performance optimization

Procedia PDF Downloads 298
11147 Factors Affecting Air Surface Temperature Variations in the Philippines

Authors: John Christian Lequiron, Gerry Bagtasa, Olivia Cabrera, Leoncio Amadore, Tolentino Moya

Abstract:

Changes in air surface temperature play an important role in the Philippines' economy, industry, health, and food production. While the increase in global mean temperature over recent decades has prompted a number of climate change and variability studies in the Philippines, most studies still focus on rainfall and tropical cyclones. This study aims to investigate the trend and variability of observed air surface temperature and determine its major influencing factor(s) in the Philippines. A non-parametric Mann-Kendall trend test was applied to the monthly mean temperature of 17 synoptic stations covering 56 years from 1960 to 2015, and a mean change of 0.58 °C, or a positive trend of 0.0105 °C/year (p < 0.05), was found. In addition, wavelet decomposition was used to determine the frequency of temperature variability, showing 12-month, 30-80-month, and more-than-120-month cycles. This indicates strong annual variations, interannual variations that coincide with ENSO events, and interdecadal variations that are attributed to the PDO and CO2 concentrations. Air surface temperature was also correlated with smoothed sunspot number and galactic cosmic rays; the results show little to no effect. The ENSO teleconnection, through temperature, wind pattern, cloud cover, and outgoing longwave radiation in different ENSO phases, had significant effects on regional temperature variability. In particular, an anomalous anticyclonic (cyclonic) flow east of the Philippines during the peak and decay phases of El Niño (La Niña) events leads to the advection of warm southeasterly (cold northeasterly) air mass over the country. Furthermore, an apparent increasing cloud cover trend is observed over the West Philippine Sea, including portions of the Philippines, and this is believed to lessen the effect of the increasing air surface temperature. However, relative humidity was also found to be increasing, especially in the central part of the country, which results in a strong positive trend in the heat index, exacerbating human discomfort. Finally, an assessment of gridded temperature datasets was conducted to examine the viability of using three high-resolution datasets in future climate analysis and model calibration and verification. Several error statistics (i.e., Pearson correlation, bias, MAE, and RMSE) were used for this validation. The results show that the gridded temperature datasets generally follow the observed surface temperature changes and anomalies. In addition, they are more representative of regional temperature and should be treated as a complement to, rather than a substitute for, station-observed air temperature.
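The trend test used above can be sketched compactly; the implementation below is the standard Mann-Kendall statistic (without tie correction) plus a Sen's slope estimate, applied to a synthetic series with the reported 0.0105 °C/year trend built in, not to the actual station data:

```python
import numpy as np
from scipy.stats import norm

def mann_kendall(x):
    """Non-parametric Mann-Kendall trend test (no tie correction)."""
    x = np.asarray(x)
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0  # continuity correction
    p = 2 * (1 - norm.cdf(abs(z)))                            # two-sided p-value
    return z, p

# Synthetic stand-in for a station's annual mean temperature, 1960-2015:
rng = np.random.default_rng(5)
years = np.arange(1960, 2016)
temp = 27.0 + 0.0105 * (years - 1960) + rng.normal(0, 0.15, years.size)

z, p = mann_kendall(temp)
slope = np.median([(temp[j] - temp[i]) / (j - i)
                   for i in range(len(temp) - 1) for j in range(i + 1, len(temp))])
print(f"Z = {z:.2f}, p = {p:.4f}, Sen's slope = {slope:.4f} °C/yr")
```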

Keywords: air surface temperature, carbon dioxide, ENSO, galactic cosmic rays, smoothed sunspot number

Procedia PDF Downloads 312
11146 Application of Rapidly Exploring Random Tree Star-Smart and G2 Quintic Pythagorean Hodograph Curves to the UAV Path Planning Problem

Authors: Luiz G. Véras, Felipe L. Medeiros, Lamartine F. Guimarães

Abstract:

This work approaches the automatic planning of paths for Unmanned Aerial Vehicles (UAVs) through the application of the Rapidly Exploring Random Tree Star-Smart (RRT*-Smart) algorithm. RRT*-Smart samples positions of a navigation environment through a tree-type graph. The algorithm consists of randomly expanding a tree from an initial position (root node) until one of its branches reaches the final position of the path to be planned. The algorithm ensures the planning of the shortest path as the number of iterations tends to infinity. When a new node is inserted into the tree, each neighbor of the new node is connected to it if and only if the length of the path between the root node and that neighbor, with this new connection, is less than the current length of the path between those two nodes. RRT*-Smart uses an intelligent sampling strategy to plan less extensive routes while spending a smaller number of iterations. This strategy is based on the creation of samples/nodes near the convex vertices of the navigation environment's obstacles. The planned paths are smoothed through the application of the method called quintic Pythagorean hodograph curves. The smoothing process converts a route into a dynamically viable one based on the kinematic constraints of the vehicle. This smoothing method models the hodograph components of a curve with polynomials that obey the Pythagorean theorem. Its advantage is that the obtained structure allows the curve length to be computed exactly, without the need for quadrature techniques to resolve integrals.
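The tree-expansion and rewiring step described above can be sketched as follows; obstacles, the biased sampling near obstacle vertices, and descendant cost propagation after rewiring are omitted for brevity, and the workspace bounds, step size, and neighbor radius are assumed:

```python
import numpy as np

rng = np.random.default_rng(2)
start, goal = np.array([0.0, 0.0]), np.array([9.0, 9.0])
step, n_iter = 0.5, 800

nodes = [start]
parent = [0]
cost = [0.0]                         # path length from the root

def steer(a, b):
    d = b - a
    return b if np.linalg.norm(d) <= step else a + step * d / np.linalg.norm(d)

for _ in range(n_iter):
    sample = rng.uniform(0, 10, 2)   # RRT*-Smart would bias samples near obstacle vertices
    pts = np.array(nodes)
    nearest = int(np.linalg.norm(pts - sample, axis=1).argmin())
    new = steer(pts[nearest], sample)

    # Choose the cheapest parent within a fixed radius, then rewire neighbors
    # through the new node when that shortens their root path.
    dists = np.linalg.norm(pts - new, axis=1)
    near = np.flatnonzero(dists < 1.0)
    best = min(near, key=lambda i: cost[i] + dists[i], default=nearest)
    nodes.append(new); parent.append(int(best)); cost.append(cost[best] + dists[best])
    for i in near:
        if cost[-1] + dists[i] < cost[i]:
            parent[i], cost[i] = len(nodes) - 1, cost[-1] + dists[i]

goal_i = int(np.linalg.norm(np.array(nodes) - goal, axis=1).argmin())
print("best path cost to the goal region:", cost[goal_i])
```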

Keywords: path planning, path smoothing, Pythagorean hodograph curve, RRT*-Smart

Procedia PDF Downloads 164
11145 Adding a Degree of Freedom to Opinion Dynamics Models

Authors: Dino Carpentras, Alejandro Dinkelberg, Michael Quayle

Abstract:

Within agent-based modeling, opinion dynamics is the field that focuses on modeling people's opinions and their evolution over time. In this prolific field, most of the literature is dedicated to the exploration of the two 'degrees of freedom' and how they impact a model's properties (e.g., the average final opinion, the number of final clusters, etc.). These degrees of freedom are (1) the interaction rule, which determines how agents update their own opinion, and (2) the network topology, which defines the possible interactions among agents. In this work, we show that a third degree of freedom exists. This can be used to change a model's output by up to 100% of its initial value or to transform two models (both from the literature) into each other. Since opinion dynamics models are representations of the real world, it is fundamental to understand how people's opinions can be measured. Even for abstract models (i.e., those not intended for fitting real-world data), it is important to understand whether the way of numerically representing opinions is unique and, if this is not the case, how the model dynamics would change under different representations. The process of measuring opinions is non-trivial, as it requires transforming a real-world opinion (e.g., supporting most liberal ideals) into a number. Such a process is usually not discussed in the opinion dynamics literature, but it has been intensively studied in a subfield of psychology called psychometrics. In psychometrics, opinion scales can be converted into each other, similarly to how meters can be converted to feet. Indeed, psychometrics routinely uses both linear and non-linear transformations of opinion scales. Here, we analyze how such transformations affect opinion dynamics models. We analyze this effect using mathematical modeling and then validate our analysis with agent-based simulations. First, we study the case of perfect scales. In this way, we show that scale transformations affect a model's dynamics even at a qualitative level. This means that if two researchers use the same opinion dynamics model and even the same dataset, they could make totally different predictions just because they followed different renormalization processes. A similar situation appears if two different scales are used to measure opinions, even on the same population. This effect may be as strong as producing an uncertainty of 100% on the simulation's output (i.e., all results are possible). Still, using perfect scales, we show that scale transformations can be used to perfectly transform one model into another. We test this using two models from the standard literature. Finally, we test the effect of scale transformation in the case of finite precision using a 7-point Likert scale. In this way, we show how even a relatively small scale transformation introduces changes both at the qualitative level (i.e., the most shared opinion at the end of the simulation) and in the number of opinion clusters. Thus, scale transformation appears to be a third degree of freedom of opinion dynamics models. This result deeply impacts both theoretical research on models' properties and the application of models to real-world data.
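A minimal demonstration of this third degree of freedom: the same averaging interaction rule is run on a "perfect" scale and on a nonlinearly re-scaled version of the same opinions (a cubic transformation is an assumed example), and the outcomes differ once mapped back:

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.uniform(0, 1, 500)             # opinions measured on one "perfect" scale

def transform(o):                       # an equally valid nonlinear re-scaling,
    return o ** 3                       # analogous to a psychometric scale conversion

def average_step(o, rounds=2000):
    o = o.copy()
    for _ in range(rounds):
        i, j = rng.integers(0, len(o), 2)
        m = (o[i] + o[j]) / 2           # simple averaging interaction rule
        o[i] = o[j] = m
    return o

# Run the same model on the raw scale and on the transformed scale,
# then map the second result back for comparison.
a = average_step(x)
b = average_step(transform(x)) ** (1 / 3)
print("final mean, raw scale:        ", a.mean())
print("final mean, transformed scale:", b.mean())
```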

Keywords: degrees of freedom, empirical validation, opinion scale, opinion dynamics

Procedia PDF Downloads 118
11144 Motivation of Doctors and its Impact on the Quality of Working Life

Authors: E. V. Fakhrutdinova, K. R. Maksimova, P. B. Chursin

Abstract:

At the present stage of societal progress, health care is an integral part of both the economic and the social system; in the latter case, medicine is a major component of a number of basic and necessary social programs. Since the foundation of the health system is its highly qualified health professionals, it is a logical proposition that increasing doctors' professionalism improves the effectiveness of the system as a whole. The professionalism of a doctor is a collection of many components, with an essential role played by such personal-psychological factors as honesty, willingness and desire to help people, and motivation. A number of researchers consider motivation an expression of basic human needs that have passed through the "filter" of the worldview and values learned by the individual in the process of socialization, prompting certain actions designed to achieve the expected result. From this point of view, a number of researchers propose the following classification of a highly skilled employee's needs: 1. the need for confirmation of competence (setting goals that match one's professionalism and receiving positive emotions from achieving them); 2. the need for independence (the ability to make one's own choices in contentious situations arising in the course of carrying out specialist functions); 3. the need for belonging (in the case of health care workers, to the profession and, accordingly, to the high public status of the doctor). Nevertheless, it is important to understand that in a market economy a significant motivator for physicians (both legal entities and natural persons) is to maximize their own profit. In the case of health professionals, this dual motivational structure creates an additional contrast, as in the public mind the ideal physician is usually an altruistically minded person who thinks primarily not of their own benefit but of assisting others. In this context, the question of the real motivation of health workers deserves special attention. A survey conducted by the American researcher Harrison Terni for the magazine "Med Tech" in 2010 canvassed more than 200 medical students starting their courses: the primary motivation in choosing the profession was the "desire to help people", and only 15% said that they wanted to become a doctor "to earn a lot". From the point of view of most classical theories of motivation, this trend can be called positive, as intangible incentives are more effective. However, it is likely that over time the opinions of the respondents may shift toward mercantile motives. Thus, it is logical to assume that a well-designed system of motivation for doctors' labor should be based on motivational foundations laid during training in higher education.

Keywords: motivation, quality of working life, health system, personal-psychological factors, motivational structure

Procedia PDF Downloads 353
11143 Nonlinear Aerodynamic Parameter Estimation of a Supersonic Air to Air Missile by Using Artificial Neural Networks

Authors: Tugba Bayoglu

Abstract:

Aerodynamic parameter estimation is crucial in the missile design phase, since an accurate, high-fidelity aerodynamic model is required for designing a high-performance and robust control system, developing high-fidelity flight simulations, and verifying computational and wind tunnel test results. However, in the literature, there are few missile aerodynamic parameter identification studies, for three main reasons: (1) most air-to-air missiles cannot fly at constant speed, (2) the number and duration of missile flight tests are much smaller than those of fixed-wing aircraft, and (3) the variation of missile aerodynamic parameters with respect to Mach number is greater than that of fixed-wing aircraft. In addition to these challenges, identification of aerodynamic parameters at high wind angles by classical estimation techniques brings another difficulty to the estimation process. The reason is that most estimation techniques require employing polynomials or splines to model the behavior of the aerodynamics. However, for missiles with a large variation of aerodynamic parameters with respect to flight variables, the order of the proposed model increases, which brings computational burden and complexity. Therefore, this study aims to solve the nonlinear aerodynamic parameter identification problem for a supersonic air-to-air missile by using Artificial Neural Networks. The proposed method will be tested using simulated data generated with a six-degree-of-freedom missile model involving a nonlinear aerodynamic database. The data will be corrupted by adding noise to the measurement model. Then, using the flight variables and measurements, the parameters will be estimated. Finally, the prediction accuracy will be investigated.
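A minimal sketch of the proposed approach is given below: a feedforward network is fit to noisy samples of a coefficient varying nonlinearly with Mach number and angle of attack; the coefficient model, noise level, and network size are assumptions standing in for the six-degree-of-freedom simulation data:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)
# Synthetic stand-in for 6-DoF simulation output: a normal-force coefficient
# varying nonlinearly with Mach number and angle of attack (not real missile data).
n = 5000
mach = rng.uniform(1.2, 4.0, n)
alpha = rng.uniform(-20.0, 20.0, n)          # degrees, including high wind angles
cn_true = 0.05 * alpha + 0.002 * alpha * np.abs(alpha) / mach
cn_meas = cn_true + rng.normal(0, 0.02, n)   # measurement-model noise

X = np.column_stack([mach, alpha])
X_tr, X_te, y_tr, y_te = train_test_split(X, cn_meas, random_state=0)

# The network replaces global polynomials/splines, avoiding the model-order
# growth the abstract describes for strongly Mach-dependent aerodynamics.
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
net.fit(X_tr, y_tr)
print("R^2 on held-out flight points:", net.score(X_te, y_te))
```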

Keywords: air to air missile, artificial neural networks, open loop simulation, parameter identification

Procedia PDF Downloads 271
11142 An Investigation on Engineering Students’ Perceptions Towards E-learning in the UK

Authors: Vida Razzaghifard

Abstract:

E-learning, also known as online learning, has shown increased growth in recent years. One of the critical factors in the successful application of e-learning in higher education is students' perceptions of it. The main purpose of this paper is to investigate the perceptions of engineering students about e-learning in the UK. For the purpose of the present study, 145 second-year engineering students were randomly selected from a total population of 1,280. The participants were asked to complete a questionnaire containing 16 items. The data collected from the questionnaire were analyzed with the Statistical Package for the Social Sciences (SPSS) software. The findings of the study revealed that the majority of participants held negative perceptions of e-learning: most of the students had trouble interacting effectively during online classes, and the majority had negative experiences with the learning platform they used. Suggestions are made on what could be done to improve students' perceptions of e-learning.
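
As a hedged illustration of the kind of descriptive analysis reported (the study itself used SPSS), the Python sketch below summarizes hypothetical Likert-scale responses to a 16-item questionnaire; the item names, scale, and generated responses are all invented for illustration.

```python
# Hypothetical sketch of the descriptive analysis described in the
# abstract, reproduced in Python/pandas rather than SPSS. Item names,
# scale, and responses are invented.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n_students = 145

# 16 Likert items (1 = strongly disagree ... 5 = strongly agree)
items = [f"Q{i + 1}" for i in range(16)]
responses = pd.DataFrame(
    rng.integers(1, 6, size=(n_students, len(items))), columns=items
)

# Per-item mean score and share of negative responses (1 or 2)
summary = pd.DataFrame({
    "mean": responses.mean(),
    "pct_negative": (responses <= 2).mean() * 100,
}).round(2)
print(summary)
```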

Keywords: e-learning, higher education, engineering education, online learning

Procedia PDF Downloads 92
11141 Identifying Chaotic Architecture: Origins of Nonlinear Design Theory

Authors: Mohammadsadegh Zanganehfar

Abstract:

Since the modernism movement and the appearance of modern architecture, an aggressive desire for a general design theory has emerged in the theoretical works of architects in the form of books and essays. Since Robert Venturi and Denise Scott Brown published Complexity and Contradiction in Architecture in 1966, the discourse of complexity and volumetric composition has been an important and controversial issue in the discipline, and various theories and essays have since entered this discourse. This paper attempts to identify chaos theory as a scientific model of complexity and to relate it to architectural design theory by conducting a qualitative analysis with a multidisciplinary critical approach through architecture and basic-science resources. As a result, we identify chaotic architecture, the correlation of chaos theory and architecture, as an independent nonlinear design theory with specific characteristics and properties.
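
Because the abstract leans on chaos theory as a scientific model of complexity, a minimal numerical illustration may help. The logistic map below is a textbook example of a nonlinear dynamic system (not taken from the paper): two trajectories starting a millionth apart diverge completely, showing the sensitive dependence on initial conditions that defines chaos.

```python
# Minimal illustration of chaos (standard example, not from the paper):
# the logistic map x_{n+1} = r * x_n * (1 - x_n) at r = 4 exhibits
# sensitive dependence on initial conditions.
def logistic_trajectory(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.200000)
b = logistic_trajectory(0.200001)  # initial condition perturbed by 1e-6

for n in (0, 10, 20, 30, 40, 50):
    print(f"n={n:2d}  |a-b| = {abs(a[n] - b[n]):.6f}")
```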

Keywords: architecture complexity, chaos theory, fractals, nonlinear dynamic systems, nonlinear ontology

Procedia PDF Downloads 369
11140 Estimation of the Nutritive Value of Local Forage Cowpea Cultivars in Different Environments

Authors: Salem Alghamdi

Abstract:

Genotypes were collected from farmers in different regions of Saudi Arabia, together with an Egyptian cultivar and a new line from Yemen. Seeds of these genotypes were grown at Dirab Agriculture Research Station (Middle Region) and Al-Ahsa Palms and Dates Research Center (East Region) during the summer of 2015. Field experiments were laid out in a randomized complete block design in the first week of June with three replications. Each experimental plot contained six rows 3 m in length; inter- and intra-row spacing was 60 and 25 cm, respectively. Seed yield and its components were estimated, and qualitative characters were recorded on the cowpea plants grown at Dirab using the cowpea descriptor from IPGRI (1982). Seeds were analyzed for chemical composition and antioxidant contents. Highly significant differences were detected between genotypes in both locations and in the combined analysis of the two locations for seed yield and its components. Mean data clearly show that determinate genotypes excelled in seed yield, while indeterminate genotypes had higher biological yield, dividing the cowpea genotypes into two main groups: 1. forage genotypes (KSU-CO98, KSU-CO99, KSU-CO100, and KSU-CO104), which were taller, produced more branches and higher biological yield, and are therefore suitable for feeding on haulm; 2. food genotypes (KSU-CO101, KSU-CO102, and KSU-CO103), which produced higher seed yield with less haulm and are further characterized by a high seed index and light seed color. Highly significant differences were recorded between locations for all studied characters except the number of branches, seed index, and biological yield; however, the genotype × location interaction was significant only for plant height, the number of pods, and seed yield per plant.
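
For readers unfamiliar with the design, the Python sketch below illustrates how a randomized complete block design with a genotype × location interaction can be analyzed; the yield values are invented and the model is a generic RCBD ANOVA, not the authors' actual analysis.

```python
# Hypothetical sketch: RCBD ANOVA for seed yield with a genotype x
# location interaction, mirroring the described layout (7 genotypes,
# 2 locations, 3 blocks). All yield values are invented.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(2)
genotypes = [f"KSU-CO{i}" for i in range(98, 105)]
locations = ["Dirab", "Al-Ahsa"]
blocks = [1, 2, 3]  # three replications

rows = []
for g in genotypes:
    for loc in locations:
        for b in blocks:
            rows.append({"genotype": g, "location": loc, "block": b,
                         "yield_g": 20 + rng.normal(0, 2)})  # invented data
df = pd.DataFrame(rows)

# Blocks enter as a categorical effect; genotype and location interact
model = smf.ols(
    "yield_g ~ C(block) + C(genotype) * C(location)", data=df
).fit()
print(anova_lm(model, typ=2))
```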

Keywords: cowpea, genotypes, antioxidant contents, yield

Procedia PDF Downloads 250
11139 Assessing Significance of Correlation with Binomial Distribution

Authors: Vijay Kumar Singh, Pooja Kushwaha, Prabhat Ranjan, Krishna Kumar Ojha, Jitendra Kumar

Abstract:

Present-day high-throughput genomic technologies (NGS, microarrays) are producing large volumes of data that require improved analysis methods to make sense of the data. The correlation between genes and samples is regularly used to gain insight into many biological phenomena including, but not limited to, co-expression/co-regulation, gene regulatory networks, clustering, and pattern identification. However, the presence of outliers and violations of the assumptions underlying Pearson correlation are frequent and may distort the actual correlation between genes, leading to spurious conclusions. Here, we report a method to measure the strength of association between genes. The method assumes that the expression values of a gene are Bernoulli random variables whose outcome depends on the sample being probed. The method considers two genes uncorrelated if the number of samples with the same outcome for both genes (Ns) is equal to the expected number (Es); the extent of correlation depends on how far Ns deviates from Es. The method does not assume normality for the parent population, is fairly unaffected by the presence of outliers, can be applied to qualitative data, and uses the binomial distribution to assess the significance of association. At this stage, we do not claim superiority of the method over existing correlation methods; rather, it could be another way of calculating correlation in addition to them. The method uses the binomial distribution, which has not previously been used for this purpose, to assess the significance of association between two variables. We are evaluating the performance of our method on NGS/microarray data, which is noisy and riddled with outliers, to see whether it can differentiate between spurious and actual correlation. While working with the method, it has not escaped our notice that it could also be generalized to measure the association of more than two variables, which has proven difficult with existing methods.
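
The abstract does not give an implementation, but the following Python sketch shows one plausible reading of the test: binarize each gene's expression (the median split is our assumption; the abstract does not specify a rule), count matching outcomes Ns, compute the expected count Es under independence, and assess significance with a binomial test.

```python
# Sketch of the described test under stated assumptions: median-split
# binarization, Ns = samples where both genes share the same outcome,
# Es = expected matches under independence, binomial test for p-value.
import numpy as np
from scipy.stats import binomtest

def binomial_association(x, y):
    x_bin = x > np.median(x)          # Bernoulli outcome per sample
    y_bin = y > np.median(y)
    n = len(x)
    ns = int(np.sum(x_bin == y_bin))  # samples with the same outcome

    # Probability of a match if the two genes are independent
    p1, p2 = x_bin.mean(), y_bin.mean()
    p_match = p1 * p2 + (1 - p1) * (1 - p2)
    es = n * p_match                  # expected number of matches

    result = binomtest(ns, n, p_match)
    return ns, es, result.pvalue

rng = np.random.default_rng(3)
gene_a = rng.normal(size=100)
gene_b = 0.7 * gene_a + rng.normal(scale=0.5, size=100)  # correlated pair
print(binomial_association(gene_a, gene_b))
```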

Keywords: binomial distribution, correlation, microarray, outliers, transcriptome

Procedia PDF Downloads 407
11138 Juvenile Justice Reforms for the 21st Century: Promising Approaches in Bangladesh

Authors: Nahid Ferdousi

Abstract:

Juvenile justice is a key component of children's rights, intended to safeguard the child's best interests, and is fundamentally different from criminal justice. After the independence of Bangladesh in 1971, the Children Act 1974 and the Children Rules 1976 were considered the basic law for juvenile justice; written before many international instruments on children's rights came into existence, they did not align with the international mandate set by those instruments. These Acts were not truly child-rights-based, and modern concepts such as diversion, restorative justice, and community-based rehabilitation did not develop accordingly. Against this backdrop, the government has enacted the new Children Act 2013 and introduced extensive reforms to the juvenile justice system in Bangladesh. The Act provides for child-friendly juvenile courts in each district and various child-oriented practices in a number of settings, such as child affairs police officers, probation officers, a national child welfare board, diversion, and alternative preventive measures, on the basis of international principles. Prior to the Act, there had been a number of High Court rulings that considered the international standards for juvenile justice. But the recent reforms to the juvenile justice system signal a new commitment to the country's international obligations to its children and a change in the philosophy guiding the treatment of child offenders. It is high time to create an effective juvenile justice system for the 21st century in Bangladesh through proper implementation of the Children Act 2013. Additionally, new Children Rules should be enacted, and juvenile courts along with correctional institutions should be established in each district in Bangladesh. This study assesses the juvenile justice reforms in Bangladesh over five decades (1974-2014) and focuses on changes that will improve the system as a whole and enable the ends of fair juvenile justice to be better achieved.

Keywords: juvenile justice reforms, international obligations, child-oriented practices, commitment of the state

Procedia PDF Downloads 420