Search results for: minimum fuzziness criterion
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2613

2073 Parallel Genetic Algorithms Clustering for Handling Recruitment Problem

Authors: Walid Moudani, Ahmad Shahin

Abstract:

This research presents a study of the recruitment services system. It aims to enhance a business intelligence system by embedding data mining in its core engine and to facilitate the link between job searchers and recruiting companies. The purpose of this study is to present an intelligent management system for supporting recruitment services based on data mining methods. It consists of applying segmentation to the job postings offered by the different recruiters. The details of the job postings are associated with a set of relevant features extracted from the web and based on critical criteria in order to define consistent clusters. Thereafter, we assign each job searcher to the best cluster while providing a ranking according to the job postings of the selected cluster. The performance of the proposed model is analyzed, based on a real case study, using the clustered job postings dataset and the classified job searchers dataset, by means of several metrics.
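The clustering step described above can be sketched with a minimal genetic algorithm in which each individual encodes a set of cluster centroids; this toy (random two-blob data, truncation selection, mutation only) is an illustration of the idea, not the authors' parallel implementation:

```python
import random, math

def assign(points, centroids):
    # nearest-centroid assignment
    return [min(range(len(centroids)),
                key=lambda j: math.dist(p, centroids[j])) for p in points]

def fitness(points, centroids):
    # negative total within-cluster distance (higher is better)
    labels = assign(points, centroids)
    return -sum(math.dist(p, centroids[l]) for p, l in zip(points, labels))

def mutate(centroids, sigma=0.1):
    return [[c + random.gauss(0, sigma) for c in cen] for cen in centroids]

def ga_cluster(points, k=2, pop_size=20, generations=100, seed=0):
    random.seed(seed)
    dim = len(points[0])
    pop = [[[random.uniform(0, 1) for _ in range(dim)] for _ in range(k)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda c: fitness(points, c), reverse=True)
        elite = pop[:pop_size // 2]          # truncation selection
        pop = elite + [mutate(random.choice(elite)) for _ in elite]
    return max(pop, key=lambda c: fitness(points, c))

# two well-separated toy "job posting" feature vectors per cluster
pts = [(0.10, 0.10), (0.12, 0.08), (0.90, 0.90), (0.88, 0.92)]
best = ga_cluster(pts, k=2)
labels = assign(pts, best)
```

A job searcher would then be matched against the nearest evolved centroid and ranked within that cluster.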

Keywords: job postings, job searchers, clustering, genetic algorithms, business intelligence

Procedia PDF Downloads 329
2072 Political Economy on the Recent Labor Condition in the Philippines: A Literature Review

Authors: Lloyd B. Ranises

Abstract:

The Philippine labor force has recently been affected by the pandemic. The situation was compounded by a high inflation rate, which made matters worse. Since the Philippines elected a new government in the 2022 national election, the labor conditions under the previous government have been passed on to the new one. To understand the labor challenges the present government faces, this study revisits the labor conditions and the responses of the previous government from 2016 to 2022. Thus, this study reviews the labor force of the Philippines within that time frame. It explores the challenges in the labor market and examines government policy. This study uses secondary sources to trace the labor conditions and the government actions that addressed them. The literature is consolidated to assess its relevance to the new government's labor policy. This study found that the labor force grew sluggishly until 2018, then thrived, but was set back by the pandemic. By 2020, the National Capital Region's labor force dropped, although it afterwards began to thrive again, showing recovery. However, its composition is much more complex. Cognitive skills requiring tertiary education are in high demand, yet the production of goods and services suffers from a small scientific workforce, in addition to a mismatch between position and profession. Moreover, Philippine labor has poor female participation. Beyond these complexities, agricultural rural areas have high underemployment, which implies a surplus of low-skilled labor. Overseas employment, on the other hand, contributes significantly to the decrease in domestic production. The major responses of the previous government have, by far, focused on minimum wage increases and on social services and health insurance, which are appropriate to post-pandemic needs. Yet some issues remain unattended. This study concludes that the previous government's policy needs to be fleshed out substantially. It necessitates that the new administration consider encompassing all aspects of the Philippine labor force to sustain and strengthen the economy of the country.

Keywords: cognitive skills, minimum wage, national capital region, underemployment

Procedia PDF Downloads 111
2071 Application of the Standard Deviation in Regulating Design Variation of Urban Solutions Generated through Evolutionary Computation

Authors: Mohammed Makki, Milad Showkatbakhsh, Aiman Tabony

Abstract:

Computational applications of natural evolutionary processes as problem-solving tools have been well established since the mid-20th century. However, their application within architecture and design has only gained ground in recent years, with an increasing number of academics and professionals in the field electing to utilize evolutionary computation to address problems composed of multiple conflicting objectives with no clear optimal solution. Recent advances in computer science and their consequent constructive influence on the architectural discourse have led to the emergence of multiple algorithmic processes capable of simulating the evolutionary process in nature within an efficient timescale. Many of the developed processes of generating a population of candidate solutions to a design problem through an evolutionary-based stochastic search are driven through the application of both environmental and architectural parameters. These methods allow for conflicting objectives to be simultaneously, independently, and objectively optimized. This is an essential approach in design problems whose final product must address the demands of a multitude of individuals with various requirements. However, one of the main challenges encountered through the application of an evolutionary process as a design tool is the ability of the simulation to maintain variation amongst design solutions in the population while simultaneously increasing in fitness. This is most commonly known as the 'golden rule' of balancing exploration and exploitation over time; the difficulty of achieving this balance in the simulation is due to the tendency of either variation or optimization being favored as the simulation progresses.
In such cases, the generated population of candidate solutions has either optimized very early in the simulation or has continued to maintain high levels of variation from which an optimal set could not be discerned, thus providing the user with a solution set that has not evolved efficiently towards the objectives outlined in the problem at hand. As such, the experiments presented in this paper seek to achieve the 'golden rule' by incorporating a mathematical fitness criterion for the development of an urban tissue composed of the superblock as its primary architectural element. The mathematical value investigated in the experiments is the standard deviation factor. Traditionally, the standard deviation has been used as an analytical value rather than a generative one, conventionally measuring the distribution of variation within a population by calculating the degree to which the population deviates from the mean. A lower standard deviation value indicates that the majority of the population is clustered around the mean, and thus limited variation within the population, while a higher standard deviation value indicates greater variation within the population and a lack of convergence towards an optimal solution. The results presented will aim to clarify the extent to which utilizing the standard deviation factor as a fitness criterion can be advantageous for generating fitter individuals in a more efficient timeframe when compared to conventional simulations that only incorporate architectural and environmental parameters.
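A minimal sketch of how a population-level standard deviation target can steer an evolutionary run, here by adapting mutation strength; the one-dimensional toy objective and all parameter values are assumptions for illustration, not the authors' urban-tissue setup:

```python
import random, statistics

def objective(x):
    # toy 1-D design objective to minimize (assumption: stands in for
    # the environmental/architectural objectives in the paper)
    return (x - 3.0) ** 2

def evolve(pop, generations=60, target_std=0.4, seed=1):
    """Truncation selection on the objective, with mutation strength
    adapted so the population standard deviation tracks a target."""
    rng = random.Random(seed)
    for _ in range(generations):
        pop = sorted(pop, key=objective)
        parents = pop[:len(pop) // 2]        # exploitation
        spread = statistics.pstdev(pop)
        # widen mutation when the population has collapsed around the
        # mean; narrow it when variation is already ample (exploration)
        sigma = 0.3 if spread < target_std else 0.05
        pop = parents + [p + rng.gauss(0, sigma) for p in parents]
    return pop

init_rng = random.Random(0)
final_pop = evolve([init_rng.uniform(-5, 5) for _ in range(40)])
```

The spread term never changes an individual's rank directly; it regulates the operators so fitness can rise without the population collapsing.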

Keywords: architecture, computation, evolution, standard deviation, urban

Procedia PDF Downloads 133
2070 In Vitro Antibacterial Activity of Some Medicinal Plants Against Biofilm-Forming Methicillin-Resistant Staphylococcus aureus

Authors: Tesleem Adewale Ibrahim

Abstract:

Introduction: The prevalence of methicillin-resistant Staphylococcus aureus (MRSA) has been rising steadily in Nigeria for the past few decades. Therefore, novel classes of antibiotics are indispensable to combat the increased incidence of newly emerging multidrug-resistant bacteria like MRSA. Plants have been commonly used in the popular medicine of most cultures for the treatment of disease, and the in vitro antibacterial activity of several common Nigerian medicinal plants used in traditional medicine has been reported. The aim of this study was to investigate the antibacterial and anti-biofilm activities of native plants (Entada abyssinica (leaves), Croton macrostachyus (leaves), Bridelia speciosa (seeds, bark), and Aframomum melegueta (leaves, seeds, and stem)) collected in Southwestern Nigeria against a panel of seven biofilm-forming MRSA isolates. Methods: Minimum inhibitory concentrations (MIC) and minimum bactericidal concentrations (MBC) of the plant extracts against MRSA were determined by the broth dilution method, and an anti-biofilm assay of the most potent plant extract was performed. Results: Of the four plants, water extracts of the leaves of Entada abyssinica, the leaves of Croton macrostachyus, the seeds and bark of Bridelia speciosa, and the seeds of Aframomum melegueta exhibited significant antibacterial activity. Based on the MBC/MIC ratio, the extracts of these plants were determined to be bacteriostatic in nature. The anti-biofilm assay showed that the extracts of the seeds of Aframomum melegueta and the leaves of Croton macrostachyus fairly inhibited the growth of MRSA in the preformed biofilm matrix. Conclusion: These four medicinal plant species may represent a source of alternative drugs derived from plant extracts, based on folklore use and ethnobotanical knowledge from southwest Nigeria.
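The bacteriostatic/bactericidal judgement from the MBC/MIC ratio can be expressed in a few lines; the cutoff of 4 is one commonly used convention (not stated in this abstract), and the example concentrations are hypothetical:

```python
def classify_activity(mic, mbc, cutoff=4.0):
    """Classify an extract from the MBC/MIC ratio: a ratio <= cutoff is
    usually read as bactericidal, a larger ratio as bacteriostatic
    (cutoff of 4 is a common convention, not taken from this study)."""
    ratio = mbc / mic
    return ("bactericidal" if ratio <= cutoff else "bacteriostatic"), ratio

# hypothetical MIC/MBC pair in mg/mL
label, ratio = classify_activity(mic=1.562, mbc=12.50)
```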

Keywords: extract, MRSA, antibacterial, biofilm, medicinal plants

Procedia PDF Downloads 125
2069 The Roles of Pay Satisfaction and Intent to Leave on Counterproductive Work Behavior among Non-Academic University Employees

Authors: Abiodun Musbau Lawal, Sunday Samson Babalola, Uzor Friday Ordu

Abstract:

The issue of employee counterproductive work behavior (CWB) in government-owned organizations in emerging economies has continued to be a major concern. This study investigated pay satisfaction, intent to leave, and age as predictors of counterproductive work behavior among non-academic employees in a Nigerian federal government-owned university. A sample of 200 non-academic employees completed questionnaires. Hierarchical multiple regression was conducted to determine the contribution of each of the predictor variables to the criterion variable, counterproductive work behavior. Results indicate that the age of participants (β = -.18; p < .05) significantly and independently predicted CWB, accounting for 3% of the explained variance. The addition of pay satisfaction (β = -.14; p < .05) significantly accounted for 5% of the explained variance, while intent to leave (β = -.17; p < .05) further accounted for 8% of the explained variance in counterproductive work behavior. The importance of these findings with regard to the reduction of counterproductive work behavior is highlighted.
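Hierarchical entry of predictors and the resulting incremental explained variance can be illustrated with a small ordinary-least-squares routine; the synthetic data, coefficients, and variable names below are assumptions for demonstration only, not the study's data:

```python
def ols_r2(X, y):
    """R^2 of an OLS fit of y on the columns of X (intercept added),
    solved via normal equations with Gaussian elimination."""
    n = len(y)
    A = [[1.0] + list(row) for row in X]          # prepend intercept
    k = len(A[0])
    M = [[sum(A[i][p] * A[i][q] for i in range(n)) for q in range(k)]
         for p in range(k)]                       # A^T A
    v = [sum(A[i][p] * y[i] for i in range(n)) for p in range(k)]  # A^T y
    for p in range(k):                            # forward elimination
        piv = max(range(p, k), key=lambda r: abs(M[r][p]))
        M[p], M[piv] = M[piv], M[p]
        v[p], v[piv] = v[piv], v[p]
        for r in range(p + 1, k):
            f = M[r][p] / M[p][p]
            for q in range(p, k):
                M[r][q] -= f * M[p][q]
            v[r] -= f * v[p]
    b = [0.0] * k                                 # back substitution
    for p in reversed(range(k)):
        b[p] = (v[p] - sum(M[p][q] * b[q] for q in range(p + 1, k))) / M[p][p]
    yhat = [sum(bj * aj for bj, aj in zip(b, row)) for row in A]
    ybar = sum(y) / n
    ss_res = sum((yi - yh) ** 2 for yi, yh in zip(y, yhat))
    ss_tot = sum((yi - ybar) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

# synthetic illustration: CWB built from age, pay satisfaction and
# intent-to-leave scores plus a fixed "noise" column
age    = [25, 32, 41, 29, 50, 36, 44, 28, 39, 47]
pay    = [3.1, 2.5, 4.0, 2.2, 3.8, 2.9, 3.5, 2.0, 3.3, 4.2]
intent = [2.0, 3.5, 1.2, 4.0, 1.5, 2.8, 1.9, 4.2, 2.3, 1.1]
noise  = [0.3, -0.2, 0.1, 0.4, -0.3, 0.0, 0.2, -0.1, 0.1, -0.4]
cwb = [5.0 - 0.03 * a - 0.4 * p + 0.6 * i + e
       for a, p, i, e in zip(age, pay, intent, noise)]

r2_step1 = ols_r2([[a] for a in age], cwb)                 # block 1: age
r2_step2 = ols_r2([[a, p] for a, p in zip(age, pay)], cwb) # + pay
r2_step3 = ols_r2(list(zip(age, pay, intent)), cwb)        # + intent
```

Each step's increment in R^2 over the previous step is the additional explained variance attributed to the newly entered predictor.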

Keywords: counterproductive, work behaviour, pay satisfaction, intent to leave

Procedia PDF Downloads 383
2068 Evaluation of Anti-Typhoid Effects of Azadirachta indica L. Fractions

Authors: A. Adetutu, T. M. Awodugba, O. A. Owoade

Abstract:

The development of resistance to currently known conventional anti-typhoid drugs has necessitated the search for cheaper, more potent, and less toxic anti-typhoid drugs of plant origin. Therefore, this study investigated the anti-typhoid activity of fractions of A. indica in Salmonella typhi-infected rats. Leaves of A. indica were extracted in methanol and fractionated into n-hexane, chloroform, ethyl-acetate, and aqueous fractions. The anti-salmonella potential of the fractions of A. indica was assessed via in-vitro inhibition of S. typhi using agar well diffusion, minimum inhibitory concentration (MIC), minimum bactericidal concentration (MBC), and biofilm assays. The biochemical and haematological parameters were determined by spectrophotometric methods. The histological analysis was performed using haematoxylin and eosin staining. Data analysis was performed by one-way ANOVA. Results of this study showed that S. typhi was sensitive to the aqueous and chloroform fractions of A. indica, and the fractions showed biofilm inhibition at concentrations of 12.50, 1.562, and 0.39 mg/mL. In the in-vivo study, the extract and the chloroform fraction had significant (p < 0.05) effects on the number of viable S. typhi recovered from the blood and stopped salmonellosis after 6 days of treatment of rats at 500 mg/kg b.w. Treatment of infected rats with the chloroform and aqueous fractions of A. indica normalized the haematological parameters in the animals. Similarly, treatment with fractions of the plant sustained a normal antioxidant status when compared with the normal control group. The chloroform and ethyl-acetate fractions of A. indica reversed the liver and intestinal degeneration induced by S. typhi infection in rats. The present investigation indicated that the aqueous and chloroform fractions of A. indica have the potential to provide an effective treatment for salmonellosis, including typhoid fever. The results of the study may justify the ethno-medicinal use of the extract in traditional medicine for the treatment of typhoid and salmonella infections.

Keywords: Azadirachta indica L, salmonella, typhoid, leaf fractions

Procedia PDF Downloads 132
2067 Deep Reinforcement Learning with Leonard-Ornstein Processes Based Recommender System

Authors: Khalil Bachiri, Ali Yahyaouy, Nicoleta Rogovschi

Abstract:

Improved user experience is a goal of contemporary recommender systems. Recommender systems are starting to incorporate reinforcement learning, since it naturally serves this goal of increasing a user's reward every session. In this paper, we examine the most effective reinforcement learning agent tactics on the MovieLens (1M) dataset, balancing precision and the variety of recommendations. The absence of variability in final predictions makes simplistic techniques, although able to optimize ranking quality criteria, worthless for consumers of the recommendation system. Utilizing the stochasticity of Ornstein-Uhlenbeck processes, our suggested strategy encourages the agent to investigate its surroundings. Our experiments demonstrate that raising the NDCG (Normalized Discounted Cumulative Gain) and HR (Hit Rate) criteria without lowering the Ornstein-Uhlenbeck process drift coefficient enhances the diversity of suggestions.
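An Ornstein-Uhlenbeck exploration noise process of the kind commonly added to DDPG actions can be sketched as follows; the parameter values are conventional defaults, not the paper's settings:

```python
import random

class OUNoise:
    """Ornstein-Uhlenbeck exploration noise as commonly paired with
    DDPG: dx = theta * (mu - x) * dt + sigma * sqrt(dt) * N(0, 1).
    theta (drift) pulls the noise back to mu; sigma sets its scale."""
    def __init__(self, mu=0.0, theta=0.15, sigma=0.2, dt=1e-2, seed=0):
        self.mu, self.theta, self.sigma, self.dt = mu, theta, sigma, dt
        self.rng = random.Random(seed)
        self.x = mu

    def sample(self):
        self.x += (self.theta * (self.mu - self.x) * self.dt
                   + self.sigma * self.dt ** 0.5 * self.rng.gauss(0, 1))
        return self.x

noise = OUNoise()
# temporally correlated perturbations added to the actor's action
perturbed_actions = [0.5 + noise.sample() for _ in range(5)]
```

Because successive samples are correlated, the perturbed actions drift smoothly instead of jittering independently, which is what makes the exploration useful in continuous action spaces.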

Keywords: recommender systems, reinforcement learning, deep learning, DDPG, Leonard-Ornstein process

Procedia PDF Downloads 142
2066 Free Vibration Analysis of Pinned-Pinned and Clamped-Clamped Equal Strength Columns under Self-Weight and Tip Force Using Differential Quadrature Method

Authors: F. Waffo Tchuimmo, G. S. Kwandio Dongoua, C. U. Yves Mbono Samba, O. Dafounansou, L. Nana

Abstract:

The strength criterion is an important condition of great interest to guarantee the stability of structural elements. The present work studies the free vibration of an Euler-Bernoulli column of equal strength in compression, considering its own weight and an axial tip load in compression and tension, subjected to symmetrical boundary conditions. We use the differential quadrature method to investigate the first five natural frequency parameters of the column for the different forms of geometrical cross-section. The results of this work help in making a judicious choice of the type of cross-section and the better boundary condition to guarantee good stability of this type of column in civil constructions.

Keywords: free vibration, equal strength, self-weight, tip force, differential quadrature method

Procedia PDF Downloads 133
2065 A General Form of Characteristics Method Applied on Minimum Length Nozzles Design

Authors: Merouane Salhi, Mohamed Roudane, Abdelkader Kirad

Abstract:

In this work, we present a new form of the method of characteristics, a technique for solving partial differential equations. Typically, it applies to first-order equations; the aim of the method is to reduce a partial differential equation to a family of ordinary differential equations along which the solution can be integrated from some initial data. The present form is developed under real gas theory because, when the thermal and caloric imperfections of a gas increase, the specific heats and their ratio no longer remain constant and start to vary with the gas parameters: the gas ceases to be perfect, and its state equation becomes that of a real gas. The presented characteristic equations remain valid whatever the area or field of study. Here, the developed Prandtl-Meyer function is inserted into the mathematical system to obtain a new model in which the effect of stagnation pressure is taken into account. In this case, the effects of molecular size and intermolecular attraction forces intervene to correct the state equation, the thermodynamic parameters, and the value of the Prandtl-Meyer function. With the assumption that Berthelot's state equation accounts for molecular size and intermolecular force effects, expressions are developed for analyzing the supersonic flow of a thermally and calorically imperfect gas. The supersonic parameters depend directly on the stagnation parameters of the combustion chamber. The resolution has been made by the finite difference method using a predictor-corrector algorithm. As a result, the developed mathematical model is used to design 2D minimum length nozzles under the effect of the stagnation parameters of the fluid flow. For air, the nozzle shapes and characteristics obtained with the real gas theory are compared with those of the perfect gas (PG) and high temperature models.
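For reference, the classical Prandtl-Meyer function for a calorically perfect gas, which the real-gas treatment above corrects, is straightforward to evaluate:

```python
import math

def prandtl_meyer(M, gamma=1.4):
    """Prandtl-Meyer function nu(M) in radians for a calorically
    perfect gas; the paper's real-gas model modifies this relation."""
    if M < 1:
        raise ValueError("supersonic Mach number required")
    g = (gamma + 1) / (gamma - 1)
    return (math.sqrt(g) * math.atan(math.sqrt((M * M - 1) / g))
            - math.atan(math.sqrt(M * M - 1)))

nu_deg = math.degrees(prandtl_meyer(2.0))  # ~26.38 degrees for air
```

In minimum-length nozzle design, the maximum wall angle is nu(M_exit)/2, which is where this function enters the method of characteristics.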

Keywords: numerical methods, nozzles design, real gas, stagnation parameters, supersonic expansion, the characteristics method

Procedia PDF Downloads 242
2064 Self-Organizing Maps for Exploration of Partially Observed Data and Imputation of Missing Values in the Context of the Manufacture of Aircraft Engines

Authors: Sara Rejeb, Catherine Duveau, Tabea Rebafka

Abstract:

To monitor the production process of turbofan aircraft engines, multiple measurements of various geometrical parameters are systematically recorded on manufactured parts. Engine parts are subject to extremely high standards as they can impact the performance of the engine. Therefore, it is essential to analyze these databases to better understand the influence of the different parameters on the engine's performance. Self-organizing maps are unsupervised neural networks which achieve two tasks simultaneously: they visualize high-dimensional data by projection onto a 2-dimensional map and provide clustering of the data. This technique has become very popular for data exploration since it provides easily interpretable results and a meaningful global view of the data. As such, self-organizing maps are usually applied to aircraft engine condition monitoring. As databases in this field are huge and complex, they naturally contain multiple missing entries for various reasons. The classical Kohonen algorithm to compute self-organizing maps is conceived for complete data only. A naive approach to deal with partially observed data consists in deleting items or variables with missing entries. However, this requires a sufficient number of complete individuals to be fairly representative of the population; otherwise, deletion leads to a considerable loss of information. Moreover, deletion can also induce bias in the analysis results. Alternatively, one can first apply a common imputation method to create a complete dataset and then apply the Kohonen algorithm. However, the choice of the imputation method may have a strong impact on the resulting self-organizing map. Our approach is to address simultaneously the two problems of computing a self-organizing map and imputing missing values, as these tasks are not independent. In this work, we propose an extension of self-organizing maps for partially observed data, referred to as missSOM. 
First, we introduce a criterion to be optimized, that aims at defining simultaneously the best self-organizing map and the best imputations for the missing entries. As such, missSOM is also an imputation method for missing values. To minimize the criterion, we propose an iterative algorithm that alternates the learning of a self-organizing map and the imputation of missing values. Moreover, we develop an accelerated version of the algorithm by entwining the iterations of the Kohonen algorithm with the updates of the imputed values. This method is efficiently implemented in R and will soon be released on CRAN. Compared to the standard Kohonen algorithm, it does not come with any additional cost in terms of computing time. Numerical experiments illustrate that missSOM performs well in terms of both clustering and imputation compared to the state of the art. In particular, it turns out that missSOM is robust to the missingness mechanism, which is in contrast to many imputation methods that are appropriate for only a single mechanism. This is an important property of missSOM as, in practice, the missingness mechanism is often unknown. An application to measurements on one type of part is also provided and shows the practical interest of missSOM.
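The alternating "codebook step / imputation step" idea can be sketched as follows; this simplified batch version (no neighborhood function, deterministic initialization, `None` marking missing entries) illustrates the scheme and is not the authors' missSOM code:

```python
def miss_som(data, n_units=2, iters=20):
    """Alternate fitting unit codebooks and imputing missing entries
    (None) from each row's best-matching unit (BMU)."""
    dim = len(data[0])
    # start missing entries at the column mean of observed values
    filled = [list(r) for r in data]
    for j in range(dim):
        obs = [r[j] for r in data if r[j] is not None]
        m = sum(obs) / len(obs)
        for r in filled:
            if r[j] is None:
                r[j] = m
    # initialise codebooks from the first complete rows
    complete = [list(r) for r in data if all(v is not None for v in r)]
    codebook = [list(complete[i % len(complete)]) for i in range(n_units)]

    def bmu(row):
        # best-matching unit computed on observed coordinates only
        return min(range(n_units), key=lambda u: sum(
            (x - w) ** 2 for x, w in zip(row, codebook[u]) if x is not None))

    for _ in range(iters):
        units = [bmu(r) for r in data]
        # codebook step: each unit moves to the mean of its rows
        for u in range(n_units):
            rows = [filled[i] for i, b in enumerate(units) if b == u]
            if rows:
                codebook[u] = [sum(r[j] for r in rows) / len(rows)
                               for j in range(dim)]
        # imputation step: refill missing entries from the BMU
        for i, r in enumerate(data):
            for j in range(dim):
                if r[j] is None:
                    filled[i][j] = codebook[units[i]][j]
    return codebook, filled

# two clusters; the last row has a missing first coordinate
data = [(0.10, 0.10), (0.10, 0.12), (0.90, 0.90), (None, 0.88)]
codebook, filled = miss_som(data)
```

Because the missing entry's BMU is found from its observed coordinate alone, the imputed value converges to the codebook of its own cluster rather than the global mean.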

Keywords: imputation method of missing data, partially observed data, robustness to missingness mechanism, self-organizing maps

Procedia PDF Downloads 151
2063 Investigation of Static Stability of Soil Slopes Using Numerical Modeling

Authors: Seyed Abolhasan Naeini, Elham Ghanbari Alamooti

Abstract:

The static stability of soil slopes has been investigated using numerical simulation with a finite element code, ABAQUS, and the safety factors of the slopes were obtained under the static load of a 10-storey building. The embankments have the same soil condition but different loading distances from the slope heel. The numerical method for estimating the safety factors is the Strength Reduction Method (SRM), with the Mohr-Coulomb criterion used in the simulations. Two steps were used for measuring the safety factors of the slopes: the first under gravity loading, and the second under the static loading of a building near the slope heel. The safety factors measured by the SRM are compared with the values from the Limit Equilibrium Method (LEM). Results show that there is good agreement between the SRM and the LEM. It is also seen that the safety factors increase with increasing distance from the slope heel.

Keywords: limit equilibrium method, static stability, soil slopes, strength reduction method

Procedia PDF Downloads 163
2062 Comparative Performance of Standing Whole Body Monitor and Shielded Chair Counter for In-vivo Measurements

Authors: M. Manohari, S. Priyadharshini, K. Bajeer Sulthan, R. Santhanam, S. Chandrasekaran, B. Venkatraman

Abstract:

The in-vivo monitoring facility at the Indira Gandhi Centre for Atomic Research (IGCAR), Kalpakkam, caters to the monitoring of the internal exposure of occupational radiation workers from the various radioactive facilities of IGCAR. Internal exposure measurement is done using NaI(Tl)-based scintillation detectors. Two types of whole-body counters, namely a Shielded Chair Counter (SC) and a Standing Whole-Body Monitor (SWBM), are being used. The shielded chair is based on a NaI(Tl) detector 20.3 cm in diameter and 10.15 cm thick. The chair of the system is shielded using lead shots of 10 cm lead equivalent and the detector with 8 cm lead bricks. Counting is done in sitting geometry, and calibration is done using a 95th-percentile BOMAB phantom. The minimum detectable activity (MDA) for 137Cs for a 60 s count is 1150 Bq. The SWBM has two NaI(Tl) detectors of size 10.16 x 10.16 x 40.64 cm3 positioned serially, one over the other, with a shielding thickness of 5 cm lead equivalent. Counting is done in stand-up geometry, and calibration is done with an Ortec phantom having a uniform distribution of mixed radionuclides for the thyroid, thorax, and pelvis. The efficiency of the SWBM is 2.4 to 3.5 times higher than that of the shielded chair in the energy range of 279 to 1332 keV, and an MDA of 250 Bq for 137Cs can be achieved with a counting time of 60 s. The MDA for 131I in the thyroid was estimated as 100 Bq from the whole-body MDA for an intake one day earlier. The standing whole-body monitor is better in terms of efficiency, MDA, and ease of positioning. In emergency situations, the optimal MDAs for an in-vivo monitoring service are 1000 Bq for 137Cs and 100 Bq for 131I; hence, the SWBM is more suitable for the rapid screening of workers as well as the public in case of an emergency. When a person reports for counting, there is a potential for external contamination; in the SWBM it is feasible to discriminate such contamination, as the subject can be counted in either anterior or posterior geometry, which is not possible in the SC.
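MDA figures of the kind quoted above are conventionally derived from the Currie detection limit; a sketch with hypothetical background, efficiency, and counting-time values (the facility's actual calibration factors are not given in the abstract):

```python
import math

def mda_bq(background_counts, efficiency, count_time_s, gamma_yield=1.0):
    """Minimum detectable activity in Bq from the Currie detection
    limit L_D = 2.71 + 4.65 * sqrt(B); B is the background counts in
    the region of interest, efficiency is absolute counting efficiency."""
    l_d = 2.71 + 4.65 * math.sqrt(background_counts)
    return l_d / (efficiency * gamma_yield * count_time_s)

# e.g. 400 background counts, 1% absolute efficiency, 60 s count
mda = mda_bq(background_counts=400, efficiency=0.01, count_time_s=60)
```

The higher efficiency of the SWBM lowers the MDA directly, which is why it reaches 250 Bq where the shielded chair needs 1150 Bq for the same counting time.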

Keywords: minimum detectable activity, shielded chair, shielding thickness, standing whole body monitor

Procedia PDF Downloads 46
2061 Review and Evaluation of Viscous Damper on Structural Responses

Authors: Ehsan Sadie

Abstract:

Developments in the field of damping technology and advances in the area of dampers for equipping many structures have been the result of efforts and testing by researchers in this field. In this paper, a model of a two-story building is simulated with the help of SAP2000 software, and the effect of a viscous damper on the performance of the structure is explained. The effect of the damper on the response of the structure, namely the horizontal displacement of the floors, is investigated. The structure is modeled once without a damper and again with a damper, and the results are presented in the form of tables and graphs. Since the seismic behavior of the structure is studied, the responses show the beneficial effect of viscous dampers in reducing the displacement of floors; the energy dissipation in the structure with dampers is also significant compared to structures without dampers. Therefore, it is economical to use viscous dampers in areas that have a higher relative earthquake risk.

Keywords: bending frame, displacement criterion, dynamic response spectra, earthquake, non-linear history spectrum, SAP2000 software, structural response, viscous damper

Procedia PDF Downloads 115
2060 Reliability and Validity of a Portable Inertial Sensor and Pressure Mat System for Measuring Dynamic Balance Parameters during Stepping

Authors: Emily Rowe

Abstract:

Introduction: Balance assessments can be used to help evaluate a person’s risk of falls, determine causes of balance deficits and inform intervention decisions. It is widely accepted that instrumented quantitative analysis can be more reliable and specific than semi-qualitative ordinal scales or itemised scoring methods. However, the uptake of quantitative methods is hindered by expense, lack of portability, and set-up requirements. During stepping, foot placement is actively coordinated with the body centre of mass (COM) kinematics during pre-initiation. Based on this, the potential to use COM velocity just prior to foot off and foot placement error as an outcome measure of dynamic balance is currently being explored using complex 3D motion capture. Inertial sensors and pressure mats might be more practical technologies for measuring these parameters in clinical settings. Objective: The aim of this study was to test the criterion validity and test-retest reliability of a synchronised inertial sensor and pressure mat-based approach to measure foot placement error and COM velocity while stepping. Methods: Trials were held with 15 healthy participants who each attended two sessions. The trial task was to step onto one of 4 targets (2 for each foot) multiple times in a random, unpredictable order. The stepping target was cued using an auditory prompt and electroluminescent panel illumination. Data were collected using 3D motion capture and a combined inertial sensor-pressure mat system simultaneously in both sessions. To assess the reliability of each system, ICC estimates and their 95% confidence intervals were calculated based on a mean-rating (k = 2), absolute-agreement, 2-way mixed-effects model. To test the criterion validity of the combined inertial sensor-pressure mat system against the motion capture system, multi-factorial two-way repeated measures ANOVAs were carried out.
Results: It was found that foot placement error was not reliably measured between sessions by either system (ICC 95% CIs; motion capture: 0 to >0.87 and pressure mat: <0.53 to >0.90). This could be due to genuine within-subject variability given the nature of the stepping task and brings into question the suitability of average foot placement error as an outcome measure. Additionally, results suggest the pressure mat is not a valid measure of this parameter since it was statistically significantly different from and much less precise than the motion capture system (p=0.003). The inertial sensor was found to be a moderately reliable (ICC 95% CIs >0.46 to >0.95) but not valid measure for anteroposterior and mediolateral COM velocities (AP velocity: p=0.000, ML velocity target 1 to 4: p=0.734, 0.001, 0.000 & 0.376). However, it is thought that with further development, the COM velocity measure validity could be improved. Possible options which could be investigated include whether there is an effect of inertial sensor placement with respect to pelvic marker placement or implementing more complex methods of data processing to manage inherent accelerometer and gyroscope limitations. Conclusion: The pressure mat is not a suitable alternative for measuring foot placement errors. The inertial sensors have the potential for measuring COM velocity; however, further development work is needed.
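The ICC model described above (mean-rating k = 2, absolute agreement, two-way model) corresponds to the Shrout-Fleiss ICC(2,k) and can be computed directly from the ANOVA mean squares; the score matrix below is made up for illustration:

```python
def icc_2k(scores):
    """Shrout-Fleiss ICC(2,k): two-way model, absolute agreement,
    mean of the k measurements, from an n-subjects x k-sessions matrix."""
    n, k = len(scores), len(scores[0])
    grand = sum(sum(r) for r in scores) / (n * k)
    row_m = [sum(r) / k for r in scores]
    col_m = [sum(s[j] for s in scores) / n for j in range(k)]
    ss_rows = k * sum((m - grand) ** 2 for m in row_m)   # subjects
    ss_cols = n * sum((m - grand) ** 2 for m in col_m)   # sessions
    ss_tot = sum((x - grand) ** 2 for r in scores for x in r)
    mse = (ss_tot - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    return (msr - mse) / (msr + (msc - mse) / n)

# made-up session-1 / session-2 scores for three participants
icc = icc_2k([[1.0, 2.0], [2.0, 3.0], [3.0, 5.0]])
```

The (MSC - MSE)/n term is what makes this an absolute-agreement coefficient: systematic session-to-session shifts lower the ICC rather than being ignored.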

Keywords: dynamic balance, inertial sensors, portable, pressure mat, reliability, stepping, validity, wearables

Procedia PDF Downloads 153
2059 Exchanging Radiology Reporting System with Electronic Health Record: Designing a Conceptual Model

Authors: Azadeh Bashiri

Abstract:

Introduction: In order to better design the electronic health record system in Iran, health information systems must be integrated on the basis of a common language so that information can be interpreted and exchanged with this system. Background: This study provides a conceptual model of a radiology reporting system using the Unified Modeling Language (UML). The proposed model can solve the problem of integrating this information system with the electronic health record system; by using this model and designing its services accordingly, the system can easily connect to the electronic health record in Iran and facilitate the transfer of radiology report data. Methods: This is a cross-sectional study that was conducted in 2013. The study population consisted of 22 experts working at the imaging center of Imam Khomeini Hospital in Tehran, and the sample coincided with the population. The research tool was a questionnaire prepared by the researcher to determine the information requirements. Content validity and the test-retest method were used to measure the validity and reliability of the questionnaire, respectively. Data were analyzed with an average index using SPSS, and Visual Paradigm software was used to design the conceptual model. Results: Based on the requirements assessment of the experts and related texts, administrative, demographic, and clinical data, radiological examination results, and, if an anesthesia procedure is performed, anesthesia data are suggested as the minimum data set for the radiology report, and a class diagram was designed based on them. Also, by identifying the radiology reporting system process, a use case diagram was drawn. Conclusion: Given the application of radiology reports in the electronic health record system for diagnosing and managing the clinical problems of the patient, the conceptual model provided for the radiology reporting system allows it to be designed systematically, and the problem of data sharing between these systems and the electronic health record system would be eliminated.

Keywords: structured radiology report, information needs, minimum data set, electronic health record system in Iran

Procedia PDF Downloads 253
2058 Discrete Crack Modeling of Side Face FRP-Strengthened Concrete Beam

Authors: Shahriar Shahbazpanahi, Mohammad Hemen Jannaty, Alaleh Kamgar

Abstract:

Shear strengthening of concrete structures can be carried out with external fibre reinforced polymer (FRP). In the present investigation, a new fracture mechanics model is developed for a concrete beam strengthened with FRP on its side faces. The discrete crack is simulated by a spring element with softening behavior ahead of the crack tip to model the cohesive zone in the concrete, and a truss element parallel to the spring element simulates the energy dissipation rate of the FRP. The strain energy release rate is calculated directly using a virtual crack closure technique, and the crack propagation criterion is then presented. The results are found acceptable when compared with previous experimental results and ABAQUS software data. It is observed that, at the same load, the length of the fracture process zone (FPZ) increases when FRP is applied to the side faces, in comparison with that of the control beam.
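The virtual crack closure technique mentioned above computes the strain energy release rate from the nodal force at the crack tip and the crack-opening displacement just behind it; a standard two-dimensional mode-I form (a generic textbook expression, not necessarily the authors' exact discretization) is:

```latex
% Mode-I strain energy release rate by the virtual crack closure technique (2D):
%   F_y      : nodal force at the crack tip,
%   \Delta v : crack-opening displacement one element behind the tip,
%   \Delta a : element length along the crack, b : specimen thickness.
G_I = \frac{F_y \,\Delta v}{2\, b\, \Delta a}
```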

Keywords: FPZ, fracture, FRP, shear

Procedia PDF Downloads 534
2057 Phase II Monitoring of First-Order Autocorrelated General Linear Profiles

Authors: Yihua Wang, Yunru Lai

Abstract:

Statistical process control has been successfully applied in a variety of industries. In some applications, the quality of a process or product is better characterized and summarized by a functional relationship between a response variable and one or more explanatory variables. A collection of this type of data is called a profile. Profile monitoring is used to understand and check the stability of this relationship or curve over time. Existing profile monitoring studies commonly assume independent error terms. However, in many applications, the profile data show correlations over time. Therefore, in this study we focus on a general linear regression model with first-order autocorrelation between profiles. We propose an exponentially weighted moving average (EWMA) charting scheme to monitor this type of profile. The simulation study shows that our proposed methods outperform the existing schemes based on the average run length criterion.
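A minimal sketch of the EWMA charting idea described above: each incoming profile is fitted by least squares, the squared deviation of the fitted coefficients from their in-control values is smoothed by the EWMA recursion, and an alarm is raised when the smoothed statistic crosses a limit. The smoothing constant, the statistic, and the crude limit calibration below are illustrative assumptions, not the paper's scheme (which calibrates limits to a target average run length and models the autocorrelation explicitly).

```python
import numpy as np

def ewma_profile_chart(profiles, x, beta0, lam=0.2, L=3.0):
    """EWMA monitoring of simple linear profiles y = b0 + b1*x + e.

    For each profile, fit OLS coefficients and compute the squared
    deviation from the in-control coefficients beta0, then smooth it
    with z_t = lam*s_t + (1-lam)*z_{t-1}.  The control limit here is
    a rough Phase I estimate from the first 10 profiles (assumed to
    be in control) -- a simplification for illustration only.
    """
    X = np.column_stack([np.ones_like(x), x])
    stats = []
    for y in profiles:
        b, *_ = np.linalg.lstsq(X, y, rcond=None)
        stats.append(float(np.sum((b - beta0) ** 2)))
    stats = np.asarray(stats)
    z, ewma = 0.0, []
    for s in stats:
        z = lam * s + (1.0 - lam) * z
        ewma.append(z)
    limit = np.mean(stats[:10]) + L * np.std(stats[:10])
    return np.asarray(ewma), limit
```

A shift in the profile intercept pushes the EWMA statistic above the limit within a few samples, which is the behavior the run-length comparison in the abstract quantifies.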

Keywords: autocorrelation, EWMA control chart, general linear regression model, profile monitoring

Procedia PDF Downloads 460
2056 Tsunami Wave Height and Flow Velocity Calculations Based on Density Measurements of Boulders: Case Studies from Anegada and Pakarang Cape

Authors: Zakiul Fuady, Michaela Spiske

Abstract:

Inundation events such as storms and tsunamis can leave onshore sedimentary evidence like sand deposits or large boulders. These deposits store indirect information on the related inundation parameters (e.g., flow velocity, flow depth, wave height). One tool to reveal these parameters is inverse modeling, which uses the physical characteristics of the deposits to infer the magnitude of inundation. This study used boulders deposited by the 2004 Indian Ocean Tsunami in Thailand (Pakarang Cape) and by a historical tsunami event that inundated the outer British Virgin Islands (Anegada). For the largest boulder found at Pakarang Cape, with a volume of 26.48 m³, the required minimum tsunami wave height is 0.44 m and the minimum storm wave height is 1.75 m (for a bulk density of 1.74 g/cm³). The highest tsunami wave height at Pakarang Cape is 0.45 m, with a corresponding storm wave height of 1.8 m, required to transport a 20.07 m³ boulder. On Anegada, the largest boulder, with a diameter of 2.7 m, is a single coral head (species Diploria sp.) with a bulk density of 1.61 g/cm³; it requires a minimum tsunami wave height of 0.31 m and a storm wave height of 1.25 m. The highest required tsunami wave height on Anegada is 2.12 m, for a boulder with a bulk density of 2.46 g/cm³ (volume 0.0819 m³), and the highest storm wave height is 5.48 m (volume 0.216 m³) for the same bulk density; the coral type is limestone. Generally, the higher the bulk density, volume, and weight of the boulders, the higher the minimum tsunami and storm wave heights required to initiate transport. Transporting the largest boulder at Pakarang Cape requires a flow velocity of 4.05 m/s by Nott's equation (2003) and 3.57 m/s by that of Nandasena et al. (2011), whereas on Anegada both equations require 3.41 m/s to transport the boulder with a diameter of 2.7 m. Thus, boulder transport equations must be handled with caution: first, they make many assumptions and simplifications; second, the physical boulder parameters, such as density and volume, need to be determined carefully to minimize errors.
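The flow velocities quoted above come from the published boulder-transport equations (Nott 2003; Nandasena et al. 2011). Purely as an illustration of the underlying physics — not those authors' actual formulas — a minimal initiation-of-sliding balance equates flow drag on the boulder face with Coulomb friction on its submerged weight, giving u = sqrt(2·μ·(ρs − ρw)·V·g / (ρw·Cd·A)). The drag and friction coefficients and the face area below are assumed values.

```python
import math

def sliding_velocity(rho_s, volume, face_area, rho_w=1025.0,
                     drag_coeff=1.95, friction=0.7, g=9.81):
    """Minimum flow velocity (m/s) to slide a fully submerged boulder.

    Simplified balance of drag 0.5*rho_w*Cd*A*u**2 against friction
    mu*(rho_s - rho_w)*V*g on the submerged weight.  Lift and bed-slope
    terms present in the published equations are omitted; coefficient
    values are illustrative assumptions.  Densities in kg/m^3,
    volume in m^3, face_area in m^2.
    """
    resisting = friction * (rho_s - rho_w) * volume * g
    return math.sqrt(2.0 * resisting / (rho_w * drag_coeff * face_area))
```

For the 26.48 m³ Pakarang Cape boulder (bulk density 1.74 g/cm³ = 1740 kg/m³) and an assumed 10 m² flow-facing area, this toy balance gives a velocity of the same order as the quoted values, which is all such simplified balances can promise.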

Keywords: tsunami wave height, storm wave height, flow velocity, boulders, Anegada, Pakarang Cape

Procedia PDF Downloads 237
2055 Tailoring the Parameters of the Quantum MDS Codes Constructed from Constacyclic Codes

Authors: Jaskarn Singh Bhullar, Divya Taneja, Manish Gupta, Rajesh Kumar Narula

Abstract:

The existence conditions of dual-containing constacyclic codes have opened a new path for finding quantum maximum distance separable (MDS) codes. Using these conditions, the parameters of quantum MDS codes of length n=(q²+1)/2 were improved. A class of quantum MDS codes of length n=(q²+q+1)/h, where h>1 is an odd prime, has also been constructed; these codes have large minimum distance and are new in the sense that they are not available in the literature.
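The Singleton bound named in the keywords fixes what "MDS" means here: an [[n, k, d]]_q quantum code satisfies

```latex
k \le n - 2d + 2,
```

and a code is quantum MDS exactly when equality holds, i.e. k = n - 2d + 2; the constructions in the abstract attain this equality for the stated lengths.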

Keywords: Hermitian construction, constacyclic codes, cyclotomic cosets, quantum MDS codes, Singleton bound

Procedia PDF Downloads 388
2054 On the Study of All Waterloo Automaton Semilattices

Authors: Mikhail Abramyan, Boris Melnikov

Abstract:

The aim is to study the set of subsets of grids of the Waterloo automaton and the set of covering automata defined by those grid subsets. The study was carried out using NFALib, a library for working with nondeterministic finite automata implemented in C# by one of the authors (M. Abramyan). The results are regularities obtained when considering semilattices of covering automata for the Waterloo automaton. A complete description of the obtained semilattices is given from the point of view of equivalence of the covering automata to the original Waterloo automaton, and a criterion for the equivalence of a covering automaton to the Waterloo automaton is formulated in terms of properties of the subset of grids defining the covering automaton. The relevance of the subject area under consideration stems from the need to study the set of regular languages and, in particular, to describe their various subclasses, as well as the problems that may arise in some of those subclasses. Among other things, this will make it possible to describe new algorithms for the equivalent transformation of nondeterministic finite automata.
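The NFALib library mentioned above is not reproduced here, but the classical subset construction that underlies equivalence checks between nondeterministic automata (such as a covering automaton and the original Waterloo automaton) can be sketched as follows; the tiny transition relation in the usage note is an invented example.

```python
from collections import deque

def subset_construction(states, alphabet, delta, start, accepting):
    """Determinize an NFA by the classical subset construction.

    delta maps (state, symbol) -> set of successor states; missing
    keys mean no transition.  DFA states are frozensets of NFA
    states.  Epsilon transitions are not handled in this sketch.
    Returns (reachable DFA states, DFA transitions, start, accepting).
    """
    start_set = frozenset([start])
    dfa_delta, accept = {}, set()
    queue = deque([start_set])
    seen = {start_set}
    while queue:
        cur = queue.popleft()
        if cur & accepting:            # accepts if any NFA state accepts
            accept.add(cur)
        for a in alphabet:
            nxt = frozenset(s for q in cur for s in delta.get((q, a), ()))
            dfa_delta[(cur, a)] = nxt
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen, dfa_delta, start_set, accept
```

Two automata can then be compared for language equivalence by determinizing and minimizing both, which is the kind of check the equivalence criterion in the abstract replaces with a structural condition on grid subsets.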

Keywords: nondeterministic finite automata, universal automaton, grid, covering automaton, equivalent transformation algorithms, the Waterloo automaton

Procedia PDF Downloads 87
2053 The Implementation of Character Education in Code Riverbanks, Special Region of Yogyakarta, Indonesia

Authors: Ulil Afidah, Muhamad Fathan Mubin, Firdha Aulia

Abstract:

The Code riverbanks in Yogyakarta are a settlement area of middle- to lower-class residents, and the socio-economic situation affects the behavior of the community. This research aimed to find and explain the implementation and assessment of character education in elementary schools on the Code riverside in the Yogyakarta region of Indonesia. This is a qualitative study whose subjects were children of the Code riverbanks, Yogyakarta. The data were collected through interviews and document studies and analyzed qualitatively using the interactive analysis model of Miles and Huberman. The results show that: (1) the learning process of character education integrated all aspects, such as democratic and interactive learning sessions, and introduced role models to the students; (2) the assessment of character education was done by the teacher based on the teaching and learning process and on activities outside the classroom, with criteria covering three aspects: cognitive, affective, and psychomotor.

Keywords: character, Code riverbanks, education, Yogyakarta

Procedia PDF Downloads 248
2052 Shield Tunnel Excavation Simulation of a Case Study Using a So-Called 'Stress Relaxation' Method

Authors: Shengwei Zhu, Alireza Afshani, Hirokazu Akagi

Abstract:

Ground surface settlement induced by shield tunneling is attracting increasing attention as shield tunneling becomes a popular construction technique for tunnels in urban areas. This paper discusses a 2D longitudinal FEM simulation of a tunneling case study in Japan (Tokyo Metro Yurakucho Line). Tunneling-induced field data had already been collected and is used here for comparison and evaluation purposes. The model considers earth pressure, face pressure, backfill grouting, an elastic tunnel lining, and the Mohr-Coulomb failure criterion for the soil elements. A method called 'stress relaxation' is also exploited to simulate the gradual tunneling excavation. Ground surface settlements obtained from the numerical results using the introduced method are then compared with the measurement data.

Keywords: 2D longitudinal FEM model, tunneling case study, stress relaxation, shield tunneling excavation

Procedia PDF Downloads 330
2051 Multi-Criteria Goal Programming Model for Sustainable Development of India

Authors: Irfan Ali, Srikant Gupta, Aquil Ahmed

Abstract:

Every country needs sustainable development (SD) for its economic growth, achieved by forming suitable policies and initiative programs for the development of the country's different sectors. This paper presents the modeling and optimization of different sectors of India, which together form a multi-criterion model. We developed a fractional goal programming (FGP) model that provides efficient allocation of resources while simultaneously achieving sustainability goals for gross domestic product (GDP), electricity consumption (EC), and greenhouse gas (GHG) emissions by the year 2030. A weighted FGP model is also presented to obtain varying solutions according to the priorities set by the policy maker for the future goals of GDP growth, EC, and GHG emissions. The presented models provide useful insight to decision makers for implementing strategies in the different sectors.
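The weighted goal programming idea can be sketched with a toy linear (not fractional) program: each goal gets under- and over-achievement deviation variables, and the weighted sum of deviations is minimized. The two goals, targets, and weights below are invented numbers, not the paper's India sectoral data.

```python
import numpy as np
from scipy.optimize import linprog

# Toy weighted goal program.  Decision variables x1, x2 >= 0; goals:
#   g1:  x1 +  x2  -> target 10        g2: 2*x1 + x2 -> target 12
# Each goal i gets deviations d_i^-, d_i^+ with
#   a_i . x + d_i^- - d_i^+ = target_i,
# and we minimise the weighted sum of all deviations.
weights = [1.0, 1.0]
#              x1  x2  d1-         d1+         d2-         d2+
c = np.array([ 0,  0,  weights[0], weights[0], weights[1], weights[1]])
A_eq = np.array([[1, 1, 1, -1, 0,  0],
                 [2, 1, 0,  0, 1, -1]])
b_eq = np.array([10.0, 12.0])
res = linprog(c, A_eq=A_eq, b_eq=b_eq, method="highs")
# Both goals are attainable simultaneously here (x1 = 2, x2 = 8),
# so the optimal weighted deviation is zero.
```

Raising one goal's weight relative to the others reproduces the priority-dependent solutions the weighted model in the abstract is designed to deliver when the goals conflict.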

Keywords: sustainable and economic development, multi-objective fractional programming, fuzzy goal programming, weighted fuzzy goal programming

Procedia PDF Downloads 223
2050 Biomechanics of Atlantoaxial Complex for Various Posterior Fixation Techniques

Authors: Arun C. O., Shrijith M. B., Thakur Rajesh Singh

Abstract:

The study aims to analyze and understand the biomechanical stability of the atlantoaxial complex under different posterior fixation techniques using the finite element method in the Indian context. Conventional cadaveric studies show heterogeneity in biomechanical properties. The finite element method, a versatile numerical tool, is widely used for biomechanical analysis of the atlantoaxial complex; however, the biomechanics of posterior fixation techniques for Indian subjects is missing from the literature. Studying this context is essential because bone density and vertebral geometry vary from region to region, requiring different screw lengths and affecting the range of motion (ROM) and the stresses generated. The current study uses CT images to develop a 3D finite element model of the C1-C2 geometry without ligaments. Instrumentation is added to this geometry to develop four models for four fixation techniques: C1-C2 TA, C1LM-C2PS, C1LM-C2Pars, and C1LM-C2TL. To simulate flexion, extension, lateral bending, and axial rotation, a moment of 1.5 Nm is applied to C1 while the bottom nodes of C2 are fixed. The ROM is then compared with that of the unstable model (without ligaments). All the fixation techniques showed more than a 97 percent reduction in ROM. The von Mises stresses developed in the screw constructs are also obtained. From the studies, it is observed that the transarticular (TA) technique is the most stable in lateral bending, while C1LM-C2 Translaminar is the most stable in flexion/extension. The von Mises stresses are lowest for the transarticular technique in lateral bending and axial rotation, whereas the stresses in the C2 pars construct are lowest in flexion/extension. On average, the TA technique is stable in all motions, and the stresses in its constructs are low. The transarticular technique is thus found to be the best of the four fixation techniques for Indian subjects.

Keywords: biomechanics, cervical spine, finite element model, posterior fixation

Procedia PDF Downloads 143
2049 Planning a Haemodialysis Process by Minimum Time Control of Hybrid Systems with Sliding Motion

Authors: Radoslaw Pytlak, Damian Suski

Abstract:

The aim of the paper is to provide a computational tool for planning a haemodialysis process. It is shown that optimization methods can be used to obtain the most effective treatment focused on removing both urea and phosphorus during the process. In order to achieve that, the IV-compartment model of phosphorus kinetics is applied. This kinetics model takes into account a rebound phenomenon that can occur during haemodialysis and results in a hybrid model of the process. Furthermore, vector fields associated with the model equations are such that it is very likely that using the most intuitive objective functions in the planning problem could lead to solutions which include sliding motions. Therefore, building computational tools for solving the problem of planning a haemodialysis process has required constructing numerical algorithms for solving optimal control problems with hybrid systems. The paper concentrates on minimum time control of hybrid systems since this control objective is the most suitable for the haemodialysis process considered in the paper. The presented approach to optimal control problems with hybrid systems is different from the others in several aspects. First of all, it is assumed that a hybrid system can exhibit sliding modes. Secondly, the system's motion on the switching surface is described by index 2 differential-algebraic equations, and that guarantees accurate tracking of the sliding motion surface. Thirdly, the gradients of the problem's functionals are evaluated with the help of adjoint equations. The adjoint equations presented in the paper take into account sliding motion and exhibit jump conditions at transition times. The optimality conditions in the form of the weak maximum principle for optimal control problems with hybrid systems exhibiting sliding modes and with piecewise constant controls are stated. The presented sensitivity analysis can be used to construct globally convergent algorithms for solving considered problems. The paper presents numerical results of solving the haemodialysis planning problem.
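The rebound phenomenon the authors model arises because solute must diffuse from a peripheral compartment back into the blood after the dialyser stops clearing it. A minimal two-compartment sketch (far simpler than the paper's IV-compartment phosphorus model and its hybrid minimum-time formulation; all parameter values are illustrative assumptions) reproduces the effect qualitatively:

```python
import numpy as np
from scipy.integrate import solve_ivp

def two_compartment(t, c, Kc=0.8, Kd=3.0, V1=15.0, V2=25.0, t_dx=4.0):
    """Simplified two-compartment solute kinetics (L, L/h, hours).

    c[0] is the accessible (blood) concentration, c[1] the peripheral
    one; intercompartmental clearance Kc couples them, and dialyser
    clearance Kd acts only while t < t_dx.  The switch at t_dx is the
    kind of mode change that makes the full model a hybrid system.
    """
    c1, c2 = c
    clear = Kd * c1 if t < t_dx else 0.0
    dc1 = (Kc * (c2 - c1) - clear) / V1
    dc2 = Kc * (c1 - c2) / V2
    return [dc1, dc2]

sol = solve_ivp(two_compartment, (0.0, 8.0), [1.0, 1.0],
                t_eval=np.linspace(0.0, 8.0, 161), max_step=0.05)
# Blood concentration falls during dialysis (t < 4 h) and then
# rebounds toward the peripheral concentration after it ends.
```

The optimal control problem in the paper then chooses treatment parameters over time so that both urea and phosphorus targets are met in minimum time, subject to dynamics of this general (but richer) type.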

Keywords: haemodialysis planning process, hybrid systems, optimal control, sliding motion

Procedia PDF Downloads 194
2048 Collapse Performance of Steel Frame with Hysteretic Energy Dissipating Devices

Authors: Hyung-Joon Kim, Jin-Young Park

Abstract:

Energy dissipating devices (EDDs) have become popular as seismic-force-resisting systems for building structures. However, there is little information on the collapse capacities of frames employing EDDs, which are an important criterion for their seismic design. This study investigates the collapse capacities of steel frames with TADAS hysteretic energy dissipative devices (HEDDs), which are an alternative to steel braced frames. To do this, a 5-story steel ordinary concentrically braced frame and a steel frame with HEDDs are designed and modeled. Nonlinear dynamic analyses and an incremental dynamic analysis with 40 ground motions scaled to the maximum considered earthquake are carried out. The analysis results show a significant enhancement of the collapse capacities due to the introduction of HEDDs.

Keywords: collapse capacity, incremental dynamic analysis, steel braced frame, TADAS hysteretic energy dissipative device

Procedia PDF Downloads 482
2047 Integrated On-Board Diagnostic-II and Direct Controller Area Network Access for Vehicle Monitoring System

Authors: Kavian Khosravinia, Mohd Khair Hassan, Ribhan Zafira Abdul Rahman, Syed Abdul Rahman Al-Haddad

Abstract:

The CAN (controller area network) bus is a multi-master, message-broadcast system. The messages sent on the CAN bus communicate state information, referred to as signals, between different ECUs, which provides data consistency at every node of the system. OBD-II dongles based on the request-response method are the widespread solution among researchers for extracting sensor data from cars. Unfortunately, most past research does not consider the resolution and quantity of the input data extracted through OBD-II technology: the maximum feasible scan rate is only 9 queries per second, which provides 8 data points per second using the well-known ELM327 OBD-II dongle. This study aims to develop and design a programmable, latency-sensitive vehicle data acquisition system that improves modularity and flexibility in order to extract exact, trustworthy, and fresh car sensor data at higher frequency rates. To do so, the researcher must break apart, thoroughly inspect, and observe the internal network of the vehicle, which may cause severe damage to the vehicle's expensive ECUs during initial research, owing to intrinsic vulnerabilities of the CAN bus. The desired sensor data were collected from various vehicles utilizing a Raspberry Pi 3 as the computing and processing unit, using the OBD (request-response) and direct CAN methods at the same time. Two types of data were collected for this study: first, CAN bus frame data, comprising each line of hex data sent from an ECU; and second, OBD data, the limited data that can be requested from the ECU under standard conditions. The proposed system is a reconfigurable, human-readable, multi-task telematics device that can be fitted into any vehicle with minimum effort and minimum time lag in the data extraction process. The standard operating procedure for the experimental vehicle network test bench is developed and can be used for future vehicle network testing experiments.
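The request-response OBD path described above returns standardized SAE J1979 frames, whose decoding is independent of the acquisition hardware. A minimal sketch for two common mode-01 PIDs (the scalings shown are the standard J1979 ones; the frame values in the test are invented examples):

```python
def decode_mode01(frame):
    """Decode the data bytes of a SAE J1979 mode-01 response.

    frame is the response payload: 0x41 (mode 01 + 0x40), the PID,
    then that PID's data bytes A, B, ...  Only two common PIDs are
    handled in this sketch.
    """
    if frame[0] != 0x41:
        raise ValueError("not a mode-01 response")
    pid, data = frame[1], frame[2:]
    if pid == 0x0C:                      # engine RPM = (256*A + B) / 4
        return (256 * data[0] + data[1]) / 4.0
    if pid == 0x0D:                      # vehicle speed = A km/h
        return float(data[0])
    raise NotImplementedError(f"PID {pid:#04x} not handled")
```

The direct-CAN path in the abstract bypasses this request-response layer entirely and reads raw frames, which is where the higher data rates come from; raw signal scalings are then vehicle-specific rather than standardized.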

Keywords: CAN bus, OBD-II, vehicle data acquisition, connected cars, telemetry, Raspberry Pi3

Procedia PDF Downloads 201
2046 Robust Pattern Recognition via Correntropy Generalized Orthogonal Matching Pursuit

Authors: Yulong Wang, Yuan Yan Tang, Cuiming Zou, Lina Yang

Abstract:

This paper presents a novel sparse representation method for robust pattern classification. Generalized orthogonal matching pursuit (GOMP) is a recently proposed, efficient sparse representation technique. However, GOMP adopts the mean square error (MSE) criterion and assigns the same weight to all measurements, including both severely and slightly corrupted ones. To address this limitation, we propose an information-theoretic GOMP (ITGOMP) method that exploits the correntropy induced metric. The results show that ITGOMP can adaptively assign small weights to severely contaminated measurements and large weights to clean ones, respectively. An ITGOMP-based classifier is further developed for robust pattern classification. Experiments on public real datasets demonstrate the efficacy of the proposed approach.
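The adaptive weighting described above follows from the Gaussian-kernel correntropy: a residual e_i receives weight exp(-e_i²/(2σ²)), so corrupted measurements are exponentially down-weighted. A minimal sketch (the plain weighted least-squares step stands in for ITGOMP's greedy atom selection, and σ and the data are assumptions):

```python
import numpy as np

def correntropy_weights(residuals, sigma=1.0):
    """Gaussian-kernel correntropy weights: clean measurements
    (small residuals) get weight near 1, heavily corrupted ones get
    exponentially small weight."""
    r = np.asarray(residuals, dtype=float)
    return np.exp(-r ** 2 / (2.0 * sigma ** 2))

def weighted_ls(X, y, w):
    """One weighted least-squares step, as used inside an
    iteratively reweighted scheme (a simplification -- ITGOMP
    additionally selects dictionary atoms greedily)."""
    W = np.diag(w)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
```

With these weights, a single gross outlier barely moves the fit, which is the robustness property the MSE criterion in plain GOMP lacks.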

Keywords: correntropy induced metric, matching pursuit, pattern classification, sparse representation

Procedia PDF Downloads 355
2045 Spatio-Temporal Analysis and Mapping of Malaria in Thailand

Authors: Krisada Lekdee, Sunee Sammatat, Nittaya Boonsit

Abstract:

This paper proposes a GLMM with spatial and temporal effects for malaria data in Thailand. A Bayesian method is used for parameter estimation via Gibbs sampling MCMC. A conditional autoregressive (CAR) model is assumed to represent the spatial effects, and the temporal correlation is represented through the covariance matrix of the random effects. The quarterly malaria data were extracted from the Bureau of Epidemiology, Ministry of Public Health of Thailand. The factors considered are rainfall and temperature. The results show that rainfall and temperature are positively related to the malaria morbidity rate. The posterior means of the estimated morbidity rates are used to construct the malaria maps. The five highest morbidity rates (per 100,000 population) are in Trat (Q3, 111.70), Chiang Mai (Q3, 104.70), Narathiwat (Q4, 97.69), Chiang Mai (Q2, 88.51), and Chanthaburi (Q3, 86.82). According to the DIC criterion, the proposed model performs better than the GLMM with spatial effects but without temporal terms.
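A common proper form of the CAR prior mentioned above has precision matrix Q = τ(D − ρW), where W is the binary spatial adjacency matrix, D the diagonal matrix of neighbor counts, and |ρ| < 1 keeps Q positive definite. A minimal construction (an invented four-region line graph, not the Thai province map, and only the prior, not the Gibbs sampler):

```python
import numpy as np

def car_precision(W, rho=0.9, tau=1.0):
    """Precision matrix of a proper CAR prior: Q = tau * (D - rho*W).

    W is a symmetric 0/1 adjacency matrix; D holds the neighbour
    counts on its diagonal.  For |rho| < 1, Q is strictly diagonally
    dominant and hence positive definite, so the prior is proper.
    """
    W = np.asarray(W, dtype=float)
    D = np.diag(W.sum(axis=1))
    return tau * (D - rho * W)

# Four regions on a line: 1-2-3-4
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])
Q = car_precision(W)
```

In the Gibbs sampler, the spatial random effects are drawn from the multivariate normal with this precision, while the temporal correlation enters through the random-effects covariance as the abstract describes.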

Keywords: Bayesian method, generalized linear mixed model (GLMM), malaria, spatial effects, temporal correlation

Procedia PDF Downloads 454
2044 Functioning of Public Distribution System and Calorie Intake in the State of Maharashtra

Authors: Balasaheb Bansode, L. Ladusingh

Abstract:

The public distribution system (PDS) is an important component of food security. It is a massive welfare program undertaken by the Government of India and, since India is a federal state, implemented by the state governments. Its multiple objectives include eliminating hunger, reducing malnutrition, and making food consumption affordable. The program reaches the community level through various government agencies. The paper focuses on the accessibility of the PDS at the household level and on how the present policy framework results in exclusion and inclusion errors. It explores the sanctioned food grain quantity received at the household level under ration cards differentiated by an income criterion, and it highlights the types of corruption in food distribution generated by the PDS. The data used are of a secondary nature, from the 68th round of the NSSO conducted in 2012. Bivariate and multivariate techniques have been used to understand the functioning of the PDS and food consumption for this paper.

Keywords: calorie intake, entitled food quantity, poverty alleviation through PDS, target error

Procedia PDF Downloads 332