Search results for: evidence based practice
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 32321

17651 Gas Flow, Time, Distance Dynamic Modelling

Authors: A. Abdul-Ameer

Abstract:

The equations governing the distance, pressure, and volume-flow relationships for the pipeline transportation of gaseous mixtures are considered. A derivation based on differential calculus for an element of this system model is addressed. Solutions yielding the input-output response following pressure changes are reviewed. The technical problems associated with these analytical results are identified. Procedures that resolve these difficulties, thereby providing an attractive and simple analysis route, are outlined. Computed responses validating the calculated predictions are presented.
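The abstract does not reproduce the governing equations themselves. As a purely illustrative sketch, a single pipeline element is sometimes approximated by a first-order resistance-capacitance analogue (flow resistance R and line-pack storage C, both hypothetical parameters here, not values from the paper), whose step response to a pressure change can be integrated numerically:

```python
def pipeline_step_response(p_in, R, C, t_end, dt=0.01):
    """Euler integration of one lumped pipeline element:
    C * dp_out/dt = (p_in - p_out) / R, an RC analogue of flow
    resistance and line-pack storage. Returns (t, p_out) samples."""
    tau = R * C
    p_out = 0.0
    t = 0.0
    history = []
    while t <= t_end:
        history.append((t, p_out))
        p_out += dt * (p_in - p_out) / tau
        t += dt
    return history

# Normalized unit pressure step; response settles toward p_in.
resp = pipeline_step_response(p_in=1.0, R=2.0, C=1.5, t_end=15.0)
```

The time constant tau = R*C plays the role of the distance-dependent lag the abstract alludes to; a real pipeline model would chain many such elements.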

Keywords: pressure, distance, flow, dissipation, models

Procedia PDF Downloads 464
17650 An Intelligent Controller Augmented with Variable Zero Lag Compensation for Antilock Braking System

Authors: Benjamin Chijioke Agwah, Paulinus Chinaenye Eze

Abstract:

Antilock braking system (ABS) is one of the important contributions of the automobile industry, designed to ensure road safety by keeping vehicles steerable and stable during emergency braking. This paper presents a wheel slip-based intelligent controller with variable zero lag compensation for ABS. The controller is required to achieve fast, accurate wheel slip tracking during hard braking, eliminate chattering, and improve transient and steady-state performance, while shortening the stopping distance using an effective braking torque below the maximum allowable torque. The dynamics of a vehicle braking from a velocity of 30 ms⁻¹ on a straight line were determined and modelled in the MATLAB/Simulink environment to represent a conventional ABS without a controller. Simulation results indicated that the system without a controller was unable to track the desired wheel slip, and the stopping distance was 135.2 m. Hence, an intelligent controller based on fuzzy logic control (FLC) was designed, with a variable zero lag compensator (VZLC) added to enhance the FLC control variable by eliminating steady-state error and providing improved bandwidth to suppress high-frequency noise such as chattering during braking. The simulation results showed that FLC-VZLC provided fast tracking of the desired wheel slip, eliminated chattering, and reduced the stopping distance by 70.5% (39.92 m), 63.3% (49.59 m), 57.6% (57.35 m), and 50% (69.13 m) on dry, wet, cobblestone, and snow road surfaces, respectively. Overall, the proposed system used an effective braking torque below the maximum allowable braking torque to achieve efficient wheel slip tracking and robust control performance on different road surfaces.
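A back-of-envelope check of the reported stopping distances: with ideal slip tracking the vehicle decelerates at roughly mu*g, so d = v0^2 / (2*mu*g). The peak friction coefficients below are illustrative assumptions chosen to roughly reproduce the reported distances from 30 m/s; they are not taken from the paper:

```python
G = 9.81  # gravitational acceleration, m/s^2

def stopping_distance(v0, mu, g=G):
    """Constant-deceleration estimate: d = v0^2 / (2 * mu * g)."""
    return v0 ** 2 / (2 * mu * g)

# Hypothetical peak friction coefficients per surface.
surfaces = {"dry": 1.15, "wet": 0.92, "cobblestone": 0.80, "snow": 0.66}
for name, mu in surfaces.items():
    print(name, round(stopping_distance(30.0, mu), 1))
```

For dry asphalt this gives about 39.9 m, close to the 39.92 m quoted, which suggests the FLC-VZLC controller is operating near the peak of the friction-slip curve.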

Keywords: ABS, fuzzy logic controller, variable zero lag compensator, wheel slip tracking

Procedia PDF Downloads 137
17649 Genotoxic Effect of the Tricyclic Antidepressant Drug 'Clomipramine Hydrochloride' on Somatic and Germ Cells of Male Mice

Authors: Samia A. El-Fiky, Fouad A. Abou-Zaid, Ibrahim M. Farag, Naira M. El-Fiky

Abstract:

Clomipramine hydrochloride is one of the most widely used tricyclic antidepressant drugs in Egypt. Its chemical structure contains two benzene rings, and benzene is considered a toxic and clastogenic agent. The present study was therefore designed to assess the genotoxic effect of clomipramine hydrochloride on somatic and germ cells in mice. Three dose levels were used: 0.195 (low), 0.26 (medium), and 0.65 (high) mg/kg body weight. Seven groups of male mice were utilized in this work. The first group served as a control. In the remaining six groups, each of the above doses was orally administered to two groups, one treated for 5 days and the other given the same dose for 30 days. At the end of the experiments, the animals were sacrificed for cytogenetic and sperm examination as well as histopathological investigation using hematoxylin and eosin (H and E) staining and electron microscopy. Sperm studies were confined to the 5-day treatments with the different dose levels, and the ultrastructural investigation by electron microscopy was restricted to the 30-day treatments. The dose-dependent results showed that treatment with the three doses increased the frequencies of chromosome aberrations in bone marrow and spermatocyte cells compared with the control. In addition, the mitotic and meiotic activities of somatic and germ cells declined. Treatment with the medium or high dose was more effective in inducing significant increases in chromosome aberrations and significant decreases in cell division than treatment with the low dose, and the effect of the high dose was more pronounced than that of the medium dose. Moreover, the time-dependent results showed that treatment with the different dose levels for 30 days led to greater increases in genetic aberrations than treatment for 5 days.
Sperm examination revealed that treatment with clomipramine at the different dose levels caused a significant increase in sperm shape abnormalities and a significant decrease in sperm count compared with the control. The adverse effects on sperm shape and count were more obvious with the medium and high doses than with the low dose. The group of mice treated with the high dose had the highest rate of sperm shape abnormalities and the lowest sperm count compared with mice receiving the medium dose. In the histopathological investigation, hematoxylin and eosin staining showed that the low dose of clomipramine for 5 or 30 days caused few pathological changes in liver tissue, whereas the medium and high doses for 5 or 30 days induced more severe damage than that observed with the low dose. Treatment with the high dose for 30 days gave the worst pathological changes in hepatic cells. Ultrastructural examination revealed that mice treated with the low dose of clomipramine showed few differences in liver histological architecture compared with the control group; these differences were confined to cytoplasmic inclusions. In contrast, prominent pathological changes in the nuclei, as well as dilation of the rough endoplasmic reticulum (rER), were observed in mice treated with the medium or high dose. In conclusion, the present study adds evidence that medium or high doses of clomipramine have genotoxic effects on somatic and germ cells of mice as unwanted side effects, whereas the low dose (especially for the short treatment period of 5 days) can be used therapeutically, as it caused proportions of genetic, sperm, and histopathological changes similar to those of the normal control.

Keywords: chromosome aberrations, clomipramine, mice, histopathology, sperm abnormalities

Procedia PDF Downloads 506
17648 A Novel Algorithm for Production Scheduling

Authors: Ali Mohammadi Bolban Abad, Fariborz Ahmadi

Abstract:

Optimization in manufacturing is a method of using limited resources to obtain the best performance and reduce waste. In this paper, a new algorithm based on eurygaster life is introduced to obtain a plan in which the task order and completion times of resources are defined. Evaluation results show our approach yields a shorter makespan when resources are allocated to products, in comparison to a genetic algorithm.
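The makespan objective being compared can be illustrated with a simple list-scheduling baseline. The eurygaster-based algorithm itself is not specified in the abstract; the longest-processing-time (LPT) heuristic below is a generic stand-in for how any candidate schedule's makespan would be evaluated:

```python
import heapq

def makespan(tasks, n_machines):
    """Greedy LPT list scheduling on identical machines: assign each
    task (longest first) to the machine that frees up earliest, then
    return the resulting makespan (maximum machine load)."""
    loads = [0.0] * n_machines
    heapq.heapify(loads)
    for t in sorted(tasks, reverse=True):
        earliest = heapq.heappop(loads)
        heapq.heappush(loads, earliest + t)
    return max(loads)

print(makespan([7, 5, 4, 3, 3], 2))  # -> 12.0
```

A metaheuristic such as the proposed one searches over task orderings, scoring each candidate with exactly this kind of makespan evaluation.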

Keywords: evolutionary computation, genetic algorithm, particle swarm optimization, NP-Hard problems, production scheduling

Procedia PDF Downloads 362
17647 Integration of Big Data to Predict Transportation for Smart Cities

Authors: Sun-Young Jang, Sung-Ah Kim, Dongyoun Shin

Abstract:

An intelligent transportation system is essential to build smarter cities. Machine-learning-based transportation prediction could be a highly promising approach, making invisible aspects of the system visible. In this context, this research aims to build a prototype model that predicts the transportation network by using big data and machine learning technology. Among urban transportation systems, this research focuses on the bus system. The research problem is that existing headway models cannot respond to dynamic transportation conditions, so bus delays often occur. To overcome this problem, a prediction model is presented to find patterns of bus delay by applying machine learning to the following data sets: traffic, weather, and bus status. This research presents a flexible headway model to predict bus delay and analyzes the results. The prototype model is built from real-time bus data gathered through public data portals and real-time Application Program Interfaces (APIs) provided by the government. These data are the fundamental resources for organizing interval pattern models of bus operations from traffic environment factors (road speeds, station conditions, weather, and real-time operating information of buses). The prototype model was designed with a machine learning tool (RapidMiner Studio) and tested for bus delay prediction. This research presents experiments to increase the prediction accuracy of bus headway by analyzing urban big data. Big data analysis is important for predicting the future and finding correlations by processing huge amounts of data. Based on this analysis method, this research therefore represents an effective use of machine learning and urban big data to understand urban dynamics.
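The RapidMiner pipeline itself is not given in the abstract. As a minimal stand-in for the pattern-finding idea, a nearest-neighbour sketch over hypothetical traffic, weather, and dwell-time features predicts delay from the most similar past observations (feature names and values are illustrative assumptions):

```python
import math

def knn_predict_delay(history, query, k=3):
    """Predict bus delay (minutes) as the mean delay of the k most
    similar past observations. Each history record is a pair:
    ((road_speed_kmh, rain_mm, stop_dwell_s), delay_min)."""
    dists = sorted(
        (math.dist(features, query), delay) for features, delay in history
    )
    nearest = dists[:k]
    return sum(d for _, d in nearest) / k

history = [
    ((45.0, 0.0, 20.0), 1.0),
    ((30.0, 2.0, 35.0), 4.0),
    ((25.0, 5.0, 40.0), 7.0),
    ((50.0, 0.0, 15.0), 0.5),
    ((28.0, 3.0, 38.0), 5.0),
]
print(knn_predict_delay(history, (27.0, 4.0, 39.0)))
```

A production model would normalize the features and use a richer learner, but the core idea of matching current conditions to past delay patterns is the same.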

Keywords: big data, machine learning, smart city, social cost, transportation network

Procedia PDF Downloads 241
17646 Territorial Brand as a Means of Structuring the French Wood Industry

Authors: Laetitia Dari

Abstract:

A brand constitutes a source of differentiation between competitors. It highlights specific characteristics that create value for the enterprise. Today the concept of a brand is not just about products but can also concern territories. Competition between territories, over tourism, research, jobs, etc., leads them to develop territorial brands to bring out their identity and specificity. Some territorial brands are based on natural resources or products characteristic of a territory. In the French wood sector, we can observe the emergence of many territorial brands. Supported by the inter-professional organization, these brands have the main objective of showcasing wood as a source of local solutions for construction and energy. The implementation of these collective projects raises the question of how relations between companies are structured and coordinated. The central question of our work is to understand how the territorial brand promotes the structuring of a sector and the construction of collective relations between actors. In other words, we are interested in the conditions for the emergence of the territorial brand and the way in which it becomes a means of mobilizing actors around a common project. The objectives of the research are (1) to understand the context in which a territorial brand emerges, (2) to analyze the way in which the territorial brand structures collective relations between actors, and (3) to provide actors with practical guidance for developing this type of project successfully. Our research is thus based on a qualitative methodology, with semi-structured interviews conducted with the main territorial brands in France. The research answers both academic and empirical questions. From an academic point of view, it brings elements of understanding to the construction of a collective project and to the way in which its governance operates.
From an empirical point of view, the interest of our work is to bring out the key success factors in the development of a territorial brand and to show how the brand can become a source of value for a territory.

Keywords: brand, marketing, strategy, territory, third party stakeholder, wood

Procedia PDF Downloads 54
17645 Optimization of Ultrasound-Assisted Extraction of Oil from Spent Coffee Grounds Using a Central Composite Rotatable Design

Authors: Malek Miladi, Miguel Vegara, Maria Perez-Infantes, Khaled Mohamed Ramadan, Antonio Ruiz-Canales, Damaris Nunez-Gomez

Abstract:

Coffee is the second most consumed commodity worldwide, yet it also generates colossal waste. Proper management of coffee waste involves converting it into products with higher added value, to achieve sustainability of the economic and ecological footprint and protect the environment. On this basis, studies on the recovery of coffee waste have become increasingly relevant in recent decades. Spent coffee grounds (SCGs), which result from brewing coffee, represent the major waste stream of the coffee industry. The facts that SCGs have little economic value, are abundant in nature and industry, do not compete with agriculture, and above all have a high oil content (between 7-15% of total dry matter weight, depending on the coffee variety, Arabica or Robusta) encourage their use as a sustainable feedstock for bio-oil production. Bio-oil extraction is a crucial step towards biodiesel production by transesterification. However, conventional methods of oil extraction are not recommended due to their high consumption of energy and time and their use of toxic volatile organic solvents. Thus, finding a sustainable, economical, and efficient extraction technique is crucial to scale up the process and ensure more environmentally friendly production. From this perspective, the aim of this work was a statistical study to identify an efficient strategy for oil extraction with n-hexane using indirect sonication. The coffee waste used in this work was a mixture of Arabica and Robusta. The effects of temperature, sonication time, and solvent-to-solid ratio on the oil yield were statistically investigated using a 2³ Central Composite Rotatable Design (CCRD). The results were analyzed using STATISTICA 7 StatSoft software. The CCRD showed the significance of all the variables tested (P < 0.05) on the process output.
Validation of the model by analysis of variance (ANOVA) showed good fit to the results obtained at a 95% confidence interval, and the plot of predicted versus experimental values confirmed the satisfactory correlation of the model. The optimum experimental conditions were identified from the response surface graphs (2-D and 3-D) and the critical statistical values. Based on the CCRD results, 29 ºC, 56.6 min, and a solvent-to-solid ratio of 16 were the best experimental conditions defined statistically for coffee waste oil extraction using n-hexane as solvent. Under these conditions, the oil yield was >9% in all cases. The results confirmed the efficiency of an ultrasound bath for oil extraction as a more economical, green, and efficient alternative to the Soxhlet method.
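The 2³ CCRD layout mentioned above has a standard structure in coded units: 8 factorial points at ±1, 6 axial points at the rotatability value alpha = (2³)^(1/4) ≈ 1.682, plus centre replicates (the number of centre points below is an assumption, as the abstract does not state it):

```python
from itertools import product

def ccrd_points(n_factors=3, n_center=6):
    """Central Composite Rotatable Design in coded units: 2^k factorial
    points, 2k axial points at alpha = (2^k)**0.25 (rotatability
    condition), plus n_center centre replicates."""
    alpha = (2 ** n_factors) ** 0.25
    factorial = [list(p) for p in product((-1, 1), repeat=n_factors)]
    axial = []
    for i in range(n_factors):
        for a in (-alpha, alpha):
            point = [0.0] * n_factors
            point[i] = a
            axial.append(point)
    center = [[0.0] * n_factors for _ in range(n_center)]
    return factorial + axial + center

design = ccrd_points()
print(len(design))  # 8 factorial + 6 axial + 6 centre = 20 runs
```

Each coded point is then mapped to real levels of temperature, sonication time, and solvent-to-solid ratio before running the extraction experiments.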

Keywords: coffee waste, optimization, oil yield, statistical planning

Procedia PDF Downloads 103
17644 Big Data and Health: An Australian Perspective Which Highlights the Importance of Data Linkage to Support Health Research at a National Level

Authors: James Semmens, James Boyd, Anna Ferrante, Katrina Spilsbury, Sean Randall, Adrian Brown

Abstract:

‘Big data’ is a relatively new concept that describes data so large and complex that it exceeds the storage or computing capacity of most systems to perform timely and accurate analyses. Health services generate large amounts of data from a wide variety of sources such as administrative records, electronic health records, health insurance claims, and even smart phone health applications. Health data is viewed in Australia and internationally as highly sensitive. Strict ethical requirements must be met for the use of health data to support health research. These requirements differ markedly from those imposed on data use from industry or other government sectors and may have the impact of reducing the capacity of health data to be incorporated into the real time demands of the Big Data environment. This ‘big data revolution’ is increasingly supported by national governments, who have invested significant funds into initiatives designed to develop and capitalize on big data and methods for data integration using record linkage. The benefits to health following research using linked administrative data are recognised internationally and by the Australian Government through the National Collaborative Research Infrastructure Strategy Roadmap, which outlined a multi-million dollar investment strategy to develop national record linkage capabilities. This led to the establishment of the Population Health Research Network (PHRN) to coordinate and champion this initiative. The purpose of the PHRN was to establish record linkage units in all Australian states, to support the implementation of secure data delivery and remote access laboratories for researchers, and to develop the Centre for Data Linkage for the linkage of national and cross-jurisdictional data. 
The Centre for Data Linkage has been established within Curtin University in Western Australia; it provides the essential record linkage infrastructure necessary for large-scale, cross-jurisdictional linkage of health-related data in Australia and uses a best-practice 'separation principle' to support data privacy and security. Privacy-preserving record linkage technology is also being developed to link records without the use of names, to overcome important legal and privacy constraints. This paper will present the findings of the first 'Proof of Concept' project selected to demonstrate the effectiveness of increased record linkage capacity in supporting nationally significant health research. This project explored how cross-jurisdictional linkage can inform the nature and extent of cross-border hospital use and hospital-related deaths. The technical challenges associated with national record linkage, and the extent of cross-border population movements, were explored as part of this pioneering research project. Access to person-level data linked across jurisdictions identified geographical hot spots of cross-border hospital use and hospital-related deaths in Australia. This has implications for the planning of health service delivery and for longitudinal follow-up studies, particularly those involving mobile populations.
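The 'separation principle' can be sketched as follows: identifying fields go to the linkage unit, clinical content goes to researchers, and only an opaque linkage key is shared between the two files. The field names and the salted-hash key scheme below are illustrative, not the Centre for Data Linkage's actual implementation:

```python
import hashlib

def separate(records, secret="project-salt"):
    """Split each record into an identifier file (for the linkage unit)
    and a content file (for researchers), joined only by an opaque key
    so neither party sees both names and clinical data."""
    identifiers, content = [], []
    for rec in records:
        key = hashlib.sha256((secret + rec["id"]).encode()).hexdigest()[:12]
        identifiers.append({"key": key, "name": rec["name"], "dob": rec["dob"]})
        content.append({"key": key, "diagnosis": rec["diagnosis"]})
    return identifiers, content

ids, data = separate([
    {"id": "p1", "name": "A. Smith", "dob": "1970-01-01", "diagnosis": "I21"},
])
```

The linkage unit matches identifier files from different jurisdictions and returns key pairs; researchers then join the content files on those keys without ever handling names.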

Keywords: data integration, data linkage, health planning, health services research

Procedia PDF Downloads 208
17643 CdS Quantum Dots as Fluorescent Probes for Detection of Naphthalene

Authors: Zhengyu Yan, Yan Yu, Jianqiu Chen

Abstract:

A novel sensing system has been designed for naphthalene detection based on the quenched fluorescence signal of CdS quantum dots. The fluorescence intensity of the system decreased significantly after adding CdS quantum dots to the water pollution model because of a static fluorescence quenching mechanism. Herein, we demonstrate that this facile methodology offers convenience and low analysis cost, with recovery rates of 97.43%-103.2%, indicating promising application prospects.
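Quenching-based sensing of this kind is commonly quantified with a Stern-Volmer relation, F0/F = 1 + Ksv[Q], from which the analyte concentration and the spike-recovery rate follow directly. The quenching constant and intensities below are hypothetical, as the abstract gives no calibration data:

```python
def naphthalene_conc(f0, f, ksv):
    """Stern-Volmer: F0/F = 1 + Ksv*[Q]  =>  [Q] = (F0/F - 1) / Ksv.
    f0 is the unquenched intensity, f the quenched intensity."""
    return (f0 / f - 1.0) / ksv

def recovery_pct(measured, spiked):
    """Spike-recovery rate as a percentage."""
    return 100.0 * measured / spiked

# Hypothetical readings: Ksv in L/umol, concentrations in umol/L.
c = naphthalene_conc(f0=1000.0, f=800.0, ksv=0.05)
print(c, recovery_pct(c, 5.0))
```

Recovery rates in the reported 97.43%-103.2% band mean the measured concentration of a spiked sample stays within a few percent of the known spike.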

Keywords: CdS quantum dots, modification, detection, naphthalene

Procedia PDF Downloads 475
17642 Low-Impact Development Strategies Assessment for Urban Design

Authors: Y. S. Lin, H. L. Lin

Abstract:

Climate change and the land-use change caused by urban expansion increase the frequency of urban flooding. To mitigate the increase in runoff volume, low-impact development (LID) is a green approach that reduces the area of impervious surface and manages stormwater at the source with decentralized micro-scale control measures. However, current benefit assessment and practical application of LID in Taiwan still tend toward development plans at the community and building-site scales. In urban design, site-based moisture-holding capacity has been the common index for evaluating the effectiveness of LID, which ignores the diversity and complexity of urban built environments, such as different densities, positive and negative spaces, and building volumes. Such inflexible regulations are not only difficult for most developed areas to implement, but are also unsuited to the different types of built environments, bringing little benefit to some of them. To enable LID to strengthen its link with urban design and reduce runoff in coping with urban flooding, this research considers the characteristics of different types of built environments when developing LID strategies. The built environments are classified by cluster analysis based on density measures such as Ground Space Index (GSI), Floor Space Index (FSI), number of floors (L), and Open Space Ratio (OSR), and their impervious surface rates and runoff volumes are analyzed. Flood situations are simulated using a quasi-two-dimensional floodplain flow model, and the flood mitigation effectiveness of different types of built environments under different low-impact development strategies is evaluated. The information from this assessment can be implemented more precisely in urban design. In addition, it helps in enacting regulations on low-impact development strategies in urban design that are better suited to each type of built environment.
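The density measures used for the cluster analysis are standard indices computable directly from site, footprint, and gross floor areas; the example areas below are hypothetical:

```python
def density_measures(site_area, footprint_area, gross_floor_area):
    """Spacemate-style density indices used to classify built
    environments. All areas in the same units (e.g. m^2)."""
    gsi = footprint_area / site_area      # Ground Space Index (coverage)
    fsi = gross_floor_area / site_area    # Floor Space Index (intensity)
    floors = fsi / gsi                    # L: average number of floors
    osr = (1.0 - gsi) / fsi               # Open Space Ratio
    return {"GSI": gsi, "FSI": fsi, "L": floors, "OSR": osr}

m = density_measures(site_area=10000.0, footprint_area=4000.0,
                     gross_floor_area=20000.0)
print(m)
```

Each urban block yields one such four-dimensional point; clustering these points is what groups blocks into the built-environment types whose flood responses are then simulated.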

Keywords: low-impact development, urban design, flooding, density measures

Procedia PDF Downloads 320
17641 Healthcare Workers' Attitudes Towards People Living with HIV and Drug Users

Authors: Delband Yekta Moazami

Abstract:

Background: For proper care and treatment of HIV patients and drug users, medical staff and physicians must have a correct and positive attitude towards, and knowledge of, such patients. We aimed to assess attitudes towards HIV-infected people and drug users in a sample of health care workers (HCWs) working in different hospitals and clinics, and of medical students, in Tbilisi, Georgia. Method: We conducted a cross-sectional study to assess the attitudes of health care workers towards people living with HIV and drug users in hospitals and clinics in Tbilisi. The study was carried out from 1 May 2020 to 30 September 2020. Data were collected using a self-administered structured online questionnaire. With this tool, we evaluated four facets of attitudes: discrimination, acceptance of HIV/AIDS patients, acceptance of drug users, and fear. All data were imported into and analyzed with SPSS 22 for Windows. Results: In total, data were collected from 168 respondents, of whom 107 (65%) were women; the majority of participants were medical doctors. Women showed more accepting attitudes towards drug users than men. We found significant differences in negative attitudes, across all four facets, among HCWs older than 50 years compared with the other age groups. Medical doctors expressed more acceptance towards people with HIV and drug users than the two other groups. Our study also revealed that the group with 21 or more years of working experience showed more discriminatory attitudes than the other groups. Conclusion: Based on our findings, there are significant differences in respondents' attitudes by gender, medical specialty, and working experience in the health care system. People struggling with HIV and drug use need nonjudgmental and positive behavior from health care workers and physicians in order to support harm reduction and appropriate treatment.

Keywords: HIV, addiction, attitudes, healthcare workers

Procedia PDF Downloads 66
17640 Predicting Intention and Readiness to Alcohol Consumption Reduction and Cessation among Thai Teenagers Using Scales Based on the Theory of Planned Behavior

Authors: Rewadee Watakakosol, Arunya Tuicomepee, Panrapee Suttiwan, Sakkaphat T. Ngamake

Abstract:

Health problems caused by alcohol consumption not only have short-term effects at the time of drinking but also leave long-lasting health conditions. Teenagers who start drinking in their middle-school or high-school years, or before entering college, have a higher likelihood of increasing their alcohol use and abuse, and they have been found to be less healthy than their non-drinking peers when entering adulthood. This study aimed to examine factors that predict intention and readiness to reduce and quit alcohol consumption among Thai teenagers. Participants were 826 high-school and vocational-school students, most of whom were female (64.4%), with an average age of 16.4 years (SD = 0.9) and an average age of first drinking of 13.7 years (SD = 2.2). Instruments included scales developed within the Theory of Planned Behaviour framework: the Attitude toward Alcohol Reduction and Cessation Scale, the Normative Group and Influence Scale, the Perceived Behavioral Control toward Alcohol Reduction and Cessation Scale, the Behavioral Intent toward Alcohol Reduction and Cessation Scale, and the Readiness to Reduce and Quit Alcohol Consumption Scale. Findings revealed that readiness to reduce/quit alcohol was the most powerful predictive factor (β = .53, p < .01), followed by attitude of easiness in alcohol reduction and cessation (β = .46, p < .01), perceived behavioral control toward alcohol reduction and cessation (β = .41, p < .01), normative group and influence (β = .15, p < .01), and attitude of being accepted from alcohol reduction and cessation (β = -.12, p < .01), respectively. Attitude of improved health after alcohol reduction and cessation did not show statistically significant predictive power. Together, the significant factors predicted teenagers' alcohol reduction and cessation behavior and accounted for 59 percent of the total variance in alcohol consumption reduction and cessation.
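With standardized predictors, the reported beta weights combine linearly into a predicted intention score. The z-scores below are hypothetical inputs, and the sketch deliberately omits the regression intercept (standardized regression predicts the z-score of the outcome):

```python
def predict_intention(z_scores):
    """Standardized-regression sketch using the beta weights reported
    in the abstract: readiness .53, perceived easiness .46, perceived
    behavioral control .41, normative influence .15, acceptance -.12."""
    betas = {
        "readiness": 0.53,
        "easiness_attitude": 0.46,
        "pbc": 0.41,
        "normative": 0.15,
        "acceptance_attitude": -0.12,
    }
    return sum(betas[k] * z_scores[k] for k in betas)

# Hypothetical respondent: high readiness, moderate easiness attitude.
z = {"readiness": 1.0, "easiness_attitude": 0.5, "pbc": 0.0,
     "normative": -1.0, "acceptance_attitude": 0.0}
print(round(predict_intention(z), 3))
```

The dominant weight on readiness (.53) is what drives the abstract's conclusion that readiness to change is the strongest predictor.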

Keywords: alcohol consumption reduction and cessation, intention, readiness to change, Thai teenagers

Procedia PDF Downloads 317
17639 Uncanny Orania: White Complicity as the Abject of the Discursive Construction of Racism

Authors: Daphne Fietz

Abstract:

This paper builds on a reflection on an autobiographical experience of uncanniness during fieldwork in the white Afrikaner settlement Orania in South Africa. Drawing on Kristeva's theory of abjection to establish a theory of Whiteness based on boundary threats, it is argued that the uncanny experience, as the emergence of the abject, points to a moment of crisis of the author's Whiteness. The emanating abject directs the author to her closeness or convergence with Orania's inhabitants, that is, a reciprocity based on mutual Whiteness. This experienced confluence appeals to the author's White complicity in racism. With recourse to Butler's theory of subjectivation, the abject, White complicity, inhabits both the outside of a discourse on racism and the outside of the 'self', as 'I' establish myself in relation to discourse. In this view, the qualities of the experienced abject are linked to the abject of discourse on racism, or, in other words, its frames of intelligibility. It then becomes clear that discourse on (overt) racism functions as a necessary counter-image through which White morality is established rather than questioned, because here, by White reasoning, the abject of complicity in racism is successfully repressed and curbed as completely impossible within the binary construction. Hence, such discourse risks preserving racism in its pre-discursive and structural forms as long as its critique does not encompass its own location and performance in discourse. Discourse on overt racism is indispensable to White ignorance, as it covers underlying racism and pre-empts further critique. This understanding directs us towards a form of critique that necessitates self-reflection, uncertainty, and vigilance, which will be referred to as a discourse of relationality.
Such a discourse diverges from the presumption of a detached author as a point of reference and instead departs from attachment, dependence, and mutuality, embracing the visceral as a resource for knowledge of relationality. A discourse of relationality points to another possibility of White engagement with Whiteness and racism, and further promotes a conception of responsibility that allows for and highlights dispossession and relationality in contrast to single agency and guilt.

Keywords: abjection, discourse, relationality, the visceral, whiteness

Procedia PDF Downloads 146
17638 Numerical Response of Coaxial HPGe Detector for Skull and Knee Measurement

Authors: Pabitra Sahu, M. Manohari, S. Priyadharshini, R. Santhanam, S. Chandrasekaran, B. Venkatraman

Abstract:

Radiation workers in reprocessing plants face potential internal exposure to actinides and fission products. Radionuclides such as americium, lead, polonium, and europium are bone seekers and accumulate in the skeleton. As major fractions of the skeletal content are in the skull (13%) and knees (22%), measurements of old intakes have to be carried out on the skull and knee. At the Indira Gandhi Centre for Atomic Research, a twin HPGe-based actinide monitor is used for the measurement of actinides present in bone. Efficiency estimation, one of the prerequisites for the quantification of radionuclides, requires anthropomorphic phantoms, and such phantoms are very limited. Hence, in this study, efficiency curves for the twin HPGe-based actinide monitoring system are established theoretically using the FLUKA Monte Carlo method and the ICRP adult male voxel phantom. For skull measurement, the detector is placed over the forehead; for knee measurement, one detector is placed over each knee. The efficiency values for radionuclides in the knee and skull vary from 3.72E-04 to 4.19E-04 CPS/photon and from 5.22E-04 to 7.07E-04 CPS/photon, respectively, over the energy range 17 keV to 3000 keV. The established efficiency curves show that the efficiency initially increases up to 100 keV and then decreases. The skull efficiency values are 4% to 63% higher than those of the knee, depending on the energy, for all energies except 17.74 keV; the reason is the closeness of the detector to the skull compared with the knee. At 17.74 keV, however, the knee efficiency exceeds that of the skull because of the higher attenuation in the thicker skull bones. The Minimum Detectable Activity (MDA) for 241Am in the skull and knee is 9 Bq, while 239Pu has an MDA of 950 Bq and 1270 Bq for the knee and skull, respectively, for a counting time of 1800 s.
This paper discusses the simulation method and the results obtained in the study.
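MDA values such as those quoted are typically obtained from the Currie formula, which combines the simulated efficiency with the photon emission probability and counting time. The background counts, efficiency, and emission probability below are illustrative stand-ins, not the paper's calibration data:

```python
import math

def mda_bq(background_counts, efficiency_cps_per_photon, emission_prob, t_sec):
    """Currie-formula sketch:
    MDA = (2.71 + 4.65*sqrt(B)) / (eff * p * t)
    B: background counts in the peak region over the counting time,
    eff: detection efficiency (CPS/photon), p: photon yield per decay."""
    return (2.71 + 4.65 * math.sqrt(background_counts)) / (
        efficiency_cps_per_photon * emission_prob * t_sec
    )

# Illustrative numbers only, for a 1800 s count:
print(mda_bq(400, 6.0e-4, 0.36, 1800))
```

Lowering the background or raising the efficiency directly lowers the MDA, which is why the higher skull efficiency matters for detecting small old intakes.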

Keywords: FLUKA Monte Carlo method, ICRP adult male voxel phantom, knee, skull

Procedia PDF Downloads 35
17637 Seismic Fragility Assessment of Strongback Steel Braced Frames Subjected to Near-Field Earthquakes

Authors: Mohammadreza Salek Faramarzi, Touraj Taghikhany

Abstract:

In this paper, the seismic fragility of a recently developed hybrid structural system, known as the strongback system (SBS), is assessed. In this system, an elastic vertical truss is formed to mitigate the occurrence of the soft-story mechanism and improve the distribution of story drifts over the height of the structure. The strengthened members of the braced span are designed to remain substantially elastic during levels of excitation where soft-story mechanisms are likely to occur and to impose a nearly uniform story drift distribution. Due to the distinctive characteristics of near-field ground motions, it is necessary to study the effect of these records on the seismic performance of the SBS. To this end, a set of 56 near-field ground motion records suggested by the FEMA P695 methodology is used. For fragility assessment, nonlinear dynamic analyses are carried out in OpenSEES based on the procedure recommended in the HAZUS technical manual. Four damage states are considered: slight, moderate, extensive, and complete damage (collapse). To evaluate each damage state, the inter-story drift ratio and floor acceleration are used as engineering demand parameters. Further, to extend the evaluation of the collapse state of the system, a different collapse criterion suggested in FEMA P695 is applied. It is concluded that the SBS can significantly increase the collapse capacity and consequently decrease the collapse risk of the structure during its lifetime. Comparing the observed mean annual frequency (MAF) of exceedance of each damage state against the allowable values presented in performance-based design methods, it is found that using the elastic vertical truss effectively improves the structural response.
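Fragility curves of the kind fitted from such analyses are conventionally lognormal: the probability of reaching a damage state given an intensity measure is P(DS >= ds | IM) = Phi(ln(IM/theta)/beta). The median capacity theta and dispersion beta below are hypothetical, not fitted values from the paper:

```python
import math

def fragility(im, theta, beta):
    """Lognormal fragility curve: P(DS >= ds | IM = im)
    = Phi(ln(im / theta) / beta), with Phi the standard normal CDF
    expressed via the error function."""
    return 0.5 * (1.0 + math.erf(math.log(im / theta) / (beta * math.sqrt(2.0))))

# Hypothetical collapse fragility: median capacity 1.2 g, dispersion 0.4.
print(fragility(1.2, 1.2, 0.4))  # 0.5 exactly at the median
```

Integrating such a curve against a site hazard curve yields the mean annual frequency (MAF) of exceedance that the abstract compares with allowable values.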

Keywords: IDA, near-fault, probabilistic performance assessment, seismic fragility, strongback system, uncertainty

Procedia PDF Downloads 98
17636 Model of MSD Risk Assessment at Workplace

Authors: K. Sekulová, M. Šimon

Abstract:

This article presents a model for assessing the risk of upper-extremity musculoskeletal disorders (MSDs) at the workplace. The model uses the risk factors responsible for damage to the musculoskeletal system. Based on statistical calculations, it is able to quantify the MSD risk faced by workers exposed to these risk factors, and to estimate how much the risk would decrease if the factors were eliminated.

Keywords: ergonomics, musculoskeletal disorders, occupational diseases, risk factors

Procedia PDF Downloads 535
17635 The Current State of Human Gait Simulator Development

Authors: Stepanov Ivan, Musalimov Viktor, Monahov Uriy

Abstract:

This report examines the current state of human gait simulator development based on a model of the human hip joint. The simulator will be used to create a database of human gait types, useful for setting up and calibrating mechanical rehabilitation devices, as well as for creating new rehabilitation systems, exoskeletons, and walking robots. The system offers ample opportunity to configure dimensions and stiffness while maintaining relative simplicity.

Keywords: hip joint, human gait, physiotherapy, simulation

Procedia PDF Downloads 390
17634 Detection of Resistive Faults in Medium Voltage Overhead Feeders

Authors: Mubarak Suliman, Mohamed Hassan

Abstract:

Detection of downed conductors occurring with high fault resistance (reaching kilo-ohms) has always been a challenge, especially in countries like Saudi Arabia, where earth resistivity is generally very high (more than 1000 Ω·m). Newer approaches to the detection of resistive and high-impedance faults are based on analysis of the fault current waveform. These methods are still under research and development, and they currently lack security and dependability. Another approach relies on communication-based solutions, which depend on voltage measurements at the ends of overhead line branches that are communicated to the substation feeder relay or a central control center. However, such a detection method is costly and depends on the availability of a communication medium and infrastructure. The main objective of this research is to utilize the available standard protection schemes to increase the probability of detecting downed conductors occurring with low-magnitude fault currents, while at the same time avoiding unwanted tripping of healthy feeders. By specifying the operating region of the faulty feeder, using the tripping curve to discriminate between faulty and healthy feeders, and properly selecting a core balance current transformer (CBCT) and voltage transformers with small measurement errors, it is possible to set the pick-up of the sensitive earth fault element to a minimum value of a few amps (e.g., 3 A or 4 A) for the detection of earth faults with fault resistance of more than 1-2 kΩ in a 13.8 kV overhead network and more than 3-4 kΩ in a 33 kV overhead network. By implementing the outcomes of this study, the probability of detecting downed conductors is increased through the utilization of existing schemes (i.e., directional sensitive earth fault protection).
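The quoted pick-up settings can be sanity-checked with a back-of-the-envelope estimate that neglects source and line impedance (a simplifying assumption; the voltages and fault resistances below mirror the figures in the abstract):

```python
from math import sqrt

def earth_fault_current(v_ll_kv, r_fault_ohm):
    """Approximate earth-fault current (A) for a solidly grounded
    system, neglecting source and line impedance: I_f ~ V_phase / R_f."""
    v_phase = v_ll_kv * 1e3 / sqrt(3)
    return v_phase / r_fault_ohm

pickup_a = 3.0  # assumed sensitive earth-fault pick-up setting
for v_ll, r_f in ((13.8, 2000), (33.0, 4000)):
    i_f = earth_fault_current(v_ll, r_f)
    print(f"{v_ll} kV, Rf={r_f} ohm -> If={i_f:.1f} A, detected={i_f >= pickup_a}")
```

With a 3 A pick-up, a 2 kΩ fault on a 13.8 kV network (roughly 4 A) sits just above the threshold, consistent with the detection limits stated above.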

Keywords: sensitive earth fault, zero sequence current, grounded system, resistive fault detection, healthy feeder

Procedia PDF Downloads 101
17633 A Computational Investigation of Potential Drugs for Cholesterol Regulation to Treat Alzheimer’s Disease

Authors: Marina Passero, Tianhua Zhai, Zuyi (Jacky) Huang

Abstract:

Alzheimer’s disease has become a major public health issue, as indicated by the growing number of Americans living with it. After decades of extensive research, only seven drugs have been approved by the Food and Drug Administration (FDA) to treat Alzheimer’s disease. Five of these drugs were designed to treat the dementia symptoms, and only two (i.e., Aducanumab and Lecanemab) target the progression of Alzheimer’s disease, especially the accumulation of amyloid-β plaques. However, the accelerated approvals of both Aducanumab and Lecanemab drew controversy, especially over the safety and side effects of these two drugs. There is still an urgent need for further drug discovery targeting the biological processes involved in the progression of Alzheimer’s disease. Excessive cholesterol has been found to accumulate in the brains of those with Alzheimer’s disease. Cholesterol can be synthesized in both the blood and the brain, but the majority of biosynthesis in the adult brain takes place in astrocytes; the cholesterol is then transported to the neurons via ApoE. The blood-brain barrier separates cholesterol metabolism in the brain from that in the rest of the body. Various proteins contribute to the metabolism of cholesterol in the brain, offering potential targets for Alzheimer’s treatment. In the astrocytes, SREBP cleavage-activating protein (SCAP) binds to Sterol Regulatory Element-binding Protein 2 (SREBP2) in order to transport the complex from the endoplasmic reticulum to the Golgi apparatus. Cholesterol is secreted out of the astrocytes by the ATP-Binding Cassette A1 (ABCA1) transporter. Lipoprotein receptors such as the triggering receptor expressed on myeloid cells 2 (TREM2) internalize cholesterol into the microglia, while receptors such as low-density lipoprotein receptor-related protein 1 (LRP1) internalize cholesterol into the neurons.
Cytochrome P450 Family 46 Subfamily A Member 1 (CYP46A1) converts excess cholesterol to 24S-hydroxycholesterol (24S-OHC). Cholesterol has been shown to have a direct effect on the production of amyloid-beta and tau proteins. The addition of cholesterol to the brain promotes the activity of beta-site amyloid precursor protein cleaving enzyme 1 (BACE1), secretase, and amyloid precursor protein (APP), which all aid in amyloid-beta production. Reducing cholesterol esters in the brain has been found to reduce phosphorylated tau levels in mice. In this work, a computational pipeline was developed to identify the protein targets involved in cholesterol regulation in the brain and, further, to identify chemical compounds as inhibitors of a selected protein target. Since extensive evidence shows a strong correlation between brain cholesterol regulation and Alzheimer’s disease, a detailed literature review of genes and pathways related to brain cholesterol synthesis and regulation was first conducted. An interaction network was then built for those genes so that the top gene targets could be identified. The involvement of these genes in Alzheimer’s disease progression was discussed, followed by an investigation of existing clinical trials for those targets. A ligand-protein docking program was finally developed to screen 1.5 million chemical compounds against the selected protein target, and a machine learning program was developed to evaluate and predict the binding interaction between chemical compounds and the protein target. The results from this work pave the way for further drug discovery to regulate brain cholesterol to combat Alzheimer’s disease.
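The final step of such a screening pipeline amounts to ranking compounds by a predicted binding score and keeping the strongest binders; a minimal sketch with entirely hypothetical compound names and scores (a real pipeline would obtain scores from a docking program):

```python
# Hypothetical predicted binding energies (kcal/mol); more negative
# means stronger predicted binding. Names and values are illustrative.
candidates = {
    "compound_A": -9.2,
    "compound_B": -6.1,
    "compound_C": -8.7,
    "compound_D": -4.3,
}

def top_hits(scores, cutoff=-8.0):
    """Return compounds with predicted binding energy at or below
    `cutoff`, strongest binder first."""
    hits = [(name, s) for name, s in scores.items() if s <= cutoff]
    return sorted(hits, key=lambda x: x[1])

print(top_hits(candidates))
```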

Keywords: Alzheimer’s disease, drug discovery, ligand-protein docking, gene-network analysis, cholesterol regulation

Procedia PDF Downloads 55
17632 The Examination of Prospective ICT Teachers’ Attitudes towards Application of Computer Assisted Instruction

Authors: Agâh Tuğrul Korucu, Ismail Fatih Yavuzaslan, Lale Toraman

Abstract:

Nowadays, thanks to the development of technology, the integration of technology into teaching and learning activities is spreading. Increasing technological literacy, one of the expected competencies for individuals of the 21st century, is associated with the effective use of technology in education. The most important factor in the effective use of technology in educational institutions is the ICT teacher. The concept of computer-assisted instruction (CAI) refers to the use of information and communication technology as a tool to aid teachers, in order to make education more efficient and improve its quality. In CAI, teachers can use computers at different places and times depending on the available hardware and software and on the characteristics of the subject and the students. Analyzing teachers’ use of computers in education is significant because teachers are the ones who manage the course, and they are the most important element in students’ comprehension of a topic. Accomplishing computer-assisted instruction efficiently is possible only if teachers have a positive attitude towards it. Determining the levels of knowledge, attitude, and behavior of prospective teachers, and eliminating any deficiencies, is crucial while they are still at the faculty. Therefore, the aim of this paper is to identify prospective ICT teachers' attitudes toward computer-assisted instruction in terms of different variables. The research group consists of 200 prospective ICT teachers studying at Necmettin Erbakan University, Ahmet Keleşoğlu Faculty of Education, CEIT department. As data collection tools, a “personal information form” developed by the researchers to collect demographic data and “the attitude scale related to computer-assisted instruction” are used. The scale consists of 20 items: 10 of these items are positively worded, while 10 are negatively worded.
The Kaiser-Meyer-Olkin (KMO) coefficient of the scale is 0.88, and the Bartlett test significance value is 0.000. The Cronbach’s alpha reliability coefficient of the scale is 0.93. To analyze the collected data, a computer-based statistical software package is used; statistical techniques such as descriptive statistics, t-tests, and analysis of variance are applied. It is determined that prospective teachers' attitudes towards computers do not differ according to their educational branches. On the other hand, the attitudes of prospective teachers who own computers towards computer-supported education are higher than those of prospective teachers who do not. The departments in which students previously received computer lessons do not affect this situation much. The result is that computer experience positively affects attitude scores regarding computer-supported education.
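A reliability coefficient such as the one reported here follows the classical formula alpha = k/(k-1) * (1 - sum of item variances / variance of totals); a minimal sketch on toy data (the Likert responses below are illustrative, not the study's):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score columns
    (each column holds all respondents' answers to one scale item)."""
    k = len(items)
    n = len(items[0])
    def var(xs):  # population variance; the n/n-1 choice cancels in the ratio
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    item_var_sum = sum(var(col) for col in items)
    totals = [sum(col[i] for col in items) for i in range(n)]
    return k / (k - 1) * (1 - item_var_sum / var(totals))

# Toy data: 3 items answered by 4 respondents on a 1-5 Likert scale.
items = [
    [4, 5, 3, 4],
    [4, 4, 3, 5],
    [5, 5, 2, 4],
]
print(round(cronbach_alpha(items), 3))
```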

Keywords: computer based instruction, teacher candidate, attitude, technology based instruction, information and communication technologies

Procedia PDF Downloads 281
17631 Leuco Dye-Based Thermochromic Systems for Application in Temperature Sensing

Authors: Magdalena Wilk-Kozubek, Magdalena Rowińska, Krzysztof Rola, Joanna Cybińska

Abstract:

Leuco dye-based thermochromic systems are classified as intelligent materials because they exhibit thermally induced color changes. Thanks to this feature, they are mainly used as temperature sensors in many industrial sectors. For example, placing a thermochromic material on a chemical reactor may warn about exceeding the maximum permitted temperature for a chemical process. Usually, two components, a color former and a developer, are needed to produce a system with an irreversible color change. The color former is an electron-donating (proton-accepting) compound such as a fluoran leuco dye. The developer is an electron-accepting (proton-donating) compound such as an organic carboxylic acid. When the developer melts, the color former-developer complex is created and the thermochromic system becomes colored. Typically, the melting point of the applied developer determines the temperature at which the color change occurs. When the lactone ring of the color former is closed, the dye is in its colorless state. The ring opening, induced by the addition of a proton, causes the dye to turn into its colored state. Since the color former and the developer are often solids, they can be incorporated into polymer films to facilitate their practical use in industry. The objective of this research was to fabricate a leuco dye-based thermochromic system that irreversibly changes color after reaching a temperature of 100°C. For this purpose, a benzofluoran leuco dye (as the color former) and phenoxyacetic acid (as a developer with a melting point of 100°C) were introduced into polymer films during a drop-casting process. The film preparation process was optimized in order to obtain thin films with appropriate properties such as transparency, flexibility, and homogeneity.
Among the optimized factors were the concentrations of the benzofluoran leuco dye and phenoxyacetic acid; the type, average molecular weight, and concentration of the polymer; and the type and concentration of the surfactant. The selected films, containing the benzofluoran leuco dye and phenoxyacetic acid, were combined by mild heat treatment. Structural characterization of the single and combined films was carried out by FTIR spectroscopy, morphological analysis was performed by optical microscopy and SEM, phase transitions were examined by DSC, color changes were investigated by digital photography and UV-Vis spectroscopy, and emission changes were studied by photoluminescence spectroscopy. The resulting thermochromic system is colorless at room temperature, but after reaching 100°C the developer melts and the system irreversibly turns pink. It could therefore be used as an additional sensor to warn against the boiling of cooling water in power plants. Currently used electronic temperature indicators are prone to faults and unwanted third-party actions. The sensor constructed in this work is transparent, so it can go unnoticed by an outsider and constitute a reliable reference for the person responsible for the apparatus.

Keywords: color developer, leuco dye, thin film, thermochromism

Procedia PDF Downloads 86
17630 On Elastic Anisotropy of Fused Filament Fabricated Acrylonitrile Butadiene Styrene Structures

Authors: Joseph Marae Djouda, Ashraf Kasmi, François Hild

Abstract:

Fused filament fabrication is one of the most widespread additive manufacturing techniques because of its low-cost implementation. Its initial development was based on part fabrication with thermoplastic materials. The influence of manufacturing parameters such as the filament orientation through the nozzle, the deposited layer thickness, or the deposition speed on the mechanical properties of the parts has been widely investigated experimentally. Remarkable variations of the anisotropy as a function of the filament path during the fabrication process have been recorded. However, constitutive models describing the resulting mechanical properties are still lacking. In this study, integrated digital image correlation (I-DIC) is used for the identification of the mechanical constitutive parameters of two configurations of ABS samples: +/-45° and a so-called “oriented deposition,” in which the filament was deposited so as to follow the principal strain of the sample. An identification scheme is developed that reduces the gap between simulation and experiment directly from images recorded on a single sample (a single-edge notched tension specimen). Macroscopic and mesoscopic analyses are conducted from images recorded on both sample surfaces during the tensile test. Elastic and elastoplastic models in isotropic and orthotropic frameworks have been established. It appears that, independently of the sample configuration (filament orientation during fabrication), the elastoplastic isotropic model gives a correct description of the behavior of the samples. It is worth noting that this model requires fewer constitutive parameters than the elastoplastic orthotropic model. This leads to the conclusion that the anisotropy of architectured 3D-printed ABS parts can be neglected when establishing a macroscopic description of their behavior.

Keywords: elastic anisotropy, fused filament fabrication, Acrylonitrile butadiene styrene, I-DIC identification

Procedia PDF Downloads 114
17629 Urbanization and Income Inequality in Thailand

Authors: Acumsiri Tantikarnpanit

Abstract:

This paper examines the relationship between urbanization and income inequality in Thailand during the period 2002-2020. It uses a panel of data for 76 provinces collected from Thailand’s National Statistical Office (Labor Force Survey: LFS), as well as geospatial data from the U.S. Air Force Defense Meteorological Satellite Program (DMSP) and the Visible Infrared Imaging Radiometer Suite Day/Night Band (VIIRS-DNB) satellite for nineteen selected years. The paper employs two different definitions to identify urban areas: 1) urban areas as defined by Thailand's National Statistical Office (LFS), and 2) urban areas estimated using nighttime light data from the DMSP and VIIRS-DNB satellites. The second method includes two sub-categories: 2.1) determining urban areas from nighttime light density corresponding to a population density of 300 people per square kilometer, and 2.2) determining urban areas from nighttime light density corresponding to a population density of 1,500 people per square kilometer. The empirical analysis, based on Ordinary Least Squares (OLS), fixed effects, and random effects models, reveals a consistent U-shaped relationship between income inequality and urbanization. The econometric results show that urbanization (population density) has a significant negative impact on income inequality, while the square of urbanization has a statistically significant positive impact. Additionally, there is a negative association between log-transformed income and income inequality. The paper also proposes including satellite imagery, geospatial data, and spatial econometric techniques in future studies to conduct quantitative analyses of spatial relationships.
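The U-shape test amounts to regressing inequality on urbanization and its square and checking the signs of the coefficients; a minimal pure-Python sketch on hypothetical numbers (not the paper's data):

```python
def fit_quadratic(x, y):
    """Least-squares fit of y = b0 + b1*x + b2*x^2 via the normal
    equations (X'X)b = X'y, solved by Gaussian elimination."""
    n = len(x)
    X = [[1.0, xi, xi * xi] for xi in x]
    A = [[sum(X[r][i] * X[r][j] for r in range(n)) for j in range(3)]
         for i in range(3)]
    b = [sum(X[r][i] * y[r] for r in range(n)) for i in range(3)]
    for i in range(3):  # forward elimination with partial pivoting
        p = max(range(i, 3), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, 3):
            f = A[r][i] / A[i][i]
            for c in range(i, 3):
                A[r][c] -= f * A[i][c]
            b[r] -= f * b[i]
    coef = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):  # back substitution
        coef[i] = (b[i] - sum(A[i][c] * coef[c] for c in range(i + 1, 3))) / A[i][i]
    return coef

# Illustrative data following a U-shape: inequality falls, then rises,
# with urbanization (hypothetical values, not the paper's dataset).
urb = [10, 20, 30, 40, 50, 60, 70]
gini = [0.48, 0.43, 0.40, 0.39, 0.40, 0.43, 0.48]
b0, b1, b2 = fit_quadratic(urb, gini)
print(f"b1={b1:.4f} (negative), b2={b2:.6f} (positive) -> U-shape")
```

A negative linear coefficient together with a positive quadratic coefficient is the signature of the U-shaped relationship reported above.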

Keywords: income inequality, nighttime light, population density, Thailand, urbanization

Procedia PDF Downloads 64
17628 Survey of Web Service Composition

Authors: Wala Ben Messaoud, Khaled Ghedira, Youssef Ben Halima, Henda Ben Ghezala

Abstract:

A web service (WS) is called compound or composite when its execution involves interactions with other WSs in order to use their features. The composition of WSs specifies which services need to be invoked, in what order, and how to handle exception conditions. This paper gives an overview of research efforts on WS composition. The approaches proposed in the literature are diverse and interesting and have opened important research areas. Based on a large number of studies, we identify the most important uses of WS composition in order to facilitate its adoption within the WS concept.

Keywords: SOA, web services, composition approach, composite WS

Procedia PDF Downloads 288
17627 Perpetrator Trauma in Current World Cinema

Authors: Raya Morag

Abstract:

This paper proposes a new paradigm for cinema/trauma studies: the trauma of the perpetrator. Canonical trauma research, from Freud’s Aetiology of Hysteria to the present, has been carried out from the perspective of identification with the victim, as have cinema trauma research and contemporary humanities-based trauma studies, climaxing during the 1990s in widespread interest in the victim vis-à-vis the Holocaust, war, and domestic violence. Breaking over 100 years of repression of the abhorrent and rejected concept of the perpetrator in psychoanalytically based research proposes an uncanny shift in our conception of psychoanalysis' trajectory from women's 'hysteria' to 'post-traumatic stress disorder'. This new paradigm is driven by the global emergence of new waves of films (2007-2015) representing trauma suffered by perpetrators involved in the new style of war entailing deliberate targeting of non-combatants. Analyzing prominent examples from Israeli post-Second Intifada documentaries (e.g., Ari Folman’s Waltz with Bashir) and post-Iraq (and Afghanistan) War American documentaries (e.g., Errol Morris' Standard Operating Procedure), the paper discusses the limitations of victim trauma given the firm boundaries it (rightly) set in order to defend the victims of nineteenth- and especially twentieth-century catastrophes; the epistemological processes needed in order to consider perpetrators’ trauma as an inevitable part of psychiatric-psychological and cultural perspectives on trauma; and, thus, the definition of perpetrators' trauma in contrast to victims'. It also analyzes the perpetrator's figure in order to go beyond the limitation of current trauma theory's relation to the Real, thus transgressing the 'unspeakableness' of the trauma itself.
The paper seeks an exploration of what perpetrator trauma teaches us not only as a counter-paradigm to victim trauma, but as a reflection on the complex intertwining of the two paradigms in the twenty-first century collective new war unconscious, and on what psychoanalysis might offer us in the first decade of this terrorized-ethnicized century.

Keywords: American war documentaries, Israeli war documentaries, 'new war', perpetrator trauma

Procedia PDF Downloads 277
17626 Inhibition Effect of Natural Junipers Extract towards Steel Corrosion in HCl Solution

Authors: L. Bammou, M. Belkhaouda, R. Salghi, L. Bazzi, B. Hammouti

Abstract:

Steel and steel-based alloys of different grades are extensively used in numerous applications where acid solutions are widely applied, such as industrial acid pickling, industrial acid cleaning, and oil-well acidizing. The use of chemical inhibitors is one of the most practical methods of protection against corrosion in acidic media. Most of the excellent acid inhibitors are organic compounds containing nitrogen, oxygen, phosphorus, and sulphur. The use of non-toxic, so-called green or eco-friendly inhibitors is one possible way to prevent the corrosion of the material. These advantages have led us to devote a large part of our laboratory's program to examining natural substances as corrosion inhibitors, such as prickly pear seed oil, argan oil, argan extract, fennel oil, rosemary oil, thymus oil, lavender oil, jojoba oil, pennyroyal mint oil, and artemisia. In the present work, we investigate the corrosion inhibition of steel in 1 M HCl by junipers extract using weight loss, potentiodynamic polarization, and electrochemical impedance spectroscopy (EIS) methods. The results show that junipers extract (JE) has excellent inhibition properties for the corrosion of C38 steel in 1 M HCl at 298 K, and the inhibition efficiency increases with JE concentration. The inhibition efficiencies determined by the weight loss, Tafel polarisation, and EIS methods are in reasonable agreement. Based on the polarisation results, the investigated junipers extract can be classified as a mixed inhibitor. The calculated structural parameters show an increase in the obtained Rct values and a decrease in the capacitance, Cdl, as the JE concentration increases, which is attributed to the increased thickness of the adsorption layer at the steel surface. The adsorption obeys the Langmuir adsorption isotherm, and the adsorption process is spontaneous and exothermic.
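A Langmuir-isotherm check of this kind is typically done on the linearized form C/θ = 1/K_ads + C, where θ is the surface coverage (inhibition efficiency / 100); a minimal sketch with hypothetical coverage data:

```python
def langmuir_fit(conc, theta):
    """Linearized Langmuir fit: C/theta = 1/K_ads + C.
    Returns (slope, K_ads) from a simple least-squares line of
    C/theta against C; a slope near 1 supports Langmuir adsorption."""
    y = [c / t for c, t in zip(conc, theta)]
    n = len(conc)
    mx, my = sum(conc) / n, sum(y) / n
    slope = (sum((c - mx) * (v - my) for c, v in zip(conc, y))
             / sum((c - mx) ** 2 for c in conc))
    intercept = my - slope * mx
    return slope, 1.0 / intercept

# Hypothetical coverage data (theta = IE/100) at several JE
# concentrations (g/L), generated here from an assumed K_ads = 5.
conc = [0.5, 1.0, 2.0, 4.0]
K = 5.0
theta = [K * c / (1 + K * c) for c in conc]
slope, K_fit = langmuir_fit(conc, theta)
print(f"slope={slope:.3f} (~1 for Langmuir), K_ads={K_fit:.2f}")
```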

Keywords: corrosion inhibition, steel, friendly inhibitors, Tafel polarisation

Procedia PDF Downloads 506
17625 Modeling Binomial Dependent Distribution of the Values: Synthesis Tables of Probabilities of Errors of the First and Second Kind of Biometrics-Neural Network Authentication System

Authors: B. S. Akhmetov, S. T. Akhmetova, D. N. Nadeyev, V. Yu. Yegorov, V. V. Smogoonov

Abstract:

We estimate the probabilities of errors of the first and second kind for non-ideal biometrics-to-neural-network converters with 256 outputs, and construct nomograms of the error probabilities for 'own' and 'alien' users as functions of the mathematical expectation and standard deviation of the normalized Hamming measures.
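Under the normal approximation of the binomial Hamming-distance distribution, the two error probabilities at a given decision threshold can be sketched as follows (all distribution parameters below are hypothetical illustrations):

```python
from statistics import NormalDist

def error_rates(threshold, mu_own, sd_own, mu_alien, sd_alien):
    """First-kind error (rejecting 'own') and second-kind error
    (accepting 'alien') for a normalized Hamming-distance score,
    using the normal approximation of the binomial distribution."""
    p1 = 1.0 - NormalDist(mu_own, sd_own).cdf(threshold)   # 'own' above threshold
    p2 = NormalDist(mu_alien, sd_alien).cdf(threshold)     # 'alien' below threshold
    return p1, p2

# Hypothetical means and standard deviations of the normalized measure.
p1, p2 = error_rates(0.25, mu_own=0.10, sd_own=0.06,
                     mu_alien=0.50, sd_alien=0.09)
print(f"P(type I)={p1:.4f}  P(type II)={p2:.4f}")
```

Sweeping the threshold and the distribution parameters over a grid yields exactly the kind of nomogram described above.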

Keywords: modeling, errors, probability, biometrics, neural network, authentication

Procedia PDF Downloads 472
17624 Application of Argumentation for Improving the Classification Accuracy in Inductive Concept Formation

Authors: Vadim Vagin, Marina Fomina, Oleg Morosin

Abstract:

This paper describes an argumentation approach to the problem of inductive concept formation. We propose to use argumentation, based on defeasible reasoning with justification degrees, to improve the quality of classification models obtained by generalization algorithms. Experimental results on both clean and noisy data are also presented.

Keywords: argumentation, justification degrees, inductive concept formation, noise, generalization

Procedia PDF Downloads 423
17623 Enhancing Scalability in Ethereum Network Analysis: Methods and Techniques

Authors: Stefan K. Behfar

Abstract:

The rapid growth of the Ethereum network has brought forth the urgent need for scalable analysis methods to handle the increasing volume of blockchain data. In this research, we propose efficient methodologies for making Ethereum network analysis scalable. Our approach leverages a combination of graph-based data representation, probabilistic sampling, and parallel processing techniques to achieve unprecedented scalability while preserving critical network insights. Data Representation: We develop a graph-based data representation that captures the underlying structure of the Ethereum network. Each block transaction is represented as a node in the graph, while the edges signify temporal relationships. This representation ensures efficient querying and traversal of the blockchain data. Probabilistic Sampling: To cope with the vastness of the Ethereum blockchain, we introduce a probabilistic sampling technique. This method strategically selects a representative subset of transactions and blocks, allowing for concise yet statistically significant analysis. The sampling approach maintains the integrity of the network properties while significantly reducing the computational burden. Graph Convolutional Networks (GCNs): We incorporate GCNs to process the graph-based data representation efficiently. The GCN architecture enables the extraction of complex spatial and temporal patterns from the sampled data. This combination of graph representation and GCNs facilitates parallel processing and scalable analysis. Distributed Computing: To further enhance scalability, we adopt distributed computing frameworks such as Apache Hadoop and Apache Spark. By distributing computation across multiple nodes, we achieve a significant reduction in processing time and enhanced memory utilization. Our methodology harnesses the power of parallelism, making it well-suited for large-scale Ethereum network analysis. 
Evaluation and Results: We extensively evaluate our methodology on real-world Ethereum datasets covering diverse time periods and transaction volumes. The results demonstrate its superior scalability, outperforming traditional analysis methods. Our approach successfully handles the ever-growing Ethereum data, empowering researchers and developers with actionable insights from the blockchain. Case Studies: We apply our methodology to real-world Ethereum use cases, including detecting transaction patterns, analyzing smart contract interactions, and predicting network congestion. The results showcase the accuracy and efficiency of our approach, emphasizing its practical applicability in real-world scenarios. Security and Robustness: To ensure the reliability of our methodology, we conduct thorough security and robustness evaluations. Our approach demonstrates high resilience against adversarial attacks and perturbations, reaffirming its suitability for security-critical blockchain applications. Conclusion: By integrating graph-based data representation, GCNs, probabilistic sampling, and distributed computing, we achieve network scalability without compromising analytical precision. This approach addresses the pressing challenges posed by the expanding Ethereum network, opening new avenues for research and enabling real-time insights into decentralized ecosystems. Our work contributes to the development of scalable blockchain analytics, laying the foundation for sustainable growth and advancement in the domain of blockchain research and application.
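The probabilistic sampling step described above can be illustrated by Bernoulli edge sampling of a toy transaction list (addresses below are synthetic; a real pipeline would read blocks from an Ethereum node):

```python
import random

# Synthetic transaction edges (sender, receiver) over 50 fake addresses.
random.seed(42)
transactions = [(f"addr{random.randrange(50)}", f"addr{random.randrange(50)}")
                for _ in range(10_000)]

def sample_edges(txs, p):
    """Keep each transaction independently with probability p, so the
    expected sample size is p * len(txs); the induced subgraph is then
    analyzed in place of the full graph."""
    return [t for t in txs if random.random() < p]

sample = sample_edges(transactions, p=0.1)
nodes = {a for edge in sample for a in edge}
print(len(sample), "sampled txs,", len(nodes), "active addresses")
```

Because each edge is kept independently, aggregate statistics such as degree distributions remain approximately unbiased while the data volume shrinks by roughly a factor of 1/p.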

Keywords: Ethereum, scalable network, GCN, probabilistic sampling, distributed computing

Procedia PDF Downloads 56
17622 Association Rules Mining Task Using Metaheuristics: Review

Authors: Abir Derouiche, Abdesslem Layeb

Abstract:

Association rule mining (ARM) is one of the most popular data mining tasks, and it is widely used in various areas. The search for association rules is an NP-complete problem, which is why metaheuristics have been widely used to solve it. This paper presents ARM as an optimization problem and surveys the metaheuristic approaches proposed in the literature.
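The objective a metaheuristic optimizes in ARM is typically built from support and confidence; a minimal sketch of these two measures on a toy transaction database:

```python
# Toy transaction database; each transaction is a set of items.
db = [{"bread", "milk"}, {"bread", "butter"},
      {"bread", "milk", "butter"}, {"milk", "butter"}]

def support(itemset):
    """Fraction of transactions containing every item in `itemset`."""
    return sum(1 for t in db if itemset <= t) / len(db)

def confidence(antecedent, consequent):
    """Conditional frequency of the consequent given the antecedent."""
    return support(antecedent | consequent) / support(antecedent)

rule = (frozenset({"bread"}), frozenset({"milk"}))
print(support(rule[0] | rule[1]), confidence(*rule))
```

A metaheuristic then searches the space of candidate rules for those maximizing a fitness function combining such measures.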

Keywords: optimization, metaheuristics, data mining, association rules mining

Procedia PDF Downloads 146