Search results for: any direction angle

279 Numerical Investigation of Multiphase Flow Structure for the Flue Gas Desulfurization

Authors: Cheng-Jui Li, Chien-Chou Tseng

Abstract:

This study adopts the Computational Fluid Dynamics (CFD) technique to build a multiphase flow numerical model in which the interface between the flue gas and the desulfurization liquid is traced by an Eulerian-Eulerian model. Inside the tower, contact between the desulfurization liquid from the spray nozzles and the flue gas triggers the chemical reactions that remove sulfur dioxide from the exhaust gas. Experimental observations of an industrial-scale plant show that the desulfurization mechanism depends on the mixing level between the flue gas and the desulfurization liquid. To improve the desulfurization efficiency significantly, the mixing efficiency and the residence time can be increased by perforated sieve trays. Hence, the purpose of this research is to investigate the flow structure of sieve trays for flue gas desulfurization by numerical simulation. In this study, an outlet at the top of the FGD tower discharges the clean gas, and a deep tank at the bottom of the tower collects the slurry liquid. In the major desulfurization zone, the desulfurization liquid and the flue gas form a complex mixing flow. This zone contains four perforated plates spaced 0.4 m apart, and a spray array of 33 nozzles is placed above the top sieve tray. Each nozzle injects desulfurization liquid consisting of an Mg(OH)2 solution. On each sieve tray, the outside diameter, the hole diameter, and the porosity are 0.6 m, 20 mm, and 34.3%, respectively. The flue gas enters the FGD tower through the space between the major desulfurization zone and the deep tank and finally leaves cleaned. The desulfurization liquid and the liquid slurry go to the bottom tank and are discharged as waste. When the desulfurization solution impacts a sieve tray, its downward momentum is transferred to the upper surface of the tray. As a result, a thin liquid layer develops above the sieve tray, the so-called slurry layer, within which the liquid volume fraction is around 0.3-0.7. Therefore, the liquid phase cannot be treated as a discrete phase under the Eulerian-Lagrangian framework. Besides, a liquid column passes through the sieve trays; this downward liquid column narrows as it interacts with the upward gas flow. After the flue gas enters the major desulfurization zone, its flow direction is upward (+y) in the passage between the liquid column and the solid boundary of the FGD tower. As a result, the flue gas near the liquid column may be rolled down toward the slurry layer, developing a vortex or circulation zone between any two sieve trays. The vortex structure between two sieve trays produces a sufficiently large two-phase contact area and increases the number of times the flue gas interacts with the desulfurization liquid. Overall, the sieve trays improve the two-phase mixing, which may improve the SO2 removal efficiency.
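The case for the Eulerian-Eulerian treatment above rests on the local liquid volume fraction: Eulerian-Lagrangian (discrete phase) models are commonly considered valid only for dilute dispersed phases (roughly below 10% by volume, a rule of thumb, not a value from this study), while the reported slurry layer reaches 0.3-0.7. A minimal Python sketch of that model-selection check, with illustrative values:

# A minimal sketch (not from the study): classify a cell by liquid volume
# fraction to decide which multiphase framework is defensible.
SLURRY_RANGE = (0.3, 0.7)  # slurry-layer band reported in the abstract
DPM_LIMIT = 0.1            # assumed rule-of-thumb ceiling for a discrete (Lagrangian) phase

def classify_cell(alpha_liquid):
    """Return a flow-region label for a cell's liquid volume fraction."""
    if SLURRY_RANGE[0] <= alpha_liquid <= SLURRY_RANGE[1]:
        return "slurry layer: Eulerian-Eulerian required"
    if alpha_liquid > DPM_LIMIT:
        return "dense mixture: Eulerian-Eulerian preferred"
    return "dilute: Eulerian-Lagrangian acceptable"

for alpha in (0.02, 0.15, 0.45):
    print(f"alpha = {alpha:.2f} -> {classify_cell(alpha)}")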

Keywords: Computational Fluid Dynamics (CFD), Eulerian-Eulerian Model, Flue Gas Desulfurization (FGD), perforated sieve tray

Procedia PDF Downloads 261
278 A Finite Element Analysis of Hexagonal Double-Arrowhead Auxetic Structure with Enhanced Energy Absorption Characteristics and Stiffness

Authors: Keda Li, Hong Hu

Abstract:

Auxetic materials, an emerging class of artificially designed metamaterials, have attracted growing attention due to their promising negative Poisson's ratio behaviors and tunable properties. Conventional auxetic lattice structures, for which the deformation process is governed by a bending-dominated mechanism, have faced the limitation of poor mechanical performance for many potential engineering applications. Recently, both load-bearing and energy absorption capabilities have become crucial considerations in auxetic structure design. This study reports the finite element analysis of a class of hexagonal double-arrowhead auxetic structures with enhanced stiffness and energy absorption performance. The structure design was developed by extending the traditional double-arrowhead honeycomb to a hexagonal frame; the stretching-dominated deformation mechanism was determined according to Maxwell's stability criterion. The finite element (FE) models of the 2D lattice structures, built with stainless steel material properties, were analyzed in ABAQUS/Standard to predict the in-plane structural deformation mechanism, failure process, and compressive elastic properties. Based on the computational simulation, a parametric analysis was performed to investigate the effect of the structural parameters on Poisson's ratio and mechanical properties. Geometrical optimization was then implemented to achieve the optimal Poisson's ratio for maximum specific energy absorption. In addition, the optimized 2D lattice structure was converted into a 3D geometric configuration by an orthogonal splicing method. The numerical results of the 2D and 3D structures under quasi-static compressive loading were compared separately with the traditional double-arrowhead re-entrant honeycomb in terms of specific Young's moduli, Poisson's ratios, and specific energy absorption. As a result, the energy absorption capability and stiffness are significantly reinforced over a wide range of Poisson's ratios compared to the traditional double-arrowhead re-entrant honeycomb. The auxetic behavior, energy absorption capability, and yield strength of the proposed structure are adjustable through different combinations of joint angle, strut thickness, and the length-width ratio of the representative unit cell. The numerical predictions in this study suggest the proposed hexagonal double-arrowhead structure could be a suitable candidate for energy absorption applications with a concurrent requirement for load-bearing capacity. For future research, experimental analysis is required to validate the numerical simulation.
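Maxwell's stability criterion, used above to establish the stretching-dominated mechanism, reduces in the 2D pin-jointed case to M = b - 2j + 3, where b is the number of struts and j the number of joints; M >= 0 indicates a stretching-dominated frame. A small sketch of this check (the unit-cell counts are hypothetical, not the paper's geometry):

# Maxwell's criterion for a 2D pin-jointed lattice: M = b - 2j + 3.
def maxwell_2d(b, j):
    """b: number of struts, j: number of joints."""
    m = b - 2 * j + 3
    return "stretching-dominated (M >= 0)" if m >= 0 else "bending-dominated (M < 0)"

print(maxwell_2d(b=10, j=6))  # M = 1  -> stretching-dominated
print(maxwell_2d(b=6, j=6))   # M = -3 -> bending-dominated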

Keywords: auxetic, energy absorption capacity, finite element analysis, negative Poisson's ratio, re-entrant hexagonal honeycomb

Procedia PDF Downloads 66
277 A Study on the Shear-Induced Crystallization of Aliphatic-Aromatic Copolyester

Authors: Ramin Hosseinnezhad, Iurii Vozniak, Andrzej Galeski

Abstract:

Shear-induced crystallization, originating from the orientation of chains along the flow direction, is an inevitable part of most polymer processing technologies. It plays a dominant role in determining final product properties and is affected by many factors such as shear rate, cooling rate, and total strain. Investigation of the shear-induced crystallization process becomes of great importance for the preparation of nanocomposites, which requires crystallization of nanofibrous sheared inclusions at higher temperatures. Thus, the effects of shear time, shear rate, and the thermal conditions of cooling on the crystallization of two aliphatic-aromatic copolyesters have been investigated. This was performed using a Linkam optical shearing system (CSS450) for both Ecoflex® F Blend C1200, produced by BASF, and a synthesized copolyester of butylene terephthalate and a mixture of butylene esters (adipate, succinate, and glutarate), PBASGT, containing 60% aromatic comonomer. The crystallization kinetics of these biodegradable copolyesters was studied under two different shearing conditions. First, a sample with a thickness of 60 µm was heated to 60 ˚C above its melting point and subsequently subjected to different shear rates (100-800 sec⁻¹) while cooling at specific rates. Second, the same type of sample was cooled down after shearing at constant temperature had finished. The intensity of transmitted depolarized light, recorded by a camera attached to the optical microscope, was used as a measure to follow the crystallization. Temperature dependencies of the conversion degree of the samples during cooling were collected and used to determine the half-temperature (Th), at which a 50% conversion degree was reached. Shearing Ecoflex films for 45 seconds at a shear rate of 100 sec⁻¹ resulted in a significant increase of Th from 56 ˚C to 70 ˚C. Moreover, the temperature range for the transition of molten samples to the crystallized state decreased from 42 ˚C to 20 ˚C. A comparatively small shift of 10 ˚C in Th towards higher temperatures was observed for PBASGT films at a shear rate of 600 sec⁻¹ for 45 seconds. However, insufficient melt flow strength and non-laminar flow due to Taylor vortices hindered reaching a more elevated Th at very high shear rates (600-800 sec⁻¹). The shift in Th was smaller for the samples sheared at a constant temperature and subsequently cooled down. This may be attributed to the longer time gap between the cessation of shearing and the onset of crystallization: the longer this gap, the more opportunity for crystal nuclei to re-melt at temperatures above Tm and for polymer chains to recoil and relax. It is found that the crystallization temperature, crystallization induction time, and spherulite growth of aliphatic-aromatic copolyesters are dramatically influenced by both the cooling rate and the shear imposed during the process.
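The half-temperature Th is simply the temperature at which the conversion-degree curve recorded during cooling crosses 50%. A minimal sketch of that interpolation (the data points below are invented for illustration, not measured values):

import numpy as np

# Hypothetical cooling run: temperature falls while conversion rises.
temperature_C = np.array([80, 75, 70, 65, 60, 55, 50], dtype=float)
conversion = np.array([0.00, 0.05, 0.20, 0.45, 0.70, 0.90, 1.00])

# Conversion increases monotonically, so it can serve as the x-axis.
th = np.interp(0.5, conversion, temperature_C)
print(f"Th = {th:.1f} C")  # temperature at 50% conversion (~64 C here)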

Keywords: induced crystallization, shear rate, aliphatic-aromatic copolyester, ecoflex

Procedia PDF Downloads 418
276 Analysis of Correlation Between Manufacturing Parameters and Mechanical Strength Followed by Uncertainty Propagation of Geometric Defects in Lattice Structures

Authors: Chetra Mang, Ahmadali Tahmasebimoradi, Xavier Lorang

Abstract:

Lattice structures are widely used in various applications, especially aeronautic, aerospace, and medical applications, because of their high-performance properties. Thanks to advancements in additive manufacturing technology, lattice structures can be manufactured by different methods such as laser beam melting. However, the presence of geometric defects in the lattice structures is inevitable due to the manufacturing process, and these defects may have a high impact on the mechanical strength of the structures. This work analyzes the correlation between the manufacturing parameters and the mechanical strengths of lattice structures. To do so, two types of lattice structures, body-centered cubic with z-struts (BCCZ) structures made of Inconel718 and body-centered cubic (BCC) structures made of Scalmalloy, are manufactured on a laser beam melting machine using a Taguchi design of experiments. Each structure is placed on the substrate in a specific position and orientation relative to the roller direction of the deposited metal powder; the position and orientation are taken as the manufacturing parameters. The geometric defects of each beam in the lattice are characterized and used to build the geometric model for the simulations. The mechanical strengths are then defined by the homogenized response, namely Young's modulus and yield strength, and their distribution is observed as a function of the manufacturing parameters. The mechanical response of the BCCZ structure is stretch-dominated, i.e., the mechanical strengths depend directly on the strengths of the vertical beams. As the geometric defects of the vertical beams change only slightly with their position/orientation on the manufacturing substrate, the mechanical strengths are less dispersed, and the manufacturing parameters have less influence on the mechanical strengths of the BCCZ structure. The mechanical response of the BCC structure is bending-dominated. The geometric defects of the inclined beams are highly dispersed within a structure and also vary with their position/orientation on the manufacturing substrate; for different positions/orientations on the substrate, the mechanical responses are highly dispersed as well. This shows that the mechanical strengths are directly impacted by the manufacturing parameters. In addition, this work studies the uncertainty propagation of the geometric defects to the mechanical strength of the BCC lattice structure made of Scalmalloy. To do so, we observe the distribution of mechanical strengths of the lattice according to the distribution of the geometric defects. A probability density law is determined based on a statistical hypothesis corresponding to the geometric defects of the inclined beams. Samples of inclined beams are then randomly drawn from the density law to build lattice structure samples, which are used in simulations to characterize the mechanical strengths. The results reveal that the distribution of mechanical strengths of structures with the same manufacturing parameters is less dispersed than that of structures with different manufacturing parameters. Nevertheless, the dispersion of mechanical strengths among structures with the same manufacturing parameters is not negligible.
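The sampling step above is a standard Monte Carlo propagation: draw defect realizations from the fitted density law, evaluate the mechanical response per draw, and summarize the output distribution. A toy sketch with an assumed normal defect law and a trivial stiffness surrogate standing in for the FE model:

import numpy as np

rng = np.random.default_rng(0)
nominal_d = 1.0  # nominal strut diameter (arbitrary units)

# Assumed density law for the as-built diameter (not the study's fitted law):
diameters = rng.normal(loc=0.95 * nominal_d, scale=0.05, size=10_000)

# Toy surrogate: axial strut stiffness scales with cross-section area (~d^2).
# The actual study runs an FE simulation for each sampled lattice instead.
rel_stiffness = (diameters / nominal_d) ** 2

print(f"mean relative stiffness: {rel_stiffness.mean():.3f}")
print(f"std of relative stiffness: {rel_stiffness.std():.3f}")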

Keywords: geometric defects, lattice structure, mechanical strength, uncertainty propagation

Procedia PDF Downloads 104
275 An Examination of Economic Evaluation Approaches in Mental Health Promotion Initiatives Targeted at Black and Asian Minority Ethnic Communities in the UK: A Critical Discourse Analysis

Authors: Phillipa Denise Peart

Abstract:

Black, Asian and Minority Ethnic (BAME) people are more at risk of developing mental health disorders because they are more exposed to unfavorable social, economic, and environmental circumstances, including housing, education, employment, community development, stigma, and discrimination. However, the majority of BAME mental health intervention studies focus on treatment with therapeutically effective drugs and use basic economic methods to evaluate their effectiveness; as a result, little is invested in the economic assessment of psychosocial interventions in BAME mental health. The UK government's austerity programme and reduced funds for mental health services have increased the need for the evaluation and assessment of initiatives to focus on value for money. The No Health without Mental Health policy (2011) provides practice guidance to practitioners, but there is little or no mention of the need to provide mental health initiatives targeted at BAME communities that are effective in terms of their impact and cost-effectiveness. This appears to be at odds with the wider political discourse, which suggests there should be an increasing focus on health economic evaluation. As a consequence, it could be argued that whilst such policies give organisations direction to provide mental health services to the BAME community, by not requiring effective governance, assurance, and evaluation processes they merely pay lip service to these problems and do not help advance knowledge and practice through evidence-based approaches. As a result, BAME communities suffer from a lack of efficient resources that could aid the recovery process. This research study explores the mental health initiatives targeted at BAME communities and analyses the techniques used to examine the cost-effectiveness of mental health initiatives for BAME communities. Using critical discourse analysis as an approach and method, mental health services will be selected as case studies and their evaluations examined, alongside the political drivers that frame, shape, and direct their work. In doing so, the study will analyse what the mental health policy initiatives are and how they are directed, and will demonstrate how economic models of evaluation are used in mental health programmes and how value-for-money impacts and outcomes are articulated by mental health programme staff. It is anticipated that this study will further our understanding of how to provide adequate mental health resources and will deliver creative, supportive research to ensure evaluation is effective, so that the government can provide and maintain high-quality and efficient mental health initiatives targeted at BAME communities.

Keywords: black, Asian and ethnic minority, economic models, mental health, health policy

Procedia PDF Downloads 89
274 Climate Change and Landslide Risk Assessment in Thailand

Authors: Shotiros Protong

Abstract:

Sudden landslides in Thailand have occurred frequently and more severely during the past decade. It is necessary to focus on the principal parameters used for analysis, such as land cover/land use, rainfall values, soil characteristics, and the digital elevation model (DEM). The combination of intense rainfall and severe monsoons is increasing due to global climate change. Landslide occurrences rapidly increase during intense rainfall, especially in the rainy season in Thailand, which usually starts around mid-May and ends in the middle of October. Rain-triggered landslide hazard analysis is the focus of this research. A combination of geotechnical and hydrological data is used to determine permeability, conductivity, bedding orientation, overburden, and the presence of loose blocks. Regional landslide hazard mapping is developed using the Stability INdex MAPping (SINMAP) model running on ArcGIS software version 10.1. Geological and land use data are used to define the probability of landslide occurrence in terms of geotechnical data. The geological data can indicate the shear strength and the angle of friction for soils above given rock types, which makes the approach generally applicable for landslide hazard analysis. To address the research objectives, the following methods are described in this study: setup and calibration of the SINMAP model, sensitivity analysis of the SINMAP model, geotechnical laboratory testing, landslide assessment at the present calibration, and landslide assessment under future climate simulation scenarios A2 and B2. In terms of hydrological data, average rainfall in millimetres per twenty-four hours is used to assess the rain-triggered landslide hazard in slope stability mapping. The 1954-2012 period is used as the baseline of rainfall data for the present calibration. For climate change in Thailand, future climate scenarios are simulated at spatial and temporal scales; to predict the precipitation impact under the future climate, the Statistical DownScaling Model (SDSM) version 4.2 is used to simulate future change between latitudes 16°26' and 18°37' north and longitudes 98°52' and 103°05' east. The research allows the mapping of risk parameters for landslide dynamics and indicates the spatial and temporal trends of landslide occurrences. Thus, regional landslide hazard mapping is carried out under present-day climatic conditions from 1954 to 2012 and under simulations of climate change based on GCM scenarios A2 and B2 from 2013 to 2099, related to the threshold rainfall values for the selected study area in Uttaradit province in the northern part of Thailand. Finally, the landslide hazard maps for the present and for the future under climate simulation scenarios A2 and B2 in Uttaradit province will be compared by area (km²).
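At the core of SINMAP is the dimensionless infinite-slope factor of safety, FS = (C + cos θ · (1 − w·r) · tan φ) / sin θ, where C is the combined dimensionless cohesion, θ the slope angle, φ the friction angle, w the relative wetness, and r the water-to-soil density ratio. A minimal sketch with illustrative parameter values (not the Uttaradit calibration):

import math

def factor_of_safety(theta_deg, c, phi_deg, wetness, r=0.5):
    """Dimensionless infinite-slope FS used at the core of SINMAP.
    theta_deg: slope angle; c: dimensionless cohesion (root + soil);
    phi_deg: soil friction angle; wetness: relative wetness (0 dry .. 1 saturated);
    r: water-to-soil density ratio (~0.5)."""
    theta = math.radians(theta_deg)
    phi = math.radians(phi_deg)
    return (c + math.cos(theta) * (1.0 - wetness * r) * math.tan(phi)) / math.sin(theta)

# Illustrative wet-season cell: steep slope and high wetness push FS below 1.
print(f"FS = {factor_of_safety(theta_deg=30, c=0.1, phi_deg=35, wetness=0.8):.2f}")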

Keywords: landslide hazard, GIS, slope stability index (SINMAP), landslides, Thailand

Procedia PDF Downloads 531
273 Evaluation of the Risk Factors on the Incidence of Adjacent Segment Degeneration After Anterior Neck Discectomy and Fusion

Authors: Sayyed Mostafa Ahmadi, Neda Raeesi

Abstract:

Background and Objectives: Cervical spondylosis is a common problem that affects the adult spine and is the most common cause of radiculopathy and myelopathy in older patients. Anterior discectomy and fusion is a well-known technique for degenerative cervical disc disease. However, one of its late undesirable complications is adjacent segment degeneration (ASD), which affects about 91% of patients within ten years. Many factors can contribute to this complication, but some are still debatable. Identifying these risk factors and eliminating them can improve quality of life. Methods: This is a retrospective cohort study. All patients who underwent anterior discectomy and fusion surgery in the neurosurgery ward of Imam Khomeini Hospital between 2013 and 2016 were evaluated, and their demographic information was collected. All patients were visited and examined for radiculopathy, myelopathy, and muscular force. At the same visit, all patients were asked to have frontal and lateral (profile) neck radiographs as well as a neck MRI (3 Tesla). Preoperative radiographs were used to measure the diameter of the cervical canal (Pavlov ratio) and to evaluate sagittal alignment (Cobb angle). Preoperative MRIs were reviewed for anterior and posterior longitudinal ligament calcification. Results: In this study, 57 patients were studied. The mean age of patients was 50.63 years, and 49.1% were male. Only 3.5% of patients had anterior and posterior longitudinal ligament calcification. Symptomatic ASD was observed in 26.6%. The X-rays and MRIs showed evidence of radiological ASD in 80.7%. Among patients who underwent one-level surgery, 20% had symptomatic ASD, but among patients who underwent two-level surgery the rate was 50%; in other words, the more levels operated on and fused, the higher the probability of symptomatic ASD (P-value < 0.05). For radiological ASD, 78% of patients who underwent one-level surgery and 92% of those who underwent two-level surgery were affected (P-value > 0.05). Demographic variables such as age, sex, height, weight, and BMI did not have a significant effect on the incidence of radiological ASD (P-value > 0.05), but sex and height were two influential factors for symptomatic ASD (P-value < 0.05). Other related variables, such as family history, smoking, and exercise, also had no significant effect (P-value > 0.05). Radiographic variables such as the Pavlov ratio and sagittal alignment had no significant effect on the incidence of radiological or symptomatic ASD (P-value > 0.05). The number of operated levels and the presence of anterior and posterior longitudinal ligament calcification before surgery also had no statistically significant effect on radiological ASD (P-value > 0.05). Regarding the ability of the neck to move in different directions, none of these variables differed significantly between the groups with radiological or symptomatic ASD and the non-affected group (P-value > 0.05). Conclusion: According to the findings of this study, this condition is multifactorial. The incidence of radiological ASD is much higher than that of symptomatic ASD (80.7% vs. 26.3%), and sex, height, and the number of fused levels are the only factors influencing the incidence of symptomatic ASD, while no variable influenced radiological ASD.
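The one-level vs two-level comparison (20% vs 50% symptomatic ASD, P < 0.05) is a standard two-proportion comparison. A sketch of such a test follows; the group sizes are hypothetical (chosen to be consistent with n = 57 and the reported rates), since the abstract gives only percentages:

from scipy.stats import chi2_contingency

# Hypothetical counts consistent with n = 57 and 20% vs 50% symptomatic ASD.
one_level = [7, 28]   # [ASD, no ASD] in 35 assumed one-level patients (20%)
two_level = [11, 11]  # [ASD, no ASD] in 22 assumed two-level patients (50%)

chi2, p, dof, _ = chi2_contingency([one_level, two_level])
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")  # p < 0.05 would match the reported finding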

Keywords: risk factors, anterior neck discectomy and fusion, adjacent segment degeneration, complication

Procedia PDF Downloads 28
272 The Lopsided Burden of Non-Communicable Diseases in India: Evidence from the Decade 2004-2014

Authors: Kajori Banerjee, Laxmi Kant Dwivedi

Abstract:

India is part of the ongoing globalization, contemporary convergence, industrialization, and technical advancement taking place worldwide. Among the manifestations of this evolution are rapid demographic, socio-economic, epidemiological, and health transitions, and there has been a considerable increase in non-communicable diseases due to changes in lifestyle. This study aims to assess the direction of the burden of disease and to compare the pressure of infectious diseases against cardio-vascular, endocrine, metabolic, and nutritional diseases. The change in prevalence over a ten-year period (2004-2014) is further decomposed to determine the net contribution of various socio-economic and demographic covariates. The present study uses the recent 71st (2014) and 60th (2004) rounds of the National Sample Survey. The pressure of infectious diseases against cardio-vascular (CVD) and endocrine, metabolic, and nutritional (EMN) diseases during 2004-2014 is calculated through Prevalence Rates (PR), Hospitalization Rates (HR), and Case Fatality Rates (CFR). The prevalence of non-communicable diseases is further used as the dependent variable in a logit regression to find the effect of various social, economic, and demographic factors on the chances of suffering from the particular disease. A multivariate decomposition technique further assists in determining the net contribution of socio-economic and demographic covariates. This paper provides evidence of stagnation in the burden of communicable diseases (CD) and a rapid increase in the burden of non-communicable diseases (NCD) uniformly across all population sub-groups in India. The CFR for CVD increased drastically during 2004-2014. The logit regression indicates that the chances of suffering from CVD and EMN are significantly higher among urban residents, older ages, females, and widowed/divorced/separated individuals. The decomposition provides ample proof that improvements in quality-of-life markers such as education, urbanization, and longevity have positively contributed to the increase in NCD prevalence rates. In India's current epidemiological phase, the compression theory of morbidity is in action, as a significant rise in the probability of contracting NCDs at older ages is observed over the period. Age is found to be a vital contributor to the increasing probability of having CVD and EMN over the study decade 2004-2014 in the nationally representative National Sample Survey sample.

Keywords: cardio-vascular disease, case-fatality rate, communicable diseases, hospitalization rate, multivariate decomposition, non-communicable diseases, prevalence rate

Procedia PDF Downloads 289
271 Analysis of Thermal Comfort in Educational Buildings Using Computer Simulation: A Case Study in Federal University of Parana, Brazil

Authors: Ana Julia C. Kfouri

Abstract:

A prerequisite of any building design is to provide security to the users, taking the climate and its physical and physical-geometrical variables into account. It is also important to highlight the relevance of the right material elements, which lie between the person and the climatic agents and must provide improved thermal comfort conditions with low environmental impact. Furthermore, technology, including computational simulation for design projects, is constantly advancing and should be used to develop sustainable buildings and to provide a higher quality of life for their users. In relation to comfort, the more satisfied the building users are, the better their intellectual performance will be. Based on that, the study of thermal comfort in educational buildings is of particular relevance, since the thermal characteristics of these environments are of vital importance to all users. Moreover, educational buildings are large constructions, and when they are poorly planned and executed they have negative impacts on the surrounding environment, as well as on user satisfaction, throughout their whole life cycle. Along these lines, a detailed case study on the thermal comfort situation at the Federal University of Parana (UFPR) was carried out to evaluate university classroom conditions. The main goal of the study is to perform a thermal analysis in three classrooms at UFPR in order to address the subjective and physical variables that influence thermal comfort inside the classroom. For the assessment of the subjective components, a questionnaire was applied to evaluate the users' preference regarding the local thermal conditions. Regarding the physical variables, on-site measurements were carried out, consisting of measurements of air temperature and air humidity both inside and outside the building, as well as meteorological variables such as wind speed and direction, solar radiation, and rainfall, collected from a weather station. Then, a computer simulation was conducted in the EnergyPlus software to reproduce the air temperature and air humidity values of the three classrooms studied. The EnergyPlus outputs were analyzed and compared with the on-site measurement results so that conclusions about the local thermal conditions could be drawn. The methodological approach of the study allowed a distinct perspective on an educational building, giving a better understanding of the classroom thermal performance and of the reasons for such behavior. Finally, the study prompts a reflection on the importance of thermal comfort for educational buildings, proposes thermal alternatives for future projects, and discusses the significant impact of using computer simulation in engineering solutions to improve the thermal performance of UFPR's buildings.
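When EnergyPlus output is compared against measured data, simple error metrics such as RMSE and mean bias error are the usual first check. A minimal sketch with made-up hourly values (not the study's measurements):

import numpy as np

measured = np.array([22.1, 23.4, 24.0, 24.8, 25.1])   # hypothetical measured air temp (C)
simulated = np.array([21.8, 23.0, 24.3, 25.2, 25.0])  # hypothetical EnergyPlus output (C)

err = simulated - measured
rmse = np.sqrt(np.mean(err ** 2))  # root-mean-square error
mbe = np.mean(err)                 # mean bias error (sign shows over/underestimation)
print(f"RMSE = {rmse:.2f} C, MBE = {mbe:+.2f} C")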

Keywords: computer simulation, educational buildings, EnergyPlus, humidity, temperature, thermal comfort

Procedia PDF Downloads 365
270 Turkish Validation of the Nursing Outcomes for Urinary Incontinence and Their Sensitivities on Nursing Interventions

Authors: Dercan Gencbas, Hatice Bebis, Sue Moorhead

Abstract:

In the nursing process, many nursing classification systems were created to be used internationally; among these are NANDA-I, the Nursing Outcomes Classification (NOC), and the Nursing Interventions Classification (NIC). In this direction, the main objective of this study is to establish a model for caregivers in hospitals and communities in Turkey and to ensure that nursing outcomes are assessed with NOC-based measures. There are many scales to measure Urinary Incontinence (UI), which is very common in children, in old age, and after vaginal birth; NOC scales, with their available surveys, are ideal for comprehensive and holistic assessment in the nursing process. For this reason, the purpose of this study is to evaluate the validity of the NOC outcomes and indicators used for the NANDA-I UI diagnoses. This research is a methodological study. In addition to the validity of the scale indicators, experts assessed how much the indicators would contribute to recovery after the nursing interventions. Content validity was applied and calculated according to Fehring's (1987) model, by which expert inclusion criteria and scores were determined: for example, if an expert had at least four years of clinical experience, their score was 4 points; at least one year of experience with a nursing classification system, 1 point; publication experience on nursing classification, 1 point; a doctoral degree in nursing, 2 points; and a master's degree, 1 point. In total, 55 experts were rated as 'senior degree' with a score of 90 according to this expert scoring. For content validity tailored to Fehring's model, the specialists were asked to score each NOC outcome and indicator between 1 and 5, from 1 (not important) to 5 (very important), according to how much each indicator would contribute to recovery after the nursing interventions to be applied. After the expert opinions, the weighted scores obtained for each NOC outcome and indicator were classified as critical (0.8 or above), supplementary (between 0.5 and 0.8), or excluded (below 0.5). In the NANDA-I/NOC/NIC system (guideline), 5 NOC outcomes are proposed for the UI nursing diagnoses: Urinary Continence, Urinary Elimination, Tissue Integrity, Self-Care: Toileting, and Medication Response. After the scales were translated into Turkish, the weighted averages of the expert scores for the coverage of all 5 NOC outcomes and for the contribution of the nursing interventions exceeded 0.8. Of the 82 indicators, 79 were calculated as critical and 3 as supplementary; since no score below 0.5 was obtained, no indicator was removed. All NOC outcomes were identified as valid and usable scales in Turkey. In this study, five NOC outcomes were thus verified for evaluating the outcomes of individuals receiving nursing care for UI and its variant types. Nurses in Turkey can benefit from the NOC outcome scales to provide care for elderly individuals with incontinence.
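In Fehring-style content validity scoring, each 1-5 expert rating is commonly mapped to a weight (5 -> 1.0, 4 -> 0.75, ..., 1 -> 0.0), the weights are averaged per indicator, and the average is compared with the 0.8/0.5 cut-offs used above. A small sketch with hypothetical ratings:

def fehring_ratio(ratings):
    """Average of weights (r - 1) / 4 over expert ratings on a 1-5 scale."""
    return sum((r - 1) / 4 for r in ratings) / len(ratings)

ratings = [5, 5, 4, 4, 5, 3, 4]  # hypothetical ratings from 7 experts
score = fehring_ratio(ratings)
label = "critical" if score >= 0.8 else "supplementary" if score >= 0.5 else "excluded"
print(f"weighted ratio = {score:.2f} -> {label}")  # 0.82 -> critical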

Keywords: nursing outcomes, content validity, nursing diagnosis, urinary incontinence

Procedia PDF Downloads 100
269 The Effect of the Performance Evaluation System on the Productivity of Administration and a Case Study

Authors: Ertuğrul Ferhat Yilmaz, Ali Riza Perçin

Abstract:

In business enterprises that have implemented modern management principles, the most important issues are increasing worker performance and maximizing income. Through the twentieth century, the rapid development of the data processing and communication sectors and the rise of multilateral business enterprises under free trade policies erased economic borders and changed local rivalry into global rivalry. Under these conditions of rivalry, business enterprises have to operate actively and productively in order to continue to exist. The employees working at business enterprises form the most important factor of production. Therefore, business enterprises, recognizing the importance of the human factor in increasing profit, have used the performance evaluation system to increase the success and development of employees. Performance evaluation aims to increase manpower productivity by using employees in an active way. Furthermore, this system assists the wage policies implemented in the business enterprise, the determination of strategic plans over the short and long terms, promotion, the determination of employees' educational needs, and decisions such as dismissal and job rotation. It requires a great deal of effort to catch the pace of change in the working realm and to keep ourselves up to date. Getting quality in people and having an effect in the workplace depend largely on the knowledge and competence of managers and prospective managers. Therefore, managers need to use performance evaluation systems in order to base their managerial decisions on sound data. This study aims at finding out whether organizations effectively use performance evaluation systems, how much importance is placed on this issue, and how much the results of the evaluations affect employees. Whether organizations have a competitive advantage and can continue their activities depends to a large extent on how effectively and efficiently they use their employees; therefore, it is of vital importance to evaluate employees' performance and to improve it according to the results of that evaluation. The performance evaluation system, which evaluates employees according to criteria related to the organization, has become one of the most important topics for management. By means of the important ends mentioned above, the performance evaluation system seems to be a tool that can be used to improve the efficiency and effectiveness of an organization. Because of its contribution to organizational success, considering performance evaluation on the axis of efficiency shows the importance of this study from a different angle. In this study, we explain the performance evaluation system, efficiency, and the relation between the two concepts. We also analyze the results of questionnaires conducted with textile workers in the city of Edirne, which yielded positive answers to the questions about the effects of performance evaluation on efficiency. After factor analysis, efficiency and motivation, which were determined as factors of the performance evaluation system, had the largest variance (19.703%) in our sample. Thus, this study shows that objective performance evaluation increases the efficiency and motivation of employees.

Keywords: performance, performance evaluation system, productivity, Edirne region

Procedia PDF Downloads 280
268 Tests for Zero Inflation in Count Data with Measurement Error in Covariates

Authors: Man-Yu Wong, Siyu Zhou, Zhiqiang Cao

Abstract:

In quality-of-life research, health service utilization is an important determinant of medical resource expenditures on colorectal cancer (CRC) care. A better understanding of increased health service utilization is essential for optimizing the allocation of healthcare resources and thus for enhancing service quality, especially in regions with high expenditure on CRC care such as Hong Kong. In assessing the association between health-related quality of life (HRQOL) and health service utilization in patients with colorectal neoplasm, count data models that account for overdispersion or extra zero counts can be used. In our data, the HRQOL evaluation is a self-reported measure obtained from a questionnaire completed by the patients, so misreports and variations in the data are inevitable. Besides, there are more zero counts in the observed number of clinical consultations (observed frequency of zero counts = 206) than under a Poisson distribution with mean equal to 1.33 (expected frequency of zero counts = 156), suggesting that excess zero counts may exist. Therefore, we study tests for detecting zero inflation in models with measurement error in covariates. Method: Under the classical measurement error model, the approximate likelihood function for the zero-inflated Poisson (ZIP) regression model can be obtained, and the Approximate Maximum Likelihood Estimate (AMLE) can be derived accordingly; this estimator is consistent and asymptotically normally distributed. By calculating the score function and Fisher information based on the AMLE, a score test is proposed to detect the zero-inflation effect in the ZIP model with measurement error. The proposed test asymptotically follows a standard normal distribution under H0, and it is consistent with the test proposed for the zero-inflation effect when there is no measurement error. Results: Simulation results show that the empirical power of our proposed test is the highest among existing tests for zero inflation in the ZIP model with measurement error. In the real data analysis, with or without considering measurement error in covariates, the existing tests and our proposed test all imply that H0 should be rejected with a P-value less than 0.001; i.e., the zero-inflation effect is very significant, and the ZIP model is superior to the Poisson model for analyzing these data. However, if measurement error in covariates is not considered, only one covariate is significant, whereas if measurement error is considered, only another covariate is significant; moreover, the directions of the coefficient estimates for these two covariates differ between the ZIP regression models with and without measurement error. Conclusion: In our study, the ZIP model rather than the Poisson model should be chosen when assessing the association between condition-specific HRQOL and health service utilization in patients with colorectal neoplasm, and models taking measurement error into account will provide statistically more reliable and precise information.
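The excess-zeros comparison quoted above is easy to reproduce: under a Poisson fit with mean λ, the expected share of zeros is e^(−λ). A sketch using the abstract's numbers (the sample size is an assumption chosen so that the expected zero count comes out near 156):

import math

lam = 1.33            # Poisson mean reported in the abstract
observed_zeros = 206  # observed frequency of zero consultation counts
n = 590               # assumed sample size (so that n * exp(-lam) is ~156)

expected_zeros = n * math.exp(-lam)
print(f"expected zeros: {expected_zeros:.0f}, observed: {observed_zeros}")
# The large excess of observed zeros motivates the ZIP model and the score test.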

Keywords: count data, measurement error, score test, zero inflation

Procedia PDF Downloads 263
267 A Numerical Hybrid Finite Element Model for Lattice Structures Using 3D/Beam Elements

Authors: Ahmadali Tahmasebimoradi, Chetra Mang, Xavier Lorang

Abstract:

Thanks to the additive manufacturing process, lattice structures are replacing traditional structures in the aeronautical and automobile industries. In order to evaluate the mechanical response of lattice structures, one has to resort to numerical techniques. Ansys is a globally well-known and trusted commercial software that allows us to model lattice structures and analyze their mechanical responses using either solid or beam elements; in this software, a script may be used to systematically generate lattice structures of any size. On the one hand, solid elements allow us to correctly model the contact between the substrates (the supports of the lattice structure) and the lattice structure, the local plasticity, and the junctions of the microbeams. However, their computational cost increases rapidly with the size of the lattice structure. On the other hand, although beam elements reduce the computational cost drastically, they do not correctly model the contact between the lattice structure and the substrates, nor the junctions of the microbeams. Also, the notion of local plasticity is no longer valid, and the deformed shape of the lattice structure does not correspond to that obtained with 3D solid elements. In this work, motivated by the pros and cons of the 3D and beam models, a numerical hybrid model is presented for lattice structures to reduce the computational cost of the simulations while avoiding the aforementioned drawbacks of the beam elements. The approach consists of using solid elements for the junctions and beam elements for the microbeams connecting the corresponding junctions to each other. When the global response of the structure is linear, the results from the hybrid models are in good agreement with those from the 3D models for body-centered cubic with z-struts (BCCZ) and body-centered cubic without z-struts (BCC) lattice structures. However, the hybrid models have difficulty converging when the effects of large deformation and local plasticity are considerable in the BCCZ structures. Furthermore, the effect of the junction size on the hybrid models' results is investigated. For BCCZ lattice structures, the results are not affected by the junction size; this is also valid for BCC lattice structures as long as the ratio of the junction size to the diameter of the microbeams is greater than 2. The hybrid model can also take geometric defects into account. As a demonstration, the point clouds of two lattice structures are parametrized in a platform called LATANA (LATtice ANAlysis) developed by IRT-SystemX. In this process, for each microbeam of the lattice structures, an ellipse is fitted to capture the effect of shape variation and roughness; each ellipse is represented by three parameters: semi-major axis, semi-minor axis, and angle of rotation. Having the parameters of the ellipses, the lattice structures are constructed in SpaceClaim (Ansys) using the geometrical hybrid approach. The results show a negligible discrepancy between the hybrid and 3D models, while the computational cost of the hybrid model is lower than that of the 3D model.
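The three-parameter ellipse description of each as-built microbeam cross-section maps naturally onto a small data type; from it one can derive, for instance, an equivalent circular diameter to assign to a beam element. A sketch of that idea (the fitted values are invented, and the equivalent-diameter step is an assumed simplification, not LATANA's actual pipeline):

import math
from dataclasses import dataclass

@dataclass
class BeamSection:
    a: float      # semi-major axis
    b: float      # semi-minor axis
    theta: float  # angle of rotation (radians)

    def area(self):
        return math.pi * self.a * self.b

    def equivalent_diameter(self):
        """Diameter of the circle with the same area, e.g. for a beam element."""
        return 2.0 * math.sqrt(self.a * self.b)

section = BeamSection(a=0.55, b=0.45, theta=math.radians(12))  # hypothetical fit
print(f"area = {section.area():.3f}, d_eq = {section.equivalent_diameter():.3f}")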

Keywords: additive manufacturing, Ansys, geometric defects, hybrid finite element model, lattice structure

Procedia PDF Downloads 92
266 Numerical Investigation of Combustion Chamber Geometry on Combustion Performance and Pollutant Emissions in an Ammonia-Diesel Common Rail Dual-Fuel Engine

Authors: Youcef Sehili, Khaled Loubar, Lyes Tarabet, Mahfoudh Cerdoun, Clement Lacroix

Abstract:

As emissions regulations grow more stringent and traditional fuel sources become increasingly scarce, incorporating carbon-free fuels in the transportation sector emerges as a key strategy for mitigating greenhouse gas emissions. While the utilization of hydrogen (H2) presents significant technological challenges, as evident in the engine limitation known as knocking, ammonia (NH3) provides a viable alternative that overcomes this obstacle and offers convenient transportation, storage, and distribution. Moreover, a dual-fuel engine using ammonia as the primary gaseous fuel is promising, delivering both ecological and economic benefits. However, in this combustion mode, substituting ammonia at high rates adversely affects combustion performance and leads to elevated emissions of unburnt NH3, especially under high loads, so this mode of combustion requires special treatment. This study simulates combustion in a common rail direct injection (CRDI) dual-fuel engine, considering the baseline geometry of the combustion chamber as well as fifteen (15) proposed alternative geometries, to determine the configuration that exhibits superior engine performance under high-load conditions. The research presented here focuses on improving the understanding of the equations and mechanisms involved in the combustion of finely atomized jets of liquid fuel and on mastering the CONVERGE™ code, which facilitates the simulation of this combustion process. By analyzing the effect of the piston bowl shape on the performance and emissions of a diesel engine operating in dual-fuel mode, this work combines knowledge of combustion phenomena with proficiency in the calculation code. To select the optimal geometry, the swirl, tumble, and squish flow patterns were evaluated for the fifteen (15) studied geometries. Variations in in-cylinder pressure, heat release rate, turbulence kinetic energy, turbulence dissipation rate, and emission rates were observed, while thermal efficiency and specific fuel consumption were estimated as functions of crank angle. To maximize thermal efficiency, a synergistic approach involving the enrichment of the intake air with oxygen (O2) and the enrichment of the primary fuel with hydrogen (H2) was implemented. Based on the results obtained, it is worth noting that the proposed geometry (T8_b8_d0.6/SW_8.0) outperformed the others in terms of flow quality and reduced pollutant emissions, with more than 90% less unburnt NH3 and an impressive improvement in engine efficiency of more than 11%.
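In dual-fuel studies, the ammonia substitution rate is usually quoted on an energy basis, weighting each fuel's mass flow by its lower heating value. A sketch of that bookkeeping (the mass flows are hypothetical, and the LHVs are typical literature values, not numbers from this paper):

LHV = {"NH3": 18.6, "diesel": 42.5, "H2": 120.0}  # MJ/kg, approximate literature values

def substitution_ratio(m_nh3, m_diesel, m_h2=0.0):
    """Share of the total fuel energy supplied by the carbon-free fuels."""
    e_nh3 = m_nh3 * LHV["NH3"]
    e_h2 = m_h2 * LHV["H2"]
    e_diesel = m_diesel * LHV["diesel"]
    return (e_nh3 + e_h2) / (e_nh3 + e_h2 + e_diesel)

# Hypothetical per-cycle fuel masses (kg): high-load case with a diesel pilot.
print(f"substitution = {substitution_ratio(8e-5, 2e-5, m_h2=5e-6):.1%}")  # ~71%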

Keywords: ammonia, hydrogen, combustion, dual-fuel engine, emissions

Procedia PDF Downloads 43
265 A Novel Chicken W Chromosome Specific Tandem Repeat

Authors: Alsu F. Saifitdinova, Alexey S. Komissarov, Svetlana A. Galkina, Elena I. Koshel, Maria M. Kulak, Stephen J. O'Brien, Elena R. Gaginskaya

Abstract:

The mystery of sex determination is one of the most ancient and is still not fully solved. In many species, sex determination is genetic and often accompanied by the presence of dimorphic sex chromosomes in the karyotype. Genomic sequencing has provided information about the gene content of sex chromosomes, which has revealed their origin from ordinary autosomes and allowed their evolutionary history to be traced. The female-specific W chromosome in birds, like the mammalian male-specific Y chromosome, is characterized by degeneration of gene content and accumulation of repetitive DNA. Tandem repeats complicate the analysis of genomic data: despite the best efforts, the chicken W chromosome assembly includes only 1.2 Mb of the expected 55 Mb. Supplementing the information on sex chromosome composition not only helps to complete genome assemblies but also moves us toward an understanding of the evolution of sex-determination systems. A whole-genome survey was applied to the Gallus_gallus WASHUC 2.60 assembly to search for repeats in the assembled genome, and high-copy-number repeats were searched for and assembled from the unassembled reads of the SRR867748 short-read dataset. For cytogenetic analysis, conventional fluorescence in situ hybridization was used for previously cloned W-specific satellites, and a specifically designed, directly labeled synthetic oligonucleotide DNA probe was used for the bioinformatically identified repetitive sequence. Hybridization was performed on mitotic chicken chromosomes and on manually isolated giant meiotic lampbrush chromosomes from growing oocytes. A novel chicken W-specific satellite, (GGAAA)n, which does not co-localize with any previously described class of W-specific repeats, was identified and mapped at high resolution. Within the autosomes, units of this repeat were found as part of the upstream regions of gonad-specific protein-coding sequences. These findings may contribute to the understanding of the role of tandem repeats in the regulation of sex-specific differentiation in birds and in sex chromosome evolution. This work was supported by postdoctoral fellowships from St. Petersburg State University (#1.50.1623.2013 and #1.50.1043.2014), the grant for Leading Scientific Schools (#3553.2014.4), and a grant from the Russian Foundation for Basic Research (#15-04-05684). The equipment and software of the Research Resource Center “Chromas” and the Theodosius Dobzhansky Center for Genome Bioinformatics of Saint Petersburg State University were used.
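Locating runs of a short tandem repeat such as (GGAAA)n in sequence data is straightforward with a regular expression; a toy sketch on an invented read (real pipelines work on full read sets and handle both strands and imperfect copies):

import re

seq = "TTGGAAAGGAAAGGAAACCTGAGGAAAGGAAATT"  # hypothetical read
for m in re.finditer(r"(?:GGAAA){2,}", seq):
    n_units = len(m.group()) // 5
    print(f"run of {n_units} x GGAAA at position {m.start()}")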

Keywords: birds, lampbrush chromosomes, sex chromosomes, tandem repeats

Procedia PDF Downloads 363
264 Material Use & Life Cycle GHG Emissions of Different Electrification Options for Long-Haul Trucks

Authors: Nafisa Mahbub, Hajo Ribberink

Abstract:

Electrification of long-haul trucks has been discussed as a potential decarbonization strategy. These trucks will require large batteries because of their weight and long daily driving distances. Around 245 million battery electric vehicles are predicted to be on the road by the year 2035. This huge increase in the number of electric vehicles (EVs) will require intensive mining operations for metals and other materials to manufacture millions of batteries. These operations will add significant environmental burdens, and there is a significant risk that the mining sector will not be able to meet the demand for battery materials, leading to higher prices. Since the battery is the most expensive component of an EV, technologies that enable electrification with smaller battery sizes have substantial potential to reduce the material usage and the associated environmental and cost burdens. One of these technologies is the 'electrified road' (eroad), where vehicles receive power while they are driving, for instance through an overhead catenary (OC) wire (like trolleybuses and electric trains), through wireless (inductive) chargers embedded in the road, or by connecting to an electrified rail in or on the road surface. This study assessed the total material use and associated life cycle GHG emissions of two types of eroads (overhead catenary and in-road wireless charging) for long-haul trucks in Canada and compared them to electrification using stationary plug-in fast charging. As different electrification technologies require different amounts of materials for the charging infrastructure and for the truck batteries, the study included the contributions of both in the total material use. The study developed a bottom-up model comparing the three charging scenarios: plug-in fast chargers, overhead catenary, and in-road wireless charging. The investigated materials for the charging technology and the batteries were copper (Cu), steel (Fe), aluminium (Al), and lithium (Li). For the plug-in fast charging technology, different charging scenarios ranging from overnight charging (350 kW) to megawatt (MW) charging (2 MW) were investigated. A 500 km stretch of highway (one lane of in-road charging per direction) was considered to estimate the material use for the overhead catenary and inductive charging technologies. The study considered trucks needing an 800 kWh battery under the plug-in charger scenario but only a 200 kWh battery under the OC and inductive charging scenarios. Results showed that, overall, the inductive charging scenario has the lowest material use, followed by the OC and plug-in charger scenarios, respectively. The material use for the OC and plug-in charger scenarios was 50-70% higher than for the inductive charging scenario for the overall system, including the charging infrastructure and the battery. The life cycle GHG emissions from the construction and installation of the charging technology materials were also investigated.
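The bottom-up accounting reduces to: material total = per-km infrastructure intensity x corridor length + per-kWh battery intensity x battery size x fleet size. The sketch below shows only that structure; every intensity value and the fleet size are hypothetical placeholders, and only the 500 km corridor and the 800 vs 200 kWh battery sizes come from the abstract:

KM = 500                # corridor length from the abstract
N_TRUCKS = 10_000       # assumed fleet served by the corridor

infra_kg_per_km = {     # hypothetical material intensities per scenario
    "plug_in":  {"Cu": 200,   "Fe": 1_000, "Al": 100},
    "catenary": {"Cu": 3_000, "Fe": 8_000, "Al": 500},
    "wireless": {"Cu": 2_000, "Fe": 4_000, "Al": 800},
}
battery_kwh = {"plug_in": 800, "catenary": 200, "wireless": 200}
battery_kg_per_kwh = {"Cu": 1.0, "Al": 1.0, "Li": 0.1}  # assumed intensities

for scenario, infra in infra_kg_per_km.items():
    total = {m: kg * KM for m, kg in infra.items()}       # infrastructure share
    for m, kg in battery_kg_per_kwh.items():              # add battery share
        total[m] = total.get(m, 0.0) + kg * battery_kwh[scenario] * N_TRUCKS
    print(scenario, {m: f"{v / 1e6:.1f} kt" for m, v in total.items()})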

Keywords: charging technology, eroad, GHG emissions, material use, overhead catenary, plug in charger

Procedia PDF Downloads 23
263 Verification of the Supercavitation Phenomena: Investigation of the Cavity Parameters and Drag Coefficients for Different Types of Cavitator

Authors: Sezer Kefeli, Sertaç Arslan

Abstract:

Supercavitation is a pressure-dependent process which offers the opportunity to eliminate wetted-surface effects on an underwater vehicle, exploiting the differences in viscosity and velocity between the liquid (freestream) and the gas phase. Cavitation occurs upon a rapid pressure drop or a temperature rise in the liquid phase. In this paper, pressure-based cavitation is investigated, as it is what is generally encountered in the underwater world. Basically, these vapor-filled, pressure-based cavities are unstable and harmful for any underwater vehicle, because the cavities (bubbles or voids) produce intense shock waves when collapsing. Supercavitation, on the other hand, is a desired and stabilized phenomenon compared to general pressure-based cavitation. The supercavitation phenomenon offers a way of minimizing form drag, and thus supercavitating vehicles have been revived. When proper circumstances are established, by either increasing the operating speed of the underwater vehicle or decreasing the pressure difference between the free stream and the artificial cavity pressure, continuous supercavitation is obtainable. There are two types of supercavitation for obtaining a stable and continuous supercavity, called natural and artificial supercavitation. In order to generate natural supercavitation, various mechanical structures, called cavitators, have been devised. In the literature, many cavitator types have been studied since the 1900s, either experimentally or numerically on CFD platforms, with the intent to observe natural supercavitation. In this paper, firstly, experimental results are obtained, and trend lines are generated for the supercavitation parameters in terms of the cavitation number (σ), form drag coefficient (C_D), dimensionless cavity diameter (d_m/d_c), and dimensionless cavity length (L_c/d_c). After that, natural cavitation verification studies are carried out for disk- and cone-shaped cavitators. In addition, the supercavitation parameters are numerically analyzed at different operating conditions, and the CFD results are fitted to the trend lines of the experimental results. The aims of this paper are to generate one generally accepted drag coefficient equation for disk and cone cavitators at different cavitator half angles and to investigate the supercavitation parameters with respect to the cavitation number. Moreover, 165 CFD analyses are performed at different cavitation numbers in FLUENT version 21R2. Five different cavitator types are modeled in SCDM with respect to the cavitators' half angles. After that, a CFD database is generated from the numerical results, and new trend lines are generated for the supercavitation parameters. These trend lines are compared with the experimental results. Finally, the generally accepted drag coefficient equation and the equations for the supercavitation parameters are generated.
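The two quantities that organize all of these trend lines are the cavitation number, σ = (p∞ − p_c) / (0.5·ρ·V²), and the drag coefficient, for which a linear fit of the form C_D = C_D0·(1 + σ) is widely used for disk cavitators (C_D0 ≈ 0.82 in the literature). A sketch with illustrative operating values, not this paper's fitted equation:

RHO_WATER = 998.0  # kg/m^3

def cavitation_number(p_inf, p_cavity, v):
    """sigma = (p_inf - p_cavity) / (0.5 * rho * v^2)."""
    return (p_inf - p_cavity) / (0.5 * RHO_WATER * v ** 2)

def drag_coefficient(sigma, cd0=0.82):
    """Widely used linear fit for a disk cavitator: C_D = C_D0 * (1 + sigma)."""
    return cd0 * (1.0 + sigma)

# Illustrative case: 50 m/s at shallow depth, cavity near water vapor pressure.
sigma = cavitation_number(p_inf=101_325.0, p_cavity=2_340.0, v=50.0)
print(f"sigma = {sigma:.3f}, C_D = {drag_coefficient(sigma):.3f}")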

Keywords: cavity envelope, CFD, high speed underwater vehicles, supercavitation, supercavitating flows, supercavitation parameters, drag reduction, viscous force elimination, natural cavitation verification

Procedia PDF Downloads 110
262 Event Data Representation Based on Time Stamp for Pedestrian Detection

Authors: Yuta Nakano, Kozo Kajiwara, Atsushi Hori, Takeshi Fujita

Abstract:

In association with the wave of electric vehicles (EVs), low-energy-consumption systems have become more and more important. One of the key technologies for realizing low energy consumption is the dynamic vision sensor (DVS), also called an event sensor or neuromorphic vision sensor. This sensor has several notable features, such as a high temporal resolution, which can reach 1 Mframe/s, and a high dynamic range (120 dB). However, the property that contributes most to low energy consumption is its sparsity; to be more specific, this sensor only captures pixels whose intensity changes. In other words, there is no signal in areas without any intensity change. That is to say, this sensor is more energy-efficient than conventional sensors such as RGB cameras because redundant data is removed at the source. On the other side of these advantages, the data is difficult to handle because its format is completely different from an RGB image: the acquired signals are asynchronous and sparse, and each signal is composed of an x-y coordinate, a polarity (two values: +1 or -1), and a timestamp; it does not include intensity such as RGB values. Therefore, as existing algorithms cannot be used straightforwardly, a new processing algorithm has to be designed to cope with DVS data. To solve the difficulties caused by the data format differences, most prior art builds frame data and feeds it to deep learning models such as Convolutional Neural Networks (CNNs) for object detection and recognition. However, even though the data can be fed this way, it is still difficult to achieve good performance due to the lack of intensity information. Although polarity is often used as intensity in place of RGB pixel values, it is apparent that polarity information is not rich enough. Considering this context, we propose to use the timestamp information as the data representation fed to deep learning. Concretely, we first build frame data divided by a certain time period and then assign an intensity value according to the timestamp of each signal within the frame; for example, a high value is given to a recent signal. We expected this data representation to capture the features especially of moving objects, because the timestamps encode the movement direction and speed. Using this proposed method, we built our own dataset with a DVS fixed on a parked car in order to develop an application for a surveillance system that can detect persons around the car. We consider the DVS one of the ideal sensors for surveillance purposes because it can run for a long time with low energy consumption in a non-dynamic scene. For comparison purposes, we reproduced a state-of-the-art method as a benchmark, which builds frames in the same way as ours but feeds polarity information to the CNN. We then measured the object detection performance of the benchmark and of our method on the same dataset. As a result, our method achieved an F1 score up to 7 points higher than the benchmark.
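The proposed representation amounts to a 'time surface': within each frame period, every event writes an intensity that decays with its age, so the most recent motion appears brightest. A tiny sketch of that construction (sensor size, window length, and events are all made up; the paper's exact mapping may differ):

import numpy as np

H, W = 4, 4    # tiny sensor for illustration
WINDOW = 0.10  # frame period in seconds (assumed)

# Events as (x, y, polarity, timestamp); the values are invented.
events = [(0, 1, +1, 0.01), (2, 3, -1, 0.05), (0, 1, +1, 0.09)]

frame = np.zeros((H, W), dtype=np.float32)
for x, y, pol, t in events:
    # Newer events map to values near 1, older ones decay toward 0
    # (polarity is unused in this minimal version).
    frame[y, x] = 1.0 - (WINDOW - t) / WINDOW

print(frame)  # pixel (1, 0) keeps the value of its most recent event (t = 0.09)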

Keywords: event camera, dynamic vision sensor, deep learning, data representation, object recognition, low energy consumption

Procedia PDF Downloads 69
261 Road Accidents Involving School Children in Dar es Salaam, Tanzania

Authors: Kabuga Daniel

Abstract:

Road accidents resulting in deaths and injuries have become a new public health challenge, especially in developing countries including Tanzania. Reports from the Tanzania Traffic Police Force show that accidents increased from 3,710 in 2015 to 5,219 in 2016; accident and safety data indicate that children are the most vulnerable to road crashes, with 78 pupils killed and 182 others seriously injured in separate road accidents last year. A survey done by Amend indicates that the modes of transport for pupils at Dar es Salaam schools are walking 87%, bus 9.21%, car 1.32%, motorcycle 0.88%, 3-wheeler 0.24%, train 0.14%, bicycle 0.10%, ferry 0.07%, and combined modes 0.44%. According to this study, the majority of school children walk to school; most of them agreed to continue walking and requested traffic-control signs for crossing the road, such as STOP and CHILD CROSSING signs, for safe crossing. Children not only ride inside buses (daladala) but also walk to and from school in groups, yet few parents or adults (33.2%) are willing to supervise their children while walking to school, while 50% of parents would agree to let their children walk alone to school if public transport started from a nearby street. The study used both qualitative and quantitative research methods, conducting physical surveys in sample districts. The main objectives of this research are to identify the factors affecting school children when they use public roads; to promote and encourage the safe use of public roads by all classes of users, especially pupils and students, through the circulation of advice, information, and knowledge gained from the research; and to recommend future directions for road design and planning for vulnerable users. The research also critically analyzes the problems causing death and injuries to school children in the Dar es Salaam Region. The study examines the relationship between road traffic accidents and factors such as socio-economic status, distance from school, number of siblings, behavioral problems, and the knowledge and attitudes of the public and parents towards road safety, as well as parental education. The study arrives at several recommendations, including infrastructure improvements such as safe footpaths, safe crossings, speed humps, speed limits, and road signs. Planners and policymakers wishing to increase walking and cycling among children need to consider options that address distance constraints; land-use planners and transport professionals should draw on a better understanding of the various factors that affect children's choice of school travel mode, and the results suggest that all school travel attributes should be considered when deciding school locations.

Keywords: accidents, children, school, Tanzania

Procedia PDF Downloads 217
260 A Study of a Diachronic Relationship between Two Weak Inflection Classes in Norwegian, with Emphasis on Unexpected Productivity

Authors: Emilija Tribocka

Abstract:

This contribution presents parts of an ongoing study of the diachronic relationship between two weak verb classes in Norwegian: the a-class (cf. the paradigm of ‘throw’: kasta – kastar – kasta – kasta) and the e-class (cf. the paradigm of ‘buy’: kjøpa – kjøper – kjøpte – kjøpt). The study investigates inflection class shifts between the two classes, taking Old Norse, the ancestor of Modern Norwegian, as its starting point. Examination of the inflection of 38 verbs in four chosen dialect areas (106 places of attestation) demonstrates that shifts from the a-class to the e-class are widespread to varying degrees in three of the four investigated areas and are more common than shifts in the opposite direction. The diachronic productivity of the e-class is unexpected for several reasons. There is general agreement that type frequency is an important factor influencing productivity. The a-class (53% of all weak verbs) was more type frequent in Old Norse than the e-class (42% of all weak verbs). Thus, given the type frequencies, the expansion of the e-class is unexpected. Furthermore, in the ‘core’ areas of expanded e-class inflection, the shifts disregard phonological principles, creating forms with awkward consonant clusters, e.g., fiskte instead of fiska, the preterit of fiska ‘fish’. Later on, these forms may be contracted, i.e., fiskte > fiste. In this contribution, two factors influencing the shifts are presented: phonological form and token frequency. Verbs with a stem ending in a consonant cluster, particularly when the cluster ends in -t, hardly ever shift to the e-class. As a matter of fact, verbs with this structure belonging to the e-class in Old Norse shift to the a-class in Modern Norwegian; e.g., the ON e-class verb skipta ‘change’ shifts to the a-class. This shift occurs as a result of the lack of morpho-phonological transparency between the stem and the preterit suffix of the e-class, -te. As there is a phonological fusion between the stem ending in -t and the suffix beginning in -t, the transparent a-class inflection is chosen. Token frequency also plays an important role in the shifts in some dialects. In one of the investigated areas, the most token-frequent verbs of the ON e-class remain in the e-class (e.g., høyra ‘hear’, leva ‘live’, kjøpa ‘buy’), while less frequent verbs may shift to the a-class. Furthermore, the results indicate that the shift from the a-class to the e-class occurs in some of the most token-frequent verbs of the ON a-class in this area, e.g., lika ‘like’, lova ‘promise’, svara ‘answer’. The latter is unexpected, as frequent items tend to remain stable. This study presents a case of unexpected productivity, demonstrating that minor patterns can grow and outdo major patterns; thus, type frequency is not the only factor that determines productivity. The study addresses the role of phonological form and token frequency in the spread of inflection patterns.

Keywords: inflection class, productivity, token frequency, phonological form

Procedia PDF Downloads 32
259 Influence of Counter-Face Roughness on the Friction of Bionic Microstructures

Authors: Haytam Kasem

Abstract:

The problem of quick and easily reversible attachment has become of great importance in different fields of technology. For this reason, a new field of adhesion science has emerged during the last decade, essentially inspired by animals and insects that, over the course of their natural evolution, have developed remarkable biological attachment systems allowing them to adhere to and run on walls and ceilings with uneven surfaces. Potential applications of engineering bio-inspired solutions include climbing robots, handling systems for wafers in nanofabrication facilities, and mobile sensor platforms, to name a few. However, despite efforts to apply bio-inspired patterned adhesive surfaces to the biomedical field, these are still at an early stage compared with their conventional uses in the other industries mentioned above. In fact, some critical issues still need to be addressed for the wide usage of bio-inspired patterned surfaces as advanced biomedical platforms; for example, the surface durability and long-term stability of surfaces with high adhesive capacity should be improved, as should the friction and adhesion capacities of these bio-inspired microstructures when contacting rough surfaces. One of the well-known prototypes for bio-inspired attachment systems is the biomimetic wall-shaped hierarchical microstructure for gecko-like attachment. Although the physical background of these attachment systems is widely understood, the influence of counter-face roughness and its relationship with the friction force generated when sliding against a wall-shaped hierarchical microstructure have yet to be fully analyzed and understood. To elucidate the effect of counter-face roughness on the friction of a biomimetic wall-shaped hierarchical microstructure, we replicated the isotropic topography of 12 different surfaces using replicas made of the same epoxy material. The counter-faces were fully characterized under a 3D optical profilometer to measure the roughness parameters. The friction forces generated by the spatula-shaped microstructure in contact with the tested counter-faces were measured on a home-made tribometer and compared with the friction forces generated by the spatulae in contact with a smooth reference. It was found that classical roughness parameters, such as the average roughness Ra, could not explain the topography-related variation in friction force. This led us to develop an integrated roughness parameter obtained by combining several parameters: the mean asperity radius of curvature (R), the asperity density (η), the standard deviation of asperity heights (σ), and the mean asperity slope angle (SDQ). This new integrated parameter is capable of explaining the variation observed in the friction measurements. Based on the experimental results, we also developed and validated an analytical model to predict the variation of the friction force as a function of the counter-face roughness parameters and the applied normal load.
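
The abstract does not give the functional form of the integrated parameter, so the sketch below only computes the four constituent statistics from a 1D profile under simplifying assumptions (local-maximum peak detection, parabolic approximation for summit curvature); the final combination shown is a purely hypothetical placeholder, not the authors' formula.

```python
import numpy as np

def roughness_statistics(z, dx):
    """Estimate asperity statistics from a 1D profile z sampled at spacing dx."""
    # Local maxima as candidate asperity summits
    peaks = np.where((z[1:-1] > z[:-2]) & (z[1:-1] > z[2:]))[0] + 1
    # Summit radius of curvature R = 1/|z''| (parabolic approximation),
    # with z'' from the discrete second derivative at each peak
    z2 = (z[peaks - 1] - 2.0 * z[peaks] + z[peaks + 1]) / dx ** 2
    R = np.mean(1.0 / np.abs(z2))
    eta = len(peaks) / (len(z) * dx)     # asperity density (summits per unit length)
    sigma = np.std(z[peaks])             # standard deviation of asperity heights
    # Slope angle from the RMS profile slope
    sdq = np.degrees(np.arctan(np.sqrt(np.mean(np.gradient(z, dx) ** 2))))
    return R, eta, sigma, sdq

rng = np.random.default_rng(0)
# Synthetic smoothed profile (heights in metres)
z = np.convolve(rng.normal(0.0, 1e-6, 2048), np.ones(16) / 16, mode="same")
R, eta, sigma, sdq = roughness_statistics(z, dx=1e-6)
# Hypothetical combination, larger for "sharper" surfaces (illustration only)
integrated = (sigma / R) * (eta * sigma) * np.tan(np.radians(sdq))
print(R, eta, sigma, sdq, integrated)
```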

Keywords: friction, bio-mimetic micro-structure, counter-face roughness, analytical model

Procedia PDF Downloads 217
258 An Improved Atmospheric Correction Method with Diurnal Temperature Cycle Model for MSG-SEVIRI TIR Data under Clear Sky Condition

Authors: Caixia Gao, Chuanrong Li, Lingli Tang, Lingling Ma, Yonggang Qian, Ning Wang

Abstract:

Knowledge of land surface temperature (LST) is of crucial importance in energy balance studies and environmental modeling. Satellite thermal infrared (TIR) imagery is the primary source for retrieving LST at regional and global scales. Because the radiance received by TIR sensors combines contributions from the atmosphere and the land surface, atmospheric correction has to be performed to remove the atmospheric transmittance and upwelling radiance. The Spinning Enhanced Visible and Infrared Imager (SEVIRI) onboard Meteosat Second Generation (MSG) provides measurements every 15 minutes in 12 spectral channels covering the visible to infrared spectrum at fixed view angles with a 3 km pixel size at nadir, offering new and unique capabilities for LST and LSE measurements. However, due to its high temporal resolution, the atmospheric correction cannot be performed with radiosonde profiles or reanalysis data alone, since such profiles are not available at all SEVIRI TIR image acquisition times. To solve this problem, a two-part six-parameter semi-empirical diurnal temperature cycle (DTC) model has been applied to the temporal interpolation of ECMWF reanalysis data. Because the DTC model is underdetermined with ECMWF data available at only four synoptic times per day (UTC 00:00, 06:00, 12:00, 18:00) for each location, several approaches are adopted in this study. It is well known that atmospheric transmittance and upwelling radiance are related to the water vapour content (WVC). With the aid of simulated data, this relationship can be determined under each viewing zenith angle for each SEVIRI TIR channel. Thus, the atmospheric transmittance and upwelling radiance are first removed with the aid of the instantaneous WVC, retrieved from the brightness temperatures in SEVIRI channels 5, 9 and 10, and a set of brightness temperatures for the surface-leaving radiance (Tg) is acquired. Subsequently, a set of six DTC-model parameters is fitted to these Tg by a Levenberg-Marquardt least-squares algorithm (denoted DTC model 1). Although the WVC retrieval error and the approximate relationships between WVC and the atmospheric parameters introduce some uncertainty, this does not significantly affect the determination of the three parameters td, ts and β (where β is the angular frequency, td is the time at which Tg reaches its maximum, and ts is the starting time of attenuation) in the DTC model. Furthermore, owing to the large fluctuation in temperature and the inaccuracy of the DTC model around sunrise, SEVIRI measurements from two hours before to two hours after sunrise are excluded. With td, ts and β known, a new DTC model (denoted DTC model 2) is fitted again to the Tg at UTC times 05:57, 11:57, 17:57 and 23:57, which are atmospherically corrected with ECMWF data. A new set of six DTC-model parameters is thereby generated, from which the Tg at any given time can be acquired. Finally, the method is applied successfully to SEVIRI data in channel 9. The results show that the proposed method performs reasonably without further assumptions, and the Tg derived with the improved method is much more consistent with that from radiosonde measurements.
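
A minimal sketch of fitting a two-part DTC model of the kind described (a cosine term during the day, exponential decay after the attenuation time ts, continuous at ts), using Levenberg-Marquardt via SciPy. The exact semi-empirical form and parameterization used by the authors may differ, and the data here are synthetic.

```python
import numpy as np
from scipy.optimize import least_squares

def dtc(t, T0, Ta, beta, td, ts, k):
    """Two-part diurnal temperature cycle: cosine daytime term with angular
    frequency beta peaking at td, exponential nighttime decay starting at ts."""
    day = T0 + Ta * np.cos(beta * (t - td))
    T_ts = T0 + Ta * np.cos(beta * (ts - td))   # value at the attenuation time
    night = T0 + (T_ts - T0) * np.exp(-(t - ts) / k)
    return np.where(t < ts, day, night)

def residuals(p, t, tg):
    return dtc(t, *p) - tg

# Synthetic surface-leaving brightness temperatures (Tg) over one day
t = np.arange(0.0, 24.0, 0.25)                        # hours
p_true = (290.0, 12.0, np.pi / 12.0, 13.0, 17.5, 4.0)  # (T0, Ta, beta, td, ts, k)
tg = dtc(t, *p_true) + np.random.default_rng(1).normal(0.0, 0.3, t.size)

p0 = (285.0, 10.0, np.pi / 12.0, 12.0, 17.0, 3.0)     # initial guess
fit = least_squares(residuals, p0, args=(t, tg), method="lm")
print("fitted parameters:", np.round(fit.x, 2))
```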

Keywords: atmosphere correction, diurnal temperature cycle model, land surface temperature, SEVIRI

Procedia PDF Downloads 248
257 Improved Signal-To-Noise Ratio by the 3D-Functionalization of Fully Zwitterionic Surface Coatings

Authors: Esther Van Andel, Stefanie C. Lange, Maarten M. J. Smulders, Han Zuilhof

Abstract:

False outcomes of diagnostic tests are a major concern in medical health care. To improve the reliability of surface-based diagnostic tests, it is of crucial importance to diminish the background signals that arise from the non-specific binding of biomolecules, a process called fouling. The aim is to create surfaces that repel all biomolecules except the molecule of interest. This can be achieved by incorporating antifouling, protein-repellent coatings between the sensor surface and its recognition elements (e.g. antibodies, sugars, aptamers). Zwitterionic polymer brushes are considered excellent antifouling materials; however, to be able to bind the molecule of interest, the polymer brushes have to be functionalized, and so far this has only been achieved at the expense of either antifouling or binding capacity. To overcome this limitation, we combined both features in one single monomer: a zwitterionic sulfobetaine, ensuring antifouling capability, equipped with a clickable azide moiety that allows for further functionalization. By copolymerizing this monomer with a standard sulfobetaine, the number of azides (and with that the number of recognition elements) can be tuned to the application. First, the clickable azido-monomer was synthesized and characterized, after which it was copolymerized to yield functionalizable antifouling brushes. The brushes were fully characterized using surface characterization techniques such as XPS, contact angle measurements, G-ATR-FTIR and XRR. As a proof of principle, the brushes were subsequently functionalized with biotin via strain-promoted alkyne-azide click reactions, which yielded a fully zwitterionic, biotin-containing, 3D-functionalized coating. The sensing capacity was evaluated by reflectometry using avidin- and fibrinogen-containing protein solutions. The surfaces showed excellent antifouling properties, as illustrated by the complete absence of non-specific fibrinogen binding, while at the same time clear responses were seen for the specific binding of avidin. A great increase in signal-to-noise ratio was observed, even when the number of functional groups was lowered to 1%, compared with the traditional modification of sulfobetaine brushes that relies on a 2D approach in which only the top layer can be functionalized. This study was performed on stoichiometric silicon nitride surfaces for future microring-resonator-based assays; however, the methodology can be transferred to other biosensor platforms, which are currently being investigated. The approach presented herein enables a highly efficient strategy for selective binding with retained antifouling properties, for improved signal-to-noise ratios in binding assays. The number of recognition units can be adjusted to a specific need, e.g. depending on the size of the analyte to be bound, widening the scope of these functionalizable surface coatings.

Keywords: antifouling, signal-to-noise ratio, surface functionalization, zwitterionic polymer brushes

Procedia PDF Downloads 286
256 Structural Analysis of Archaeoseismic Records Linked to the 5 July 408 - 410 AD Utica Strong Earthquake (NE Tunisia)

Authors: Noureddine Ben Ayed, Abdelkader Soumaya, Saïd Maouche, Ali Kadri, Mongi Gueddiche, Hayet Khayati-Ammar, Ahmed Braham

Abstract:

The archaeological site of Utica, located in north-eastern Tunisia, was founded in the 8th century BC by the Phoenicians as a port on the trade route connecting Phoenicia and the Straits of Gibraltar in the Mediterranean Sea. The flourishing of this city as an important settlement during the Roman period was followed by sudden abandonment, disuse, and progressive oblivion in the first half of the fifth century AD. This decline can be attributed to the destructive earthquake of 5 July 408 - 410 AD that affected this historic city, as documented in 1906 by the seismologist Fernand de Montessus de Ballore. The magnitude of the Utica earthquake was estimated at 6.8 by the Tunisian National Institute of Meteorology (INM). In order to highlight the damage caused by this earthquake, a field survey was carried out at the Utica ruins to detect and analyse the earthquake archaeological effects (EAEs) using structural geology methods. This approach allowed us to identify several types of structural damage, including: (1) folded mortar pavements, (2) cracks affecting the mosaic and walls of a water basin in the "House of the Grand Oecus", (3) displaced columns, (4) block extrusion in masonry walls, (5) undulations in mosaic pavements, and (6) tilted walls. The structural analysis of these EAEs and the data measurements reveal a seismic cause for all the evidence of deformation at the Utica monument. The orientation of the maximum horizontal stress (SHmax) inferred from the building-oriented damage at Utica shows a NNW-SSE direction under a compressive tectonic regime. As the seismogenic source of this earthquake, we propose the active E-W to NE-SW trending Utique - Ghar El Melh reverse fault, passing through the Utica monument and extending towards the Ghar El Melh Lake, as the causative tectonic structure. The active fault trace is well supported by instrumental seismicity, geophysical data (e.g., gravity, seismic profiles), and geomorphological analyses. In summary, we find that the archaeoseismic records detected at Utica are similar to those observed at many other archaeological sites affected by destructive ancient earthquakes around the world. Furthermore, the calculated orientation of the average maximum horizontal stress (SHmax) closely matches the present-day stress field, as highlighted by earthquake focal mechanisms in this region.
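
The averaging of damage-derived orientations mentioned above is commonly done with circular statistics for axial data (orientations are 180°-periodic, so angles are doubled before averaging). A minimal sketch, with hypothetical SHmax azimuths rather than the survey's actual measurements:

```python
import numpy as np

def mean_axial_orientation(azimuths_deg):
    """Mean of axial (180-degree periodic) orientation data, e.g. SHmax azimuths."""
    doubled = np.radians(2.0 * np.asarray(azimuths_deg, dtype=float))
    # Vector mean of the doubled angles, then halve back to axial space
    mean_angle = np.arctan2(np.sin(doubled).mean(), np.cos(doubled).mean())
    return (np.degrees(mean_angle) / 2.0) % 180.0

# Hypothetical SHmax azimuths inferred from individual EAEs (degrees from north)
shmax = [150, 160, 155, 170, 148, 165]   # a NNW-SSE cluster
print(f"mean SHmax orientation: {mean_axial_orientation(shmax):.1f} deg")
```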

Keywords: Tunisia, Utica, seismogenic fault, archaeological earthquake effects

Procedia PDF Downloads 11
255 Ethical Artificial Intelligence: An Exploratory Study of Guidelines

Authors: Ahmad Haidar

Abstract:

The rapid adoption of Artificial Intelligence (AI) technology carries unforeseen risks such as privacy violation, unemployment, and algorithmic bias, prompting research institutions, governments, and companies to develop principles of AI ethics. The extensive and diverse literature on AI lacks an analysis of how these principles have evolved in recent years. This paper has two fundamental purposes. The first is to provide insights into how the principles of AI ethics have changed recently, including concepts like risk management and public participation; to this end, a NOISE (Needs, Opportunities, Improvements, Strengths, & Exceptions) analysis is presented. The second is to offer a framework for building ethical AI linked to sustainability. This research adopts an explorative, more specifically inductive, approach to address the theoretical gap. The paper tracks the different efforts toward "trustworthy AI" and "ethical AI", compiling a list of 12 documents released from 2017 to 2022. The analysis of this list unifies the different approaches to trustworthy AI in two steps: first, splitting the principles into two categories, technical and net-benefit; second, testing the frequency of each principle, yielding the technical principles that may be useful for stakeholders considering the lifecycle of AI, also known as sustainable AI. Sustainable AI is the third wave of AI ethics and a movement to drive change throughout the entire lifecycle of AI products (i.e., idea generation, training, re-tuning, implementation, and governance) in the direction of greater ecological integrity and social fairness. In this vein, the results suggest transparency, privacy, fairness, safety, autonomy, and accountability as recommended technical principles to include in the lifecycle of AI. Another contribution is to capture the different bases that aid the process of AI for sustainability (e.g., towards the sustainable development goals). The results indicate data governance, do no harm, human well-being, and risk management as crucial AI-for-sustainability principles. The study's last contribution is to clarify how the principles evolved. To illustrate, in 2018 the Montreal Declaration mentioned principles including well-being, autonomy, privacy, solidarity, democratic participation, equity, and diversity. In 2021, new notions emerged in the European Commission proposal, including public trust, public participation, scientific integrity, risk assessment, flexibility, benefit and cost, and interagency coordination. The study design strengthens the validity of previous studies, while advancing knowledge in trustworthy AI by considering recent documents, linking principles with sustainable AI and AI for sustainability, and shedding light on the evolution of guidelines over time.
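
A minimal sketch of the frequency-testing step, assuming each guideline document has already been reduced to a set of principle labels; the document names and labels here are hypothetical placeholders, not the authors' actual corpus of 12 documents.

```python
from collections import Counter

# Hypothetical, hand-coded mapping: document -> principles it endorses
documents = {
    "doc_2018_a": {"transparency", "privacy", "fairness", "autonomy"},
    "doc_2019_b": {"transparency", "safety", "accountability"},
    "doc_2021_c": {"privacy", "risk management", "public participation"},
    "doc_2022_d": {"transparency", "fairness", "accountability", "risk management"},
}

# Tally how many documents mention each principle
counts = Counter(p for principles in documents.values() for p in principles)
for principle, n in counts.most_common():
    share = n / len(documents)
    print(f"{principle:20s} appears in {n}/{len(documents)} documents ({share:.0%})")
```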

Keywords: artificial intelligence, AI for sustainability, declarations, framework, regulations, risks, sustainable AI

Procedia PDF Downloads 67
254 Industrial Hemp Agronomy and Fibre Value Chain in Pakistan: Current Progress, Challenges, and Prospects

Authors: Saddam Hussain, Ghadeer Mohsen Albadrani

Abstract:

Pakistan is one of the countries most vulnerable to climate change. In a country where 23% of GDP relies on agriculture, this is a serious cause for concern. Introducing industrial hemp in Pakistan can help build climate resilience in the country's agricultural sector, as hemp has recently emerged globally as a sustainable, eco-friendly, resource-efficient, and climate-resilient crop. Hemp can absorb large amounts of CO₂, nourish the soil, and be used to create various biodegradable and eco-friendly products. Hemp is twice as effective as trees at absorbing and locking up carbon, with 1 hectare (2.5 acres) of hemp reckoned to absorb 8 to 22 tonnes of CO₂ a year, more than any woodland. Along with its high carbon-sequestration ability, it produces high biomass and can be successfully grown as a cover crop. Hemp can grow in almost all soil conditions and does not require pesticides. It is fast-growing and needs only 120 days to be ready for harvest. Compared with cotton, hemp requires 50% less water and can produce three times the fiber yield with a lower ecological footprint. Recently, the Government of Pakistan has allowed the cultivation of industrial hemp for industrial and medicinal purposes, making it possible for hemp to be reinserted into the country's economy. Pakistan's agro-climatic and edaphic conditions are well suited to producing industrial hemp, and its cultivation can bring economic benefits to the country, allowing Pakistan to enter global markets as a new exporter of hemp products. Hemp production can be especially attractive to the workforce, particularly farmers participating in hemp markets. The minimal production cost of hemp makes it affordable to smallholder farmers, especially those who need their cropping systems to be as sustainable as possible. Dr. Saddam Hussain is leading the first pilot project on industrial hemp in Pakistan. In the past three years, he has secured high-impact research grants on industrial hemp as Principal Investigator. He has screened non-toxic hemp genotypes, tested the adaptability of exotic material in various agroecological conditions, formulated the production agronomy, and successfully developed the complete value chain. He has developed prototypes (fabric, denim, knitwear) using hemp fibre in collaboration with industrial partners and has optimized indigenous fibre-processing techniques. In this lecture, Dr. Hussain will speak on hemp agronomy and its complete fibre value chain, discussing current progress and highlighting the major challenges and future research directions in hemp research.

Keywords: industrial hemp, agricultural sustainability, agronomic evaluation, hemp value chain

Procedia PDF Downloads 50
253 Movie and Theater Marketing Using the Potentials of Social Networks

Authors: Seyed Reza Naghibulsadat

Abstract:

Communication takes many forms of media production, including film and theater. Since social networks emerged, they have brought their own communication capabilities: speed, public access, the absence of a gatekeeping media organization, large-scale content production, and the development of critical thinking. They also offer the capability to broaden access to all kinds of media productions, including movies and theater shows, although this works differently in different conditions and communities. In terms of scale, film reaches a more general audience, while theater has a more specialized one. The film industry is built on more modern technologies, whereas theater, based on older ways of communication, carries more intimate and emotional aspects. In both cases, the main focus is developing access to movies and theater shows, which those involved in the field emphasize because of the capabilities of social networks. In this research, we examine these two areas, the components relevant to each on social networks, and the points common to both types of media production. The main goal of this research is to identify the strengths and weaknesses of using social networks for the marketing of movies and theater shows, while also considering the opportunities and threats in this field. With the emergence of social networks and their capacity to shift positions, the attractions of these two types of media production can provide the opportunity to become media with greater reach and higher profitability; the main consideration, however, is the opinions about these capabilities and the ability to use them for film and theater marketing. The main research questions are: What are the marketing components for movies and theaters using social media capabilities? What are their strengths and weaknesses? And what opportunities and threats face this market? The research was conducted with two methods, SWOT and meta-analysis, using non-probability purposive sampling. The results show that the preferred approach is one based on eliminating threats and weaknesses, emphasizing strengths, and exploiting opportunities to develop film and theater marketing through the capabilities of social networks, within the framework of local cultural values, while presenting achievements on an international or universal scale. This opens the way to introducing authentic Iranian culture to foreign enthusiasts through movies and theater art. Therefore, according to the results obtained from respondents, the model for using the capabilities of social networks for movie or theater marketing is one based on SO strategies, in other words offensive strategies, so that it can take advantage of internal strengths and make maximum use of external situations and opportunities to develop the use of movies and theater performances.

Keywords: marketing, movies, theatrical show, social network potentials

Procedia PDF Downloads 49
252 Analyzing the Performance of the Philippine Disaster Risk Reduction and Management Act of 2010 as Framework for Managing and Recovering from Large-Scale Disasters: A Typhoon Haiyan Recovery Case Study

Authors: Fouad M. Bendimerad, Jerome B. Zayas, Michael Adrian T. Padilla

Abstract:

With the increasing severity and frequency of disasters worldwide, the governance systems for disaster risk reduction and management in many countries are being put to the test. In the Philippines, the Disaster Risk Reduction and Management (DRRM) Act of 2010 (Republic Act 10121, or RA 10121), as the framework for disaster risk reduction and management, was tested when Super Typhoon Haiyan hit the eastern provinces of the Philippines in November 2013. Typhoon Haiyan is considered the strongest recorded typhoon in history to make landfall, with winds exceeding 252 km/hr. In assessing the performance of RA 10121, the authors conducted document reviews of related policies, plans, and programs, and held key interviews and focus groups with representatives of 21 national government departments, two (2) local government units, six (6) private sector and civil society organizations, and five (5) development agencies. Our analysis argues that enhancements to RA 10121 are needed in order to meet the challenges of large-scale disasters. The current structure, in which government agencies and departments organize along DRRM thematic areas such as response and relief, preparedness, prevention and mitigation, and recovery and rehabilitation, proved inefficient in coordinating response and recovery and in mobilizing resources on the ground. However, experience from various disasters shows the Philippine government's tendency to organize major recovery programs along development sectors such as infrastructure, livelihood, shelter, and social services, which is consistent with the concept of DRM mainstreaming. We argue that this sectoral approach is more effective than the thematic approach to DRRM. The council-type arrangement for coordination was also rendered inoperable by Typhoon Haiyan, because the agency responsible for coordination does not have the decision-making authority to mobilize the actions and resources of the other agencies that are members of the council. Resources have been devolved to the agencies responsible for each thematic area, and there is no clear command-and-direction structure for decision-making. However, experience also shows that the Philippine government has appointed ad-hoc bodies with authority over other agencies to coordinate and mobilize action and resources in recovering from large-scale disasters. We argue that this approach should be institutionalized within the government structure to enable a more efficient and effective disaster risk reduction and management system.

Keywords: risk reduction and management, recovery, governance, typhoon haiyan response and recovery

Procedia PDF Downloads 262
251 Stochastic Modelling for Mixed Mode Fatigue Delamination Growth of Wind Turbine Composite Blades

Authors: Chi Zhang, Hua-Peng Chen

Abstract:

With increasing demands on the world's resources, renewable and clean energy has been considered as an alternative to traditional sources. One practical example of using wind energy is the wind turbine, which has gained more attention in recent research. Like most offshore structures, the blades, which are the most critical components of the wind turbine, are subjected to millions of loading cycles during their service life. To operate safely in marine environments, the blades are typically made from fibre-reinforced composite materials to resist fatigue delamination and harsh environments. The fatigue crack development of blades is uncertain because of the indeterminate mechanical properties of composites and the uncertainties of the offshore environment, such as wave loads, wind loads, and humidity. There are three main delamination failure modes for composite blades, and the most common failure type in practice involves mixed-mode loading, typically a combination of opening (mode I) and shear (mode II). However, fatigue crack development under mixed-mode loading cannot be predicted deterministically because of the various uncertainties in realistic practical situations. Therefore, selecting an effective stochastic model to evaluate the mixed-mode behaviour of wind turbine blades is a critical issue. In previous studies, the gamma process has been considered an appropriate stochastic approach, because it simulates a deterioration process that proceeds in one direction, matching the realistic accumulation of fatigue damage in wind turbine blades. On the basis of existing studies, various Paris law equations are discussed for simulating the propagation of fatigue crack growth. This paper develops a Paris model with stochastic deterioration modelling based on the gamma process for predicting fatigue crack performance over the design service life. A numerical example of wind turbine composite materials is investigated to predict the mixed-mode crack depth by the Paris law and the probability of fatigue failure by the gamma process. Probability-of-failure curves under different situations are obtained from the stochastic deterioration model for comparison. Compared with experimental results, the gamma process can take the uncertain values into consideration for mixed-mode crack propagation, and the stochastic deterioration process agrees well with the realistic crack process for composite blades. Finally, according to the predicted results from the gamma stochastic model, assessment strategies for composite blades are developed to reduce total lifecycle costs and increase resistance to fatigue crack growth.
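
A minimal sketch of the gamma-process idea, assuming the crack depth at cycle N follows a gamma distribution whose mean tracks a Paris-law growth curve; the shape function, Paris constants, and critical depth below are illustrative assumptions, not the paper's calibrated mixed-mode values.

```python
import numpy as np
from scipy.stats import gamma as gamma_dist

# Illustrative Paris-law mean growth: da/dN = C * dK^m, with dK held constant
C, m, dK = 1e-11, 3.0, 8.0           # assumed Paris constants and stress-intensity range
a0, a_crit = 1.0e-3, 20.0e-3         # initial and critical crack depth (m)

def mean_depth(n_cycles):
    # With constant dK, the Paris law integrates to linear growth in N
    return a0 + C * dK ** m * n_cycles

# Gamma process: depth(N) ~ Gamma(shape = v(N), scale = u), so mean = v(N) * u
u = 0.5e-3                           # assumed scale parameter (m per unit shape)
N = np.linspace(1e5, 1e7, 200)
v = mean_depth(N) / u                # shape function matched to the mean curve

# Probability of failure = P(depth(N) > a_crit), via the gamma survival function
p_fail = gamma_dist.sf(a_crit, v, scale=u)
idx = np.argmax(p_fail > 0.05)
print(f"P_f first exceeds 5% near N = {N[idx]:.2e} cycles")
```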

Keywords: reinforced fibre composite, wind turbine blades, fatigue delamination, mixed failure mode, stochastic process

Procedia PDF Downloads 384
250 Crash and Injury Characteristics of Riders in Motorcycle-Passenger Vehicle Crashes

Authors: Z. A. Ahmad Noor Syukri, A. J. Nawal Aswan, S. V. Wong

Abstract:

The motorcycle has become one of the most common vehicle types on the road, particularly in the Asia region, including Malaysia, due to its convenient size and affordable price. This study focuses only on crashes involving motorcycles and passenger cars, comprising 43 real-world crashes obtained through an in-depth crash investigation process from June 2016 to July 2017. The study collected and analyzed vehicle and site parameters obtained during crash investigation, together with injury information acquired from the hospital treating the patient. The investigation team, consisting of two personnel, was stationed at the Emergency Department of the treatment facility and was dispatched to the crash scene upon receiving notification of a related crash. The injury information retrieved was coded according to the level of severity using the Abbreviated Injury Scale (AIS) and classified into body regions. The data revealed that weekend crashes were significantly higher during the night-time period, while on weekdays crash occurrence was highest during the morning commuting hours. Bad weather conditions had a minimal effect on the occurrence of motorcycle-passenger vehicle crashes, and nearly 90% of the motorcycles carried a single rider. Riders up to 25 years old were heavily involved in crashes with passenger vehicles (60%), followed by the 26-55 age group at 35%; male riders were dominant in every age segment. The majority of the crashes involved side impacts, followed by rear impacts, and cars outnumbered all other passenger vehicle types in crash involvement with motorcycles. The investigation data also revealed that the passenger vehicle was the most at-fault party (62%) in crashes with motorcycles, and most crashes involved situations in which both vehicles were travelling in the same direction and one of them was in a turning maneuver. More than 80% of the involved motorcycle riders were assigned a yellow severity level during triage. The study also found that nearly 30% of the riders sustained injuries to the lower extremities, while MAIS level 3 injuries were recorded for all body regions except the thorax. The results showed that crashes in which the motorcycle was at fault were more likely to occur at night and in rainy conditions; such crashes were also more likely to involve passenger vehicle types other than cars and carried a higher likelihood of a higher ISS (>6) for the involved rider. To reduce motorcycle fatalities, the crash characteristics must first be understood, and focus may be given to crashes involving passenger vehicles as the most dominant crash partner on Malaysian roads.
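
For readers unfamiliar with the scoring mentioned above: the Injury Severity Score (ISS) is conventionally computed from AIS codes as the sum of the squares of the highest AIS in the three most severely injured of six body regions, with an AIS of 6 in any region fixing the ISS at 75. A minimal sketch, with hypothetical injury data:

```python
from collections import defaultdict

BODY_REGIONS = {"head_neck", "face", "thorax", "abdomen", "extremities", "external"}

def injury_severity_score(injuries):
    """injuries: list of (body_region, AIS severity 1-6) tuples."""
    worst = defaultdict(int)
    for region, ais in injuries:
        assert region in BODY_REGIONS and 1 <= ais <= 6
        worst[region] = max(worst[region], ais)   # MAIS per body region
    if 6 in worst.values():
        return 75  # a maximal (unsurvivable) injury fixes ISS at 75 by convention
    top3 = sorted(worst.values(), reverse=True)[:3]
    return sum(a * a for a in top3)

# Hypothetical rider: AIS 3 lower-extremity fracture, AIS 2 head, AIS 1 abrasion
print(injury_severity_score([("extremities", 3), ("head_neck", 2), ("external", 1)]))
# -> 9 + 4 + 1 = 14, well above the ISS > 6 threshold discussed above
```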

Keywords: motorcycle crash, passenger vehicle, in-depth crash investigation, injury mechanism

Procedia PDF Downloads 300