Search results for: cetane number
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9893


8093 Primary Care Physicians in Urgent Care Centres of the United Kingdom

Authors: Mohammad Ansari, Ahmed Ismail, Satinder Mann

Abstract:

Overcrowding in Emergency Departments (EDs) of the United Kingdom has become a common problem. Urgent Care Centres were developed nearly a decade ago to reduce pressure on EDs; unfortunately, their development has failed to produce the projected effects. It was thought that nearly 40% of patients attending the ED would go to Urgent Care Centres and that these would be staffed by Primary Care Physicians. Data reveal that no more than 20% of patients were seen by Primary Care Physicians, even when the Urgent Care Centre was based in the ED. This study was carried out at the ED of George Eliot Hospital, Nuneaton, UK, where the Urgent Care Centre was based in the ED and, for nearly one year, employed Primary Care Physicians with a special interest in trauma. This was then followed by a Primary Care Physician and an Advanced Nurse Practitioner. We compared the number of patients seen during these periods and the cost-effectiveness of the service. We randomly selected a week of patients seen by Primary Care Physicians with a special interest in trauma and a week of patients seen by the Primary Care Physician and Advanced Nurse Practitioner, and compared the number and type of patients seen during these two periods. Nearly 38% of patients were seen by the Primary Care Physicians with a special interest in trauma, whilst only 14.3% of patients were seen by the Primary Care Physician and Advanced Nurse Practitioner. The Primary Care Physicians with a special interest in trauma were also paid less. Our study confirmed that unless Primary Care Physicians are able to treat minor trauma and interpret X-rays, the urgent care service is not going to be cost-effective. Numerous previous studies have shown that 15 to 20% of patients attending the ED can be treated by Primary Care Physicians without requiring any investigations for their management. It is advantageous to have Urgent Care Centres within the ED because, if a patient deteriorates, they can be transferred to the ED. We recommend that Urgent Care Centres should be part of the ED.
Our study shows that Urgent Care Centres in the ED can be helpful and cost-effective if staffed by either senior Emergency Physicians or Primary Care Physicians with a special interest and experience in the management of minor trauma.

Keywords: urgent care centres, primary care physician, advanced nurse practitioner, trauma

Procedia PDF Downloads 407
8092 Real-Time Online Tracking Platform

Authors: Denis Obrul, Borut Žalik

Abstract:

We present an extendable online real-time tracking platform that can be used to track a wide variety of location-aware devices, ranging from GPS devices mounted inside vehicles and closed, secure systems such as Teltonika to mobile phones running multiple platforms. Special consideration is given to a decentralized approach, security, and flexibility. A number of different use cases are presented as a proof of concept.

Keywords: real-time, online, gps, tracking, web application

Procedia PDF Downloads 335
8091 Optimisation of Energy Harvesting for a Composite Aircraft Wing Structure Bonded with Discrete Macro Fibre Composite Sensors

Authors: Ali H. Daraji, Ye Jianqiao

Abstract:

The micro-electrical devices of wireless sensor networks are continuously being developed and have become very small and compact, with low electric power requirements, yet they rely on conventional batteries with a limited life. The low power requirements of these devices, together with the cost of conventional batteries and their replacement, have encouraged researchers to find an alternative power supply in the form of an energy harvesting system that can provide electric power over an effectively unlimited life. In the last few years, investigation of energy harvesting for structural health monitoring has increased, powering wireless sensor networks by converting waste mechanical vibration into electricity using piezoelectric sensors. Optimisation of energy harvesting is an important research topic for ensuring an efficient flow of electric power from structural vibration. The harvested power depends mainly on the properties of the piezoelectric material, the dimensions of the piezoelectric sensor, its position on the structure, and the value of the external electric load connected between the sensor electrodes. A larger sensor surface area does not guarantee greater harvested power when the sensor area covers positive and negative mechanical strain at the same time, as this leads to reduction or cancellation of the piezoelectric output power. Optimisation of energy harvesting is therefore achieved by locating these sensors precisely and efficiently on the structure. Limited published work has investigated energy harvesting for aircraft wings, and most published studies have simplified the aircraft wing structure to a cantilever flat plate or beam. In these studies, the optimisation of energy harvesting was investigated by determining the optimal value of an external electric load connected between the sensor electrode terminals, by an external electric circuit, or by randomly splitting the piezoelectric sensor into two segments.
However, aircraft wing structures are more complex than a beam or flat plate, being mostly constructed from flat and curved skins stiffened by stringers and ribs, with more complex mechanical strain induced on the wing surfaces. In this work, an aircraft wing structure bonded with discrete macro fibre composite sensors was modelled using multiphysics finite elements to optimise the energy harvesting by determining the optimal number of sensors, their locations, and the output resistance load. The optimal number and locations of macro fibre sensors were determined by maximising the open- and closed-loop sensor output voltage using frequency response analysis. Different optimal distributions, locations, and numbers of sensors were found on the top and bottom surfaces of the aircraft wing.
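The dependence of harvested power on the external resistive load can be illustrated with a simple single-mode sketch: near resonance, a piezoelectric sensor behaves roughly like an AC source behind its capacitive impedance, so the resistive load can be swept to find the peak. All numbers below (frequency, capacitance, open-circuit voltage) are assumed values for illustration, not parameters from the study.

```python
import numpy as np

# Hypothetical single-mode approximation: the sensor near resonance acts
# like an AC source V_oc behind a capacitive source impedance
# Z_s = 1 / (2*pi*f*C_p). Values below are assumed, not from the study.
f = 120.0        # excitation frequency, Hz (assumed)
C_p = 60e-9      # sensor capacitance, F (assumed)
V_oc = 5.0       # open-circuit voltage amplitude, V (assumed)

Z_s = 1.0 / (2.0 * np.pi * f * C_p)          # source impedance magnitude
R = np.logspace(3, 7, 2000)                  # candidate resistive loads, ohm
P = (V_oc**2 * R) / (2.0 * (R**2 + Z_s**2))  # average power into each load

R_opt = R[np.argmax(P)]
# For a purely capacitive source, the optimum resistive load equals |Z_s|.
```

This is the classic impedance-matching result the load-resistance optimisation in such studies searches for numerically.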

Keywords: energy harvesting, optimisation, sensor, wing

Procedia PDF Downloads 288
8090 Degradation of Petroleum Hydrocarbons Using Pseudomonas Aeruginosa Isolated from Oil Contaminated Soil Incorporated into E. coli DH5α Host

Authors: C. S. Jeba Samuel

Abstract:

Soil from oil fields, in particular, poses a great hazard to terrestrial and marine ecosystems. Traditional treatment of oil-contaminated soil cannot degrade the crude oil completely; so far, biodegradation has proved to be an efficient method. During biodegradation, crude oil is used as the carbon source, and the addition of nitrogenous compounds increases microbial growth, resulting in the effective breakdown of crude oil components into low-molecular-weight components. The present study was carried out to evaluate the biodegradation of crude oil by the hydrocarbon-degrading microorganism Pseudomonas aeruginosa isolated from a natural environment, namely oil-contaminated soil. Pseudomonas aeruginosa, an oil-degrading microorganism also called a hydrocarbon-utilizing microorganism (or “HUM” bug), can utilize crude oil as its sole carbon source. In this study, the biodegradation of crude oil was conducted with a modified mineral basal salt medium and nitrogen sources so as to increase the degradation. The plasmid from the isolated strain was incorporated into the E. coli DH5α host to speed up the degradation of oil. The use of molecular techniques increased oil degradation, which was confirmed by the degradation of aromatic and aliphatic rings of hydrocarbons, inferred from the smaller number of peaks in Fourier Transform Infrared Spectroscopy (FTIR). The gas chromatogram again confirmed better degradation by the transformed cells, with fewer components obtained in the oil treated with transformed cells. This study demonstrated the technical feasibility of directly inoculating transformed cells onto an oil-contaminated region, thereby achieving better oil degradation in a shorter time than with the wild strain.

Keywords: biodegradation, aromatic rings, plasmid, hydrocarbon, Fourier Transform Infrared Spectroscopy (FTIR)

Procedia PDF Downloads 349
8089 Factors Influencing Fertility Preferences and Contraceptive Use among Reproductive Aged Married Women in Eastern Ethiopia

Authors: Heroda Gebru, Berhanu Seyoum, Melake Damena, Gezahegn Tesfaye

Abstract:

Background: Ethiopia has a population policy aimed at reducing fertility and increasing contraceptive prevalence. Objective: To assess the fertility preference and contraceptive use status of married women living in Dire Dawa administrative city. Methods: A cross-sectional study with a sample of 421 married women of reproductive age was performed. Data were collected using a structured questionnaire during a house-to-house survey and a semi-structured questionnaire during in-depth interviews. Data were processed and analyzed using SPSS version 16. Univariate, bivariate, and multivariate analyses were employed. Results: A total of 421 married women of reproductive age were interviewed, giving a response rate of 100 percent. More than half (58.2%) of the respondents desired more children, while 41.8% wanted no more. Regarding contraceptive use, 52.5% of the respondents were using contraception at the time of the survey. Fertility preference and contraceptive use were significantly associated with the respondent's age, history of child death, number of living children, religion, and age at first birth. Conclusions: Women in the younger age group, those with no history of child death, and women with fewer surviving children were more likely to desire additional children. Women who were older at first birth and Protestant women were more likely to practice contraception. Strong information and education regarding contraception should be provided for the younger age group; advocacy at the level of religious leaders is important; comprehensive family planning counselling and education should be available for the community, husbands, and religious leaders; and efforts to increase contraceptive use should focus on the practical aspect.

Keywords: fertility preference, contraceptive use, univariate analysis, family planning

Procedia PDF Downloads 356
8088 Household Socioeconomic Factors Associated with Teenage Pregnancies in Kigali City, Rwanda

Authors: Dieudonne Uwizeye, Reuben Muhayiteto

Abstract:

Teenage pregnancy is a challenging problem for sustainable development due to the restrictions it poses on socioeconomic opportunities for young mothers, their children, and their families. Being unable to take on appropriate economic and social responsibilities, teen mothers become trapped in poverty and an economic burden to their family and country. Teenage pregnancy is also a health problem, because children born to very young mothers are vulnerable, with a greater risk of illness and death, and teenage mothers are more likely to face a greater risk of maternal mortality and other health and psychological problems. In Kigali city, Rwanda, the teenage pregnancy rate is currently high, and its increase in recent years is worrisome. However, only individual factors influencing teenage pregnancy tend to form the basis of interventions. It is important to understand the household-level socioeconomic factors associated with teenage pregnancy, to help the government, parents, and other stakeholders address the problem appropriately with sustainable measures. This study analyzed secondary data from the Fifth Rwanda Demographic and Health Survey (RDHS-V, 2014-2015) conducted by the National Institute of Statistics of Rwanda (NISR). The aim was to examine household socioeconomic factors associated with the incidence of teenage pregnancy in Kigali city. In addition to descriptive analysis, Pearson's chi-square test and binary logistic regression were used. Findings indicate that the marital status and age of the household head, the number of household members, the number of rooms used for sleeping, the educational level of the household head, and household wealth are significantly associated with teenage pregnancy in Rwanda (p < 0.05). Teenagers living with parents, those with more highly educated parents, and those from richer families were found to be less likely to become pregnant.
The age of the household head was identified as a factor in teenage pregnancy, with teenage-headed households being more vulnerable. The findings also revealed that household composition correlates with the probability of teenage pregnancy (p < 0.05), with teenagers from households with fewer members being more vulnerable. Regarding the size of the house, the study suggested that the more rooms available in a household, the fewer incidences of teenage pregnancy are likely to be observed (p < 0.05). However, teenage pregnancy was not significantly associated with physical violence among parents (p = 0.65) or the sex of the household head (p = 0.52), except in teen-headed households, which are predominantly female-headed. The study concludes that teenage pregnancy remains a serious social, economic, and health problem in Rwanda. It informs government officials, parents, and other stakeholders in taking interventions and preventive measures through community sex education and through policies and strategies that foster effective parental guidance, care, and control of young girls by meeting their necessary social and financial needs within households.
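The first analytical step the study describes, Pearson's chi-square test of association, can be computed by hand in a few lines. The 2x2 counts below are synthetic placeholders (rows: poorer/richer household, columns: teenage pregnancy yes/no), not the RDHS-V data.

```python
import numpy as np

# Synthetic 2x2 contingency table (NOT the survey data): rows = household
# wealth group, columns = teenage pregnancy (yes / no).
obs = np.array([[45.0, 155.0],
                [20.0, 180.0]])

row = obs.sum(axis=1, keepdims=True)
col = obs.sum(axis=0, keepdims=True)
exp = row @ col / obs.sum()                 # expected counts under independence
chi2 = ((obs - exp) ** 2 / exp).sum()       # Pearson chi-square statistic
dof = (obs.shape[0] - 1) * (obs.shape[1] - 1)   # = 1 for a 2x2 table
# chi2 above the 3.84 critical value rejects independence at p < 0.05 (1 dof).
```

In practice one would use `scipy.stats.chi2_contingency`, which also returns the p-value; the manual version just makes the expected-count construction explicit.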

Keywords: household socio-economic factors, Rwanda, Rwanda demographic and health survey, teenage pregnancy

Procedia PDF Downloads 162
8087 Analysis of Travel Behavior Patterns of Frequent Passengers after the Section Shutdown of Urban Rail Transit - Taking the Huaqiao Section of Shanghai Metro Line 11 Shutdown During the COVID-19 Epidemic as an Example

Authors: Hongyun Li, Zhibin Jiang

Abstract:

The travel of passengers in the urban rail transit network is influenced by changes in network structure and operational status, and the response of individual travel preferences to these changes also varies. Firstly, the influence of the suspension of urban rail transit line sections on passenger travel along the line is analyzed. Secondly, passenger travel trajectories containing multi-dimensional semantics are described based on network UD data. Next, passenger panel data based on spatio-temporal sequences is constructed to achieve frequent passenger clustering. Then, the Graph Convolutional Network (GCN) is used to model and identify the changes in travel modes of different types of frequent passengers. Finally, taking Shanghai Metro Line 11 as an example, the travel behavior patterns of frequent passengers after the Huaqiao section shutdown during the COVID-19 epidemic are analyzed. The results showed that after the section shutdown, most passengers would transfer to the nearest Anting station for boarding, while some passengers would transfer to other stations for boarding or cancel their travels directly. Among the passengers who transferred to Anting station for boarding, most of passengers maintained the original normalized travel mode, a small number of passengers waited for a few days before transferring to Anting station for boarding, and only a few number of passengers stopped traveling at Anting station or transferred to other stations after a few days of boarding on Anting station. The results can provide a basis for understanding urban rail transit passenger travel patterns and improving the accuracy of passenger flow prediction in abnormal operation scenarios.
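The GCN modelling step rests on one propagation rule: features are smoothed over the self-looped, symmetrically normalised adjacency matrix before a learned linear map and nonlinearity. A minimal sketch on a toy 4-station line network (not Line 11, and with random rather than learned weights) looks like this:

```python
import numpy as np

# One GCN propagation step, H' = ReLU(D^-1/2 (A+I) D^-1/2 H W), on a toy
# 4-station line network. Adjacency, features and weights are illustrative.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

A_hat = A + np.eye(4)                       # add self-loops
d = A_hat.sum(axis=1)
D_inv_sqrt = np.diag(d ** -0.5)             # symmetric normalisation
A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt

H = np.random.default_rng(0).normal(size=(4, 3))  # per-station features
W = np.random.default_rng(1).normal(size=(3, 2))  # (random) layer weights

H_next = np.maximum(A_norm @ H @ W, 0.0)    # ReLU activation
```

Stacking such layers lets each station's representation absorb information from neighbouring stations, which is what makes the architecture suitable for network-structured passenger data.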

Keywords: urban rail transit, section shutdown, frequent passenger, travel behavior pattern

Procedia PDF Downloads 58
8086 Impact of Different Fuel Inlet Diameters onto the NOx Emissions in a Hydrogen Combustor

Authors: Annapurna Basavaraju, Arianna Mastrodonato, Franz Heitmeir

Abstract:

The Advisory Council for Aeronautics Research in Europe (ACARE) calls, in its Vision 2020, for an overall reduction of NOx emissions by 80%, which encourages researchers to work on novel technologies; one such technology is the use of alternative fuels. Among these fuels, hydrogen is of interest because its only significant pollutant is NOx. NOx formation in hydrogen combustion depends on various parameters such as air pressure, inlet air temperature, and the air-to-fuel jet momentum ratio. Accordingly, this research investigates the impact of the air-to-fuel jet momentum ratio on NOx formation in a hydrogen combustion chamber for aircraft engines. The air-to-fuel jet momentum ratio is defined as the momentum of the air jet relative to that of the fuel jet. The experiments were performed in an existing combustion chamber that had previously been tested with methane. Premixing of the reactants was not considered, due to the high reactivity of hydrogen and the high risk of flashback. In order to create a leaner reaction zone at the burner and to decrease emissions, a forced internal recirculation flow was achieved by integrating a plate with a honeycomb-like structure, suited to the geometry of the liner. The liner was provided with an external cooling system to avoid an increase in local temperatures and, in turn, in the reaction rate of NOx formation. The injected air was preheated to aim at so-called flameless combustion. The air-to-fuel jet momentum ratio was varied by changing the area of the fuel inlets while keeping the number of fuel inlets constant, thus altering the fuel jet momentum while maintaining the homogeneity of the flow. Within this analysis, promising results for flameless combustion were achieved.
For a constant number of fuel inlets, it was seen that reducing the fuel inlet diameter decreased the air-to-fuel jet momentum ratio and, in turn, lowered the NOx emissions.
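The trend of the abstract's final sentence follows directly from the definition J = (m_air * v_air) / (m_fuel * v_fuel): at fixed fuel mass flow, shrinking the inlet area raises the fuel jet velocity (v = m_dot / (rho * A)) and so lowers J. The sketch below uses assumed flow numbers, not the rig's operating point.

```python
import math

# Air-to-fuel jet momentum ratio J = (m_dot_air*v_air) / (m_dot_fuel*v_fuel).
# All flow values are assumed for illustration, not measured rig data.
m_dot_air, v_air = 0.050, 40.0     # air mass flow (kg/s) and velocity (m/s)
m_dot_fuel = 0.0004                # hydrogen mass flow, kg/s
rho_h2 = 0.0899                    # hydrogen density at ambient, kg/m^3

def momentum_ratio(d_inlet_m, n_inlets):
    """J for n circular fuel inlets of diameter d_inlet_m (fixed m_dot_fuel)."""
    area = n_inlets * math.pi * (d_inlet_m / 2.0) ** 2
    v_fuel = m_dot_fuel / (rho_h2 * area)          # continuity equation
    return (m_dot_air * v_air) / (m_dot_fuel * v_fuel)

J_large = momentum_ratio(1.0e-3, 8)   # 1.0 mm inlets, 8 inlets
J_small = momentum_ratio(0.5e-3, 8)   # halved diameter, same inlet count
# Halving the diameter quarters the total area, quadruples v_fuel,
# so J drops by a factor of 4.
```

This scaling (J proportional to d^2 at constant mass flow and inlet count) is why the diameter, rather than the inlet count, was the variable changed in the experiments.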

Keywords: combustion chamber, hydrogen, jet momentum, NOx emission

Procedia PDF Downloads 280
8085 Investigating the Significance of Ground Covers and Partial Root Zone Drying Irrigation for Water Conservation Weed Suppression and Quality Traits of Wheat

Authors: Muhammad Aown Sammar Raza, Salman Ahmad, Muhammad Farrukh Saleem, Muhammad Saqlain Zaheer, Rashid Iqbal, Imran Haider, Muhammad Usman Aslam, Muhammad Adnan Nazar

Abstract:

One of the main negative effects of climate change is the increasing scarcity of water worldwide, especially for irrigation. In order to ensure food security with less available water, there is a need to adopt easy and economical techniques; two effective techniques are the use of ground covers and partial root zone drying (PRD). A field experiment was arranged to find the most suitable mulch for a PRD irrigation system in wheat. The experiment comprised two irrigation methods (I0 = irrigation on both sides of the roots and I1 = irrigation on only one side of the roots, as alternate irrigation) and four ground covers (M0 = open ground without any cover, M1 = black plastic cover, M2 = wheat straw cover, and M3 = cotton sticks cover). Greater plant height, spike length, number of spikelets, and number of grains were found in the full irrigation treatment, while water use efficiency and grain nutrient (NPK) contents were higher under PRD irrigation. All soil covers suppressed weeds and significantly influenced the yield attributes, the final yield, and the grain nutrient contents; however, the black plastic cover performed best. It was concluded that the joint use of both techniques was more effective for water conservation and for increasing grain yield than their sole application, and that the combination of PRD with black plastic mulch outperformed the other ground cover combinations used in the experiment.

Keywords: ground covers, partial root zone drying, grain yield, quality traits, WUE, weed control efficiency

Procedia PDF Downloads 224
8084 From Primer Generation to Chromosome Identification: A Primer Generation Genotyping Method for Bacterial Identification and Typing

Authors: Wisam H. Benamer, Ehab A. Elfallah, Mohamed A. Elshaari, Farag A. Elshaari

Abstract:

A challenge for laboratories is to provide bacterial identification and antibiotic sensitivity results within a short time; hence, advancement in the required technology is desirable to improve timing, accuracy, and quality. Even with the current advances in methods for both phenotypic and genotypic identification of bacteria, there remains a need to develop method(s) that improve the accuracy and speed of bacteriology laboratories. The hypothesis introduced here is based on the assumption that the chromosome of any bacterium contains unique sequences that can be used for its identification and typing. The outcome of a pilot study designed to test this hypothesis is reported in this manuscript. Methods: The complete chromosome sequences of several bacterial species were downloaded to use as search targets for unique sequences. Visual Basic and SQL Server (2014) were used to generate a complete set of 18-base-long primers, a process that started with reverse translation of six randomly chosen amino acids to limit the number of generated primers. In addition, software was designed to scan the downloaded chromosomes for matches to the generated primers, and the resulting hits were classified according to the number of similar chromosomal sequences, i.e., unique or otherwise. Results: All primers with identical/similar sequences in the selected genome sequence(s) were classified according to the number of hits in the chromosome search. Those identical to a single site on a single bacterial chromosome were referred to as unique; most generated primer sequences were identical to multiple sites on a single chromosome or on multiple chromosomes. Following scanning, the generated primers were classified by their ability to differentiate between medically important bacteria, and the initial results look promising.
Conclusion: A simple strategy that starts by generating primers was introduced: the primers were used to screen bacterial genomes for matches, and primer(s) uniquely identical to a specific DNA sequence on a specific bacterial chromosome were selected. The identified unique sequence can be used in different molecular diagnostic techniques, possibly to identify bacteria. In addition, a single primer that identifies multiple sites in a single chromosome can be exploited for region or genome identification. Although draft genome sequences of isolates enable high-throughput primer design using an alignment strategy, which enhances diagnostic performance in comparison to traditional molecular assays, in the present method the generated primers can be used to identify an organism before a draft sequence is available. The generated primers can also be used to build a bank of easily accessible primers for bacterial identification.
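The generate-then-screen idea can be sketched in a few lines of Python (the actual study used Visual Basic and SQL Server). The codon table below is deliberately truncated to the amino acids used, and the "chromosome" is a short synthetic string, not a real genome.

```python
import itertools

# Toy sketch: reverse-translate a 6-amino-acid word into all 18-base
# candidate primers, then keep those occurring exactly once in a target
# sequence. Codon table truncated to the amino acids used; sequence synthetic.
CODONS = {
    'M': ['ATG'], 'W': ['TGG'],
    'K': ['AAA', 'AAG'], 'N': ['AAT', 'AAC'],
    'D': ['GAT', 'GAC'], 'E': ['GAA', 'GAG'],
}

def generate_primers(aa_word):
    """All 18-base reverse translations of a 6-amino-acid word."""
    assert len(aa_word) == 6
    pools = [CODONS[aa] for aa in aa_word]
    return [''.join(codons) for codons in itertools.product(*pools)]

def unique_hits(primers, chromosome):
    """Primers matching exactly one site in the chromosome."""
    return [p for p in primers if chromosome.count(p) == 1]

primers = generate_primers('MKNDEW')   # 1*2*2*2*2*1 = 16 candidates
# Synthetic "chromosome" embedding exactly one of the 16 candidates:
chromosome = 'CC' + 'ATGAAAAATGATGAATGG' + 'CCGT'
unique = unique_hits(primers, chromosome)
```

A production version would, as the study notes, scan complete downloaded chromosomes and store the classified hits in a database rather than use substring counting on a string in memory.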

Keywords: bacteria chromosome, bacterial identification, sequence, primer generation

Procedia PDF Downloads 177
8083 Social Impact Bonds in the US Context

Authors: Paula M. Lantz

Abstract:

In the United States, significant socioeconomic and racial inequalities exist in many population-based indicators of health and social welfare. Although a number of effective prevention programs and interventions are available, local and state governments often do not pursue prevention in the face of budgetary constraints and more acute problems. There is growing interest in and excitement about “Pay for Success” (PFS) strategies, also referred to as social impact bonds, as an approach to financing and implementing promising prevention programs and services that help the public sector either save money or achieve greater value for an investment. The PFS finance model implements evidence-based interventions using capital from investors, who only receive a return on their investment from the government if agreed-upon, measurable outcomes are achieved. This paper discusses the current landscape of social impact bonds in the U.S. and their potential and challenges in addressing serious health and social problems. The paper presents an analysis of a number of social science issues that are fundamental to the potential for social impact bonds to successfully address social inequalities in health and social welfare, including: a) the economics of the intervention and a potential public payout; b) organizational and management issues in intervention implementation; c) evaluation research design and methods; d) legal/regulatory issues in public payouts to investors; e) ethical issues in the design of social impact bond deals and their evaluation; and f) political issues. Despite significant challenges in the U.S. context, there is great potential for social impact bonds, as a type of social impact investing, to encourage private investment in evidence-based interventions that address important public health and social problems in underserved populations while providing a return on investment.

Keywords: pay for success, public/private partnerships, social impact bonds, social impact investing

Procedia PDF Downloads 284
8082 Vulnerability Assessment of Vertically Irregular Structures during Earthquake

Authors: Pranab Kumar Das

Abstract:

A vulnerability assessment of buildings with irregularity in the vertical direction has been carried out in this study. The construction of vertically irregular buildings is increasing in the context of fast urbanization in developing countries, including India. During two reconnaissance-based surveys performed after the Nepal earthquake of 2015 and the Imphal (India) earthquake of 2016, it was observed that many structures were damaged due to vertically irregular configurations. These irregular buildings need to perform safely during seismic excitation; there is therefore an urgent need to establish the actual vulnerability of such irregular structures so that remedial measures can be taken to protect them during natural hazards such as earthquakes. This assessment will be very helpful for India as well as for other developing countries. A substantial body of research has addressed the vulnerability of plan-asymmetric buildings, but much less effort has been devoted to the vulnerability of vertically irregular buildings during earthquakes. Irregularity in the vertical direction may be caused by an irregular distribution of mass or stiffness or by a geometrically irregular configuration. Detailed analysis of such structures, particularly non-linear/pushover analysis for performance-based design, is a challenging task. The present paper considers a number of models of irregular structures; building models of both reinforced concrete and brick masonry are considered for the sake of generality. The analyses are performed with the help of both the finite element method and computational methods. The study, as a whole, may help to arrive at reasonably good estimates of, and insight into, the fundamental and other natural periods of such vertically irregular structures. The ductility demand, storey drift, and seismic response study help to identify the locations of critical stress concentration.
In summary, this paper is a humble step toward understanding the vulnerability of, and framing guidelines for, vertically irregular structures.

Keywords: ductility, stress concentration, vertically irregular structure, vulnerability

Procedia PDF Downloads 218
8081 Cadaveric Study of Lung Anatomy: A Surgical Overview

Authors: Arthi Ganapathy, Rati Tandon, Saroj Kaler

Abstract:

Introduction: A thorough knowledge of variations in lung anatomy is of prime significance during surgical procedures such as lobectomy, pneumonectomy, and segmentectomy of the lungs. The arrangement of structures in the lung hilum acts as a guide in performing such procedures. The normal pattern of arrangement of hilar structures in the right lung is, from above downwards: eparterial bronchus, pulmonary artery, hyparterial bronchus, and pulmonary veins. In the left lung it is, from above downwards: pulmonary artery, principal bronchus, and pulmonary vein. The arrangement of hilar structures from anterior to posterior in both lungs is pulmonary vein, pulmonary artery, and principal bronchus. The bronchial arteries are very small and are usually the posterior-most structures in the hilum of the lungs. Aim: The present study aims to report variations in the hilar anatomy (arrangement and number of structures) of the lungs. Methodology: 75 adult formalin-fixed cadaveric lungs from the Department of Anatomy, AIIMS New Delhi, were observed for variations in lobar anatomy. The arrangement of the pulmonary hilar structures was meticulously observed, and any deviation from the normal pattern of presentation was recorded. Results: Of the 75 adult lung specimens observed, 36 were right lungs and the rest were left lungs. Seven right lung specimens showed only two lobes divided by an oblique fissure, and one left lung showed three lobes. The normal pattern of arrangement of hilar structures was seen in 22 right lungs and 23 left lungs. The remaining lung specimens (14 right and 16 left) showed a varied pattern of arrangement of hilar structures: some showed alterations in the sequence of the pulmonary artery, pulmonary veins, and bronchus, and others in the number of these structures. Conclusion: Alterations in the pattern of arrangement of structures in the lung hilum are quite frequent.
A compromised knowledge of such variations may result in inadvertent complications, such as intraoperative bleeding, during surgical procedures.

Keywords: fissures, hilum, lobes, pulmonary

Procedia PDF Downloads 207
8080 Advancing Spatial Mapping and Monitoring of Illegal Landfills for Deprived Urban Areas in Romania

Authors: Șercăianu Mihai, Aldea Mihaela, Iacoboaea Cristina, Luca Oana, Nenciu Ioana

Abstract:

The emergence and neutralization of illegal waste dumps represent a global concern for waste management ecosystems, with a particularly pronounced impact on disadvantaged communities. All over the world, and in this particular case in Romania, a significant number of people reside in houses lacking any legal documentation, such as land ownership documents or building permits; these areas are referred to as “informal settlements”. An increasing number of regions and cities in Romania are struggling to manage their waste dumps, especially in the context of increasing poverty and a lack of regulation related to informal settlements. An example of such an informal settlement can be found at the terminus of Bistra Street in Câlnic, which falls under the jurisdiction of the Municipality of Reșița in Caraș-Severin County. The article presents a case study of employing remote sensing techniques and spatial data to monitor and map illegal waste practices, with subsequent integration into a geographic information system tailored for the Reșița community. In addition, the paper outlines the steps involved in devising strategies aimed at enhancing waste management practices in disadvantaged areas, in line with the shift toward a circular economy. The results presented in the paper comprise a spatial mapping and visualization methodology, calibrated with in situ data collection, that is applicable to identifying illegal landfills. Such approaches, which prove effective where conventional solutions have failed, need to be replicated and adopted more widely.

Keywords: waste dumps, waste management, monitoring, GIS, informal settlements

Procedia PDF Downloads 58
8079 A Paradigmatic Approach to University Management from the Perspective of Strategic Management: A Study in the Marmara Region of Turkey

Authors: Recep Yücel, Cihat Kartal, Mustafa Kara

Abstract:

From the standpoint of strategic management, a number of innovations rooted in the postmodern management approach are believed necessary in the governance of universities in Turkey. These requirements include the integration of public and private universities, international integration, strengthened R&D capacity, and a growing young population that can create a dynamic structure. According to the postmodern management approach, universities in Turkey, despite being autonomous institutions governed in the classical manner, should academically be non-hierarchical and creative. Studies that require a multidisciplinary academic environment depend on close cooperation between the formal and informal sub-units of a university. Moreover, in terms of postmodern management approaches, meeting these requirements is considered increasingly difficult given the growing number of universities in the country. Therefore, taking into account the psychological impact of the university organizational structure on academic personnel, this study aims to propose an appropriate model of university organization. In this context, it asks how innovation and international integration affect academic achievement under the classical organizational structure. The findings suggest that, because universities retain the classical organizational structure, establishing and maintaining international academic cooperation between universities is difficult, and that this structure hampers academic motivation, development, and innovation. To these ends, a qualitative study was conducted on the existing organization and management structures of universities in the Marmara Region of Turkey. The data were analyzed using content analysis, and the assessment was based on the results obtained.

Keywords: university, strategic management, postmodern management approaches, multidisciplinary studies

Procedia PDF Downloads 379
8078 Digital Content Strategy (DCS): Detailed Review of the Key Content Components

Authors: Oksana Razina, Shakeel Ahmad, Jessie Qun Ren, Olufemi Isiaq

Abstract:

The modern life of businesses is categorically reliant on their established position online, where digital (and particularly website) content plays a significant role as the first point of information. Digital content, therefore, becomes essential – from making the first impression to building and developing client relationships. Despite a number of valuable papers suggesting a strategic approach when dealing with digital data, other sources often do not view or accept the approach to digital content as a holistic or continuous process; associations are frequently made with merely a one-off marketing campaign or similar. The challenge is to establish an agreed definition for the notion of Digital Content Strategy, which currently does not exist, as DCS is viewed from an excessive number of different angles. A strategic approach to content is nonetheless required, both practically and contextually. The researchers therefore aimed to identify the key content components comprising a digital content strategy, ensuring that all aspects were covered and strategically applied – from the company’s understanding of the content value to the ability to accommodate flexibility of content and advances in technology. This conceptual project evaluated existing literature on the topic of Digital Content Strategy (DCS) and related aspects, using the PRISMA Systematic Review Method, Document Analysis, Inclusion and Exclusion Criteria, Scoping Review, the Snowballing Technique and Thematic Analysis. The data were collected from academic and statistical sources, government and relevant trade publications. Based on the suggestions from academic and trade sources related to the issues discussed, the researchers identified the key actions for content creation and attempted to define the notion of DCS. The major finding of the study is a set of Key Content Components of Digital Content Strategy, which can be considered for implementation in a business retail setting.

Keywords: digital content strategy, key content components, websites, digital marketing strategy

Procedia PDF Downloads 125
8077 The Impact of Cognitive Load on Deceit Detection and Memory Recall in Children’s Interviews: A Meta-Analysis

Authors: Sevilay Çankaya

Abstract:

The detection of deception in children’s interviews is essential for establishing statement veracity. A widely used method for deception detection is building cognitive load, which is the underlying logic of the cognitive interview (CI), and its effectiveness for adults is well established. This meta-analysis delves into the effectiveness of inducing cognitive load as a means of enhancing veracity detection during interviews with children. Additionally, the effect of cognitive load on the total number of events children recall is assessed as a second part of the analysis. The current meta-analysis includes ten effect sizes obtained from a database search. Effect sizes were calculated as Hedge’s g under a random-effects model using CMA version 2. Heterogeneity analysis was conducted to detect potential moderators. The overall result indicated that cognitive load had no significant effect on veracity outcomes (g = 0.052, 95% CI [-.006, 1.25]). However, a high level of heterogeneity was found (I² = 92%). Age, participants’ characteristics, interview setting, and characteristics of the interviewer were coded as possible moderators to explain this variance. Age was a significant moderator (β = .021; p = .03, R² = 75%), but the analysis did not reveal statistically significant effects for the other potential moderators: participants’ characteristics (Q = 0.106, df = 1, p = .744), interview setting (Q = 2.04, df = 1, p = .154), and characteristics of the interviewer (Q = 2.96, df = 1, p = .086). For the second outcome, the total number of events recalled, the overall effect was significant (g = 4.121, 95% CI [2.256, 5.985]): cognitive load was effective for total recalled events when interviewing children. All in all, while age plays a crucial role in determining the impact of cognitive load on veracity, the surrounding context, interviewer attributes, and inherent participant traits may not significantly alter the relationship. These findings highlight the need for more focused, age-specific methods when using cognitive load measures. Further studies in this field may make it possible to improve the precision and dependability of deceit detection in children’s interviews.
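The random-effects pooling used in meta-analyses of this kind (Hedge's g with an I² heterogeneity statistic) can be sketched in a few lines. The abstract's analysis was run in CMA version 2; the snippet below is an illustrative hand-rolled DerSimonian-Laird estimator applied to made-up effect sizes, not the study's data.

```python
import numpy as np

def dersimonian_laird(g, var_g):
    """Pool Hedge's g effect sizes under a random-effects model
    using the DerSimonian-Laird between-study variance estimator."""
    g, var_g = np.asarray(g, float), np.asarray(var_g, float)
    w = 1.0 / var_g                          # fixed-effect weights
    g_fixed = np.sum(w * g) / np.sum(w)
    q = np.sum(w * (g - g_fixed) ** 2)       # Cochran's Q
    df = len(g) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)            # between-study variance
    w_re = 1.0 / (var_g + tau2)              # random-effects weights
    g_pooled = np.sum(w_re * g) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    i2 = 100.0 * max(0.0, (q - df) / q) if q > 0 else 0.0
    return g_pooled, (g_pooled - 1.96 * se, g_pooled + 1.96 * se), i2

# Hypothetical per-study effect sizes (g) and their variances
g_pooled, ci, i2 = dersimonian_laird([0.10, -0.05, 0.30, 0.02],
                                     [0.04, 0.05, 0.06, 0.04])
print(g_pooled, ci, i2)
```

When Q does not exceed its degrees of freedom, tau² truncates to zero and the pooled estimate reduces to the fixed-effect result, as happens with these toy inputs.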

Keywords: deceit detection, cognitive load, memory recall, children interviews, meta-analysis

Procedia PDF Downloads 42
8076 The 'Toshi-No-Sakon' Phenomenon: A Trend in Japanese Family Formations

Authors: Franco Lorenzo D. Morales

Abstract:

‘Toshi-no-sakon,’ which translates to ‘age-gap marriage,’ is a term that has been popularized by celebrity couples in the Japanese entertainment industry. Japan is distinct among developed nations for its rapidly aging population, declining marital and fertility rates, and the reinforcement of traditional gender roles. Statistical data show that the average age of marriage in Japan is increasing every year, indicating a growing tendency toward late marriage. As a result, the government has been trying to curb these declining trends by encouraging marriage and childbirth among the populace. This graduate thesis seeks to analyze the ‘toshi-no-sakon’ phenomenon in light of Japan’s current economic and social situation, and to examine the implications for these kinds of married couples. The research also seeks to expound on age gaps within married couples, a factor rarely touched upon in Japanese family studies. A literature review was first performed in order to provide a framework for studying ‘toshi-no-sakon’ from the perspective of four fields of study—marriage, family, aging, and gender. Numerous anonymous online statements by ‘toshi-no-sakon’ couples were then collected and analyzed, bringing to light a number of concerns. Couples in which the husband is the older partner were prioritized in order to narrow the focus of the research, and ‘toshi-no-sakon’ is only considered when the couple’s age gap is ten years or more. Current findings suggest that one of the perceived merits for a woman marrying an older man is guaranteed financial security. However, this has been shown to be untrue, as a number of couples express concern regarding their financial situation, which could be attributed to the husband’s socio-economic status. Having an older husband who is approaching retirement age presents another dilemma, as the wife is more likely to be obliged to provide care for her aging husband. This notion of the wife as caregiver likely stems from an arrangement once common in Japanese families in which the wife must primarily care for her husband’s elderly parents. Childbearing is another concern, as couples are pressured to have a child right away due to the husband’s age, which also limits the couple’s ideal number of children. This is another problematic aspect, as the husband must provide income until his child has finished their education, implying that retirement may have to be delayed indefinitely. It is highly recommended that future studies conduct face-to-face interviews with couples and families who fall under the category of ‘toshi-no-sakon’ in order to gain a more in-depth perspective on the phenomenon and to reveal any undiscovered trends. Cases in which the wife is the older partner in the relationship should also be given focus in future studies of ‘toshi-no-sakon’.

Keywords: age gap, family structure, gender roles, marriage trends

Procedia PDF Downloads 347
8075 A Novel Heuristic for Analysis of Large Datasets by Selecting Wrapper-Based Features

Authors: Bushra Zafar, Usman Qamar

Abstract:

Large sample sizes and high dimensionality undermine the effectiveness of conventional data mining methodologies. Data mining techniques are important tools for extracting useful knowledge from a variety of databases; they provide supervised learning in the form of classification, building models that describe the important data classes, with the structure of the classifier based on the class attribute. Classification efficiency and accuracy are often strongly influenced by noisy and undesirable features in real application data sets. The inherent nature of a data set greatly complicates its quality analysis and leaves few practical approaches to use. To our knowledge, we present for the first time an approach for investigating the structure and quality of datasets through a targeted analysis that localizes their noisy and irrelevant features. Machine learning relies on feature selection as a pre-processing step, which allows a small subset of features to be selected from the full feature set, reducing the search space according to a certain evaluation criterion. The primary objective of this study is to narrow the scope of a given data sample by searching for a small set of important features that can yield good classification performance. For this purpose, a heuristic for wrapper-based feature selection using a genetic algorithm is employed, with an external classifier used for discriminative feature selection. A feature is selected based on its number of occurrences in the chosen chromosomes. Sample datasets are used to demonstrate the proposed idea. The proposed method improved the average accuracy across different datasets to about 95%. Experimental results illustrate that the proposed algorithm increases the accuracy of predicting different diseases.
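The wrapper idea described above can be sketched compactly: binary chromosomes encode feature masks, fitness is the cross-validated accuracy of an external classifier, and a feature's importance is how often it survives into the final population. This is a minimal illustrative sketch, not the paper's heuristic; the dataset, kNN classifier, and all GA hyperparameters are placeholder choices.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X, y = load_breast_cancer(return_X_y=True)
n_feat = X.shape[1]

def fitness(mask):
    """Cross-validated accuracy of an external kNN classifier on the subset."""
    if mask.sum() == 0:
        return 0.0
    clf = KNeighborsClassifier(n_neighbors=5)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

pop = rng.integers(0, 2, size=(12, n_feat))    # random initial feature masks
for gen in range(5):                           # a few generations, for illustration
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[::-1][:6]]  # truncation selection
    children = []
    for _ in range(6):
        a, b = parents[rng.integers(6)], parents[rng.integers(6)]
        cut = rng.integers(1, n_feat)          # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        child[rng.random(n_feat) < 0.05] ^= 1  # bit-flip mutation
        children.append(child)
    pop = np.vstack([parents, children])

# Occurrence frequency of each feature in the final chromosomes
importance = pop.mean(axis=0)
best = pop[np.argmax([fitness(m) for m in pop])]
print(best.sum(), "features selected")
```

In the paper's scheme, the occurrence counts across chosen chromosomes play the role that `importance` plays here.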

Keywords: data mining, genetic algorithm, KNN algorithms, wrapper based feature selection

Procedia PDF Downloads 304
8074 Experimental Simulations of Aerosol Effect to Landfalling Tropical Cyclones over Philippine Coast: Virtual Seeding Using WRF Model

Authors: Bhenjamin Jordan L. Ona

Abstract:

Weather modification is the act of altering weather systems, and it attracts considerable scientific interest. Cloud seeding is a common form of weather alteration. On the same principle, tropical cyclone mitigation experiments follow the methods of cloud seeding, with storm intensity to account for. This study will present the effects of aerosol on tropical cyclone cloud microphysics and intensity. The Weather Research and Forecasting (WRF) model, incorporating the Thompson aerosol-aware scheme, hosts the aerosol-cloud microphysics calculations of cloud condensation nuclei (CCN) ingested into tropical cyclones before landfall over the Philippine coast. The coupled microphysical and radiative effects of aerosols will be analyzed using numerical data conditions for Tropical Storm Ketsana (2009), Tropical Storm Washi (2011), and Typhoon Haiyan (2013), with varying CCN number concentrations per simulation per typhoon: clean maritime, polluted, and very polluted, having initial aerosol number concentrations of 300 cm⁻³, 1000 cm⁻³, and 2000 cm⁻³, respectively. Aerosol species such as sulphates, sea salts, black carbon, and organic carbon will be used as cloud nuclei, and mineral dust as ice nuclei (IN). To make the study as realistic as possible, the period of biomass burning caused by forest fires in Indonesia starting in October 2015 will be considered, as Typhoons Mujigae/Kabayan and Koppu/Lando were seeded by aerosol emissions consisting mainly of black carbon and organic carbon. The emission data used are from NASA's Moderate Resolution Imaging Spectroradiometer (MODIS). The physical mechanism(s) of intensification or deintensification of tropical cyclones will be determined after the seeding experiment analyses.

Keywords: aerosol, CCN, IN, tropical cyclone

Procedia PDF Downloads 280
8073 Liquid-Liquid Plug Flow Characteristics in Microchannel with T-Junction

Authors: Anna Yagodnitsyna, Alexander Kovalev, Artur Bilsky

Abstract:

The efficiency of certain technological processes in two-phase microfluidics, such as emulsion production, nanomaterial synthesis, nitration, and extraction, depends on the two-phase flow regimes in microchannels. For practical applications in chemistry and biochemistry, it is very important to predict the expected flow pattern for a large variety of fluids and channel geometries. In the case of immiscible liquids, plug flow is a typical and optimal regime for chemical reactions and needs to be predicted from empirical data or correlations. In this work, the flow patterns of immiscible liquid-liquid flow in a rectangular microchannel with a T-junction are investigated. Three liquid-liquid flow systems are considered: kerosene – water, paraffin oil – water, and castor oil – paraffin oil. Different flow patterns, such as parallel flow, slug flow, plug flow, dispersed (droplet) flow, and rivulet flow, are observed at different velocity ratios. A new flow pattern, parallel flow with a steady wavy interface (serpentine flow), has been found. It is shown that flow pattern maps based on Weber numbers for different liquid-liquid systems do not match well. The Weber number multiplied by the Ohnesorge number is proposed as a parameter to generalize the flow maps. Flow maps based on this parameter superpose well for all the liquid-liquid systems of this work and of other experiments. Plug length and velocity are measured for the plug flow regime. When the dispersed liquid wets the channel walls, plug length cannot be predicted by known empirical correlations. By means of the particle tracking velocimetry technique, instantaneous velocity fields in the plug flow regime were measured. Flow circulation inside the plug was calculated using the velocity data, which can be useful for mass flux prediction in chemical reactions.
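The generalising parameter proposed above is the product of two standard dimensionless groups, We = ρu²d/σ and Oh = μ/√(ρσd). A minimal sketch of how they would be evaluated; the fluid properties and channel scale below are illustrative placeholders, not measured values from the paper.

```python
import math

def weber(rho, u, d, sigma):
    """We = rho * u^2 * d / sigma: inertia vs. interfacial tension."""
    return rho * u ** 2 * d / sigma

def ohnesorge(mu, rho, sigma, d):
    """Oh = mu / sqrt(rho * sigma * d): viscous forces vs. inertia and capillarity."""
    return mu / math.sqrt(rho * sigma * d)

# Illustrative values for a kerosene-water flow in a 200 um channel (assumed)
rho, mu = 998.0, 1.0e-3     # water density [kg/m^3], viscosity [Pa.s]
sigma = 0.04                # interfacial tension [N/m] (assumed)
d, u = 200e-6, 0.1          # channel scale [m], superficial velocity [m/s]

We = weber(rho, u, d, sigma)
Oh = ohnesorge(mu, rho, sigma, d)
print(We, Oh, We * Oh)      # We*Oh is the proposed generalising parameter
```

Since Oh folds viscosity into the group, We·Oh lets fluid pairs of very different viscosity collapse onto one map, which is the motivation stated in the abstract.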

Keywords: flow patterns, hydrodynamics, liquid-liquid flow, microchannel

Procedia PDF Downloads 378
8072 A Statistical Approach to Predict and Classify the Commercial Hatchability of Chickens Using Extrinsic Parameters of Breeders and Eggs

Authors: M. S. Wickramarachchi, L. S. Nawarathna, C. M. B. Dematawewa

Abstract:

Hatchery performance is critical for the profitability of poultry breeder operations. Some extrinsic parameters of eggs and breeders can increase or decrease hatchability. This study aims to identify the extrinsic parameters affecting the commercial hatchability of local chickens' eggs and to determine the most efficient model for classifying batches with a hatchability rate greater than 90%. Seven extrinsic parameters were considered: egg weight, moisture loss, breeders' age, number of fertilised eggs, shell width, shell length, and shell thickness. Multiple linear regression was performed to determine the most influential variables on hatchability. First, the correlation between each parameter and hatchability was checked. Then a multiple regression model was developed, and the accuracy of the fitted model was evaluated. Linear Discriminant Analysis (LDA), Classification and Regression Trees (CART), k-Nearest Neighbors (kNN), Support Vector Machines (SVM) with a linear kernel, and Random Forest (RF) algorithms were applied to classify hatchability, using binary classification techniques. Hatchability was negatively correlated with egg weight, breeders' age, shell width, and shell length, while positive correlations were identified with moisture loss, number of fertilised eggs, and shell thickness. Multiple linear regression models were more accurate than single linear models, achieving the highest coefficient of determination (R² = 94%) and minimum AIC and BIC values. According to the classification results, RF, CART, and kNN achieved the highest accuracy values, 0.99, 0.975, and 0.972, respectively, for the commercial hatchery process. Therefore, RF is the most appropriate machine learning algorithm for classifying whether breeder outcomes are economically profitable in a commercial hatchery.
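The binary classification step described above can be sketched with a random forest on synthetic stand-ins for the seven extrinsic parameters. All values, the labelling rule, and the hyperparameters below are invented for illustration (the toy rule only loosely follows the correlation signs reported in the abstract); this is not the study's data or model.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 500
# Synthetic stand-ins for the seven extrinsic parameters (assumed scales)
X = np.column_stack([
    rng.normal(60, 5, n),       # egg weight [g]
    rng.normal(12, 2, n),       # moisture loss [%]
    rng.normal(45, 10, n),      # breeders' age [weeks]
    rng.integers(80, 120, n),   # fertilised eggs per batch
    rng.normal(42, 2, n),       # shell width [mm]
    rng.normal(55, 2, n),       # shell length [mm]
    rng.normal(0.35, 0.03, n),  # shell thickness [mm]
])
# Toy label: 1 = hatchability > 90%, from a rule following the reported
# correlation signs (moisture loss and thickness help; weight and age hurt)
score = (-0.3 * (X[:, 0] - 60) + 0.4 * (X[:, 1] - 12)
         - 0.2 * (X[:, 2] - 45) + 20 * (X[:, 6] - 0.35))
y = (score + rng.normal(0, 1, n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te))
print("accuracy:", acc)
```

Swapping `RandomForestClassifier` for `KNeighborsClassifier`, `DecisionTreeClassifier` (CART), `LinearDiscriminantAnalysis`, or a linear-kernel `SVC` reproduces the comparison the abstract reports.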

Keywords: classification models, egg weight, fertilised eggs, multiple linear regression

Procedia PDF Downloads 74
8071 The Effects of Erythromycin and Bethanechol on Abomasal Emptying Rate in Healthy, Premature and Diarrheic Calves

Authors: Sebnem Canikli Engin, Mutlu Sevinc, Hasan Guzelbektes

Abstract:

In this study, we aim to define the effects of erythromycin and bethanechol, which are prokinetic agents, on the abomasal emptying rate in healthy, diarrheic and premature calves. The study used 5 healthy calves, 12 diarrheic calves and 12 premature calves, a total of 29 calves. In the healthy-calf arm, the same 5 calves were used for the control, erythromycin and bethanechol studies (with a 48-hour washout period between each). In the diarrheic-calf arm, 12 diarrheic calves were used (4 for the control group, 4 for the bethanechol group and 4 for the erythromycin group). In the premature-calf arm, 12 premature calves were used (4 for the control group, 4 for the bethanechol group and 4 for the erythromycin group). Erythromycin was applied at a dose of 10 mg/kg IM to each erythromycin group, and bethanechol at a dose of 0.07 mg/kg IM to each bethanechol group. No drugs were applied to the control groups, and milk replacer was given to all calves. Acetaminophen (50 mg/kg) and glucose (25 g/L) were added to the milk replacer in order to evaluate the speed of gastrointestinal motility via acetaminophen and glucose absorption tests. Blood samples were taken before the milk replacer was given and 30, 60, 90, 120, 180, 240 and 300 minutes afterwards. Respiratory rates and heart rates were also recorded during the test period. No changes were observed in heart rate, respiratory rate or general condition in any group after drug application. It was observed that the feces of some calves became slightly watery and viscous, and that premature calves generally defecated after 180 minutes. When the Cmax, Tmax and AUC values of acetaminophen and glucose after erythromycin administration in the premature group are compared with those of the control group, we obtain a higher Cmax (P<0.05), a shorter Tmax and a greater AUC (P>0.05). In conclusion, according to the clinical and laboratory findings, it may be stated that the application of erythromycin at a dose of 10 mg/kg IM provided faster abomasal emptying in premature calves.
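The pharmacokinetic endpoints compared above (Cmax, Tmax and AUC) follow directly from a sampled concentration-time curve. A minimal sketch using the study's sampling schedule but entirely invented concentration values:

```python
import numpy as np

def pk_summary(t_min, conc):
    """Cmax, Tmax and AUC (trapezoidal rule) from a concentration-time curve."""
    t, c = np.asarray(t_min, float), np.asarray(conc, float)
    i = int(np.argmax(c))
    auc = float(np.sum((c[1:] + c[:-1]) / 2.0 * np.diff(t)))  # trapezoids
    return c[i], t[i], auc

# Sampling times from the study design; concentrations are made up (ug/mL)
t = [0, 30, 60, 90, 120, 180, 240, 300]
c = [0.0, 4.1, 7.8, 9.5, 9.0, 6.2, 4.0, 2.5]
cmax, tmax, auc = pk_summary(t, c)
print(cmax, tmax, auc)
```

A faster-emptying abomasum shifts the curve left and up, which is why a higher Cmax and shorter Tmax are read as faster abomasal emptying.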

Keywords: abomasal emptying, bethanechol, calf, erythromycin

Procedia PDF Downloads 319
8070 New Ways of Vocabulary Enlargement

Authors: S. Pesina, T. Solonchak

Abstract:

Lexical invariants, being a sort of stereotype within the frames of ordinary consciousness, are created by the members of a language community as a result of a uniform division of reality. The invariant meaning is formed in a person’s mind gradually, in the course of different actualizations of secondary meanings in various contexts. We understand the lexical invariant as an abstract language essence containing a set of semantic components. In one of its configurations, it is the basis of all, or of a number of, the meanings making up the semantic structure of the word.

Keywords: lexical invariant, invariant theories, polysemantic word, cognitive linguistics

Procedia PDF Downloads 309
8069 Evaluation of Possible Application of Cold Energy in Liquefied Natural Gas Complexes

Authors: A. I. Dovgyalo, S. O. Nekrasova, D. V. Sarmin, A. A. Shimanov, D. A. Uglanov

Abstract:

Usually, liquefied natural gas (LNG) gasification is performed using atmospheric heat, while producing the liquefied gas requires a considerable amount of energy (about 1 kW∙h per 1 kg of LNG). This study offers a number of solutions allowing the cold energy of LNG to be used. First, it evaluates turbines installed behind the evaporator in an LNG complex: additional energy can be obtained from their work and then converted into electricity. At an LNG consumption of G = 1000 kg/h, an expansion work capacity of about 10 kW can be reached. Here an open Rankine cycle is realized, in which a low-capacity cryo-pump (about 500 W) performs its normal function of providing the cycle pressure. The application of a Stirling engine within the LNG complex is also discussed as a further way to realize the cold energy. Considering that the efficiency coefficient of a Stirling engine reaches 50%, an LNG consumption of G = 1000 kg/h may yield a capacity of about 142 kW from such a thermal machine. The capacity of the pump required to compensate for pressure losses as the LNG passes through the hydraulic channel will be 500 W. Apart from the above-mentioned converters, thermoelectric generating packages (TGP), which are now widely used, can be proposed. At present, the modern thermoelectric generator line provides electric capacity with an efficiency coefficient of up to 15%. In the proposed complex, it is suggested to install the thermoelectric generators on the evaporator surface in such a way that the cold end contacts the evaporator’s surface and the hot end the atmosphere. At an LNG consumption of G = 1000 kg/h and the specified efficiency coefficient, the capacity of the heat flow Qh will be about 32 kW. The derivable net electric power will be P = 4.2 kW, and the number of packages will amount to about 104. The calculations carried out demonstrate the promise of research in this field of propulsion plant development, and show how the energy saving potential of liquefied natural gas and other cryogenic technologies can be realized.
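A back-of-the-envelope check of the thermoelectric estimate above: 15% of a 32 kW heat flow gives 4.8 kW gross. How the abstract nets this down to the quoted 4.2 kW is not itemised; treating the 500 W cryo-pump as a parasitic load is an assumption made here for illustration, and it lands in the same range.

```python
# Values taken from the abstract; loss accounting is an assumption
eta_teg = 0.15       # thermoelectric conversion efficiency
q_hot_kw = 32.0      # heat flow through the generator packages [kW]
pump_kw = 0.5        # cryo-pump consumption [kW] (assumed parasitic load)

p_gross_kw = eta_teg * q_hot_kw     # 4.8 kW of raw electric output
p_net_kw = p_gross_kw - pump_kw     # ~4.3 kW, same order as the quoted 4.2 kW
print(p_gross_kw, p_net_kw)
```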

Keywords: cold energy, gasification, liquefied natural gas, electricity

Procedia PDF Downloads 260
8068 Investigating the Relationship Between the Auditor’s Personality Type and the Quality of Financial Reporting in Companies Listed on the Tehran Stock Exchange

Authors: Seyedmohsen Mortazavi

Abstract:

The purpose of this research is to investigate the effect of internal auditors' personality types on the quality of financial reporting in companies listed on the Tehran Stock Exchange. Personality type is one of the issues emphasized in the study of auditor behavior, a field that today attracts the attention of shareholders and listed companies, because auditors' personalities can affect the type of financial reporting and its quality. The research is applied in terms of purpose and descriptive-correlational in terms of method, and a researcher-made questionnaire was used to test the research hypotheses. The statistical population of the research consists of all auditors, accountants and financial managers of companies listed on the Tehran Stock Exchange; owing to their large number and the uncertainty of the exact figure, 384 people were taken as the statistical sample using Morgan's table. The researcher-made questionnaire was approved by experts in the field, and its validity and reliability were then established using software. For validity, confirmatory factor analysis was examined first; then, using divergent and convergent validity (the Fornell-Larcker criterion and the cross-loadings test), the validity of the questionnaire was confirmed. The reliability of the questionnaire was then examined using Cronbach's alpha and composite reliability, and the results of these two tests showed the questionnaire to be suitably reliable. After checking validity and reliability, PLS software was used to test the research hypotheses. The results showed that the personalities of internal auditors can affect the quality of financial reporting. The personality traits investigated in this research are neuroticism, extroversion, flexibility, agreeableness and conscientiousness, and all of these can affect the quality of financial reporting.
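Of the reliability statistics mentioned, Cronbach's alpha has a simple closed form: α = k/(k−1) · (1 − Σ item variances / variance of total scores). A sketch with toy Likert-scale responses (invented, not the study's questionnaire data):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a (respondents x items) score matrix."""
    items = np.asarray(items, float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of total scores
    return k / (k - 1) * (1 - item_var / total_var)

# Toy Likert responses: 5 respondents x 4 items (hypothetical)
scores = [[4, 5, 4, 4],
          [3, 3, 4, 3],
          [5, 5, 5, 4],
          [2, 2, 3, 2],
          [4, 4, 4, 5]]
print(round(cronbach_alpha(scores), 3))
```

Values above roughly 0.7 are conventionally read as acceptable internal consistency, which is the threshold such questionnaire studies typically apply.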

Keywords: flexibility, quality of financial reporting, agreeableness, conscientiousness

Procedia PDF Downloads 84
8067 Feature Evaluation Based on Random Subspace and Multiple-K Ensemble

Authors: Jaehong Yu, Seoung Bum Kim

Abstract:

Clustering analysis can facilitate the extraction of intrinsic patterns in a dataset and reveal its natural groupings without requiring class information. For effective clustering analysis in high-dimensional datasets, unsupervised dimensionality reduction is an important task. Unsupervised dimensionality reduction can generally be achieved by feature extraction or feature selection. In many situations, feature selection methods are more appropriate than feature extraction methods because of their clear interpretation with respect to the original features. Unsupervised feature selection can be categorized into feature subset selection and feature ranking methods; we focus on unsupervised feature ranking methods, which evaluate features based on their importance scores. Recently, several unsupervised feature ranking methods were developed based on ensemble approaches to achieve higher accuracy and stability. However, most of the ensemble-based feature ranking methods require the true number of clusters. Furthermore, these algorithms evaluate feature importance depending on the ensemble clustering solution, and they produce undesirable evaluation results if the clustering solutions are inaccurate. To address these limitations, we propose an ensemble-based feature ranking method with random subspace and multiple-k ensemble (FRRM). The proposed FRRM algorithm evaluates the importance of each feature with the random subspace ensemble, and all evaluation results are combined into ensemble importance scores. Moreover, FRRM does not require the determination of the true number of clusters in advance, through the use of the multiple-k ensemble idea. Experiments on various benchmark datasets were conducted to examine the properties of the proposed FRRM algorithm and to compare its performance with that of existing feature ranking methods. The experimental results demonstrated that the proposed FRRM outperformed the competitors.
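The random-subspace, multiple-k idea can be sketched loosely: features are repeatedly clustered in random subspaces with a randomly drawn k, and each feature is credited by how well it separates the resulting clusters, with the credits averaged into an ensemble score. This is an illustrative sketch, not the paper's exact FRRM scoring; the dataset, subspace size, and variance-ratio score are placeholder choices.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris

rng = np.random.default_rng(0)
X, _ = load_iris(return_X_y=True)
n, d = X.shape

scores, counts = np.zeros(d), np.zeros(d)
for _ in range(30):                            # ensemble of random subspaces
    feats = rng.choice(d, size=2, replace=False)
    k = int(rng.integers(2, 6))                # multiple-k: no fixed cluster count
    labels = KMeans(n_clusters=k, n_init=5,
                    random_state=0).fit_predict(X[:, feats])
    for f in feats:
        # Between-cluster variance of feature f relative to its total variance:
        # high values mean f separates this clustering well
        overall = X[:, f].var()
        between = sum((labels == c).mean()
                      * (X[labels == c, f].mean() - X[:, f].mean()) ** 2
                      for c in range(k))
        scores[f] += between / overall
        counts[f] += 1

ranking = np.argsort(-(scores / np.maximum(counts, 1)))
print("feature ranking (best first):", ranking)
```

Because k is redrawn on every ensemble member, no single clustering solution (or wrong choice of k) dominates the final ranking, which is the robustness argument the abstract makes.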

Keywords: clustering analysis, multiple-k ensemble, random subspace-based feature evaluation, unsupervised feature ranking

Procedia PDF Downloads 315
8066 Investigating the Flow Physics within Vortex-Shockwave Interactions

Authors: Frederick Ferguson, Dehua Feng, Yang Gao

Abstract:

No doubt, current CFD tools have a great many technical limitations, and active research is being done to overcome them. Current areas of limitation include vortex-dominated flows, separated flows, and turbulent flows. In general, turbulent flows are unsteady solutions of the fluid dynamic equations, and instances of these solutions can be computed directly from the equations. One commonly implemented approach is known as ‘direct numerical simulation’, DNS. This approach requires a spatial grid fine enough to capture the smallest length scale of the turbulent fluid motion, the Kolmogorov scale. It is of interest to note that the Kolmogorov scale must be resolved throughout the domain of interest, and at a correspondingly small time step. In typical problems of industrial interest, the ratio of the length scale of the domain to the Kolmogorov length scale is so great that the required grid becomes prohibitively large. As a result, the available computational resources are usually inadequate for DNS-related tasks; at this stage in its development, DNS is not applicable to industrial problems. In this research, an attempt is made to develop a numerical technique capable of delivering DNS-quality solutions at the scale required by industry. To date, this technique has delivered very accurate preliminary results for steady and unsteady, viscous and inviscid, compressible and incompressible, and both high and low Reynolds number flow fields. Herein, it is proposed that the Integro-Differential Scheme (IDS) be applied to a set of vortex-shockwave interaction problems with the goal of investigating the nonstationary physics within the resulting interaction regions. In the proposed paper, the IDS formulation and its numerical error behavior will be described. Further, the IDS will be used to solve the inviscid and viscous Burgers equations, analyzing their solutions over a considerable length of time and thus demonstrating the unsteady capabilities of the IDS. Finally, the IDS will be used to solve a set of fluid dynamic problems involving strong vortex interactions. Plans are to solve the following problems: the travelling wave and vortex problems over considerable lengths of time, the normal shockwave–vortex interaction problem at low supersonic conditions, and the reflected oblique shock–vortex interaction problem. The IDS solutions obtained for each of these problems will be explored further in an effort to determine the distributed density gradients and vorticity, as well as the Q-criterion. Parametric studies will be conducted to determine the effect of the Mach number on the intensity of vortex-shockwave interactions.
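As context for the Burgers test case mentioned above, here is a minimal explicit finite-difference baseline for the viscous Burgers equation u_t + u·u_x = ν·u_xx. This is an illustrative conventional scheme (first-order upwind convection, central diffusion), not the IDS itself; the initial condition and parameters are placeholder choices.

```python
import numpy as np

nx, nu = 201, 0.07
x = np.linspace(0.0, 2.0 * np.pi, nx)
dx = x[1] - x[0]
u = np.sin(x) + 1.5                    # smooth, strictly positive initial field
# Explicit stability bound: the smaller of the convective and diffusive limits
dt = 0.4 * min(dx / np.abs(u).max(), dx * dx / (2.0 * nu))

for _ in range(500):
    un = u.copy()
    # First-order upwind convection (valid since u > 0) + central diffusion
    u[1:-1] = (un[1:-1]
               - un[1:-1] * dt / dx * (un[1:-1] - un[:-2])
               + nu * dt / dx ** 2 * (un[2:] - 2.0 * un[1:-1] + un[:-2]))
    u[0], u[-1] = u[-2], u[1]          # periodic boundary (endpoints coincide)

print(float(u.min()), float(u.max()))
```

The solution steepens under convection and is then smoothed by viscosity; a scheme like the IDS would be judged on how sharply it resolves the steepening front over long integration times compared with this diffusive baseline.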

Keywords: vortex dominated flows, shockwave interactions, high Reynolds number, integro-differential scheme

Procedia PDF Downloads 121
8065 Thermo-Mechanical Analysis of Composite Structures Utilizing a Beam Finite Element Based on Global-Local Superposition

Authors: Andre S. de Lima, Alfredo R. de Faria, Jose J. R. Faria

Abstract:

Accurate prediction of thermal stresses is particularly important for laminated composite structures, as large temperature changes may occur during fabrication and field application. Transverse normal deformation plays an important role in the prediction of such stresses, especially for problems involving thick laminated plates subjected to uniform temperature loads. Bearing this in mind, the present study investigates the thermo-mechanical behavior of laminated composite structures using a new beam element based on global-local superposition, accounting for through-the-thickness effects. The element formulation superposes, in the thickness direction, a cubic global displacement field and a linear layerwise local displacement distribution, which ensures the zig-zag behavior of the stresses and displacements. By enforcing interlaminar stress (normal and shear) and displacement continuity, as well as traction-free conditions at the upper and lower surfaces, the number of degrees of freedom in the model is kept independent of the number of layers. Moreover, the proposed formulation allows the transverse shear and normal stresses to be determined directly from the constitutive equations, without the need for post-processing. Numerical results obtained with the beam element were compared to analytical solutions, as well as to results obtained with commercial finite elements, showing satisfactory agreement for a range of length-to-thickness ratios. The results confirm the need for an element with through-the-thickness capabilities and indicate that the present formulation is a promising alternative for such analyses.
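As a rough sketch (the abstract does not give the authors' exact interpolation functions, so the symbols below are generic), a displacement field of the global-local superposed type described typically takes the form:

```latex
% In-plane displacement in layer k: a cubic global field through the
% whole thickness plus a linear layerwise (zig-zag) correction;
% \Phi_k, \Psi_k are linear Lagrange interpolations over layer k.
u_k(x,z) = \underbrace{u_0(x) + z\,u_1(x) + z^2 u_2(x) + z^3 u_3(x)}_{\text{cubic global field}}
         + \underbrace{\bar{u}_k(x)\,\Phi_k(z) + \hat{u}_k(x)\,\Psi_k(z)}_{\text{linear layerwise field}}
```

The layerwise unknowns \bar{u}_k and \hat{u}_k are then eliminated by enforcing interlaminar continuity of displacements and transverse stresses together with the traction-free surface conditions, which is what keeps the degree-of-freedom count independent of the number of layers.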

Keywords: composite beam element, global-local superposition, laminated composite structures, thermal stresses

Procedia PDF Downloads 143
8064 Evaluation of the Gamma-H2AX Expression as a Biomarker of DNA Damage after X-Ray Radiation in Angiography Patients

Authors: Reza Fardid, Aliyeh Alipour

Abstract:

Introduction: Coronary heart disease (CHD) is among the most common and deadliest diseases. Coronary angiography is an important tool for its diagnosis and treatment. Because angiography involves exposure to ionizing radiation, it can have harmful effects. Ionizing radiation induces double-strand breaks (DSBs) in DNA, a potentially life-threatening lesion. The purpose of the present study is to investigate the phosphorylation of histone H2AX at the sites of double-strand breaks in peripheral blood lymphocytes as an indicator of the biological effects of radiation on angiography patients. Materials and Methods: The method is based on measuring the level of phosphorylated histone H2AX (gamma-H2AX, γH2AX) at serine 139, which occurs after the formation of a DNA double-strand break. A 5 cc blood sample was taken from each of 24 angiography patients before and after irradiation. Blood lymphocytes were isolated, fixed, and stained with γH2AX-specific antibodies. Finally, the γH2AX signal, as an indicator of double-strand breaks, was measured by flow cytometry. Results and Discussion: In all patients, an increase in the number of DNA double-strand breaks was observed after irradiation (20.15 ± 14.18) compared to before exposure (1.52 ± 0.34). The mean number of DNA double-strand breaks also showed a linear correlation with the dose-area product (DAP). However, although the induction of DNA double-strand breaks correlates with radiation dose, the effect of individual factors such as radiosensitivity and repair capacity should not be ignored. If, in the future, the DNA damage response can be measured in every angiography patient and used as a biomarker of patient dose, the benefit at the public health level could be considerable. Conclusion: Because flow cytometry readings are performed automatically, γH2AX can be detected in large numbers of blood cells. Therefore, this technique could play a significant role in monitoring patients.
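The analysis described above amounts to a paired before/after comparison plus a linear dose-response fit against DAP. A minimal sketch of that analysis, using entirely synthetic illustrative numbers (the variable names, sample values, and effect sizes below are invented, not the study's data), might look like:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 24  # number of patients, matching the study's sample size

# Hypothetical illustrative data (NOT the study's measurements):
dap = rng.uniform(10, 80, n)                     # dose-area product per patient
foci_pre = rng.normal(1.5, 0.3, n)               # gamma-H2AX signal before exposure
foci_post = foci_pre + 0.25 * dap + rng.normal(0, 2.0, n)  # dose-dependent rise

induced = foci_post - foci_pre                   # radiation-induced DSB signal
r = np.corrcoef(dap, induced)[0, 1]              # Pearson correlation with DAP
slope, intercept = np.polyfit(dap, induced, 1)   # linear dose-response fit
print(f"mean induced signal: {induced.mean():.1f}, r vs DAP: {r:.2f}")
```

On real patient data, the same two summaries (mean induced γH2AX signal and its correlation with DAP) would be the quantities reported in the abstract.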

Keywords: coronary angiography, DNA double-strand breaks, γH2AX, ionizing radiation

Procedia PDF Downloads 166