Search results for: failure detection and prediction
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7721

641 Evaluation of Different Liquid Scintillation Counting Methods for 222Rn Determination in Waters

Authors: Jovana Nikolov, Natasa Todorovic, Ivana Stojkovic

Abstract:

Monitoring of 222Rn in drinking, surface, and ground waters is performed in connection with geological, hydrogeological, and hydrological surveys and health-hazard studies. Liquid scintillation counting (LSC) is often the preferred analytical method for 222Rn measurement in waters because it allows automated analysis of multiple samples. The LSC method involves mixing water samples with an organic scintillation cocktail, which triggers radon diffusion from the aqueous into the organic phase, for which it has a much greater affinity, thereby eliminating the possibility of radon emanation. Two direct LSC methods that assume different sample compositions are presented, optimized, and evaluated in this study. The one-phase method involves direct mixing of a 10 ml sample with 10 ml of an emulsifying cocktail (Ultima Gold AB scintillation cocktail is used). The two-phase method involves water-immiscible cocktails (in this study, High Efficiency Mineral Oil Scintillator, Opti-Fluor O, and Ultima Gold F are used). Calibration samples were prepared with an aqueous 226Ra standard in 20 ml glass vials and counted on the ultra-low-background spectrometer Quantulus 1220TM equipped with a PSA (Pulse Shape Analysis) circuit that discriminates alpha from beta spectra. Since the calibration procedure is carried out with a 226Ra standard, which has both alpha- and beta-emitting progeny, the PSA discriminator is clearly vital for reliable and precise spectrum separation. Consequently, calibration was performed by investigating the influence of the PSA discriminator level on the 222Rn detection efficiency, using the 226Ra calibration standard over a wide range of activity concentrations. Evaluation of the presented methods was based on the obtained detection efficiencies and the achieved Minimal Detectable Activity (MDA).
The accuracy and precision of the presented methods, as well as the performance of the different scintillation cocktails, were compared using measurements of 226Ra-spiked water samples of known activity and of environmental samples.
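
The MDA criterion mentioned above is commonly evaluated with Currie's detection-limit formula, MDA = (2.71 + 4.65·√B) / (ε·t·V). A minimal sketch follows; the parameter values in the comments are illustrative assumptions, not figures from the study:

```python
import math

def mda_bq_per_l(background_cpm, count_time_min, efficiency, volume_l):
    """Minimal Detectable Activity (Bq/L) via Currie's formula.

    background_cpm: background count rate (counts per minute)
    count_time_min: counting time (minutes)
    efficiency:     detection efficiency (counts per decay, 0..1)
    volume_l:       sample volume (litres), e.g. 0.01 for a 10 ml aliquot
    """
    n_b = background_cpm * count_time_min          # total background counts
    ld_counts = 2.71 + 4.65 * math.sqrt(n_b)       # Currie detection limit (counts)
    count_time_s = count_time_min * 60.0
    return ld_counts / (efficiency * count_time_s * volume_l)
```

As expected, a higher detection efficiency (e.g. a well-set PSA level) lowers the achievable MDA for the same counting time and sample volume.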

Keywords: 222Rn in water, Quantulus1220TM, scintillation cocktail, PSA parameter

Procedia PDF Downloads 201
640 Methodology for the Multi-Objective Analysis of Data Sets in Freight Delivery

Authors: Dale Dzemydiene, Aurelija Burinskiene, Arunas Miliauskas, Kristina Ciziuniene

Abstract:

Data flows and the purposes of reporting data differ according to business needs. Different parameters are reported and transferred regularly during freight delivery. These business practices form a dataset, constructed for each time point, that contains all the information required for freight-moving decisions. Since a significant amount of these data is used for various purposes, an integrated methodological approach must be developed to address the problem. The proposed methodology contains several steps: (1) collecting context data sets and validating the data; (2) multi-objective analysis for optimizing freight transfer services. For data validation, the study applies Grubbs' outlier analysis, particularly for data cleaning and for identifying the statistical significance of data-reporting event cases. The Grubbs test is often used because it tests one extreme value at a time against the bounds of the standard normal distribution. In the study area, the test has not been widely applied; one exception is its use for outlier detection in fuel consumption data. In this study, the authors applied the method with a confidence level of 99%. For the multi-objective analysis, the authors select forms of genetic algorithm construction that are more likely to extract the best solution. For freight delivery management, structured genetic algorithm schemas are used as the more effective technique; accordingly, an adaptive genetic algorithm is applied to describe the process of choosing an effective transportation corridor. In this study, multi-objective genetic algorithm methods are used to optimize data evaluation and select the appropriate transport corridor.
The authors suggest a methodology for multi-objective analysis that evaluates the collected context data sets and uses this evaluation to determine a delivery corridor for a freight transfer service in the multi-modal transportation network. In the multi-objective analysis, the authors include safety components, the number of accidents per year, and freight delivery time in the multi-modal transportation network. The proposed methodology has practical value for managing multi-modal transportation processes.
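
The Grubbs validation step at the 99% confidence level can be sketched as follows; this uses the standard critical-value formula based on the Student t distribution (the sample data below are invented for illustration, not the study's freight data):

```python
import math
from scipy import stats

def grubbs_outlier(data, alpha=0.01):
    """Return the index of the single most extreme Grubbs outlier, or None.

    Tests one extreme value at a time against the bounds of the
    standard normal distribution, at significance level alpha.
    """
    n = len(data)
    mean = sum(data) / n
    s = math.sqrt(sum((x - mean) ** 2 for x in data) / (n - 1))
    idx = max(range(n), key=lambda i: abs(data[i] - mean))
    g = abs(data[idx] - mean) / s
    # Two-sided critical value from the Student t distribution
    t = stats.t.ppf(1 - alpha / (2 * n), n - 2)
    g_crit = ((n - 1) / math.sqrt(n)) * math.sqrt(t ** 2 / (n - 2 + t ** 2))
    return idx if g > g_crit else None
```

In a cleaning pipeline the test would be applied repeatedly, removing one flagged value per pass until no outlier remains.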

Keywords: multi-objective, analysis, data flow, freight delivery, methodology

Procedia PDF Downloads 180
639 Nonlinear Evolution of the Pulses of Elastic Waves in Geological Materials

Authors: Elena B. Cherepetskaya, Alexander A. Karabutov, Natalia B. Podymova, Ivan Sas

Abstract:

The nonlinear evolution of broadband ultrasonic pulses passed through rock specimens is studied using the 'GEOSCAN-02M' apparatus. Ultrasonic pulses are excited by the pulses of a Q-switched Nd:YAG laser with a duration of 10 ns and an energy of 260 mJ; the energy can be reduced to 20 mJ with light filters. The laser beam radius does not exceed 5 mm. As a result of absorption of the laser pulse in a special material, the optoacoustic generator, longitudinal ultrasonic pulses are excited with a duration of 100 ns and a maximum pressure amplitude of 10 MPa. The immersion technique, with distilled water as the immersion liquid, is used to measure the parameters of the ultrasonic pulses passed through a specimen. The reference pulse passed through the cell with water has a compression phase and a rarefaction phase; the amplitude of the rarefaction phase is five times lower than that of the compression phase. The spectral range of the reference pulse reaches 10 MHz. Cube-shaped specimens of Karelian gabbro with an edge length of 3 cm are studied. The ultimate strength of the specimens under uniaxial compression is (300±10) MPa. As the reference pulse passes through a crack-free region of the specimen, the compression phase decreases and the rarefaction phase increases due to diffraction and scattering of ultrasound, so the ratio of these phases becomes 2.3:1. After preloading, horizontal cracks appear in the specimens. Their location is found by one-sided scanning of the specimen using backward-mode detection of the ultrasonic pulses reflected from the structural defects. Computer processing of these signals yields images of the cross-sections of the specimens with cracks.
As the reference pulse amplitude is increased from 0.1 MPa to 5 MPa, the nonlinear transformation of the ultrasonic pulse passed through a specimen with horizontal cracks decreases the amplitude of the rarefaction phase by a factor of 2.5 and increases its duration by a factor of 2.1. As the reference pulse amplitude is increased from 5 MPa to 10 MPa, a time splitting of the phases is observed for the bipolar pulse passed through the specimen: the compression and rarefaction phases propagate with different velocities. These features of powerful broadband ultrasonic pulses passed through rock specimens can be described by the Preisach-Mayergoyz hysteresis model and can be used to locate cracks in optically opaque materials.

Keywords: cracks, geological materials, nonlinear evolution of ultrasonic pulses, rock

Procedia PDF Downloads 351
638 Discovering the Effects of Meteorological Variables on the Air Quality of Bogota, Colombia, by Data Mining Techniques

Authors: Fabiana Franceschi, Martha Cobo, Manuel Figueredo

Abstract:

Bogotá, the capital of Colombia, is its largest city and one of the most polluted in Latin America due to fast economic growth over the last ten years. Bogotá has been affected by high-pollution episodes that led to concentrations of PM10 and NO2 exceeding the local 24-hour legal limits (100 and 150 µg/m³, respectively). The most important pollutants in the city are PM10 and PM2.5 (which are associated with respiratory and cardiovascular problems), and it is known that their atmospheric concentrations depend on local meteorological factors. Therefore, it is necessary to establish a relationship between the meteorological variables and the concentrations of atmospheric pollutants such as PM10, PM2.5, CO, SO2, NO2, and O3. This study aims to determine the interrelations between meteorological variables and air pollutants in Bogotá using data mining techniques. Data from 13 monitoring stations were collected from the Bogotá Air Quality Monitoring Network for the period 2010-2015. The Principal Component Analysis (PCA) algorithm was applied to obtain primary relations between all the parameters; afterwards, the K-means clustering technique was implemented to corroborate those relations and to find patterns in the data. PCA was also applied per shift (morning, afternoon, night, and early morning) to check for variation of the previous trends, and per year to verify that the identified trends persisted throughout the study period. Results demonstrated that wind speed, wind direction, temperature, and NO2 are the most influential factors on PM10 concentrations. Furthermore, it was confirmed that high-humidity episodes increased PM2.5 levels. It was also found that O3 levels are directly proportional to wind speed and radiation, while there is an inverse relationship between O3 levels and humidity.
Concentrations of SO2 increase with the presence of PM10 and decrease with wind speed and wind direction. The results also showed a decreasing trend in pollutant concentrations over the last five years. In rainy periods (March-June and September-December), some trends related to precipitation were stronger. Results obtained with K-means demonstrated that it was possible to find patterns in the data; they also showed similar conditions and data distributions among the Carvajal, Tunal, and Puente Aranda stations, and between Parque Simón Bolívar and Las Ferias. It was verified, by applying the same technique per year, that the aforementioned trends prevailed during the study period. It was concluded that the PCA algorithm is useful for establishing preliminary relationships among variables, and K-means clustering for finding patterns in the data and understanding their distribution. The discovery of patterns in the data allows these clusters to be used as input to an Artificial Neural Network prediction model.
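
The PCA-then-K-means pipeline described above can be sketched as follows; this is a generic illustration on synthetic data (the matrix shape, component count, and cluster count are placeholders, not the monitoring-network data):

```python
import numpy as np

def pca(X, n_components=2):
    """Center the data and project onto the top principal components (via SVD)."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T, Vt[:n_components]

def kmeans(X, k, n_iter=50, seed=0):
    """Plain Lloyd's algorithm: assign points to nearest center, recompute means."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(n_iter):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels, centers
```

In practice a library implementation (e.g. scikit-learn) would be used; the sketch only shows how PCA scores can feed the clustering step.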

Keywords: air pollution, air quality modelling, data mining, particulate matter

Procedia PDF Downloads 259
637 Long-Term Outcome of Emergency Response Team System in In-Hospital Cardiac Arrest

Authors: Jirapat Suriyachaisawat, Ekkit Surakarn

Abstract:

Introduction: To improve early detection and reduce the mortality rate of in-hospital cardiac arrest, an Emergency Response Team (ERT) system was planned and implemented in June 2009 to detect pre-arrest conditions and respond to any concerns. The ERT consisted of on-duty physicians and nurses from the emergency department. The ERT calling criteria were: acute change of heart rate to < 40 or > 130 beats per minute, systolic blood pressure < 90 mmHg, respiratory rate < 8 or > 28 breaths per minute, O2 saturation < 90%, acute change in conscious state, acute chest pain, or worry about the patient. In the early phase of ERT implementation in our hospital (June 2009-2011), there was no statistically significant difference in the incidence of in-hospital cardiac arrest or in the overall hospital mortality rate. Since the introduction of the ERT service, we have conducted a continuous educational campaign to improve awareness and increase use of the service. Methods: To investigate the outcome of the ERT system on in-hospital cardiac arrest and the overall hospital mortality rate, we conducted a prospective, controlled before-and-after examination of the long-term effect of an ERT system on the incidence of cardiac arrest. We performed chi-square analysis to test for statistical significance. Results: Of a total of 623 ERT cases from June 2009 until December 2012, there were 72 calls in 2009, 196 in 2010, 139 in 2011, and 245 in 2012. The number of ERT calls per 1000 admissions was 7.69 in 2009-10, 5.61 in 2011, and 9.38 in 2012. The number of code blue calls per 1000 admissions decreased significantly, from 2.28 to 0.99 (P < 0.001). The incidence of cardiac arrest decreased progressively from 1.19 to 0.34 per 1000 admissions, reaching statistical significance in 2012 (P < 0.001). The overall hospital mortality rate decreased from 15.43 to 14.43 per 1000 admissions (P = 0.095).
Conclusions: ERT system implementation was associated with a progressive reduction in cardiac arrests over the three-year period, reaching statistical significance in the fourth year after implementation. We also found an inverse association between the number of ERT calls and the risk of cardiac arrest, but no difference in the overall hospital mortality rate.
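
The chi-square comparison of event rates per 1000 admissions reduces to a test on a 2x2 contingency table (events vs. non-events, before vs. after). A minimal sketch follows; the counts in the test are invented for illustration and are not the study's data:

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 contingency table.

    Rows are periods (before/after), columns are event/no-event:
        a = events before,  b = non-events before
        c = events after,   d = non-events after
    With 1 degree of freedom, chi2 > 10.828 corresponds to P < 0.001.
    """
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
```

For example, hypothetical counts of 60 arrests in 50,000 admissions before versus 17 in 50,000 after (roughly the 1.19 vs. 0.34 per 1000 rates) give a statistic well above the P < 0.001 threshold.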

Keywords: cardiac arrest, outcome, in-hospital, ERT

Procedia PDF Downloads 199
636 Deconstructing and Reconstructing the Definition of Inhuman Treatment in International Law

Authors: Sonia Boulos

Abstract:

The prohibition on 'inhuman treatment' constitutes one of the central tenets of modern international human rights law. It is incorporated in principal international human rights instruments, including Article 5 of the Universal Declaration of Human Rights and Article 7 of the International Covenant on Civil and Political Rights. However, in the absence of any legislative definition of the term 'inhuman', its interpretation becomes challenging. The aim of this article is to critically analyze the interpretation of the term 'inhuman' in international human rights law and to suggest a new approach to constructing its meaning. The article is composed of two central parts. The first part is a critical appraisal of the interpretation of the term 'inhuman' by supra-national human rights institutions. It highlights their failure to provide an independent definition for the term 'inhuman'. In fact, those institutions consistently fail to distinguish the term 'inhuman' from its kin terms, i.e. 'cruel' and 'degrading'. Very often, they refer to these three prohibitions collectively as 'CIDT', as if they were one; they have been primarily preoccupied with distinguishing 'CIDT' from 'torture'. By blurring the conceptual differences between these three terms, supra-national institutions supplemented them with a long list of specific and purely descriptive subsidiary rules. In most cases, those subsidiary rules were announced without sufficient legal reasoning explaining how they were derived from the abstract and evaluative standards embodied in the prohibitions collectively referred to as 'CIDT'. By taking this approach, supra-national institutions have risked developing an incoherent body of jurisprudence on these terms at the international level. They have also failed to provide guidance for domestic courts on how to enforce these prohibitions.
While blurring the differences between the terms 'cruel', 'inhuman', and 'degrading' has consequences for all three, the term 'inhuman' remains the most impoverished. It is easy to link the term 'cruel' to the clause on 'cruel and unusual punishment' originating from the English Bill of Rights of 1689. It is also easy to see that the term 'degrading' reflects a dignitarian ideal. However, when we turn to the term 'inhuman', we are left without any interpretative clue. The second part of the article suggests that the ordinary meaning of the word 'inhuman' should be our first clue. However, regaining the conceptual independence of the term 'inhuman' requires more than a mere reflection on its word-meaning. Thus, the second part introduces philosophical concepts related to the understanding of what it means to be human. It focuses on 'the capabilities approach' and the notion of 'human functioning', introduced by Amartya Sen and further explored by Martha Nussbaum. Nussbaum's work on the basic human capabilities is particularly helpful, even vital, for understanding the moral and legal substance of the prohibition on 'inhuman' treatment.

Keywords: inhuman treatment, capabilities approach, human functioning, supra-national institutions

Procedia PDF Downloads 278
635 Non-intrusive Hand Control of Drone Using an Inexpensive and Streamlined Convolutional Neural Network Approach

Authors: Evan Lowhorn, Rocio Alba-Flores

Abstract:

The purpose of this work is to develop a method for classifying hand signals and using the output in a drone control algorithm. To achieve this, methods based on Convolutional Neural Networks (CNNs) were applied. CNNs are a subset of deep learning that allows grid-like inputs to be processed and passed through a neural network trained for classification. This type of neural network allows classification via imaging, which is less intrusive than previous methods using biosensors, such as EMG sensors. Classification CNNs operate purely on the pixel values of an image; therefore, they can be used without additional exteroceptive sensors. A development bench was constructed using a desktop computer connected to a high-definition webcam mounted on a scissor arm. This allowed the camera to be pointed downwards at the desk to provide a constant solid background for the dataset and a clear detection area for the user. A MATLAB script was created to automate dataset image capture at the development bench and save the images to the desktop, allowing the user to create a dataset of 12,000 images within three hours. These images were evenly distributed among seven classes: forward, backward, left, right, idle, and land, plus the drone's popular flip function as an additional class. To simplify control, the corresponding hand signals chosen were the numerical hand signs one through five for the movements, a fist for land, and the universal 'ok' sign for the flip command. Transfer learning with PyTorch (Python) was performed using a pre-trained 18-layer residual network (ResNet-18), which was retrained for the custom classification task. An algorithm was created to interpret the classification and send encoded messages to a Ryze Tello drone over its 2.4 GHz Wi-Fi connection. The drone's movements were performed in half-meter increments at a constant speed.
When combined with the drone control algorithm, the classification performed as desired, with negligible latency compared to the delay in the drone's movement commands.
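
The classification-to-command step can be sketched as a lookup that emits Tello SDK text commands over UDP. The label names mirror the classes above and the 50 cm arguments reflect the half-meter increments mentioned, but the exact mapping is an assumption for illustration, not the authors' code:

```python
import socket

# Hypothetical mapping from classifier labels to Tello SDK text commands;
# movement distances are in centimetres (50 cm = half a meter).
COMMANDS = {
    "forward": "forward 50",
    "backward": "back 50",
    "left": "left 50",
    "right": "right 50",
    "idle": None,          # no command is sent while idle
    "land": "land",
    "flip": "flip f",      # forward flip
}

def send_command(label, sock=None, addr=("192.168.10.1", 8889)):
    """Encode the command for a classifier label and optionally send it
    to the drone's UDP command port; returns the command string sent."""
    cmd = COMMANDS.get(label)
    if cmd is None:
        return None
    if sock is not None:
        sock.sendto(cmd.encode("utf-8"), addr)
    return cmd
```

In the real system the label would come from the ResNet-18 classifier's argmax output on each webcam frame, with the socket opened once at startup.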

Keywords: classification, computer vision, convolutional neural networks, drone control

Procedia PDF Downloads 212
634 Diagnostic Contribution of the MMSE-2:EV in the Detection and Monitoring of the Cognitive Impairment: Case Studies

Authors: Cornelia-Eugenia Munteanu

Abstract:

The goal of this paper is to present the diagnostic contribution of the screening instrument Mini-Mental State Examination-2: Expanded Version (MMSE-2:EV) to detecting cognitive impairment and monitoring the progress of degenerative disorders. The diagnostic significance is illustrated by interpreting MMSE-2:EV scores obtained from patients with mild and major neurocognitive disorders. The original MMSE is one of the most widely used screening tools for detecting cognitive impairment, both in clinical settings and in neurocognitive research. Practitioners and researchers are now turning their attention to the MMSE-2. To enhance its clinical utility, the new instrument was enriched and reorganized into three versions (MMSE-2:BV, MMSE-2:SV, and MMSE-2:EV), each with two forms: blue and red. The MMSE-2 has been adapted and used successfully in Romania since 2013. The cases were selected from current practice to cover a broad and significant range of neurocognitive pathology: mild cognitive impairment, Alzheimer's disease, vascular dementia, mixed dementia, Parkinson's disease, and conversion of mild cognitive impairment into Alzheimer's disease. The MMSE-2:EV version was applied one month after the initial assessment, three months after the first re-evaluation, and then every six months, alternating the blue and red forms. Adjusted for age and educational level, the raw scores were converted into T scores, and z scores were then calculated from the mean and standard deviation. The differences in raw scores between evaluations were analyzed for statistical significance in order to establish the progression of the disease over time. The results indicated that the psychodiagnostic approach to evaluating cognitive impairment with the MMSE-2:EV is safe and that the application interval is optimal.
Alternating the forms prevents the learning phenomenon. Diagnostic accuracy and efficient therapeutic conduct derive from the use of national test norms. In clinical settings with a large flow of patients, the MMSE-2:EV is a safe and fast psychodiagnostic solution. Clinicians can make objective decisions, and for patients the procedure does not take much time or energy, does not bother them, and does not force them to travel frequently.
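
The T-to-z conversion mentioned above follows directly from the definition of T scores, which are scaled to a mean of 50 and a standard deviation of 10; a minimal sketch:

```python
def z_from_t(t_score, mean=50.0, sd=10.0):
    """Convert a T score to a z score.

    T scores are defined as T = 50 + 10z, so z = (T - 50) / 10.
    """
    return (t_score - mean) / sd
```

For example, a T score of 35 (1.5 SD below the normative mean for the patient's age and education band) corresponds to z = -1.5.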

Keywords: MMSE-2, dementia, cognitive impairment, neuropsychology

Procedia PDF Downloads 515
633 Reconceptualising the Voice of Children in Child Protection

Authors: Sharon Jackson, Lynn Kelly

Abstract:

This paper proposes a conceptual review of the interdisciplinary literature that has theorised the concept of 'children's voices'. The primary aim is to identify and consider the theoretical relevance of conceptual thought on 'children's voices' for research and practice in child protection contexts. Attending to the 'voice of the child' has become a core principle of social work practice in contemporary child protection contexts. Discourses of voice permeate the legislative, policy, and practice frameworks of child protection in the UK and internationally. Voice is positioned within a 'child-centered' moral imperative to 'hear the voices' of children and take their preferences and perspectives into account, a practice now considered central to working in a child-centered way. The genesis of this call to voice is revealed through sociological analysis of twentieth-century child welfare reform as rooted, inter alia, in intersecting political, social, and cultural discourses that have situated children and childhood as sites of state intervention, as enshrined in the 1989 United Nations Convention on the Rights of the Child, ratified by the UK government in 1991, and more specifically in Article 12 of the convention. From a policy and practice perspective, the professional 'capturing' of children's voices has come to saturate child protection practice. This has incited a stream of directives, resources, advisory publications, and 'how-to' guides that attempt to articulate practice methods to 'listen', 'hear' and, above all, 'capture' the 'voice of the child'. The idiom 'capturing the voice of the child' is frequently invoked within the literature to express the requirements of the child-centered practice task to be accomplished.
Despite the centrality of voice, and an obsession with 'capturing' voices, evidence from research, inspection processes, serious case reviews, and child abuse and death inquiries has consistently highlighted professional neglect of 'the voice of the child'. Notable research studies have highlighted the relative absence of the child's voice in social work assessment practices, a troubling lack of meaningful engagement with children, and the need to examine communicative practices in child protection contexts more thoroughly. As a consequence, the project of capturing 'the voice of the child' has intensified, with an increasing focus on developing methods and professional skills to attend to voice, guided by a recognition that professionals often lack the skills and training to engage with children in age-appropriate ways. We argue, however, that the problem with 'capturing' and [re]presenting 'voice' in child protection contexts is, more fundamentally, a failure to adequately theorise the concept of 'voice' in the 'voice of the child'. For the most part, 'the voice of the child' incorporates psychological conceptions of child development. While these concepts are useful in the context of direct work with children, they fail to consider other strands of sociological thought, which position 'the voice of the child' within an agentic paradigm to emphasise the active agency of the child.

Keywords: child-centered, child protection, views of the child, voice of the child

Procedia PDF Downloads 139
632 Complicity of Religion in Legalizing Corruption: Perspective from an Emerging Economy

Authors: S. Opadere Olaolu

Abstract:

Religion, as a belief system, has been with humanity for a long time. It is recognised to impact the lives of the individuals, groups, and communities that hold it dear; whether that impact is regarded as positive depends on the assessor. Thus, for reasons of likely subjectiveness, possible irrationality, and even outright deliberate abuse, most emerging economies seek to separate the State from religion; yet it is certain that the influence of religion on the State is incontrovertible. Corruption, on the other hand, though difficult to define in precise terms, is clearly perceptible. It can manifest in very diverse ways, including the abuse of a position of trust for the gain of an individual or of a group with a shared ulterior motive. Religion has been perceived, among other things, as a means to societal stability, marital stability, the infusion of moral rectitude, and a conscience with regard to right and wrong. In times past, credible and dependable character was thought to repose largely, and almost exclusively, in those with deep religious conviction. Even in the political circle, it was thought that the involvement of those committed to religion would bring about positive changes for the benefit of the society at large. On the contrary, in recent times religion has failed these lofty expectations. The level of corruption in most developing economies and the spread of religion seem to be advancing pari passu. For instance, religion has encroached into the political space, and vice versa, without any differentiable posture on the issue of corruption. Worse still, religion appears to be aiding and abetting corruption, overtly and/or covertly.
Therefore, this discourse examines, from the Nigerian perspective as a developing economy and from the multidisciplinary standpoint of law and religion, the issues of religion; secularism; corruption; the romance of religion and politics; the inability of religion to exemplify moral rectitude; the indulgence of corruption by religion; and the need to keep religion in the private sphere, with proper checks. The study employed primary and secondary sources of information. The primary sources included the Constitution of the Federal Republic of Nigeria 1999, as amended; judicial decisions; and the Bible. The secondary sources comprised books, journals, newspapers, magazines, and Internet documents. Data obtained from these sources were subjected to content analysis. Findings of this study include the breach of constitutional provisions meant to keep religion out of State affairs; the failure of religion to curb corruption; the outright indulgence of corruption by religion; and religion having become a political tool. In conclusion, it is considered apposite still to keep the State out of religion and to seek enforcement of the constitutional provisions in this respect. The stamp of legality placed on overt and covert corruption by religion should be removed by all means.

Keywords: corruption, complicity, legalizing, religion

Procedia PDF Downloads 413
631 The Impact of Artificial Intelligence on Food Industry

Authors: George Hanna Abdelmelek Henien

Abstract:

Quality and safety issues are common in Ethiopia's food processing industry and can negatively impact consumers' health and livelihoods. The country is known for various agricultural products that are important to its economy. However, food quality and safety policies and management practices in the food processing industry have led to many health problems, foodborne illnesses, and economic losses. This article aims to show the causes and consequences of food safety and quality problems in the food processing industry in Ethiopia and to discuss possible solutions. One of the main reasons for food quality and safety problems in Ethiopia's food processing industry is the lack of adequate regulation and enforcement mechanisms. Inadequate food safety and quality policies have led to inefficiencies in food production. Additionally, the failure to monitor and enforce existing regulations has created an opportunity for unscrupulous companies to engage in harmful practices that endanger the lives of citizens. The impact on food quality and safety is significant: loss of life, high medical costs, and loss of consumer confidence in the food processing industry. Foodborne diseases such as diarrhoea, typhoid, and cholera are common in Ethiopia, and food quality and safety play an important role in their prevention. Additionally, food recalls due to contamination often cause significant economic losses in the food processing industry. To address these problems, the Ethiopian government has taken measures to improve food quality and safety in the food processing industry. One of the most prominent initiatives is the Ethiopian Food and Drug Administration (EFDA), established in 2010 to monitor and control the quality and safety of food and beverage products in the country. The EFDA has implemented many measures to improve food safety, such as carrying out routine inspections, monitoring the import of food products, and implementing labeling requirements.
Another solution that can improve food quality and safety in the food processing industry in Ethiopia is the implementation of a food safety management system (FSMS). An FSMS is a set of procedures and policies designed to identify, assess, and control food safety risks during food processing. Implementing an FSMS can help companies in the food processing industry identify and address potential risks before they harm consumers, and can also help them comply with current safety regulations. Consequently, improving food safety policy and the management system in Ethiopia's food processing industry is important to protect people's health and improve the country's economy. This requires addressing the root causes of food quality and safety problems and implementing practical solutions, such as establishing regulatory bodies and implementing food safety management systems, that can help improve overall food safety and quality in the country.

Keywords: food quality, food safety, policy, management system, food processing industry, food traceability, industry 4.0, internet of things, blockchain, best worst method, MARCOS

Procedia PDF Downloads 66
630 Social Value of Travel Time Savings in Sub-Saharan Africa

Authors: Richard Sogah

Abstract:

The significance of transport infrastructure investments for economic growth and development has been central to the World Bank’s strategy for poverty reduction. Among conventional surface transport infrastructures, road infrastructure is significant in facilitating the movement of human capital, goods and services. When transport projects (i.e., roads, super-highways) are implemented, they come along with some negative social values (costs), such as increased noise and air pollution for local residents living near these facilities, displaced individuals, etc. However, these projects also facilitate better utilization of the existing capital stock and generate other observable benefits that can be easily quantified. For example, the improvement or construction of roads creates employment, stimulates revenue generation (tolls), reduces vehicle operating costs and accidents, increases accessibility, expands trade, improves safety, etc. Aside from these benefits, travel time savings (TTS), which are among the major economic benefits of urban and inter-urban transport projects and are therefore integral to their economic assessment, are often overlooked and omitted when estimating the benefits of transport projects, especially in developing countries. The absence of current and reliable domestic travel data, and the inability of models replicated from the developed world to capture the actual value of travel time savings in the presence of large unemployment, underemployment, and other labour-induced distortions, have contributed to the failure to assign a value to travel time savings when estimating the benefits of transport schemes in developing countries.
This omission of the value of travel time savings from the benefits of transport projects in developing countries leads investors and stakeholders to accept or dismiss projects based on criteria that favor reduced vehicle operating costs and other parameters rather than those that ease congestion, increase average speed, facilitate walking and head-loading, and thus save travel time. Given the complex reality of estimating the value of travel time savings and the presence of widespread informal labour activities in Sub-Saharan Africa, we construct a “nationally ranked distribution of time values” and estimate the value of travel time savings based on the area beneath the distribution. Compared with other approaches, our method captures both formal sector workers and people who work outside the formal sector, for whom changes in time allocation occur in the informal economy and in household production activities. The dataset for the estimations is sourced from the World Bank, the International Labour Organization, etc.
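The abstract does not publish its estimation procedure, but the core idea of the "area beneath a nationally ranked distribution of time values" can be sketched as follows. All group sizes and distribution parameters below are hypothetical, chosen purely to illustrate the mechanics:

```python
import numpy as np

# Hypothetical hourly time values (e.g., USD/hour): formal workers from
# observed wages, informal workers from imputed shadow values. The
# lognormal parameters and group sizes are illustrative only.
rng = np.random.default_rng(0)
formal = rng.lognormal(mean=1.0, sigma=0.5, size=300)
informal = rng.lognormal(mean=0.2, sigma=0.7, size=700)

# Rank everyone's time value from highest to lowest to form the
# "nationally ranked distribution of time values".
values = np.sort(np.concatenate([formal, informal]))[::-1]

# With unit spacing between ranks, the area beneath the ranked curve
# is the sum of the values; dividing by population size gives an
# average value per person-hour of travel time saved.
aggregate_value = values.sum()
per_capita_vtts = aggregate_value / values.size
```

The point of ranking the whole population, rather than using formal wages alone, is that informal workers' time values enter the area directly instead of being dropped.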

Keywords: road infrastructure, transport projects, travel time savings, congestion, Sub-Saharan Africa

Procedia PDF Downloads 110
629 Inequality of Opportunities and Dropping Out of High School: Perspectives for Students from a Public School and a Private School in Brazil

Authors: Joyce Mary Adam

Abstract:

The subject of youth and education has been on the agenda of both public policies in general and education policies in particular. In this sense, this work aims to discuss, based on the concepts of social capital and cultural capital, the possibilities students have of elaborating and putting into practice the life projects they build during secondary school. The critical view brought by the concepts of social capital and cultural capital holds that, in the school environment, those who possess social and cultural capital have more tools to carry out their projects, while those who lack such capital will consequently have fewer opportunities, a fact that directly contributes to the perpetuation of social and educational inequality. When the "Life Project" is discussed as the sole responsibility of the students, it is the students who must "take their responsibilities and decisions" and own their success or failure. From this point of view, the success of the implementation of the Life Project is determined by how well the students have developed their "skills and competencies" and their capacity for entrepreneurship, without promoting a critical reflection on the real economic difficulties of the majority of students at this level of education. This situation gives rise to feelings of self-blame and self-responsibility among young people, who are compelled to confront the reality that their expectations have not been fulfilled, that they have been unable to gain employment, and, in some instances, that they have been marginalized. In this regard, the research project gathered data on the living conditions of students through interviews at a public school and an elite private school in Brazil.
The main objective of the research was to analyze the students' cultural and social capital as a key element in their social and professional integration after completing this stage of education. The study showed that social and cultural capital has a significant influence on opportunities to continue studying or to find a satisfactory job. For young people from public schools and lower economic classes, the need to enter the job market as soon as they finish, or even before they finish, high school is driven by economic and survival issues. Hours dedicated to study and the diversity of cultural activities, such as trips, visits to museums, or the cultivation of artistic pursuits, proved to be rarer among poorer students in state schools. In conclusion, we found that the difference in social and cultural capital between the young people taking part in the research plays an important role in their social and professional integration and contributes to the maintenance of school and social inequality. This highlights the importance of public policies and support networks for young people leaving secondary school.

Keywords: social capital, cultural capital, high school, life project, social insertion, professional insertion, youth

Procedia PDF Downloads 29
628 Knowledge Management and Administrative Effectiveness of Non-teaching Staff in Federal Universities in the South-West, Nigeria

Authors: Nathaniel Oladimeji Dixon, Adekemi Dorcas Fadun

Abstract:

Educational managers have observed a downward trend in the administrative effectiveness of non-teaching staff in federal universities in South-west Nigeria. This is evident in the low-quality service delivery of administrators and in unaccomplished institutional goals and missions of higher education. Scholars have thus indicated the need for the deployment and adoption of practices that encourage information collection and sharing among stakeholders with a view to improving service delivery and outcomes. This study examined the extent to which knowledge management correlated with the administrative effectiveness of non-teaching staff in federal universities in South-west Nigeria. The study adopted the survey design. Three federal universities (the University of Ibadan, the Federal University of Agriculture, Abeokuta, and Obafemi Awolowo University) were purposively selected because administrative ineffectiveness was more pronounced among non-teaching staff in government-owned universities, and these federal universities are long established. Proportional stratified random sampling was used to select 1156 non-teaching staff across the three universities along the three existing layers of non-teaching staff: secretarial (senior=311; junior=224), non-secretarial (senior=147; junior=241) and technicians (senior=130; junior=103). The Knowledge Management Practices Questionnaire with four sub-scales: knowledge creation (α=0.72), knowledge utilization (α=0.76), knowledge sharing (α=0.79) and knowledge transfer (α=0.83); and the Administrative Effectiveness Questionnaire with four sub-scales: communication (α=0.84), decision implementation (α=0.75), service delivery (α=0.81) and interpersonal relationship (α=0.78) were used for data collection. Data were analyzed using descriptive statistics, Pearson product-moment correlation and multiple regression at the 0.05 level of significance, while qualitative data were content analyzed.
About 59.8% of the non-teaching staff exhibited a low level of knowledge management. The indices of administrative effectiveness of non-teaching staff were rated as follows: service delivery (82.0%), communication (78.0%), decision implementation (71.0%) and interpersonal relationship (68.0%). Knowledge management had significant relationships with the indices of administrative effectiveness: service delivery (r=0.82), communication (r=0.81), decision implementation (r=0.80) and interpersonal relationship (r=0.47). Knowledge management had a significant joint prediction of administrative effectiveness (F(4;1151)=0.79, R=0.86), accounting for 73.0% of its variance. Knowledge sharing (β=0.38), knowledge transfer (β=0.26), knowledge utilization (β=0.22), and knowledge creation (β=0.06) made relatively significant contributions to administrative effectiveness. Lack of team spirit and withdrawal syndrome were the major perceived constraints to knowledge management practices among the non-teaching staff. Knowledge management positively influenced the administrative effectiveness of the non-teaching staff in federal universities in South-west Nigeria. There is a need to ensure that the non-teaching staff imbibe team spirit and embrace teamwork with a view to eliminating their withdrawal syndrome. Besides, knowledge management practices should be deployed into the administrative procedures of the university system.
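The reported joint prediction (R = 0.86, i.e. roughly 73% of variance explained) and the standardized β weights come from an ordinary multiple regression of administrative effectiveness on the four knowledge-management sub-scales. A minimal sketch of that computation on simulated data (the coefficients and noise level below are assumptions for illustration, not the study's data):

```python
import numpy as np

# Simulated data: four knowledge-management predictors (creation,
# utilization, sharing, transfer) and an outcome built from them.
rng = np.random.default_rng(42)
n = 1156  # sample size reported in the abstract
X = rng.normal(size=(n, 4))
true_beta = np.array([0.06, 0.22, 0.38, 0.26])  # assumed effect sizes
y = X @ true_beta + rng.normal(scale=0.5, size=n)

# Standardize predictors and outcome, then fit OLS: the resulting
# coefficients are the standardized beta weights.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
ys = (y - y.mean()) / y.std()
beta, *_ = np.linalg.lstsq(Xs, ys, rcond=None)

# R^2: proportion of outcome variance jointly explained
y_hat = Xs @ beta
r2 = 1.0 - np.sum((ys - y_hat) ** 2) / np.sum((ys - ys.mean()) ** 2)
```

On such data the largest standardized β falls on the "sharing" predictor, mirroring the ordering reported in the abstract.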

Keywords: knowledge management, administrative effectiveness of non-teaching staff, federal universities in south-west Nigeria, knowledge creation, knowledge utilization, effective communication, decision implementation

Procedia PDF Downloads 104
627 Determination of Cyclic Citrullinated Peptide Antibodies on Quartz Crystal Microbalance Based Nanosensors

Authors: Y. Saylan, F. Yılmaz, A. Denizli

Abstract:

Rheumatoid arthritis (RA) is the most common autoimmune disorder, in which the body's own immune system attacks healthy cells. RA has both articular and systemic effects. To date, the rheumatoid factor (RF) assay has been the most commonly used test to diagnose RA, but it is not specific. Anti-cyclic citrullinated peptide (anti-CCP) antibodies are IgG autoantibodies which recognize citrullinated peptides and offer improved specificity in the early diagnosis of RA compared to RF. Anti-CCP antibodies have a specificity for the diagnosis of RA of 91-98% and a sensitivity of 41-68%. Molecularly imprinted polymers (MIPs) are materials that are easy to prepare, inexpensive, stable, capable of molecular recognition, and can be manufactured in large quantities with good reproducibility. Molecular-recognition-based adsorption techniques have received much attention in several fields because of their high selectivity for target molecules. The quartz crystal microbalance (QCM) is an effective, simple, inexpensive approach in which mass changes are converted into an electrical signal. For the specific determination of chemical substances or biomolecules, the crystal electrodes are covered with thin films that bind or adsorb the target molecules. In this study, we have focused our attention on combining molecular imprinting of nanofilms with the QCM nanosensor approach, producing a QCM nanosensor for anti-CCP, chosen as a model protein, using anti-CCP imprinted nanofilms. To this end, the anti-CCP imprinted QCM nanosensor was characterized by Fourier transform infrared spectroscopy, atomic force microscopy, contact angle measurements and ellipsometry. A non-imprinted nanosensor was also prepared to evaluate the selectivity of the imprinted nanosensor. The anti-CCP imprinted QCM nanosensor was tested for real-time detection of anti-CCP from aqueous solution. Kinetic and affinity studies were carried out using anti-CCP solutions of different concentrations.
The responses related to mass shifts (Δm) and frequency shifts (Δf) were used to evaluate adsorption properties and to calculate the binding (Ka) and dissociation (Kd) constants. To show the selectivity of the anti-CCP imprinted QCM nanosensor, competitive adsorption of anti-CCP and IgM was investigated. The results indicate that the anti-CCP imprinted QCM nanosensor has a higher adsorption capability for anti-CCP than for IgM, due to the selective cavities in the polymer structure.
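The abstract does not state how Δf is converted into Δm, but QCM analyses commonly use the Sauerbrey relation, in which the areal mass uptake is proportional to the negative frequency shift. A minimal sketch under that assumption; the mass-sensitivity constant below is typical for a 5 MHz AT-cut crystal, not a value from the study:

```python
# Sauerbrey relation for a QCM: areal mass uptake is proportional to
# the (negative) resonance frequency shift. C is the mass sensitivity
# of a typical 5 MHz AT-cut quartz crystal; this value is an assumption.
C = 17.7  # ng / (cm^2 * Hz)

def mass_shift(delta_f_hz: float) -> float:
    """Areal mass change in ng/cm^2 from a frequency shift in Hz.
    A frequency drop (negative delta_f) means mass was adsorbed."""
    return -C * delta_f_hz

# Example: a hypothetical 12 Hz frequency drop upon anti-CCP binding
dm = mass_shift(-12.0)  # positive mass uptake, in ng/cm^2
```

Note the relation holds for thin, rigid films; soft protein layers in liquid can deviate from it, which is why Δf and Δm are often reported side by side.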

Keywords: anti-CCP, molecular imprinting, nanosensor, rheumatoid arthritis, QCM

Procedia PDF Downloads 363
626 Electret: A Solution of Partial Discharge in High Voltage Applications

Authors: Farhina Haque, Chanyeop Park

Abstract:

The high efficiency, high field, and high power density provided by wide bandgap (WBG) semiconductors and advanced power electronic converter (PEC) topologies have enabled the dynamic control of power in medium to high voltage systems. Although WBG semiconductors outperform conventional Silicon based devices in terms of voltage rating, switching speed, and efficiency, the increased voltage handling properties, high dv/dt, and compact device packaging increase local electric fields, which are the main causes of partial discharge (PD) in advanced medium and high voltage applications. PD, which occurs actively in voids, triple points, and airgaps, is an inevitable dielectric challenge that causes insulation and device aging. The aging process accelerates over time and eventually leads to the complete failure of the application. Hence, it is critical to mitigate PD. Sharp edges, airgaps, triple points, and bubbles are common defects that exist in any medium to high voltage device. The defects are created during the manufacturing processes of the devices and are prone to high-electric-field-induced PD due to the low permittivity and low breakdown strength of the gaseous medium filling the defects. A contemporary approach to mitigating PD by neutralizing electric fields in high power density applications is introduced in this study. To neutralize the locally enhanced electric fields that occur around triple points, airgaps, sharp edges, and bubbles, electrets are developed and incorporated into high voltage applications. Electrets are dielectric materials that emit electric fields owing to electrical charges embedded on their surface and in their bulk. In this study, electrets are fabricated by electrically charging polyvinylidene difluoride (PVDF) films using the widely used triode corona discharge method.
To investigate the PD mitigation performance of the fabricated electret films, a series of PD experiments are conducted on both the charged and uncharged PVDF films under square voltage stimuli that represent PWM waveform. In addition to the use of single layer electrets, multiple layers of electrets are also experimented with to mitigate PD caused by higher system voltages. The electret-based approach shows great promise in mitigating PD by neutralizing the local electric field. The results of the PD measurements suggest that the development of an ultimate solution to the decades-long dielectric challenge would be possible with further developments in the fabrication process of electrets.

Keywords: electrets, high power density, partial discharge, triode corona discharge

Procedia PDF Downloads 203
625 Inversion of PROSPECT+SAIL Model for Estimating Vegetation Parameters from Hyperspectral Measurements with Application to Drought-Induced Impacts Detection

Authors: Bagher Bayat, Wouter Verhoef, Behnaz Arabi, Christiaan Van der Tol

Abstract:

The aim of this study was to follow the canopy reflectance patterns in response to soil water deficit and to detect trends of change in the biophysical and biochemical parameters of grass (Poa pratensis species). We used visual interpretation, imaging spectroscopy and radiative transfer model inversion to monitor the gradual manifestation of water stress effects in a laboratory setting. Plots of 21 cm x 14.5 cm surface area with Poa pratensis plants that formed a closed canopy were subjected to water stress for 50 days. Canopy reflectance was measured on a regular weekly schedule. In addition, Leaf Area Index (LAI), Chlorophyll (a+b) content (Cab) and Leaf Water Content (Cw) were measured at regular time intervals. The 1-D bidirectional canopy reflectance model SAIL, coupled with the leaf optical properties model PROSPECT, was inverted using the hyperspectral measurements by means of an iterative optimization method to retrieve the vegetation biophysical and biochemical parameters. The relationships between retrieved LAI, Cab, Cw, and Cs (senescent material) and soil moisture content were established in two separate groups: stressed and non-stressed. To differentiate the water stress condition from the non-stressed condition, a threshold was defined based on the laboratory-produced Soil Water Characteristic (SWC) curve. All parameters retrieved by model inversion using canopy spectral data showed good correlation with soil water content in the water stress condition. These parameters co-varied with soil moisture content under the stress condition (Chl: R2=0.91, Cw: R2=0.97, Cs: R2=0.88 and LAI: R2=0.48) at the canopy level. To validate the results, the relationship between the vegetation parameters measured in the laboratory and soil moisture content was established. The results were in full agreement with the modeling outputs and confirmed the results produced by radiative transfer model inversion and spectroscopy.
Since water stress changes all parts of the spectrum, we concluded that analysis of the reflectance spectrum in the VIS-NIR-MIR region is a promising tool for monitoring water stress impacts on vegetation.
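The iterative inversion described above amounts to adjusting model parameters until the simulated spectrum matches the measured one. A minimal sketch with SciPy, using a toy two-parameter forward model in place of PROSPECT+SAIL (the model form, bounds, and starting values below are illustrative assumptions, not the study's configuration):

```python
import numpy as np
from scipy.optimize import least_squares

wavelengths = np.linspace(400, 2400, 50)  # nm

def forward_model(params, wl):
    """Toy stand-in for PROSPECT+SAIL: more Cab lowers visible
    reflectance, more LAI raises near-infrared reflectance."""
    lai, cab = params
    vis = 0.4 * np.exp(-0.05 * cab) * (wl < 700)
    nir = 0.5 * (1.0 - np.exp(-0.5 * lai)) * (wl >= 700)
    return vis + nir

def invert(measured, wl, x0=(1.0, 20.0)):
    """Retrieve (LAI, Cab) by iteratively minimizing the residual
    between measured and modelled canopy reflectance."""
    fit = least_squares(lambda p: forward_model(p, wl) - measured,
                        x0=x0, bounds=([0.01, 1.0], [8.0, 100.0]))
    return fit.x

# Synthetic "measurement" generated with known parameters
true_lai, true_cab = 3.0, 45.0
measured = forward_model((true_lai, true_cab), wavelengths)
lai_hat, cab_hat = invert(measured, wavelengths)
```

With real spectra the residual never reaches zero, so bounds and a sensible starting point (as in the sketch) matter for avoiding unphysical retrievals.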

Keywords: hyperspectral remote sensing, model inversion, vegetation responses, water stress

Procedia PDF Downloads 225
624 Human Lens Metabolome: A Combined LC-MS and NMR Study

Authors: Vadim V. Yanshole, Lyudmila V. Yanshole, Alexey S. Kiryutin, Timofey D. Verkhovod, Yuri P. Tsentalovich

Abstract:

Cataract, or clouding of the eye lens, is the leading cause of vision impairment in the world. The lens tissue has a very specific structure: it has no vascular system, and the lens proteins, the crystallins, do not turn over throughout the lifespan. The protection of the lens proteins is provided by metabolites which diffuse into the lens from the aqueous humor or are synthesized in the lens epithelial layer. Therefore, studying changes in the metabolite composition of a cataractous lens as compared to a normal lens may elucidate possible mechanisms of cataract formation. Quantitative metabolomic profiles of normal and cataractous human lenses were obtained with the combined use of high-frequency nuclear magnetic resonance (NMR) and ion-pairing high-performance liquid chromatography with high-resolution mass-spectrometric detection (LC-MS). The quantitative content of more than fifty metabolites has been determined in this work for normal aged and cataractous human lenses. The most abundant metabolites in the normal lens are myo-inositol, lactate, creatine, glutathione, glutamate, and glucose. For the majority of metabolites, their levels in the lens cortex and nucleus are similar, with a few exceptions including antioxidants and UV filters: the concentrations of glutathione, ascorbate and NAD decrease in the lens nucleus as compared to the cortex, while the levels of the secondary UV filters formed from primary UV filters in redox processes increase. This confirms that the lens core is metabolically inert, and that metabolic activity in the lens nucleus is mostly restricted to protection from the oxidative stress caused by UV irradiation, spontaneous UV filter decomposition, or other factors. It was found that the metabolomic compositions of normal and age-matched cataractous human lenses differ significantly.
The content of the most important metabolites – antioxidants, UV filters, and osmolytes – in the cataractous nucleus is at least tenfold lower than in the normal nucleus. One may suppose that the majority of these metabolites are synthesized in the lens epithelial layer, and that age-related cataractogenesis might originate from the dysfunction of the lens epithelial cells. Comprehensive quantitative metabolic profiles of the human eye lens have been acquired for the first time. The obtained data can be used for the analysis of changes in the lens chemical composition occurring with age and with cataract development.

Keywords: cataract, lens, NMR, LC-MS, metabolome

Procedia PDF Downloads 324
623 The Importance of Urban Pattern and Planting Design in Urban Transformation Projects

Authors: Mustafa Var, Yasin Kültiğin Yaman, Elif Berna Var, Müberra Pulatkan

Abstract:

This study deals with the real application of an urban transformation project in Trabzon, Turkey. It aims to highlight the significance of using native species in the planting design of transformation projects, which will also promote the sustainability of urban identity. Urban identity is a phenomenon shaped not only by physical, but also by natural, spatial, social, historical and cultural factors. Urban areas face continuous change, which can run in either a positive or a negative direction. If it occurs in a negative way, it may have destructive effects on urban identity. To solve this problematic issue, urban renewal movements initially started after the 1840s around the world, especially in port cities. The process was later followed in places where people suffered greatly from fires, and it has since expanded all over the world. In Turkey, these processes have been experienced mostly after the 1980s, as the country suffered the worst effects of unplanned urbanization, especially in the 1950-1990 period. Old squares, streets, meeting points, green areas and Ottoman bazaars have also changed slowly. This change resulted in the alienation of inhabitants from their environments. As a solution, several actions were taken, such as the Mass Housing Laws enacted in 1981 and 1984, or urban transformation projects. Although projects between 1990-2000 tried to satisfy the expectations of local inhabitants with the help of several design solutions to promote cultural identity, unfortunately those modern projects also resulted in the alienation of urban environments from their inhabitants. These projects were initially carried out by TOKI (Housing Development Administration of Turkey) and, after 2011, by the Ministry of Environment and Urbanization. Although they had significant potential to create healthy urban environments, they could not use this opportunity effectively.
The reason for their failure is that their architectural styles and planting designs are disrespectful of local identity and environments. Generally, it can be said that most of the urban transformation projects implemented in Turkey show little concern for locality. However, such projects can be used as a positive tool for enhancing the urban identity of cities by means of local planting material. For instance, Kyoto can be identified by Japanese Maple trees, and Seattle can be identified by Dahlia. In the same way, in Turkey, the city of Istanbul can be identified by Judas and Stone Pine trees, and the city of Giresun by Cherry trees. Thus, in this paper, the importance of conserving urban identity is discussed specifically with respect to the use of local planting elements. After revealing the mistakes made during urban transformation projects, the techniques and design criteria for preserving and promoting urban identity are examined. In the end, it is emphasized that every city should have its own original, local character and a specific planting design that can be used to highlight its identity alongside architectural elements.

Keywords: urban identity, urban transformation, planting design, landscape architecture

Procedia PDF Downloads 548
622 An Appraisal of Mitigation and Adaptation Measures under Paris Agreement 2015: Developing Nations' Pie

Authors: Olubisi Friday Oluduro

Abstract:

The Paris Agreement 2015, the result of negotiations under the United Nations Framework Convention on Climate Change (UNFCCC) after the expiration of the Kyoto Protocol, sets a long-term goal of limiting the increase in the global average temperature to well below 2 degrees Celsius above pre-industrial levels, and of pursuing efforts to limit this temperature increase to 1.5 degrees Celsius. An advancement on the erstwhile Kyoto Protocol, which set commitments for only a limited number of Parties to reduce their greenhouse gas (GHG) emissions, it includes the goal of increasing the ability to adapt to the adverse impacts of climate change and of making finance flows consistent with a pathway towards low GHG emissions. To achieve these goals, the Agreement requires all Parties to undertake efforts towards reaching a global peak in GHG emissions as soon as possible and towards achieving a balance between anthropogenic emissions by sources and removals by sinks in the second half of the twenty-first century. In addition to climate change mitigation, the Agreement aims at enhancing adaptive capacity, strengthening resilience and reducing vulnerability to climate change in different parts of the world. It acknowledges the importance of addressing loss and damage associated with the adverse effects of climate change. The Agreement also contains comprehensive provisions on support to be provided to developing countries, including finance, technology transfer and capacity building. To ensure that such support and actions are transparent, the Agreement contains a number of reporting provisions, requiring Parties to choose the efforts and measures that suit them best (Nationally Determined Contributions) and providing for a mechanism of assessing progress and increasing global ambition over time through a regular global stocktake.
Despite the global outlook of the Agreement, it has been fraught with manifold limitations that threaten its very capability to produce any meaningful result. Some of these obvious limitations were the very cause of the failure of its predecessor, the Kyoto Protocol, such as the non-participation of the United States and the non-payment of funds into the various coffers for appropriate strategic purposes, among others. These have left the developing countries, which are more vulnerable than the developed countries actually responsible for the climate change scourge, even more threatened. The paper seeks to examine the mitigation and adaptation measures under the Paris Agreement 2015, appraise the present situation since the Agreement was concluded, ascertain whether the developing countries have been better or worse off since then, and examine why and how, while projecting a way forward in the present circumstances. It concludes with recommendations towards ameliorating the situation.

Keywords: mitigation, adaptation, climate change, Paris agreement 2015, framework

Procedia PDF Downloads 158
621 Transgender Practices as Queer Politics: African a Variant

Authors: Adekeye Joshua Temitope

Abstract:

“Transgender” presents a complex of ambiguities in the African context, and it remains contested terrain in the discourse of sexual identity. The stigmatisation directed at transgender people unveils vital facts and intricacies often ignored in academic communities: the problems and oppressions of a given sex/gender system, the constraints of monogamy, and ignorance of the fluidity of human sexuality, thereby generating the dual discords of the “enforced heterosexual” and the “unavoidable homosexual.” African culture rejects transgender movements and perceives same-sex sexual behavior as “taboo or bad habits,” and this provides a reasonable explanation for the failure to assert sexual rights in the GLBT movement in most discourse on sexuality in the African context. However, we cannot deny the real existence of the active flow and fluidity of human sexuality, even though its variants may be latent. The incessant consciousness of the existence of transgender practices in Africa, whether in the form of bisexual desire or bisexual behavior with or without a sexual identity, including people who identify themselves as bisexual, opens up the vision for us to reconsider and reexamine what constitutes such ambiguity and controversy of transgender identity at the present time. The notion of identity politics in the gay, lesbian, and transgender community has its complexities and debates in its historical development. This paper analyses the representation of the historical trajectory of transgender practices by presenting the dynamic transition of how people cognize transgender practices under different historical conditions, since an understanding of the historical transition of bisexual practices is crucial and meaningful for the gender/sexuality liberation movement at the present time and in the future.
The paper juxtaposes the trajectories of bisexual practices in the Anglo-American world and in Africa, which show certain similarities and differences within diverse historical complexities. The similar condition is the emergence of gay identity under the influence of capitalism, but within different cultural contexts. Therefore, the political economy of each cultural context plays a very important role in understanding the historical formation of sexual identities and their development and influence on the GLBT movement afterwards and in the future. By reexamining Kinsey’s categorization and applying Klein’s argument on individual sexual orientation, this paper is poised, on the one hand, to break the given and fixed connection among sexual behavior, sexual orientation and sexual identity, and on the other hand to present the potential fluidity of human sexuality by reconsidering and reexamining the present given sex/gender system in our world. The paper concludes that it is necessary to challenge the essentialist and exclusionary trend at this historical moment, since gay and lesbian communities in Africa need to clearly demonstrate and voice for themselves under the nuances of gender/sexuality liberation.

Keywords: heterosexual, homosexual, identity politics, queer politics, transgender

Procedia PDF Downloads 306
620 Human Beta Defensin 1 as Potential Antimycobacterial Agent against Active and Dormant Tubercle Bacilli

Authors: Richa Sharma, Uma Nahar, Sadhna Sharma, Indu Verma

Abstract:

Counteracting the deadly pathogen Mycobacterium tuberculosis (M. tb) effectively is still a global challenge. Scrutinizing alternative weapons like antimicrobial peptides to strengthen the existing tuberculosis artillery is urgently required. Considering the antimycobacterial potential of Human Beta Defensin 1 (HBD-1) along with isoniazid, the present study was designed to explore the ability of HBD-1 to act against active and dormant M. tb. HBD-1 was screened in silico using antimicrobial peptide prediction servers to identify its short antimicrobial motif. The activity of both HBD-1 and its selected motif (Pep B) was determined at different concentrations against actively growing M. tb in vitro and ex vivo in monocyte-derived macrophages (MDMs). Log-phase M. tb was grown along with HBD-1 and Pep B for 7 days. M. tb infected MDMs were treated with HBD-1 and Pep B for 72 hours. Thereafter, colony forming unit (CFU) enumeration was performed to determine the activity of both peptides against actively growing in vitro and intracellular M. tb. The dormant M. tb models were prepared following two approaches and treated with different concentrations of HBD-1 and Pep B. Firstly, 20-22 days old M. tb H37Rv was grown in potassium-deficient Sauton medium for 35 days. The presence of dormant bacilli was confirmed by Nile red staining. Dormant bacilli were further treated with rifampicin, isoniazid, HBD-1 and its motif for 7 days. The effect of both peptides on latent bacilli was assessed by colony forming unit (CFU) and most probable number (MPN) enumeration. Secondly, a human PBMC granuloma model was prepared by infecting PBMCs seeded on a collagen matrix with M. tb (MOI 0.1) for 10 days. Histopathology was done to confirm granuloma formation. The granuloma thus formed was incubated for 72 hours with rifampicin, HBD-1 and Pep B individually. The difference in bacillary load was determined by CFU enumeration.
The minimum inhibitory concentrations of HBD-1 and Pep B restricting growth of mycobacteria in vitro were 2μg/ml and 20μg/ml respectively. The intracellular mycobacterial load was reduced significantly by HBD-1 and Pep B at 1μg/ml and 5μg/ml respectively. A Nile red positive bacterial population, a high MPN/low CFU count and tolerance to isoniazid confirmed the formation of the potassium deficiency-based dormancy model. HBD-1 (8μg/ml) showed 96% and 99% killing, and Pep B (40μg/ml) lowered the dormant bacillary load by 68.89% and 92.49%, based on CFU and MPN enumeration respectively. Further, H&E stained aggregates of macrophages and lymphocytes, acid fast bacilli surrounded by cellular aggregates and rifampicin resistance indicated the formation of the human granuloma dormancy model. HBD-1 (8μg/ml) led to an 81.3% reduction in CFU, whereas its motif Pep B (40μg/ml) showed only a 54.66% decrease in bacterial load inside the granuloma. Thus, the present study indicated that HBD-1 and its motif are effective antimicrobial players against both actively growing and dormant M. tb. They should be further explored to tap their potential to design a powerful weapon for combating tuberculosis.

Keywords: antimicrobial peptides, dormant, human beta defensin 1, tuberculosis

Procedia PDF Downloads 263
619 Pump-as-Turbine: Testing and Characterization as an Energy Recovery Device, for Use within the Water Distribution Network

Authors: T. Lydon, A. McNabola, P. Coughlan

Abstract:

Energy consumption in the water distribution network (WDN) is a well-established problem, with the industry contributing heavily to carbon emissions: 0.9 kg of CO2 is emitted per m3 of water supplied. It is indicated that 85% of the energy wasted in the WDN can be recovered by installing turbines. Existing potential in networks is present at small-capacity sites (5-10 kW), numerous and dispersed across networks. However, traditional turbine technology cannot be scaled down to this size in an economically viable fashion, so alternative approaches are needed. This research aims to enable energy recovery within the WDN by exploring the potential of pumps-as-turbines (PATs). PATs are estimated to be ten times cheaper than traditional micro-hydro turbines, presenting potential to contribute to an economically viable solution. However, a number of technical constraints currently prohibit their widespread use, including the inability of a PAT to control pressure, difficulty in the selection of PATs due to a lack of performance data, and a lack of understanding of how PATs can cater for fluctuations as extreme as +/- 50% of the average daily flow, characteristic of the WDN. A PAT prototype is undergoing testing in order to identify the capabilities of the technology. Preliminary testing measured the efficiency and power potential of the PAT under varying flow and pressure conditions, in order to develop characteristic and efficiency curves for the PAT and a baseline understanding of the technology's capabilities. The results are presented here:
•The limitations of existing selection methods, which convert the best efficiency point (BEP) from pump operation to BEP in turbine operation, were highlighted by the failure of such methods to reflect the conditions of maximum efficiency of the PAT. A generalised selection method for the WDN may need to be informed by an understanding of the impact of flow variations and pressure control on system power potential, capital cost, maintenance costs and payback period.
•A clear relationship between flow and efficiency rate of the PAT has been established. The rate of efficiency reduction for flows +/- 50% of BEP is significant, and more extreme for deviations in flow above the BEP than below, but not dissimilar to the efficiency response of other turbines.
•A PAT alone is not sufficient to regulate pressure, yet the relationship of pressure across the PAT is foundational in exploring ways in which PAT energy recovery systems can maintain the required pressure level within the WDN. The efficiencies of PAT energy recovery systems operating under conditions of pressure regulation, which have been conceptualised in the current literature, need to be established.
Initial results guide the focus of forthcoming testing and exploration of PAT technology towards how PATs can form part of an efficient energy recovery system.
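The power and efficiency figures discussed above follow from the standard hydraulic power relation P = ρgQH. A minimal sketch of that calculation, with illustrative site values that are assumptions for the example, not measurements from the prototype described:

```python
# Sketch: hydraulic power available to a PAT and its conversion efficiency.
# The flow, head and output values below are illustrative assumptions only.

RHO = 1000.0  # water density, kg/m^3
G = 9.81      # gravitational acceleration, m/s^2

def hydraulic_power(flow_m3s: float, head_m: float) -> float:
    """Gross hydraulic power (W) in the flow passing through the PAT."""
    return RHO * G * flow_m3s * head_m

def pat_efficiency(electrical_power_w: float, flow_m3s: float, head_m: float) -> float:
    """Conversion efficiency: electrical output over hydraulic input."""
    return electrical_power_w / hydraulic_power(flow_m3s, head_m)

# A hypothetical small WDN site: 25 l/s across 30 m of excess head gives
# roughly 7.4 kW gross, inside the 5-10 kW band identified above.
p_gross = hydraulic_power(0.025, 30.0)
eta = pat_efficiency(4500.0, 0.025, 30.0)  # if 4.5 kW were recovered
```

Sweeping `flow_m3s` over +/- 50% of the BEP flow while recording measured output is exactly how the characteristic and efficiency curves mentioned above are built.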

Keywords: energy recovery, pump-as-turbine, water distribution network

Procedia PDF Downloads 261
618 Experimental and Numerical Investigation of Fracture Behavior of Foamed Concrete Based on Three-Point Bending Test of Beams with Initial Notch

Authors: M. Kozłowski, M. Kadela

Abstract:

Foamed concrete is known for its low self-weight and excellent thermal and acoustic properties. For many years, it has been used worldwide for insulation of foundations and roof tiles, as backfill to retaining walls, for sound insulation, etc. In recent years, however, it has also become a promising material for structural purposes, e.g. for the stabilization of weak soils. Owing to the favorable properties of foamed concrete, many studies have analyzed its strength, mechanical, thermal and acoustic properties. However, these studies do not cover fracture energy, which is the core factor governing damage and fracture mechanisms; only a limited number of publications can be found in the literature. This paper presents the results of an experimental and numerical investigation of foamed concrete based on three-point bending tests of beams with an initial notch. The first part of the paper presents the results of a series of static loading tests performed to investigate the fracture properties of foamed concrete of varying density. Beam specimens with dimensions of 100×100×840 mm with a central notch were tested in three-point bending. Subsequently, the remaining halves of the specimens, with dimensions of 100×100×420 mm, were tested again as un-notched beams in the same set-up with a reduced distance between supports. The tests were performed in a hydraulic displacement-controlled testing machine with a load capacity of 5 kN. Apart from measuring the load and mid-span displacement, the crack mouth opening displacement (CMOD) was monitored. Based on the load-displacement curves of the notched beams, the values of fracture energy and tensile stress at failure were calculated. The flexural tensile strength was obtained on the un-notched beams with dimensions of 100×100×420 mm. Moreover, cube specimens of 150×150×150 mm were tested in compression to determine the compressive strength. 
The second part of the paper deals with numerical investigation of the fracture behavior of the notched beams presented in the first part. The Extended Finite Element Method (XFEM) was used to simulate and analyze the damage and fracture process. The influence of meshing and of variation in mechanical properties on the results was investigated. The numerical models correctly simulate the behavior of the beams observed during three-point bending. The numerical results show that XFEM can be used to simulate different fracture toughnesses and fracture types of foamed concrete. Using XFEM and computer simulation technology allows reliable approximation of the load-bearing capacity and damage mechanisms of beams made of foamed concrete, which provides a foundation for realistic structural applications.
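The fracture energy referred to above is conventionally obtained by the work-of-fracture method: the area under the load-displacement curve divided by the ligament area of the notched beam. A minimal sketch of that calculation, using a hypothetical curve and notch depth (the abstract does not report these values):

```python
import numpy as np

# Sketch of the work-of-fracture calculation for a notched beam:
# fracture energy G_f = (area under the P-delta curve) / (B * (D - a0)),
# where B is the beam width, D the depth and a0 the notch depth.
# The curve values and notch depth below are illustrative assumptions.

def fracture_energy(load_n, disp_m, width_m, depth_m, notch_m):
    """G_f in N/m (J/m^2) from a three-point-bending P-delta record."""
    # trapezoidal area under the load-displacement curve = dissipated energy, J
    work = np.sum((load_n[1:] + load_n[:-1]) / 2.0 * np.diff(disp_m))
    ligament_area = width_m * (depth_m - notch_m)  # uncracked cross-section, m^2
    return work / ligament_area

# 100x100 mm cross-section with a hypothetical 30 mm notch
disp = np.array([0.0, 0.2e-3, 0.4e-3, 0.8e-3])  # mid-span displacement, m
load = np.array([0.0, 400.0, 150.0, 0.0])       # load, N
gf = fracture_energy(load, disp, 0.1, 0.1, 0.03)
```

The long post-peak tail of the curve matters here: truncating the record before the load returns to zero underestimates the dissipated work and hence G_f.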

Keywords: foamed concrete, fracture energy, three-point bending, XFEM

Procedia PDF Downloads 303
617 Isolation and Identification of Salmonella spp. and Salmonella enteritidis from Distributed Chicken Samples in Tehran Province Using Culture and PCR Techniques

Authors: Seyedeh Banafsheh Bagheri Marzouni, Sona Rostampour Yasouri

Abstract:

Salmonella is one of the most important pathogens common to humans and animals worldwide. Globally, the prevalence of the disease in humans is due to the consumption of food contaminated with animal-derived Salmonella. These foods include eggs, red meat, chicken, and milk. Contamination of chicken and its products with Salmonella may occur at any stage of the chicken processing chain. Salmonella infection is usually not fatal. However, it is considered dangerous in some individuals, such as infants, children, the elderly, pregnant women, and individuals with weakened immune systems. If Salmonella enters the bloodstream, tissues throughout the body may become infected. Therefore, determining the potential risk of Salmonella at various stages is essential from the perspective of consumers and public health. The aim of this study was to isolate and identify Salmonella from chicken samples distributed in the Tehran market using the gold-standard culture method and PCR techniques based on the specific genes invA and ent. During the years 2022-2023, a total of 120 samples were taken under aseptic conditions using swabs from the liver and intestinal contents of chickens distributed in Tehran province. The samples were first pre-enriched overnight in buffered peptone water (BPW). The samples were then incubated in selective enrichment media, TT broth and RVS medium, at 37°C and 42°C respectively, for 18 to 24 hours. Organisms that grew in the liquid medium and produced turbidity were transferred to selective media (XLD and BGA) and incubated overnight at 37°C for isolation. Suspected Salmonella colonies were selected for DNA extraction, and PCR was performed using specific primers targeting the invA and ent genes of Salmonella. The results indicated that 94 samples were positive for Salmonella by PCR. 
Of these, 71 samples were positive based on the invA gene, and 23 samples were positive based on the ent gene. Although the culture technique is the gold standard, PCR is a faster and more accurate method. Rapid detection through PCR can enable the identification of Salmonella contamination in food items and the implementation of the measures necessary for disease control and prevention.

Keywords: culture, PCR, salmonella spp, salmonella enteritidis

Procedia PDF Downloads 74
616 Glyco-Biosensing as a Novel Tool for Prostate Cancer Early-Stage Diagnosis

Authors: Pavel Damborsky, Martina Zamorova, Jaroslav Katrlik

Abstract:

Prostate cancer is annually the most common newly diagnosed cancer among men. An extensive body of evidence suggests that the traditional serum prostate-specific antigen (PSA) assay still suffers from a lack of sufficient specificity and sensitivity, resulting in vast over-diagnosis and overtreatment. Thus, early-stage detection of prostate cancer (PCa) undisputedly plays a critical role in successful treatment and improved quality of life. Over the last decade, particular altered glycans have been described that are associated with a range of chronic diseases, including cancer and inflammation. These glycan differences enable a distinction to be made between physiological and pathological states and suggest a valuable biosensing tool for diagnosis and follow-up purposes. Aberrant glycosylation is one of the major characteristics of disease progression. Consequently, the aim of this study was to develop a more reliable tool for early-stage PCa diagnosis employing lectins as glyco-recognition elements. Biosensor and biochip technology employing lectin-based glyco-profiling is one of the most promising strategies for providing fast and efficient analysis of glycoproteins. Proof-of-concept experiments were performed based on a sandwich assay employing an anti-PSA antibody and an aptamer as capture molecules, followed by lectin glycoprofiling. We present a lectin-based biosensing assay for glycoprofiling of the serum biomarker PSA using different biosensor and biochip platforms, such as label-free surface plasmon resonance (SPR) and a microarray with fluorescent labeling. The results suggest significant differences in the interaction of particular lectins with PSA. Antibody-based assays are frequently associated with sensitivity, reproducibility, and cross-reactivity issues. Aptamers provide remarkable advantages over antibodies due to their nucleic acid origin, stability and lack of glycosylation. 
These data are a further step towards the construction of highly selective, sensitive and reliable sensors for early-stage diagnosis. The experimental set-up also holds promise for the development of comparable assays with other glycosylated disease biomarkers.

Keywords: biomarker, glycosylation, lectin, prostate cancer

Procedia PDF Downloads 407
615 An Intelligent Text Independent Speaker Identification Using VQ-GMM Model Based Multiple Classifier System

Authors: Ben Soltane Cheima, Ittansa Yonas Kelbesa

Abstract:

Speaker Identification (SI) is the task of establishing the identity of an individual based on his/her voice characteristics. The SI task is typically achieved by two-stage signal processing: training and testing. The training process calculates speaker-specific feature parameters from speech and generates speaker models accordingly. In the testing phase, speech samples from unknown speakers are compared with the models and classified. Even though the performance of speaker identification systems has improved due to recent advances in speech processing techniques, there is still a need for improvement. In this paper, a Closed-Set Text-Independent Speaker Identification (CISI) system based on a Multiple Classifier System (MCS) is proposed, using Mel Frequency Cepstrum Coefficients (MFCC) for feature extraction and a suitable combination of Vector Quantization (VQ) and a Gaussian Mixture Model (GMM), together with the Expectation Maximization (EM) algorithm, for speaker modeling. The use of a Voice Activity Detector (VAD) with a hybrid approach based on Short Time Energy (STE) and statistical modeling of background noise in the pre-processing step of feature extraction yields a better and more robust automatic speaker identification system. Investigation of the Linde-Buzo-Gray (LBG) clustering algorithm for initialization of the GMM, for estimating the underlying parameters in the EM step, improved the convergence rate and the system's performance. The system also uses a relative index as a confidence measure in cases where the GMM and VQ identifications contradict each other. Simulation results carried out on the voxforge.org speech database using MATLAB highlight the efficacy of the proposed method compared to earlier work.
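The VQ branch of the pipeline described above can be sketched as one k-means codebook per speaker, with closed-set identification by minimum average quantization distortion. In this sketch, random vectors stand in for MFCC frames, and the codebook size and data are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

# Sketch of the VQ classifier branch: train a k-means codebook per speaker,
# then identify an utterance by the codebook with the lowest average
# quantization distortion. Random vectors stand in for MFCC frames here;
# a real system would first extract MFCCs from speech.

rng = np.random.default_rng(0)

def train_codebook(features, k=4, iters=20):
    """Plain k-means: returns a (k, dim) codebook for one speaker."""
    codebook = features[rng.choice(len(features), k, replace=False)]
    for _ in range(iters):
        # assign each frame to its nearest codeword
        d = np.linalg.norm(features[:, None] - codebook[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):  # skip empty clusters
                codebook[j] = features[labels == j].mean(axis=0)
    return codebook

def distortion(features, codebook):
    """Average distance from each frame to its nearest codeword."""
    d = np.linalg.norm(features[:, None] - codebook[None], axis=2)
    return d.min(axis=1).mean()

def identify(features, codebooks):
    """Closed-set decision: the speaker whose codebook fits best."""
    scores = {spk: distortion(features, cb) for spk, cb in codebooks.items()}
    return min(scores, key=scores.get)

# Two synthetic "speakers" with well-separated 12-dimensional feature clouds
train = {"A": rng.normal(0, 1, (200, 12)), "B": rng.normal(5, 1, (200, 12))}
books = {spk: f for spk, f in ((s, train_codebook(v)) for s, v in train.items())}
test_utterance = rng.normal(5, 1, (50, 12))  # drawn from speaker B's cloud
```

The GMM branch replaces the codebook with a mixture density and the distortion score with a log-likelihood; the relative gap between the two branches' best scores is what a confidence measure like the paper's relative index can be built on.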

Keywords: feature extraction, speaker modeling, feature matching, Mel frequency cepstrum coefficient (MFCC), Gaussian mixture model (GMM), vector quantization (VQ), Linde-Buzo-Gray (LBG), expectation maximization (EM), pre-processing, voice activity detection (VAD), short time energy (STE), background noise statistical modeling, closed-set text-independent speaker identification system (CISI)

Procedia PDF Downloads 310
614 Prevalence of Treponema pallidum Infection among HIV-Seroreactive Patients in Kano, Nigeria

Authors: Y. Mohammed, A. I. Kabuga

Abstract:

Sexually transmitted infections (STIs) continue to be a major public health problem in sub-Saharan Africa, especially with the recent resurgence of syphilis. Syphilis is a systemic disease caused by the spirochete bacterium Treponema pallidum and has been reported as one of the most common sexually transmitted infections (STIs) in Nigeria. The presence of genital ulcer disease from syphilis facilitates human immunodeficiency virus (HIV) transmission, and its diagnosis is essential for proper management. The Venereal Disease Research Laboratory (VDRL) test is used as a screening test for the diagnosis of syphilis. However, unusual VDRL test results have been reported in HIV-infected persons with syphilis. There are reports showing higher than expected VDRL titers as well as biological false positives in most of the studies. A negative Rapid Plasma Reagin (RPR) test or VDRL test result may not rule out syphilis in patients with HIV infection. For laboratory confirmation of syphilis, one specific treponemal test, namely the Fluorescent Treponemal Antibody Absorption (FTA-ABS) test or the Treponema pallidum Haemagglutination Assay (TPHA), should be done along with the VDRL. A prospective cross-sectional study was conducted for 2 years, from June 2012 to June 2014, to determine the prevalence of syphilis in HIV-seroreactive patients at 5 selected HIV/AIDS treatment and counseling centers in Kano State, North Western Nigeria. New HIV-seroreactive patients who gave informed consent to participate in the study were recruited. The Venereal Disease Research Laboratory (VDRL) test for syphilis screening was performed on the same sera samples collected for HIV testing. A total of 238 patients, 113 (47%) males and 125 (53%) females, were enrolled and screened for syphilis. 
Out of these 238 cases, 72 (32%) patients were positive for TPHA, and 8 (3.4%) patients were reactive for VDRL in various titers, giving an overall VDRL seroprevalence of 3.4%. All eight patients who were reactive in the VDRL test were also positive in the TPHA test. In conclusion, given the high prevalence of syphilis among HIV-infected people in this study, it is recommended that serological testing for syphilis be carried out in all patients with newly diagnosed HIV infection. Detection and treatment of STIs should have a central role in HIV prevention and control. This will help in the proper management of patients with STI and HIV co-infection.
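Prevalence estimates such as the 8/238 (3.4%) reported above are usually quoted with a confidence interval reflecting the sample size. A minimal sketch using the Wilson score interval (the abstract itself reports no interval, so this is purely illustrative):

```python
import math

# Sketch: seroprevalence with a 95% Wilson score interval, applied to the
# kind of counts reported above (8 VDRL-reactive out of 238 screened).

def wilson_interval(positives, n, z=1.96):
    """95% Wilson score interval (low, high) for a binomial proportion."""
    p = positives / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

prevalence = 8 / 238  # ~3.4%
low, high = wilson_interval(8, 238)
```

The Wilson form is preferred over the simple normal approximation here because the count of positives is small relative to the sample.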

Keywords: HIV, infections, STIs, syphilis

Procedia PDF Downloads 322
613 The Association of Southeast Asian Nations (ASEAN) and the Dynamics of Resistance to Sovereignty Violation: The Case of East Timor (1975-1999)

Authors: Laura Southgate

Abstract:

The Association of Southeast Asian Nations (ASEAN), as well as much of the scholarship on the organisation, celebrates its ability to uphold the principle of regional autonomy, understood as upholding the norm of non-intervention by external powers in regional affairs. Yet, in practice, this has been repeatedly violated. This dichotomy between rhetoric and practice suggests an interesting avenue for further study. The East Timor crisis (1975-1999) has been selected as a case-study to test the dynamics of ASEAN state resistance to sovereignty violation in two distinct timeframes: Indonesia’s initial invasion of the territory in 1975, and the ensuing humanitarian crisis in 1999 which resulted in a UN-mandated, Australian-led peacekeeping intervention force. These time-periods demonstrate variation on the dependent variable. It is necessary to observe covariation in order to derive observations in support of a causal theory. To establish covariation, my independent variable is therefore a continuous variable characterised by variation in convergence of interest. Change of this variable should change the value of the dependent variable, thus establishing causal direction. This paper investigates the history of ASEAN’s relationship to the norm of non-intervention. It offers an alternative understanding of ASEAN’s history, written in terms of the relationship between a key ASEAN state, which I call a ‘vanguard state’, and selected external powers. This paper will consider when ASEAN resistance to sovereignty violation has succeeded, and when it has failed. It will contend that variation in outcomes associated with vanguard state resistance to sovereignty violation can be best explained by levels of interest convergence between the ASEAN vanguard state and designated external actors. 
Evidence will be provided to support the hypothesis that in 1999, ASEAN’s failure to resist violations to the sovereignty of Indonesia was a consequence of low interest convergence between Indonesia and the external powers. Conversely, in 1975, ASEAN’s ability to resist violations to the sovereignty of Indonesia was a consequence of high interest convergence between Indonesia and the external powers. As the vanguard state, Indonesia was able to apply pressure on the ASEAN states and obtain unanimous support for Indonesia’s East Timor policy in 1975 and 1999. However, the key factor explaining the variance in outcomes in both time periods resides in the critical role played by external actors. This view represents a serious challenge to much of the existing scholarship that emphasises ASEAN’s ability to defend regional autonomy. As these cases attempt to show, ASEAN autonomy is much more contingent than portrayed in the existing literature.

Keywords: ASEAN, east timor, intervention, sovereignty

Procedia PDF Downloads 359
612 Investigation of Two Polymorphisms of the hTERT Gene (rs2736098 and rs2736100) and the miR-146a rs2910164 Polymorphism in Cervical Cancer

Authors: Hossein Rassi, Alaheh Gholami Roud-Majany, Zahra Razavi, Massoud Hoshmand

Abstract:

Cervical cancer is a multistep disease thought to result from an interaction between genetic background and environmental factors. Human papillomavirus (HPV) infection is the leading risk factor for cervical intraepithelial neoplasia (CIN) and cervical cancer. In addition, some hTERT and miRNA polymorphisms may play an important role in carcinogenesis. This study attempts to clarify the relation of hTERT genotypes and miR-146a genotypes to cervical cancer. Forty-two archival samples with cervical lesions were retrieved from Khatam Hospital, and 40 samples from healthy persons were used as a control group. A simple and rapid method was used to detect the simultaneous amplification of the HPV consensus L1 region and HPV-16, -18, -11, -31, -33 and -35, along with the b-globin gene as an internal control. Multiplex PCR was used for detection of hTERT and miR-146a rs2910164 genotypes in our lab. Finally, data analysis was performed using version 7 of the Epi Info(TM) 2012 software and the chi-square (x2) test for trend. Cervical lesions were collected from 42 patients with squamous metaplasia, cervical intraepithelial neoplasia, and cervical carcinoma. Successful DNA extraction was assessed by PCR amplification of the b-actin gene (99 bp). According to the results, the hTERT (rs2736098) GG genotype and the miR-146a rs2910164 CC genotype were significantly associated with an increased risk of cervical cancer in the study population. In this study, we detected HPV 18 in 13 of the 42 cervical cancer samples. Associations between several SNPs and human papillomavirus have been reported in only a few studies, and the differences between their findings may result from differences in ethnicity, geographic situation and lifestyle in each region. The present study provided preliminary evidence that the hTERT rs2736098 GG genotype and the miR-146a rs2910164 CC genotype may affect cervical cancer risk in the study population, interacting synergistically with the HPV 18 genotype. 
Our results demonstrate that testing of hTERT rs2736098 genotypes and miR-146a rs2910164 genotypes in combination with HPV 18 can serve to identify major risk factors in the early detection of cervical cancers. Furthermore, the results indicate the possibility of primary prevention of cervical cancer by vaccination against HPV 18 in Iran.
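The genotype-association analysis mentioned above rests on a chi-square comparison of genotype counts between cases and controls. A minimal sketch of the Pearson form on a 2x2 genotype-by-status table, with hypothetical counts that are not the study's data:

```python
import numpy as np

# Sketch: Pearson chi-square statistic for a case/control vs genotype
# contingency table. The counts below are hypothetical, for illustration.

def chi_square(table):
    """Pearson chi-square statistic for an r x c contingency table."""
    table = np.asarray(table, dtype=float)
    # expected counts under independence: outer product of margins / total
    expected = np.outer(table.sum(axis=1), table.sum(axis=0)) / table.sum()
    return ((table - expected) ** 2 / expected).sum()

# rows: cases, controls; columns: GG genotype, other genotypes
table = [[25, 17],
         [10, 30]]
stat = chi_square(table)
# With 1 degree of freedom, the 5% critical value is 3.84; a statistic
# above it indicates a significant genotype-disease association.
```

The test-for-trend variant used in the study additionally weights ordered genotype categories; the contingency form above is the simpler unordered version.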

Keywords: polymorphism of hTERT gene, miR-146a rs2910164 polymorphism, cervical cancer, virus

Procedia PDF Downloads 322