Search results for: imprecise number

8073 Investigating the Significance of Ground Covers and Partial Root Zone Drying Irrigation for Water Conservation, Weed Suppression and Quality Traits of Wheat

Authors: Muhammad Aown Sammar Raza, Salman Ahmad, Muhammad Farrukh Saleem, Muhammad Saqlain Zaheer, Rashid Iqbal, Imran Haider, Muhammad Usman Aslam, Muhammad Adnan Nazar

Abstract:

One of the main negative effects of climate change is the increasing scarcity of water worldwide, especially for irrigation purposes. In order to ensure food security with less available water, easy and economical techniques need to be adopted. Two effective techniques are the use of ground covers and partial root zone drying (PRD). A field experiment was arranged to find the most suitable mulch for a PRD irrigation system in wheat. The experiment comprised two irrigation methods (I0 = irrigation on both sides of the roots and I1 = irrigation to only one side of the roots, as alternate irrigation) and four ground covers (M0 = open ground without any cover, M1 = black plastic cover, M2 = wheat straw cover and M3 = cotton sticks cover). Greater plant height, spike length, number of spikelets and number of grains were found in the full irrigation treatment, while water use efficiency and grain nutrient (NPK) contents were higher under PRD irrigation. All soil covers suppressed the weeds and significantly influenced the yield attributes, the final yield and the grain nutrient contents; the black plastic cover performed best. It was concluded that the joint use of both techniques was more effective for water conservation and increasing grain yield than their sole application, and that the combination of PRD with black plastic mulch outperformed the other ground-cover combinations used in the experiment.

Keywords: ground covers, partial root zone drying, grain yield, quality traits, WUE, weed control efficiency

Procedia PDF Downloads 220
8072 From Primer Generation to Chromosome Identification: A Primer Generation Genotyping Method for Bacterial Identification and Typing

Authors: Wisam H. Benamer, Ehab A. Elfallah, Mohamed A. Elshaari, Farag A. Elshaari

Abstract:

A challenge for laboratories is to provide bacterial identification and antibiotic sensitivity results within a short time. Hence, advancement in the required technology is desirable to improve timing, accuracy and quality. Even with the current advances in methods used for both phenotypic and genotypic identification of bacteria, there is a need to develop methods that enhance the outcome of bacteriology laboratories in accuracy and time. The hypothesis introduced here is based on the assumption that the chromosome of any bacterium contains unique sequences that can be used for its identification and typing. The outcome of a pilot study designed to test this hypothesis is reported in this manuscript. Methods: The complete chromosome sequences of several bacterial species were downloaded for use as search targets for unique sequences. Visual Basic and SQL Server (2014) were used to generate a complete set of 18-base-long primers, a process that started with reverse translation of six randomly chosen amino acids to limit the number of generated primers. In addition, software was designed to scan the downloaded chromosomes for similarities using the generated primers, and the resulting hits were classified according to the number of similar chromosomal sequences, i.e., unique or otherwise. Results: All primers that had identical/similar sequences in the selected genome sequence(s) were classified according to the number of hits in the chromosome search. Those identical to a single site on a single bacterial chromosome were referred to as unique. On the other hand, most generated primer sequences were identical to multiple sites on a single chromosome or on multiple chromosomes. Following scanning, the generated primers were classified based on their ability to differentiate between medically important bacteria, and the initial results look promising. Conclusion: A simple strategy that starts by generating primers was introduced; the primers were used to screen bacterial genomes for matches. Primers that were uniquely identical to a specific DNA sequence on a specific bacterial chromosome were selected. The identified unique sequences can be used in different molecular diagnostic techniques, possibly to identify bacteria. In addition, a single primer that identifies multiple sites in a single chromosome can be exploited for region or genome identification. Although draft genome sequences of isolates enable high-throughput primer design using alignment strategies, which enhances diagnostic performance in comparison to traditional molecular assays, in the present method the generated primers can be used to identify an organism before a draft sequence is completed. In addition, the generated primers can be used to build a bank of easily accessible primers for bacterial identification.
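
To make the screening step concrete, the sketch below is a minimal Python illustration of the strategy described above: reverse-translating a six-residue peptide into every 18-base primer, then classifying primers by their number of hits across downloaded chromosomes. The codon-table fragment, sequences and function names are illustrative stand-ins, not the paper's Visual Basic/SQL Server implementation.

```python
from itertools import product

# Illustrative codon-table fragment: possible codons per amino acid.
CODONS = {
    "M": ["ATG"], "W": ["TGG"],
    "K": ["AAA", "AAG"], "N": ["AAT", "AAC"],
    "D": ["GAT", "GAC"], "F": ["TTT", "TTC"],
}

def reverse_translate(peptide):
    """Yield every 18-base primer encoding a 6-residue peptide."""
    for combo in product(*[CODONS[aa] for aa in peptide]):
        yield "".join(combo)

def find_hits(primer, chromosomes):
    """List every (chromosome, position) at which the primer occurs."""
    hits = []
    for name, seq in chromosomes.items():
        start = seq.find(primer)
        while start != -1:
            hits.append((name, start))
            start = seq.find(primer, start + 1)
    return hits

def classify(peptide, chromosomes):
    """Split generated primers into unique (one hit on one chromosome) and multi-hit."""
    unique, multiple = [], []
    for primer in reverse_translate(peptide):
        hits = find_hits(primer, chromosomes)
        if len(hits) == 1:
            unique.append((primer, hits[0]))
        elif len(hits) > 1:
            multiple.append((primer, hits))
    return unique, multiple

# Toy demonstration with two made-up "chromosomes".
chromosomes = {"chr_A": "ATGAAAAATGATTGGTTTACGT" * 2,
               "chr_B": "ACGTATGAAGAACGACTGGTTC"}
unique, multiple = classify("MKNDWF", chromosomes)
print(len(unique), "unique primer(s);", len(multiple), "multi-hit primer(s)")
```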

Keywords: bacteria chromosome, bacterial identification, sequence, primer generation

Procedia PDF Downloads 173
8071 Social Impact Bonds in the US Context

Authors: Paula M. Lantz

Abstract:

In the United States, significant socioeconomic and racial inequalities exist in many population-based indicators of health and social welfare. Although a number of effective prevention programs and interventions are available, local and state governments often do not pursue prevention in the face of budgetary constraints and more acute problems. There is growing interest in and excitement about “Pay for Success” (PFS) strategies, also referred to as social impact bonds, as an approach to financing and implementing promising prevention programs and services that help the public sector either save money or achieve greater value for an investment. The PFS finance model implements evidence-based interventions using capital from investors who only receive a return on their investment from the government if agreed-upon, measurable outcomes are achieved. This paper discusses the current landscape regarding social impact bonds in the U.S., and their potential and challenges in addressing serious health and social problems. The paper presents an analysis of a number of social science issues that are fundamental to the potential for social impact bonds to successfully address social inequalities in health and social welfare. These include: a) the economics of the intervention and a potential public payout; b) organizational and management issues in intervention implementation; c) evaluation research design and methods; d) legal/regulatory issues in public payouts to investors; e) ethical issues in the design of social impact bond deals and their evaluation; and f) political issues. Despite significant challenges in the U.S. context, there is great potential for social impact bonds as a type of social impact investing to encourage private investments in evidence-based interventions that address important public health and social problems in underserved populations and provide a return on investment.

Keywords: pay for success, public/private partnerships, social impact bonds, social impact investing

Procedia PDF Downloads 283
8070 Vulnerability Assessment of Vertically Irregular Structures during Earthquake

Authors: Pranab Kumar Das

Abstract:

Vulnerability assessment of buildings with irregularity in the vertical direction has been carried out in this study. The construction of vertically irregular buildings is increasing with the fast urbanization of developing countries, including India. During two reconnaissance-based surveys performed after the Nepal earthquake of 2015 and the Imphal (India) earthquake of 2016, it was observed that many structures were damaged due to vertically irregular configurations. These irregular buildings nevertheless need to perform safely during seismic excitation. Therefore, there is an urgent demand to determine the actual vulnerability of such irregular structures, so that remedial measures can be taken to protect them during natural hazards such as earthquakes. This assessment will be very helpful for India as well as for other developing countries. A substantial body of research has addressed the vulnerability of plan-asymmetric buildings; for vertically irregular buildings, much less effort has gone into finding out their vulnerability during an earthquake. Irregularity in the vertical direction may be caused by an irregular distribution of mass or stiffness, or by a geometrically irregular configuration. Detailed analysis of such structures, particularly non-linear/pushover analysis for performance-based design, is a challenging task. The present paper considers a number of models of irregular structures. Building models made of both reinforced concrete and brick masonry are considered for the sake of generality. The analyses are performed with the help of the finite element method and computational methods. The study, as a whole, may help to arrive at a reasonably good estimate of, and insight into, the fundamental and other natural periods of such vertically irregular structures. The ductility demand, storey drift and seismic response study help to identify the locations of critical stress concentration. In summary, this paper is a step toward understanding the vulnerability of vertically irregular structures and framing guidelines for them.
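
As an illustration of the period estimates mentioned above, the following sketch computes the natural periods of a lumped-mass shear building with a soft first storey (one form of vertical stiffness irregularity). The storey masses and stiffnesses are hypothetical values, not data from the study.

```python
import numpy as np

# Hypothetical 5-storey shear building with a soft first storey (stiffness irregularity).
m = np.array([200e3, 200e3, 200e3, 200e3, 150e3])   # storey masses, kg
k = np.array([40e6, 80e6, 80e6, 80e6, 80e6])        # storey stiffnesses, N/m

n = len(m)
M = np.diag(m)
K = np.zeros((n, n))
for i in range(n):                                   # assemble tridiagonal shear-frame K
    K[i, i] += k[i]
    if i + 1 < n:
        K[i, i] += k[i + 1]
        K[i, i + 1] = K[i + 1, i] = -k[i + 1]

# Generalized eigenproblem K*phi = omega^2 * M*phi gives the natural periods.
omega2 = np.linalg.eigvals(np.linalg.inv(M) @ K)
periods = 2 * np.pi / np.sqrt(np.sort(omega2.real))
print("Natural periods (s), fundamental first:", periods)
```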

Keywords: ductility, stress concentration, vertically irregular structure, vulnerability

Procedia PDF Downloads 217
8069 Cadaveric Study of Lung Anatomy: A Surgical Overview

Authors: Arthi Ganapathy, Rati Tandon, Saroj Kaler

Abstract:

Introduction: A thorough knowledge of variations in lung anatomy is of prime significance during surgical procedures like lobectomy, pneumonectomy, and segmentectomy of the lungs. The arrangement of structures in the lung hilum acts as a guide in performing such procedures. The normal pattern of arrangement of hilar structures in the right lung is eparterial bronchus, pulmonary artery, hyparterial bronchus and pulmonary veins from above downwards. In the left lung, it is pulmonary artery, principal bronchus and pulmonary vein from above downwards. The arrangement of hilar structures from anterior to posterior in both lungs is pulmonary vein, pulmonary artery and principal bronchus. The bronchial arteries are very small and usually the posterior-most structures in the hilum of the lungs. Aim: The present study aims at reporting variations in the hilar anatomy (arrangement and number) of the lungs. Methodology: 75 adult formalin-fixed cadaveric lungs from the Department of Anatomy, AIIMS New Delhi, were observed for variations in the lobar anatomy. The arrangement of pulmonary hilar structures was meticulously observed, and any deviation from the usual pattern of presentation was recorded. Results: Among the 75 adult lung specimens observed, 36 were right lungs and the rest were left lungs. Seven right lung specimens showed only 2 lobes divided by an oblique fissure, and one left lung showed 3 lobes. The normal pattern of arrangement of hilar structures was seen in 22 right lungs and 23 left lungs. The remaining lung specimens (14 right and 16 left) showed a varied pattern of arrangement of hilar structures. Some of them showed alterations in the sequence of arrangement of the pulmonary artery, pulmonary veins and bronchus, and others in the number of these structures. Conclusion: Alterations in the pattern of arrangement of structures in the lung hilum are quite frequent. A compromised knowledge of such variations can result in inadvertent complications like intraoperative bleeding during surgical procedures.

Keywords: fissures, hilum, lobes, pulmonary

Procedia PDF Downloads 205
8068 Advancing Spatial Mapping and Monitoring of Illegal Landfills for Deprived Urban Areas in Romania

Authors: Șercăianu Mihai, Aldea Mihaela, Iacoboaea Cristina, Luca Oana, Nenciu Ioana

Abstract:

The emergence and neutralization of illegal waste dumps represent a global concern for waste management ecosystems, with a particularly pronounced impact on disadvantaged communities. All over the world, and in this particular case in Romania, a relevant number of people reside in houses lacking any legal forms such as land ownership documents or building permits. These areas are referred to as “informal settlements”. An increasing number of regions and cities in Romania are struggling to manage their waste dumps, especially in the context of increasing poverty and the lack of regulation related to informal settlements. An example of such an informal settlement can be found at the terminus of Bistra Street in Câlnic, which falls under the jurisdiction of the Municipality of Reșița in Caras Severin County. The article presents a case study that focuses on employing remote sensing techniques and spatial data to monitor and map illegal waste practices, with subsequent integration into a geographic information system tailored for the Reșița community. In addition, the paper outlines the steps involved in devising strategies aimed at enhancing waste management practices in disadvantaged areas, aligning with the shift toward a circular economy. The results presented in the paper comprise a spatial mapping and visualization methodology, calibrated with in situ data collection, applicable to identifying illegal landfills. Such approaches, which prove effective where conventional solutions have failed, need to be replicated and adopted more wisely.

Keywords: waste dumps, waste management, monitoring, GIS, informal settlements

Procedia PDF Downloads 54
8067 A Paradigmatic Approach to University Management from the Perspective of Strategic Management: A Study in the Marmara Region of Turkey

Authors: Recep Yücel, Cihat Kartal, Mustafa Kara

Abstract:

On the basis of strategic management, a number of innovations along the lines of the postmodern management approach are believed to be necessary in the management of universities in our country. In this sense, some of these requirements are the integration of public and private universities, international integration, an improved R&D status, and an increasing young population that will create a dynamic structure. According to the postmodern management approach, universities in our country, despite being autonomous, are governed by the classical approach; academically, they are thought to be rigid, hierarchical and lacking in creativity. In fact, studies that require a multidisciplinary academic environment depend on close cooperation between universities and between their formal and informal sub-units. Moreover, in terms of postmodern management approaches, meeting the specified requirements is considered all the more difficult given the increasing number of universities in our country. Therefore, considering the psychological impact of the university organizational structure on academic personnel, the study aims to propose an appropriate model of university organization. In this context, the study sought to answer the question of how innovation and international integration affect academic achievement under the classical organizational structure. Finally, the study finds that, due to the adoption of the classical organizational structure by universities, integration, academic cooperation between universities at the international level, and the maintenance of such cooperation are considered difficult. In addition, it was understood that this organizational structure blocks efforts at academic motivation, development and innovation. Under these purposes, a study was conducted with qualitative research methods on the basis of the existing organization and management structure of the universities in the Marmara Region in Turkey. The data were analyzed using content analysis, and the assessment was based on the results obtained.

Keywords: university, strategic management, postmodern management approaches, multidisciplinary studies

Procedia PDF Downloads 376
8066 Digital Content Strategy (DCS): Detailed Review of the Key Content Components

Authors: Oksana Razina, Shakeel Ahmad, Jessie Qun Ren, Olufemi Isiaq

Abstract:

The modern life of businesses is categorically reliant on their established position online, where digital (and particularly website) content plays a significant role as the first point of information. Digital content, therefore, becomes essential – from making the first impression to the building and development of client relationships. Despite a number of valuable papers suggesting a strategic approach when dealing with digital data, other sources often do not view or accept the approach to digital content as a holistic or continuous process; associations are frequently made with merely a one-off marketing campaign or similar. The challenge is to establish an agreed definition for the notion of Digital Content Strategy (DCS), which currently does not exist, as DCS is viewed from an excessive number of different angles. A strategic approach to content, nonetheless, is required, both practically and contextually. The researchers, therefore, aimed to identify the key content components comprising a digital content strategy, to ensure all aspects are covered and strategically applied – from the company’s understanding of the content value to the ability to display flexibility of content and advances in technology. This conceptual project evaluated the existing literature on the topic of Digital Content Strategy and related aspects, using the PRISMA systematic review method, document analysis, inclusion and exclusion criteria, a scoping review, the snowballing technique and thematic analysis. The data were collected from academic and statistical sources, government and relevant trade publications. Based on the suggestions from academic and trade sources related to the issues discussed, the researchers revealed the key actions for content creation and attempted to define the notion of DCS. The major finding of the study is a set of key content components of Digital Content Strategy, which can be considered for implementation in a business retail setting.

Keywords: digital content strategy, key content components, websites, digital marketing strategy

Procedia PDF Downloads 123
8065 The Impact of Cognitive Load on Deceit Detection and Memory Recall in Children’s Interviews: A Meta-Analysis

Authors: Sevilay Çankaya

Abstract:

The detection of deception in children’s interviews is essential for establishing statement veracity. A widely used method for deception detection is building cognitive load, which is the logic of the cognitive interview (CI), and its effectiveness for adults has been established. This meta-analysis delves into the effectiveness of inducing cognitive load as a means of enhancing veracity detection during interviews with children. Additionally, the effectiveness of cognitive load on the total number of events children recall is assessed as a second part of the analysis. The current meta-analysis includes ten effect sizes drawn from a database search. For the effect-size calculation, Hedges’ g was used with a random-effects model in CMA version 2. A heterogeneity analysis was conducted to detect potential moderators. The overall result indicated that cognitive load had no significant effect on veracity outcomes (g = 0.052, 95% CI [-0.006, 1.25]). However, a high level of heterogeneity was found (I² = 92%). Age, participant characteristics, interview setting and characteristics of the interviewer were coded as possible moderators to explain the variance. Age was a significant moderator (β = .021, p = .03, R² = 75%), but the analysis did not reveal statistically significant effects for the other potential moderators: participant characteristics (Q = 0.106, df = 1, p = .744), interview setting (Q = 2.04, df = 1, p = .154), and characteristics of the interviewer (Q = 2.96, df = 1, p = .086). For the second outcome, the total number of events recalled, the overall effect was significant (g = 4.121, 95% CI [2.256, 5.985]): cognitive load was effective for total recalled events when interviewing children. All in all, while age plays a crucial role in determining the impact of cognitive load on veracity, the surrounding context, interviewer attributes and inherent participant traits may not significantly alter the relationship. These findings shed light on the need for more focused, age-specific methods when using cognitive load measures. With further studies in this field, it may be possible to improve the precision and dependability of deceit detection in children’s interviews.
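
For reference, a minimal sketch of the effect-size computation behind such a meta-analysis: Hedges' g with its small-sample correction, pooled with a standard DerSimonian-Laird random-effects step (a stand-in for CMA's implementation). The group statistics below are invented, not the ten included studies.

```python
import numpy as np

def hedges_g(m1, m2, s1, s2, n1, n2):
    """Standardized mean difference with small-sample (Hedges) correction."""
    sp = np.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp
    j = 1 - 3 / (4 * (n1 + n2) - 9)          # small-sample correction factor
    g = j * d
    var_g = j**2 * ((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    return g, var_g

# Invented per-study (g, variance) pairs standing in for the included studies.
effects = np.array([hedges_g(0.6, 0.5, 1.0, 1.0, 30, 30),
                    hedges_g(0.4, 0.5, 0.9, 1.1, 25, 25),
                    hedges_g(0.7, 0.4, 1.2, 1.0, 40, 40)])
g, v = effects[:, 0], effects[:, 1]

# DerSimonian-Laird random-effects pooling.
w = 1 / v
q = np.sum(w * (g - np.sum(w * g) / np.sum(w))**2)
tau2 = max(0.0, (q - (len(g) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
w_star = 1 / (v + tau2)
g_pooled = np.sum(w_star * g) / np.sum(w_star)
se = np.sqrt(1 / np.sum(w_star))
print(f"pooled g = {g_pooled:.3f}, "
      f"95% CI [{g_pooled - 1.96*se:.3f}, {g_pooled + 1.96*se:.3f}]")
```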

Keywords: deceit detection, cognitive load, memory recall, children interviews, meta-analysis

Procedia PDF Downloads 40
8064 The 'Toshi-No-Sakon' Phenomenon: A Trend in Japanese Family Formations

Authors: Franco Lorenzo D. Morales

Abstract:

‘Toshi-no-sakon,’ which translates as ‘age gap marriage,’ is a term that has been popularized by celebrity couples in the Japanese entertainment industry. Japan is distinct among developed nations for its rapidly aging population, declining marital and fertility rates, and the reinforcement of traditional gender roles. Statistical data show that the average age of marriage in Japan is increasing every year, indicating a growing tendency toward late marriage. As a result, the government has been trying to curb the declining trends by encouraging marriage and childbirth among the populace. This graduate thesis seeks to analyze the ‘toshi-no-sakon’ phenomenon in light of Japan’s current economic and social situation, and to see what the implications are for these kinds of married couples. This research also seeks to expound on age gaps within married couples, a factor rarely touched upon in Japanese family studies. A literature review was first performed in order to provide a framework to study ‘toshi-no-sakon’ from the perspective of four fields of study: marriage, family, aging, and gender. Numerous anonymous online statements by ‘toshi-no-sakon’ couples were then collected and analyzed, which brought to light a number of concerns. Couples in which the husband is the older partner were prioritized in order to narrow the focus of the research, and ‘toshi-no-sakon’ is only considered when the couple’s age gap is ten years or more. Current findings suggest that one of the perceived merits for a woman marrying an older man is guaranteed financial security. However, this has been shown to be untrue, as a number of couples express concern regarding their financial situation, which could be attributed to the husband’s socio-economic status. Having an older husband who is approaching the age of retirement presents another dilemma, as the wife would be more obliged to provide care for her aging husband. This notion of the wife as caregiver likely stems from an arrangement once common in Japanese families in which the wife primarily cared for her husband’s elderly parents. Childbearing is another concern, as couples are pressured to have a child right away due to the age of the husband, in addition to limiting the couple’s ideal number of children. This is another problematic aspect, as the husband would have to provide income until his child has finished their education, implying that retirement would have to be delayed indefinitely. It is highly recommended that future studies conduct face-to-face interviews with couples and families who fall under the category of ‘toshi-no-sakon’ in order to gain a more in-depth perspective on the phenomenon and to reveal any undiscovered trends. Cases in which the wife is the older partner in the relationship should also be given focus in future studies involving ‘toshi-no-sakon’.

Keywords: age gap, family structure, gender roles, marriage trends

Procedia PDF Downloads 340
8063 A Novel Heuristic for Analysis of Large Datasets by Selecting Wrapper-Based Features

Authors: Bushra Zafar, Usman Qamar

Abstract:

Large data sample sizes and dimensions reduce the effectiveness of conventional data mining methodologies. Data mining techniques are important tools for collecting knowledgeable information from a variety of databases; they provide supervised learning in the form of classification to design models describing vital data classes, where the structure of the classifier is based on the class attribute. Classification efficiency and accuracy are often influenced to a great extent by noisy and undesirable features in real application data sets. The inherent nature of a data set often masks its quality, leaving quite few practical approaches for analysis. To our knowledge, we present for the first time an approach for investigating the structure and quality of datasets by providing a targeted analysis of the localization of noisy and irrelevant features. Machine learning relies on feature selection as a pre-processing step, which allows a few features to be selected from a larger number as a subset, reducing the space according to a certain evaluation criterion. The primary objective of this study is to trim down the scope of the given data sample by searching for a small set of important features which may result in good classification performance. For this purpose, a heuristic for wrapper-based feature selection using a genetic algorithm is employed, with an external classifier used for discriminative feature selection. Features are selected based on their number of occurrences in the chosen chromosomes. Sample datasets have been used to demonstrate the proposed idea effectively. The proposed method improves the average accuracy on different datasets to about 95%. Experimental results illustrate that the proposed algorithm increases the accuracy of prediction of different diseases.
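
A condensed sketch of the wrapper idea described above, assuming scikit-learn: a plain bit-string genetic algorithm whose fitness is cross-validated KNN accuracy, with features finally scored by their occurrence in the fittest chromosomes. The dataset, population size and rates are illustrative choices, not the paper's settings.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X, y = load_breast_cancer(return_X_y=True)
n_feat, pop_size, n_gen = X.shape[1], 30, 20

def fitness(mask):
    """Cross-validated KNN accuracy on the selected feature subset."""
    if mask.sum() == 0:
        return 0.0
    clf = KNeighborsClassifier(n_neighbors=5)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

pop = rng.integers(0, 2, size=(pop_size, n_feat))     # random bit-string chromosomes
for _ in range(n_gen):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)][-pop_size // 2:]  # keep fittest half
    children = []
    while len(children) < pop_size - len(parents):
        a, b = parents[rng.integers(len(parents), size=2)]
        cut = rng.integers(1, n_feat)                   # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        child[rng.random(n_feat) < 0.02] ^= 1           # bit-flip mutation
        children.append(child)
    pop = np.vstack([parents, children])

# Rank features by occurrence in the final (fittest) chromosomes.
occurrence = pop.sum(axis=0)
print("top features:", np.argsort(occurrence)[::-1][:10])
```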

Keywords: data mining, genetic algorithm, KNN algorithm, wrapper-based feature selection

Procedia PDF Downloads 302
8062 Experimental Simulations of Aerosol Effect to Landfalling Tropical Cyclones over Philippine Coast: Virtual Seeding Using WRF Model

Authors: Bhenjamin Jordan L. Ona

Abstract:

Weather modification is the act of altering weather systems, and it attracts considerable scientific interest. Cloud seeding is a common form of weather alteration. On the same principle, tropical cyclone mitigation experiments follow the methods of cloud seeding, with intensity also to be accounted for. This study will present the effects of aerosols on tropical cyclone cloud microphysics and intensity. The Weather Research and Forecasting (WRF) model framework, incorporating the Thompson aerosol-aware scheme, is the prime host supporting the aerosol-cloud microphysics calculations for cloud condensation nuclei (CCN) ingested into tropical cyclones before they make landfall over the Philippine coast. The coupled microphysical and radiative effects of aerosols will be analyzed using numerical data for Tropical Storm Ketsana (2009), Tropical Storm Washi (2011), and Typhoon Haiyan (2013), with varying CCN number concentrations per simulation per typhoon: clean maritime, polluted, and very polluted, having initial aerosol number concentrations of 300 cm⁻³, 1000 cm⁻³, and 2000 cm⁻³, respectively. Aerosol species such as sulphates, sea salts, black carbon, and organic carbon will be used as cloud nuclei, and mineral dust as ice nuclei (IN). To make the study as realistic as possible, the period of biomass burning due to forest fires in Indonesia starting in October 2015, during which Typhoons Mujigae/Kabayan and Koppu/Lando were effectively seeded with aerosol emissions comprising mainly black carbon and organic carbon, will also be considered. The emission data to be used are from NASA's Moderate Resolution Imaging Spectroradiometer (MODIS). The physical mechanism(s) of intensification or de-intensification of tropical cyclones will be determined after the seeding experiment analyses.

Keywords: aerosol, CCN, IN, tropical cyclone

Procedia PDF Downloads 279
8061 Liquid-Liquid Plug Flow Characteristics in Microchannel with T-Junction

Authors: Anna Yagodnitsyna, Alexander Kovalev, Artur Bilsky

Abstract:

The efficiency of certain technological processes in two-phase microfluidics, such as emulsion production, nanomaterial synthesis, nitration, extraction processes, etc., depends on the two-phase flow regimes in microchannels. For practical applications in chemistry and biochemistry, it is very important to predict the expected flow pattern for a large variety of fluids and channel geometries. In the case of immiscible liquids, plug flow is a typical and optimal regime for chemical reactions and needs to be predicted by empirical data or correlations. In this work, flow patterns of immiscible liquid-liquid flow in a rectangular microchannel with a T-junction are investigated. Three liquid-liquid flow systems are considered, viz. kerosene – water, paraffin oil – water and castor oil – paraffin oil. Different flow patterns such as parallel flow, slug flow, plug flow, dispersed (droplet) flow, and rivulet flow are observed for different velocity ratios. A new flow pattern, parallel flow with a steady wavy interface (serpentine flow), has been found. It is shown that flow pattern maps based on Weber numbers for different liquid-liquid systems do not match well. The Weber number multiplied by the Ohnesorge number is proposed as a parameter to generalize the flow maps. Flow maps based on this parameter superpose well for all the liquid-liquid systems of this work and of other experiments. Plug length and velocity are measured for the plug flow regime. When the dispersed liquid wets the channel walls, plug length cannot be predicted by known empirical correlations. By means of the particle tracking velocimetry technique, instantaneous velocity fields in the plug flow regime were measured. Flow circulation inside the plug was calculated using the velocity data, which can be useful for mass flux prediction in chemical reactions.
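
For concreteness, the dimensionless groups used for the generalized flow maps can be computed as follows; the fluid properties below are illustrative values for a water-like phase in a small channel, not the paper's measurements.

```python
import math

def weber(rho, v, L, sigma):
    """We = rho * v^2 * L / sigma: inertia relative to interfacial tension."""
    return rho * v**2 * L / sigma

def ohnesorge(mu, rho, sigma, L):
    """Oh = mu / sqrt(rho * sigma * L): viscous vs inertial-capillary forces."""
    return mu / math.sqrt(rho * sigma * L)

# Illustrative values: water in a 200-micron channel at 0.1 m/s.
rho, mu, sigma, L, v = 1000.0, 1.0e-3, 0.04, 200e-6, 0.1
We = weber(rho, v, L, sigma)
Oh = ohnesorge(mu, rho, sigma, L)
print(f"We = {We:.3e}, Oh = {Oh:.3e}, We*Oh = {We * Oh:.3e}")
```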

Keywords: flow patterns, hydrodynamics, liquid-liquid flow, microchannel

Procedia PDF Downloads 374
8060 A Statistical Approach to Predict and Classify the Commercial Hatchability of Chickens Using Extrinsic Parameters of Breeders and Eggs

Authors: M. S. Wickramarachchi, L. S. Nawarathna, C. M. B. Dematawewa

Abstract:

Hatchery performance is critical for the profitability of poultry breeder operations. Some extrinsic parameters of eggs and breeders increase or decrease hatchability. This study aims to identify the extrinsic parameters affecting the commercial hatchability of local chickens' eggs and to determine the most efficient model for classifying hatchability rates greater than 90%. In this study, seven extrinsic parameters were considered: egg weight, moisture loss, breeders' age, number of fertilised eggs, shell width, shell length, and shell thickness. Multiple linear regression was performed to determine the most influential variables on hatchability. First, the correlation between each parameter and hatchability was checked. Then a multiple regression model was developed, and the accuracy of the fitted model was evaluated. Linear Discriminant Analysis (LDA), Classification and Regression Trees (CART), k-Nearest Neighbors (kNN), Support Vector Machines (SVM) with a linear kernel, and Random Forest (RF) algorithms were applied to classify hatchability. This grouping process was conducted using binary classification techniques. Hatchability was negatively correlated with egg weight, breeders' age, shell width and shell length, and positive correlations were identified with moisture loss, number of fertilised eggs, and shell thickness. Multiple linear regression models were more accurate than simple linear models, with the highest coefficient of determination (R² = 94%) and minimum AIC and BIC values. According to the classification results, RF, CART, and kNN achieved the highest accuracy values of 0.99, 0.975, and 0.972, respectively, for the commercial hatchery process. Therefore, RF is the most appropriate machine learning algorithm for classifying whether breeder outcomes are economically profitable in a commercial hatchery.
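
A minimal sketch of the classification stage, assuming scikit-learn; the seven extrinsic parameters follow the abstract, but the file name and the labeling of records against a 90% hatchability threshold are assumptions for illustration.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

FEATURES = ["egg_weight", "moisture_loss", "breeder_age",
            "n_fertilised_eggs", "shell_width", "shell_length", "shell_thickness"]

df = pd.read_csv("hatchery.csv")                        # hypothetical file name
df["label"] = (df["hatchability"] > 0.90).astype(int)   # binary target: >90% hatch rate

X_tr, X_te, y_tr, y_te = train_test_split(df[FEATURES], df["label"],
                                          test_size=0.3, random_state=1)
rf = RandomForestClassifier(n_estimators=500, random_state=1).fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, rf.predict(X_te)))
```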

Keywords: classification models, egg weight, fertilised eggs, multiple linear regression

Procedia PDF Downloads 73
8059 The Effects of Erythromycin and Bethanechol on Abomasal Emptying Rate in Healthy, Premature and Diarrheic Calves

Authors: Sebnem Canikli Engin, Mutlu Sevinc, Hasan Guzelbektes

Abstract:

In this study, we aim to define the effects of the prokinetic agents erythromycin and bethanechol on the rate of abomasal emptying in healthy, diarrheic and premature calves. The work involved 5 healthy calves, 12 diarrheic calves and 12 premature calves, amounting to a total of 29 calves. In the healthy-calf study, the same 5 calves were used for the control, erythromycin and bethanechol arms (with a 48-hour waiting period between each). In the diarrheic-calf study, 12 diarrheic calves were used (4 in the control group, 4 in the bethanechol group and the remaining 4 in the erythromycin group). In the premature-calf study, 12 premature calves were used (4 in the control group, 4 in the bethanechol group and the remaining 4 in the erythromycin group). Erythromycin was applied at a dose of 10 mg/kg IM in each erythromycin group, and bethanechol at a dose of 0.07 mg/kg IM in each bethanechol group. No drugs were applied to the control groups, and milk replacer was given to all calves. 50 mg/kg acetaminophen and 25 g/L glucose were added to the milk replacer so that the speed of gastrointestinal motility could be evaluated from the absorption of acetaminophen and glucose. Blood samples were taken before the milk replacer was given and 30, 60, 90, 120, 180, 240 and 300 minutes afterwards. Respiratory rates and heart rates were also recorded during the test period. No changes were observed in heart rate, respiratory rate or general condition in any group after drug application. It was observed that the feces of some calves became slightly watery and viscous, and that premature calves generally defecated after 180 minutes. When the Cmax, Tmax and AUC values of acetaminophen and glucose in the premature group after erythromycin application are compared with the control group's, we obtain higher Cmax (P<0.05), shorter Tmax and greater AUC (P>0.05) values. In conclusion, according to the clinical and laboratory findings, it may be stated that application of a 10 mg/kg IM dose of erythromycin provided faster abomasal emptying in premature calves.
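
The marker kinetics reduce to three quantities per animal; the sketch below shows how Cmax, Tmax and AUC could be obtained from the sampling schedule above (the concentration values are invented, not study data).

```python
import numpy as np

t = np.array([0, 30, 60, 90, 120, 180, 240, 300])        # min; the sampling times used
c = np.array([0.0, 4.1, 7.8, 9.2, 8.5, 6.0, 3.9, 2.2])   # invented acetaminophen conc., ug/mL

cmax = c.max()                 # peak concentration
tmax = t[c.argmax()]           # time of peak
auc = np.trapz(c, t)           # trapezoidal AUC over 0-300 min
print(f"Cmax = {cmax} ug/mL at Tmax = {tmax} min, AUC(0-300) = {auc:.1f} ug*min/mL")
```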

Keywords: abomasal emptying, bethanechol, calf, erythromycin

Procedia PDF Downloads 317
8058 New Ways of Vocabulary Enlargement

Authors: S. Pesina, T. Solonchak

Abstract:

Lexical invariants, being a sort of stereotype within the frames of ordinary consciousness, are created by the members of a language community as a result of a uniform division of reality. The invariant meaning is formed in a person's mind gradually, in the course of different actualizations of secondary meanings in various contexts. We understand the lexical invariant as an abstract language essence containing a set of semantic components. In one of its configurations, it is the basis of all, or a number of, the meanings making up the semantic structure of the word.

Keywords: lexical invariant, invariant theories, polysemantic word, cognitive linguistics

Procedia PDF Downloads 306
8057 Evaluation of Possible Application of Cold Energy in Liquefied Natural Gas Complexes

Authors: A. I. Dovgyalo, S. O. Nekrasova, D. V. Sarmin, A. A. Shimanov, D. A. Uglanov

Abstract:

Usually, liquefied natural gas (LNG) gasification is performed using atmospheric heat. Producing liquefied gas consumes a considerable amount of energy (about 1 kW∙h per 1 kg of LNG). This study offers a number of solutions allowing the cold energy of LNG to be used. First, the paper evaluates the application of turbines installed behind the evaporator in an LNG complex; through their work, additional energy can be obtained and then converted into electricity. At an LNG consumption of G = 1000 kg/h, an expansion work capacity of about 10 kW can be reached. Herewith, an open Rankine cycle is realized, in which a low-capacity cryo-pump (about 500 W) performs its normal function of providing the cycle pressure. The application of a Stirling engine within the LNG complex, also discussed, gives a further possibility to realize the cold energy. Considering that the efficiency coefficient of a Stirling engine reaches 50%, an LNG consumption of G = 1000 kg/h may yield a capacity of about 142 kW from such a thermal machine. The capacity of the pump required to compensate for pressure losses when LNG passes through the hydraulic channel will be 500 W. Apart from the above-mentioned converters, thermoelectric generating packages (TGP), which are widely used now, can be proposed. At present, the modern thermoelectric generator line provides electric capacity with a coefficient of efficiency of up to 15%. In the proposed complex, it is suggested to install the thermoelectric generators on the evaporator surface in such a way that the cold end contacts the evaporator's surface and the hot end the atmosphere. At an LNG consumption of G = 1000 kg/h and the specified coefficient of efficiency, the heat flow Qh will be about 32 kW. The derivable net electric power will be P = 4.2 kW, and the number of packages will amount to about 104 pieces. The calculations carried out demonstrate the promise of research in this field of propulsion plant development, and they allow the energy-saving potential of liquefied natural gas and other cryogenic technologies to be realized.
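
As a quick consistency check on the thermoelectric variant, the figures quoted above can be combined as follows; this is a back-of-envelope sketch, and the gap between the 15% catalogue ceiling and the reported net power would reflect contact and conversion losses.

```python
# Back-of-envelope check of the thermoelectric option, using the abstract's figures.
G = 1000.0          # LNG consumption, kg/h
Q_h = 32.0          # heat flow through the generators, kW (from the abstract)
eta_max = 0.15      # catalogue efficiency ceiling of modern TEG modules

P_ceiling = eta_max * Q_h   # ideal electric output: 4.8 kW
P_net = 4.2                 # net output reported in the abstract, kW
n_packages = 104
print(f"ideal {P_ceiling:.1f} kW vs reported {P_net} kW "
      f"-> {P_net / n_packages * 1000:.0f} W per package, "
      f"effective efficiency {P_net / Q_h:.1%}")
```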

Keywords: cold energy, gasification, liquefied natural gas, electricity

Procedia PDF Downloads 258
8056 Investigating the Relationship Between the Auditor’s Personality Type and the Quality of Financial Reporting in Companies Listed on the Tehran Stock Exchange

Authors: Seyedmohsen Mortazavi

Abstract:

The purpose of this research is to investigate the effect of internal auditors' personality types on the quality of financial reporting in companies admitted to the Tehran Stock Exchange. Personality type is one of the issues emphasized in the field of auditor behavior, and this field has attracted the attention of shareholders and stock companies, because auditors' personalities can affect the type of financial reporting and its quality. The research is applied in terms of purpose and descriptive-correlational in terms of method, and a researcher-made questionnaire was used to examine the research hypotheses. The statistical population of the research is all the auditors, accountants and financial managers of the companies admitted to the Tehran Stock Exchange; due to their large number and the uncertainty of their exact number, 384 people were taken as a statistical sample using Morgan's table. The researcher-made questionnaire was approved by experts in the field, and then its validity and reliability were assessed using software. For the validity of the questionnaire, confirmatory factor analysis was first examined; then, using divergent and convergent validity, the Fornell-Larcker criterion and the cross-loadings test, the validity of the questionnaire was confirmed. Next, the reliability of the questionnaire was examined using Cronbach's alpha and composite reliability, and the results of these two tests showed the questionnaire's reliability to be appropriate. After checking validity and reliability, PLS software was used to test the research hypotheses. The results showed that the personalities of internal auditors can affect the quality of financial reporting. The personality traits investigated in this research include neuroticism, extroversion, flexibility, agreeableness and conscientiousness, and all of these can affect the quality of financial reporting.
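
For reference, one of the reliability statistics mentioned above, Cronbach's alpha, reduces to a short computation; the Likert responses below are invented placeholders, not survey data.

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, n_items) matrix of Likert-scale responses."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of respondents' totals
    return k / (k - 1) * (1 - item_vars / total_var)

# Invented 5-point responses for one questionnaire scale (6 respondents x 4 items).
scale = [[4, 5, 4, 4], [3, 3, 4, 3], [5, 5, 5, 4],
         [2, 3, 2, 3], [4, 4, 5, 4], [3, 4, 3, 3]]
print(f"Cronbach's alpha = {cronbach_alpha(scale):.2f}")
```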

Keywords: flexibility, quality of financial reporting, agreeableness, conscientiousness

Procedia PDF Downloads 82
8055 Feature Evaluation Based on Random Subspace and Multiple-K Ensemble

Authors: Jaehong Yu, Seoung Bum Kim

Abstract:

Clustering analysis can facilitate the extraction of intrinsic patterns in a dataset and reveal its natural groupings without requiring class information. For effective clustering analysis in high-dimensional datasets, unsupervised dimensionality reduction is an important task. Unsupervised dimensionality reduction can generally be achieved by feature extraction or feature selection. In many situations, feature selection methods are more appropriate than feature extraction methods because of their clear interpretation with respect to the original features. Unsupervised feature selection can be categorized into feature subset selection and feature ranking methods, and we focused on unsupervised feature ranking methods, which evaluate features based on their importance scores. Recently, several unsupervised feature ranking methods were developed based on ensemble approaches to achieve higher accuracy and stability. However, most of the ensemble-based feature ranking methods require the true number of clusters. Furthermore, these algorithms evaluate the feature importance depending on the ensemble clustering solution, and they produce undesirable evaluation results if the clustering solutions are inaccurate. To address these limitations, we proposed an ensemble-based feature ranking method with random subspace and multiple-k ensemble (FRRM). The proposed FRRM algorithm evaluates the importance of each feature with the random subspace ensemble, and all evaluation results are combined into ensemble importance scores. Moreover, through the use of the multiple-k ensemble idea, FRRM does not require the true number of clusters to be determined in advance. Experiments on various benchmark datasets were conducted to examine the properties of the proposed FRRM algorithm and to compare its performance with that of existing feature ranking methods. The experimental results demonstrated that the proposed FRRM outperformed the competitors.
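
A simplified sketch of the idea, assuming scikit-learn: each ensemble member clusters a random feature subspace with a randomly drawn k, and a feature is scored by the average clustering quality of the subspaces containing it. Silhouette is used here as a stand-in quality criterion, not necessarily the paper's exact importance measure.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import load_wine
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
X, _ = load_wine(return_X_y=True)
n_feat = X.shape[1]
scores, counts = np.zeros(n_feat), np.zeros(n_feat)

for _ in range(200):                              # ensemble of random subspaces
    subspace = rng.choice(n_feat, size=max(2, n_feat // 3), replace=False)
    k = rng.integers(2, 7)                        # multiple-k: no fixed cluster count
    labels = KMeans(n_clusters=k, n_init=5, random_state=0).fit_predict(X[:, subspace])
    quality = silhouette_score(X[:, subspace], labels)
    scores[subspace] += quality                   # credit quality to the features used
    counts[subspace] += 1

importance = scores / np.maximum(counts, 1)       # average quality per feature
print("feature ranking (best first):", np.argsort(importance)[::-1])
```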

Keywords: clustering analysis, multiple-k ensemble, random subspace-based feature evaluation, unsupervised feature ranking

Procedia PDF Downloads 313
8054 Investigating the Flow Physics within Vortex-Shockwave Interactions

Authors: Frederick Ferguson, Dehua Feng, Yang Gao

Abstract:

No doubt, current CFD tools have a great many technical limitations, and active research is being done to overcome them. Current areas of limitation include vortex-dominated flows, separated flows, and turbulent flows. In general, turbulent flows are unsteady solutions to the fluid dynamic equations, and instances of these solutions can be computed directly from the equations. One commonly implemented approach is known as 'direct numerical simulation', DNS. This approach requires a spatial grid that is fine enough to capture the smallest length scale of the turbulent fluid motion, the Kolmogorov scale. It is of interest to note that the Kolmogorov scale must be resolved throughout the domain of interest and at a correspondingly small time step. In typical problems of industrial interest, the ratio of the length scale of the domain to the Kolmogorov length scale is so great that the required grid becomes prohibitively large. As a result, the available computational resources are usually inadequate for DNS-related tasks; at this time in its development, DNS is not applicable to industrial problems. In this research, an attempt is made to develop a numerical technique that is capable of delivering DNS-quality solutions at the scale required by industry. To date, this technique has delivered very accurate preliminary results for steady and unsteady, viscous and inviscid, compressible and incompressible, and both high and low Reynolds number flow fields. Herein, it is proposed that the Integro-Differential Scheme (IDS) be applied to a set of vortex-shockwave interaction problems, with the goal of investigating the non-stationary physics within the resulting interaction regions. In the proposed paper, the IDS formulation and its numerical error capability will be described. Further, the IDS will be used to solve the inviscid and viscous Burgers equation, with the goal of analyzing the solutions over a considerable length of time, thus demonstrating the unsteady capabilities of the IDS. Finally, the IDS will be used to solve a set of fluid dynamic problems involving strong vortex interactions. Plans are to solve the following problems: the travelling wave and vortex problems over considerable lengths of time, the normal shockwave–vortex interaction problem for low supersonic conditions, and the reflected oblique shock–vortex interaction problem. The IDS solutions obtained in each of these cases will be explored further in efforts to determine the distributed density gradients and vorticity, as well as the Q-criterion. Parametric studies will be conducted to determine the effects of the Mach number on the intensity of vortex-shockwave interactions.
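
The Burgers benchmark mentioned above has a compact reference baseline; the sketch below uses a standard explicit finite-difference discretization (not the IDS itself) against which long-time behaviour could be compared. Grid size, viscosity and initial condition are illustrative.

```python
import numpy as np

# Viscous Burgers: u_t + u*u_x = nu*u_xx on a periodic domain, explicit time marching.
nx, nu, dt = 400, 0.01, 1e-4
x = np.linspace(0, 2 * np.pi, nx, endpoint=False)
dx = x[1] - x[0]
u = np.sin(x) + 0.5                      # smooth initial wave

for _ in range(50000):                   # march to t = 5
    um, up = np.roll(u, 1), np.roll(u, -1)
    conv = u * np.where(u > 0, (u - um) / dx, (up - u) / dx)   # upwind convection
    diff = nu * (up - 2 * u + um) / dx**2                      # central diffusion
    u = u + dt * (diff - conv)

print("max |u| at t = 5:", np.abs(u).max())
```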

Keywords: vortex dominated flows, shockwave interactions, high Reynolds number, integro-differential scheme

Procedia PDF Downloads 117
8053 Thermo-Mechanical Analysis of Composite Structures Utilizing a Beam Finite Element Based on Global-Local Superposition

Authors: Andre S. de Lima, Alfredo R. de Faria, Jose J. R. Faria

Abstract:

Accurate prediction of thermal stresses is particularly important for laminated composite structures, as large temperature changes may occur during fabrication and field application. The normal transverse deformation plays an important role in the prediction of such stresses, especially for problems involving thick laminated plates subjected to uniform temperature loads. Bearing this in mind, the present study aims to investigate the thermo-mechanical behavior of laminated composite structures using a new beam element based on global-local superposition, accounting for through-the-thickness effects. The element formulation is based on a global-local superposition in the thickness direction, utilizing a cubic global displacement field in combination with a linear layerwise local displacement distribution, which assures zig-zag behavior of the stresses and displacements. By enforcing interlaminar stress (normal and shear) and displacement continuity, as well as free conditions at the upper and lower surfaces, the number of degrees of freedom in the model is maintained independently of the number of layers. Moreover, the proposed formulation allows for the determination of transverse shear and normal stresses directly from the constitutive equations, without the need of post-processing. Numerical results obtained with the beam element were compared to analytical solutions, as well as results obtained with commercial finite elements, rendering satisfactory results for a range of length-to-thickness ratios. The results confirm the need for an element with through-the-thickness capabilities and indicate that the present formulation is a promising alternative to such analysis.

Keywords: composite beam element, global-local superposition, laminated composite structures, thermal stresses

Procedia PDF Downloads 141
8052 Evaluation of the Gamma-H2AX Expression as a Biomarker of DNA Damage after X-Ray Radiation in Angiography Patients

Authors: Reza Fardid, Aliyeh Alipour

Abstract:

Introduction: Coronary heart disease (CHD) is one of the most common and deadliest diseases. Coronary angiography is an important tool for its diagnosis and treatment. Because angiography is performed with exposure to ionizing radiation, it can lead to harmful effects. Ionizing radiation induces double-strand breaks in DNA, a potentially life-threatening injury. The purpose of the present study is to investigate the phosphorylation of histone H2AX at the location of the double-strand break in peripheral blood lymphocytes as an indication of the biological effects of radiation on angiography patients. Materials and Methods: The method is based on measurement of the level of histone phosphorylation (gamma-H2AX, γH2AX) at serine 139 after the formation of DNA double-strand breaks. 5 cc of blood was sampled from each of 24 angiography patients before and after irradiation. Blood lymphocytes were isolated, fixed and stained with specific γH2AX antibodies. Finally, the γH2AX signal, as an indicator of double-strand breaks, was measured with the flow cytometry technique. Results and discussion: In all patients, an increase was observed in the number of DNA double-strand breaks after irradiation (20.15 ± 14.18) compared to before exposure (1.52 ± 0.34). Also, the mean number of DNA double-strand breaks showed a linear correlation with the dose-area product (DAP). However, although the induction of DNA double-strand breaks is associated with the radiation dose received by patients, the effect of individual factors such as radiosensitivity and regenerative capacity should not be ignored. If, in the future, the DNA damage response can be measured in every angiography patient and used as a biomarker of patient dose, the impact at the public health level could be considerable. Conclusion: Using flow cytometry readings, which are performed automatically, it is possible to detect γH2AX in large numbers of blood cells. Therefore, the use of this technique could play a significant role in monitoring patients.

Keywords: coronary angiography, DNA double-strand breaks, γH2AX, ionizing radiation

Procedia PDF Downloads 165
8051 Design and Development of High Strength Aluminium Alloy from Recycled 7xxx-Series Material Using Bayesian Optimisation

Authors: Alireza Vahid, Santu Rana, Sunil Gupta, Pratibha Vellanki, Svetha Venkatesh, Thomas Dorin

Abstract:

Aluminum is the preferred material for lightweight applications, and its alloys are constantly improving. The high-strength 7xxx alloys have been extensively used for structural components in the aerospace and automobile industries for the past 50 years. In the next decade, a great number of airplanes will be retired, providing an obvious source of valuable used metals and great demand for cost-effective methods to re-use these alloys. The design of proper aerospace alloys is primarily based on optimizing strength and ductility, both of which can be improved by controlling the additional alloying elements as well as the heat treatment conditions. In this project, we explore the design of high-performance alloys with 7xxx as a base material. These designed alloys have to be optimized and improved to be comparable with modern 7xxx-series alloys and to remain competitive for aircraft manufacturing. Aerospace alloys are extremely complex, with multiple alloying elements and numerous processing steps, making optimization often intensive and costly. In the present study, we used the Bayesian optimization algorithm, a well-known adaptive design strategy, to optimize this multi-variable system. An Al alloy was proposed, and the relevant heat treatment schedules were optimized using the tensile yield strength as the output to maximize. The designed alloy has a maximum yield strength and ultimate tensile strength of more than 730 and 760 MPa, respectively, and is thus comparable to modern high-strength 7xxx-series alloys. The microstructure of this alloy was characterized by electron microscopy, indicating that the increased strength of the alloy is due to the presence of a high number density of refined precipitates.
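
A minimal sketch of such an optimization loop, assuming scikit-optimize is available; the two-stage ageing schedule, parameter bounds and synthetic yield-strength response are invented placeholders for the real experimental loop described in the abstract.

```python
import numpy as np
from skopt import gp_minimize
from skopt.space import Real

# Hypothetical two-stage ageing schedule: (temp1 C, time1 h, temp2 C, time2 h).
space = [Real(100, 140, name="T1"), Real(1, 24, name="t1"),
         Real(150, 200, name="T2"), Real(1, 24, name="t2")]

def negative_yield_strength(params):
    """Stand-in for a tensile test: a synthetic response surface with a single peak.
    In a real campaign this function would trigger a heat treatment and a test."""
    T1, t1, T2, t2 = params
    ys = (700 - 0.05 * (T1 - 120)**2 - 2 * (t1 - 8)**2 / t1
              - 0.03 * (T2 - 175)**2 - 1.5 * (t2 - 10)**2 / t2)
    return -ys + np.random.normal(0, 2)   # BO minimizes; add measurement noise

res = gp_minimize(negative_yield_strength, space, n_calls=40, random_state=3)
print("best schedule:", res.x, "-> predicted YS ~", round(-res.fun), "MPa")
```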

Keywords: aluminum alloys, Bayesian optimization, heat treatment, tensile properties

Procedia PDF Downloads 101
8050 Destination Decision Model for Cruising Taxis Based on Embedding Model

Authors: Kazuki Kamada, Haruka Yamashita

Abstract:

In Japan, taxis are a popular means of transportation, and the taxi industry is a big business. In recent years, however, the industry has faced the difficult problem of a declining number of taxi drivers. In the taxi business, mainly three passenger-catching methods are applied. One style is "cruising", in which drivers catch passengers while driving on the road. The second is "waiting", in which drivers wait for passengers near places with high demand for taxis, such as the entrances of hospitals and train stations. The third is "dispatching", in which taxis are allocated based on contact through the taxi company. Above all, cruising taxi drivers need experience and intuition to find passengers, and it is difficult to decide the destination for cruising. A strong recommendation system for cruising taxis would support new drivers in finding passengers and could be a solution to the decreasing number of drivers in the taxi industry. In this research, we propose a method for recommending a destination to cruising taxi drivers. As a machine learning technique, embedding models, which embed high-dimensional data into a low-dimensional space, are widely used in data analysis to represent the relationships of meaning between data items clearly. Taxi drivers have favorite courses based on their experience, and the courses differ for each driver. We assume that the courses of cruising taxis have meanings, such as courses for finding businessman passengers (going around the business areas of the city or to the main stations) and courses for finding traveler passengers (going around sightseeing places or big hotels), and we extract the meaning of their destinations. We analyze the cruising history data of taxis based on the embedding model and propose a recommendation system for destinations. Finally, we demonstrate the recommendation of destinations for cruising taxi drivers based on real-world data analysis using the proposed method.
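
A minimal sketch of the embedding step, assuming gensim: cruising histories are treated as sequences of zone IDs and embedded word2vec-style, which is one plausible reading of the method rather than the authors' exact model. All zone names are invented.

```python
from gensim.models import Word2Vec

# Hypothetical cruising histories: each course is a sequence of visited zone IDs.
histories = [
    ["shinjuku_st", "office_blk_a", "office_blk_b", "city_hall"],
    ["airport", "hotel_x", "sightseeing_y", "hotel_z"],
    ["shinjuku_st", "office_blk_b", "office_blk_a", "main_st"],
]

# Skip-gram embedding: zones visited in similar courses get nearby vectors.
model = Word2Vec(histories, vector_size=32, window=2, min_count=1, sg=1, epochs=200)

# Recommend next destinations similar to the driver's current zone.
print(model.wv.most_similar("office_blk_a", topn=3))
```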

Keywords: taxi industry, decision making, recommendation system, embedding model

Procedia PDF Downloads 122
8049 Role of Pulsed-Dye Laser in the Treatment of Inflammatory Acne Vulgaris

Authors: Shirajul Islam Khan, Muhammad Ashraful Alam Bhuiyan, Syeda Tania Begum

Abstract:

Introduction: Acne vulgaris is one of the most common dermatologic conditions and affects the vast majority of people at some point during their lifetime, so effective treatment is of major importance. The failure of the usual treatment modalities, the teratogenic and other severe side effects of retinoids, and resistance of P. acnes have focused attention on new therapeutic options for the treatment of acne. More recently, pulsed dye laser therapy has been reported to reduce acne lesion counts. The negligible morbidity of this treatment modality, and some other benefits for subsequent acne scar management, make this therapy attractive. Objective: The objective of this study is to assess the efficacy and safety of pulsed dye laser therapy in the treatment of inflammatory acne vulgaris. Materials and Methods: A prospective clinical trial was conducted in the Department of Dermatology and Venereology, Combined Military Hospital (CMH), Dhaka, to find out the role of the pulsed dye laser in the treatment of inflammatory acne vulgaris. The study was carried out with 60 patients with mild to moderate acne vulgaris, who were treated with pulsed dye laser therapy at baseline and after 4, 8, and 12 weeks. Results: Among the 60 patients with inflammatory acne, 42 (70%) were in the age group of less than 20 years, and 36 (60%) were female. Regarding the number of inflammatory lesions, the baseline mean number (± SD) was 12.77 ± 4.01; after 4 weeks of pulsed dye laser treatment it was 7.80 ± 4.11, after 8 weeks 6.10 ± 4.03, and after 12 weeks 4.17 ± 4.02. After 4 weeks of treatment, the level of improvement was excellent in 3.3% of patients, good in 10%, fair in 60%, and poor in 26.7%; after 8 weeks, excellent in 13.3%, good in 46.7%, fair in 30% and poor in 10%; and after 12 weeks, excellent in 56.7%, good in 13.3%, fair in 23.3% and poor in 6.7%. Regarding safety, of the 60 patients with inflammatory acne vulgaris treated by pulsed dye laser, 52 (86.7%) did not experience any side effects. Conclusions: On the basis of the study results, it can be concluded that the pulsed dye laser is highly effective and well tolerated by patients in the treatment of inflammatory acne.

Keywords: pulsed-dye laser, inflammatory acne, acne vulgaris, retinoids

Procedia PDF Downloads 68
8048 Mitigating Food Insecurity and Malnutrition by Promoting Carbon Farming via a Solar-Powered Enzymatic Composting Bioreactor with Arduino-Based Sensors

Authors: Molin A., De Ramos J. M., Cadion L. G., Pico R. L.

Abstract:

Malnutrition and food insecurity represent significant global challenges affecting millions of individuals, particularly in low-income and developing regions. The researchers created a solar-powered enzymatic composting bioreactor with an Arduino-based monitoring system for pH, humidity, and temperature. It processes mixed municipal solid waste, incorporating industrial enzymes and whey additives to accelerate composting and minimize the carbon footprint. Within 15 days, the bioreactor yielded 54.54% compost compared to 44.85% from traditional methods, an increase of nearly 10 percentage points. Tests showed that the bioreactor compost had 4.84% NPK and passed metal analysis standards, while the traditional pit compost had 3.86% NPK; both are suitable for agriculture. Statistical analyses, including ANOVA and Tukey's HSD test, revealed significant differences in agricultural yield across compost types based on leaf length, leaf width, and number of leaves. The study compared the effects of different composts on the growth of Brassica rapa subsp. chinensis (pechay) and Brassica juncea (mustasa). For pechay, significant effects of compost type on leaf length (F(5,84) = 62.33, η² = 0.79) and leaf width (F(5,84) = 12.35, η² = 0.42) were found. For mustasa, significant effects of compost type on leaf length (F(4,70) = 20.61, η² = 0.54), leaf width (F(4,70) = 19.24, η² = 0.52), and number of leaves (F(4,70) = 13.17, η² = 0.43) were observed. This study explores the effectiveness of the enzymatic composting bioreactor and its viability in promoting carbon farming as a solution to food insecurity and malnutrition.
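
A minimal sketch of the analyses named above (one-way ANOVA followed by Tukey's HSD), using simulated leaf-length data for three hypothetical compost groups rather than the study's measurements:

```python
# Minimal sketch, assuming simulated leaf lengths (cm) per compost group.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)
bioreactor = rng.normal(12.0, 1.5, 15)  # bioreactor compost (hypothetical)
pit = rng.normal(10.5, 1.5, 15)         # traditional pit compost (hypothetical)
control = rng.normal(9.0, 1.5, 15)      # no compost (hypothetical)

f_stat, p_val = f_oneway(bioreactor, pit, control)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_val:.3g}")

# Tukey's HSD identifies which pairs of compost types differ significantly.
values = np.concatenate([bioreactor, pit, control])
groups = ["bioreactor"] * 15 + ["pit"] * 15 + ["control"] * 15
print(pairwise_tukeyhsd(values, groups))
```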

Keywords: malnutrition, food insecurity, enzymatic composting bioreactor, arduino-based monitoring system, enzymes, carbon farming, whey additive, NPK level

Procedia PDF Downloads 30
8047 Screening of Rice Genotypes for Methane and Carbon Dioxide Emissions Under Different Water Regimes

Authors: Mthiyane Pretty, Mitsui Toshiake, Nagano Hirohiko, Aycan Murat

Abstract:

Methane and carbon dioxide are among the most significant greenhouse gases released from rice fields. The primary focus of this research was to quantify CH₄ and CO₂ emissions using four rice cultivars and two water regimes, while recording soil moisture and temperature. In this study, we hypothesized that paddy field soils may directly affect soil enzymatic activities and physicochemical properties in the rhizosphere and thereby indirectly affect the activity, abundance, diversity, and community composition of methanogens, ultimately affecting CH₄ flux. The experiment was laid out in a randomized block design with two treatments and three replications for each genotype. The two treatments used paddy field soil and artificial soil. At 35 days after planting (DAP), continuous flooding irrigation and alternate wetting and drying (AWD) were applied during the vegetative stage. The highest recorded soil and environmental measurements were a soil moisture of 76%, a soil temperature of 28.3℃, a bulk EC of 0.99 dS/m, and a pore water EC of 1.25 dS/m, measured with a HydraGO portable soil sensor system. Gas samples were collected once weekly at 09:00 am and 12:00 pm to obtain the mean GHG flux. Gas chromatography (GC, Shimadzu GC-2010, Japan) was used for the analysis of CH₄ and CO₂. The treatments with paddy field soil were 1.3℃ warmer than those with artificial soil. The overall changes in bulk EC were not significant across treatments. CH₄ emission patterns were observed in all rice genotypes, although emissions were lower in the AWD treatments with artificial soil, showing that AWD creates oxic conditions in the rice soil. CO₂ was also quantified, but only in minute quantities, as the rice plants were using CO₂ for photosynthesis. Across the cultivars grown, the highest tiller number was 7 and the lowest was 3. The rice variety identified for breeding is Norin 24, which showed a high number of tillers with less CH₄.
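
The abstract does not give its flux calculation, but GC-measured chamber concentrations are commonly converted to fluxes with the standard closed-chamber equation. The sketch below is written under that assumption; the chamber dimensions and concentration slope are hypothetical examples.

```python
# Minimal sketch, assuming the standard closed-chamber flux equation:
# flux = (dC/dt) * (V/A) * P*M / (R*T), converted to mg m^-2 h^-1.
R = 8.314  # J mol^-1 K^-1, universal gas constant

def chamber_flux(slope_ppm_per_h, volume_m3, area_m2, temp_c, molar_mass,
                 pressure_pa=101325.0):
    """Flux in mg m^-2 h^-1 from the rate of concentration change in a chamber."""
    mol_air_per_m3 = pressure_pa / (R * (temp_c + 273.15))   # ideal gas law
    g_per_m3_per_h = slope_ppm_per_h * 1e-6 * mol_air_per_m3 * molar_mass
    return g_per_m3_per_h * (volume_m3 / area_m2) * 1000.0   # g -> mg

# Example: CH4 (M = 16.04 g/mol) rising 0.8 ppm/h in a 0.03 m^3 chamber over 0.09 m^2.
print(chamber_flux(0.8, 0.03, 0.09, 28.3, 16.04))
```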

Keywords: greenhouse gases, methane, morphological characterization, alternating wetting and drying

Procedia PDF Downloads 59
8046 Neurological Complications of HIV/AIDS: Case of Meningitis Caused by Cryptococcus neoformans and Tuberculous Meningitis

Authors: Ndarusanze Berchmans

Abstract:

This research work focused on the analysis of observations of tuberculous meningitis in HIV-positive patients treated at the Prince Regent Charles Hospital in Bujumbura. A total of 246 seropositive patients were examined by the laboratory of Prince Regent Charles Hospital between 2010 and 2015. We conducted a retrospective study using data from the registers of the laboratory mentioned above; the objective was to describe the epidemiological, biological, clinical, and therapeutic characteristics of tuberculous meningitis infection. 124 women (50.40% of the AIDS patients) and 122 men (49.59%) underwent diagnosis by examination of cerebrospinal fluid (CSF). The average age of the patients over this period was 30 years. The population at risk had an average age of between 34 and 42 years for the years 2010-2015. From 2010 to 2012, cases of opportunistic diseases (e.g., tuberculous meningitis and Cryptococcus neoformans meningitis), often found in immunocompromised patients, were observed at a high rate; in this period, there was a disruption in the supply of antiretroviral drugs to people with AIDS. The rate of the two forms of meningitis (tuberculous meningitis and Cryptococcus neoformans meningitis) remained above 10%, gradually decreasing until 2015 with the gradual return of antiretrovirals. This period recorded an overall average of 25 cases of tuberculous meningitis, or 10.16%. For the year 2015, there were 4 cases of tuberculous meningitis out of a total of 35 seropositive patients examined (11.42%). This percentage shows that the number of tuberculous meningitis cases has fallen from the rates of previous years. This is the result of the care given to HIV-positive people by associations working against HIV/AIDS. The decrease in cases of tuberculous meningitis is due to the acquisition of antiretrovirals by all HIV-positive people treated by hospitals. At present, these hospitals care for many AIDS patients by providing them permanently with antiretrovirals; in addition, many patients are supported by associations whose activities are directed against HIV/AIDS.

Keywords: Cryptococcus neoformans meningitis, tuberculous meningitis, neurological complications, epidemiology of meningitis

Procedia PDF Downloads 193
8045 Budgetary Performance Model for Managing Pavement Maintenance

Authors: Vivek Hokam, Vishrut Landge

Abstract:

An ideal maintenance program for an industrial road network is one that would maintain all sections at a sufficiently high level of functional and structural condition. However, due to constraints such as budget, manpower, and equipment, it is not possible to carry out maintenance on all needy industrial road sections within a given planning period. A rational and systematic priority scheme must therefore be employed to select and schedule industrial road sections for maintenance. Priority analysis is a multi-criteria process that determines the best ranking of sections for maintenance based on several factors. In priority setting, difficult decisions must be made in selecting sections: is it more important to repair a section in poor functional condition (for example, one offering an uncomfortable ride) or one in poor structural condition, i.e., in danger of becoming structurally unsound? It would seem, therefore, that any rational priority-setting approach must consider the relative importance of the functional and structural condition of each section. Maintenance priority indices and pavement performance models tend to focus mainly on pavement condition, traffic criteria, and similar factors. There is a need to develop a model suited to the limited budget provisions available for pavement maintenance. Linear programming is one of the most popular and widely used quantitative techniques. A linear programming model provides an efficient method for determining an optimal decision chosen from a large number of possible decisions. The optimal decision is one that meets a specified management objective, subject to various constraints and restrictions; the objective here is mainly the minimization of the maintenance cost of roads in an industrial area. In order to determine the objective function for the analysis of the distress model, realistic data must be fitted into the formulation. Each type of repair is quantified over a number of stretches, taking 1000 m as one stretch; the section under study is 3750 m long. These quantities enter an objective function that maximizes the number of repairs in a stretch. The distresses observed in this section are potholes, surface cracks, rutting, and ravelling. The distress data were measured manually by observing each distress level on each 1000 m stretch. The maintenance and rehabilitation measures currently followed are based on subjective judgment; hence, there is a need to adopt a scientific approach in order to use the limited resources effectively. It is also necessary to determine pavement performance and deterioration prediction relationships more accurately, together with the economic benefits to road networks in terms of vehicle operating cost. The road network infrastructure should deliver the best results expected from the available funds. In this paper, the objective function for the distress model is determined by linear programming, and a deterioration model that considers overloading is discussed.
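
As a hedged sketch of the kind of linear program described, the snippet below maximizes the number of repairs on a stretch subject to a budget constraint. The unit costs, distress quantities, and budget are hypothetical, not the paper's data.

```python
# Minimal sketch, assuming hypothetical costs and quantities per 1000 m stretch.
from scipy.optimize import linprog

# Repair types: potholes, surface cracks, rutting, ravelling.
unit_cost = [400.0, 150.0, 600.0, 250.0]  # cost per unit repaired (hypothetical)
measured = [12, 30, 8, 20]                # distress units observed on the stretch
budget = 10000.0

# linprog minimizes, so negate the objective to maximize total repairs.
res = linprog(
    c=[-1.0] * 4,                         # maximize the sum of repairs
    A_ub=[unit_cost],                     # total spend must not exceed the budget
    b_ub=[budget],
    bounds=[(0, q) for q in measured],    # cannot repair more than was observed
)
print(res.x, -res.fun)                    # repairs per type, total repairs
```

Under a binding budget, the solver naturally favors the cheapest repairs first, which is one way a priority ranking emerges from the formulation.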

Keywords: budget, maintenance, deterioration, priority

Procedia PDF Downloads 183
8044 Evaluation of SCS-Curve Numbers and Runoff across Varied Tillage Methods

Authors: Umar Javed, Kristen Blann, Philip Adalikwu, Maryam Sahraei, John McMaine

Abstract:

The Soil Conservation Service curve number (SCS-CN) is a widely used method for estimating direct runoff depth from specific rainfall events. “Actual” runoff depth was estimated by subtracting the change in soil moisture from the depth of precipitation for each discrete rain event during the growing seasons from 2021 to 2023. The fields under investigation were situated in a HUC-12 watershed in southeastern South Dakota, selected for common soil series (Nora-Crofton complex and Moody-Nora complex) to minimize the influence of soil texture on soil moisture. Two soil moisture probes were installed from May 2021 to October 2023, with exceptions during planting and harvest periods. For each field, “textbook” CN estimates were derived from the TR-55 table based on the corresponding mapped land use/land cover (LULC) class and hydrologic soil group (HSG) from Web Soil Survey maps. The TR-55 method incorporated the HSG and crop rotation within the study area fields. These textbook values were then compared to actual CN values to determine the impact of tillage practices on CN and runoff. Most fields were mapped with a textbook HSG of C or D, but the actual CNs corresponded to a B or C hydrologic group. Actual CNs were consistently lower than textbook CNs for all management practices; actual CNs in conventionally tilled fields were the highest (and closest to textbook CNs), while actual CNs in no-till fields were the lowest. Preliminary results suggest that no-till practice reduces runoff compared to conventional tillage. This research highlights the need to use CNs that incorporate agricultural management to more accurately estimate runoff at the field and watershed scales.
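
For reference, the standard SCS-CN (TR-55) runoff relation that underlies the comparison is Q = (P - Ia)^2 / (P - Ia + S), with S = 1000/CN - 10 and Ia = 0.2S in US customary units. The sketch below uses example curve numbers, not the study's fitted values.

```python
# Minimal sketch, assuming the standard SCS-CN runoff equation in inches.
def scs_runoff(p_in, cn):
    """Direct runoff depth (inches) for rainfall p_in (inches) and curve number cn."""
    s = 1000.0 / cn - 10.0   # potential maximum retention after runoff begins
    ia = 0.2 * s             # initial abstraction
    if p_in <= ia:
        return 0.0           # all rainfall is abstracted; no direct runoff
    return (p_in - ia) ** 2 / (p_in - ia + s)

# A lower "actual" CN (as found for no-till fields) yields less estimated runoff
# than a higher "textbook" CN for the same storm. Both CNs here are hypothetical.
for cn in (85, 75):
    print(cn, round(scs_runoff(2.5, cn), 3))
```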

Keywords: curve number hydrology, hydrologic soil groups, runoff, tillage practices

Procedia PDF Downloads 26