Search results for: probability weighted moment estimation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4263

873 Critical Success Factors Quality Requirement Change Management

Authors: Jamshed Ahmad, Abdul Wahid Khan, Javed Ali Khan

Abstract:

Managing software quality requirement change is a difficult task in the field of software engineering. Avoiding incoming changes results in user dissatisfaction, while accommodating too many requirement changes may delay product delivery. Poor requirements management is widely considered a primary cause of software failure, and it becomes even more challenging in global software outsourcing. Addressing the success factors in quality requirement change management is needed today due to the frequent change requests from end-users. In this research study, success factors are recognized and scrutinized with the help of a systematic literature review (SLR). In total, 16 success factors were identified that significantly impact software quality requirement change management. The findings show that Proper Requirement Change Management, Rapid Delivery, Quality Software Product, Access to Market, Project Management, Skills and Methodologies, Low Cost/Effort Estimation, Clear Plan and Road Map, Agile Processes, Low Labor Cost, User Satisfaction, Communication/Close Coordination, Proper Scheduling and Time Constraints, Frequent Technological Changes, Robust Model, and Geographical Distribution/Cultural Differences are the key factors that influence software quality requirement change. The recognized success factors were validated with the help of various research methods, i.e., case studies, interviews, surveys, and experiments. These factors were then analyzed by continent, database, company size, and period of time. Based on these findings, requirement changes can be implemented in a better way.

Keywords: global software development, requirement engineering, systematic literature review, success factors

Procedia PDF Downloads 187
872 IoT and Deep Learning Approach for Growth Stage Segregation and Harvest Time Prediction of Aquaponic and Vermiponic Swiss Chards

Authors: Praveen Chandramenon, Andrew Gascoyne, Fideline Tchuenbou-Magaia

Abstract:

Aquaponics offers a simple and conclusive solution to the food and environmental crises of the world. This approach combines aquaculture (growing fish) with hydroponics (growing vegetables and plants in a soilless medium). Smart aquaponics explores the use of smart technology, including artificial intelligence and IoT, to assist farmers with better decision making and online monitoring and control of the system. Identification of the different growth stages of Swiss chard plants and prediction of their harvest time are important for aquaponic yield management. This paper presents a comparative analysis of a standard aquaponic system with a vermiponic system (aquaponics with worms), grown in a controlled environment, by implementing IoT and deep learning-based growth stage segregation and harvest time prediction of Swiss chards before and after applying optimal freshwater replenishment. Data collection, growth stage classification, and harvest time prediction have been performed with and without water replenishment. The paper discusses the experimental design, the IoT and sensor communication architecture, the data collection process, image segmentation, the various regression and classification models, and the error estimation used in the project. The paper concludes with a comparison of results, including the best-performing models for growth stage segregation and harvest time prediction on the aquaponic and vermiponic testbeds with and without freshwater replenishment.

Keywords: aquaponics, deep learning, internet of things, vermiponics

Procedia PDF Downloads 54
871 Resonant Fluorescence in a Two-Level Atom and the Terahertz Gap

Authors: Nikolai N. Bogolubov, Andrey V. Soldatov

Abstract:

Terahertz radiation occupies a range of frequencies from roughly 100 GHz to approximately 10 THz, just between microwaves and infrared waves. This range of frequencies holds promise for many useful applications in experimental applied physics and technology. At the same time, reliable, simple techniques for the generation, amplification, and modulation of electromagnetic radiation in this range are far from being developed enough to meet the requirements of its practical usage, especially in comparison to the level of technological ability already achieved for other domains of the electromagnetic spectrum. This relative underdevelopment of a potentially very important range of the electromagnetic spectrum is known as the 'terahertz gap.' Among other things, technological progress in the terahertz area has been impeded by the lack of compact, low-energy-consumption, easily controlled, and continuously radiating terahertz sources. Therefore, the development of new techniques serving this purpose, as well as of various devices based on them, is an obvious necessity. No doubt, it would be highly advantageous to employ the simplest suitable physical systems as the critical components of such techniques and devices. The purpose of the present research was to show, by means of conventional methods of non-equilibrium statistical mechanics and the theory of open quantum systems, that a thoroughly studied two-level quantum system, also known as a one-electron two-level 'atom', driven by an external classical monochromatic high-frequency (e.g., laser) field, can radiate continuously at a much lower (e.g., terahertz) frequency in the fluorescent regime if the transition dipole moment operator of this 'atom' possesses permanent, non-equal diagonal matrix elements. This contradicts the assumption routinely made in quantum optics that only the non-diagonal matrix elements persist.
The conventional assumption is pertinent to natural atoms and molecules and stems from the spatial inversion symmetry of their eigenstates. However, it is no longer justified for artificially manufactured quantum systems of reduced dimensionality, such as quantum dots, which are often nicknamed 'artificial atoms' due to the striking similarity of their optical properties to those of real atoms. Possible routes to experimental observation and practical implementation of the predicted effect are also discussed.

Keywords: terahertz gap, two-level atom, resonant fluorescence, quantum dot

Procedia PDF Downloads 255
870 Improving Fault Tolerance and Load Balancing in Heterogeneous Grid Computing Using Fractal Transform

Authors: Saad M. Darwish, Adel A. El-Zoghabi, Moustafa F. Ashry

Abstract:

The popularity of the Internet and the availability of powerful computers and high-speed networks as low-cost commodity components are changing the way we use computers today. These technical opportunities have led to the possibility of using geographically distributed, multi-owner resources to solve large-scale problems in science, engineering, and commerce. Recent research on these topics has led to the emergence of a new paradigm known as Grid computing. To realize the promising potential of tremendous distributed resources, effective and efficient load balancing algorithms are fundamentally important. Unfortunately, load balancing algorithms designed for traditional parallel and distributed systems, which usually run on homogeneous and dedicated resources, do not work well in the new circumstances. In this paper, the concept of a fast fractal transform in heterogeneous grid computing, based on an R-tree and the domain-range entropy, is proposed to improve the fault tolerance and load balancing algorithm by improving connectivity, communication delay, network bandwidth, resource availability, and resource unpredictability. A novel two-dimensional figure of merit is suggested to describe the network effects on load balance and fault tolerance estimation. Fault tolerance is enhanced by adaptively decreasing replication time and message cost, while load balance is enhanced by adaptively decreasing the mean job response time. Experimental results show that the proposed method yields superior performance over other methods.

Keywords: grid computing, load balancing, fault tolerance, R-tree, heterogeneous systems

Procedia PDF Downloads 473
869 DNA-Polycation Condensation by Coarse-Grained Molecular Dynamics

Authors: Titus A. Beu

Abstract:

Many modern gene-delivery protocols rely on condensed complexes of DNA with polycations to introduce the genetic payload into cells by endocytosis. In particular, polyethyleneimine (PEI) stands out by its high buffering capacity (enabling the efficient condensation of DNA) and relatively simple fabrication. Realistic computational studies can offer essential insights into the formation process of DNA-PEI polyplexes, providing hints for efficient designs and engineering routes. We present comprehensive computational investigations of solvated PEI and DNA-PEI polyplexes involving calculations at three levels: ab initio, all-atom (AA), and coarse-grained (CG) molecular mechanics. In the first stage, we developed a rigorous AA CHARMM (Chemistry at Harvard Macromolecular Mechanics) force field (FF) for PEI on the basis of accurate ab initio calculations on protonated model pentamers. We validated this atomistic FF by matching the results of extensive molecular dynamics (MD) simulations of the structural and dynamical properties of PEI with experimental data. In the second stage, we developed a CG MARTINI FF for PEI by Boltzmann inversion techniques from bead-based probability distributions obtained from the AA simulations, ensuring an optimal match between the AA and CG structural and dynamical properties. In the third stage, we combined the developed CG FF for PEI with the standard MARTINI FF for DNA and performed comprehensive CG simulations of DNA-PEI complex formation and condensation. Various technical aspects which are crucial for the realistic modeling of DNA-PEI polyplexes, such as the treatment of electrostatics and the relevance of polarizable water models, are discussed in detail. Massive CG simulations (with up to 500,000 beads) shed light on the mechanism of DNA polyplex formation and provide its time scales as a function of PEI chain size and protonation pattern. The DNA-PEI condensation mechanism is shown to rely primarily on the formation of DNA bundles rather than on changes of the DNA-strand curvature. The insights gained are expected to be of significant help in designing effective gene-delivery applications.

Keywords: DNA condensation, gene delivery, polyethyleneimine, molecular dynamics

Procedia PDF Downloads 108
868 An Estimating Equation for Survival Data with Possibly Time-Varying Covariates under Semiparametric Transformation Models

Authors: Yemane Hailu Fissuh, Zhongzhan Zhang

Abstract:

The estimating equation technique is an alternative to the widely used maximum likelihood methods, which enables us to ease some of the complexity arising from time-varying covariates. When both time-varying covariates and left-truncation are considered in the model, maximum likelihood estimation procedures become much more burdensome and complex. To ease this complexity, this study proposes modified estimating equations, which have received much attention from researchers, under a semiparametric transformation model. The purpose of this article was to develop modified estimating equations under a flexible and general class of semiparametric transformation models for left-truncated and right-censored survival data with time-varying covariates. Besides the commonly applied Cox proportional hazards model, such problems can also be analyzed with a general class of semiparametric transformation models to estimate the effect of treatment, given possibly time-varying covariates, on the survival time. The consistency and asymptotic properties of the estimators were derived via the expectation-maximization (EM) algorithm. The finite-sample performance of the estimators for the proposed model was illustrated via simulation studies and the Stanford heart transplant real data example. To sum up, the bias for covariates was adjusted by estimating the density function of the truncation time variable, and the effect of possibly time-varying covariates was then evaluated in some special semiparametric transformation models.

Keywords: EM algorithm, estimating equation, semiparametric transformation models, time-to-event outcomes, time varying covariate

Procedia PDF Downloads 144
867 An Analysis of the Impact of Government Budget Deficits on Economic Performance: A Zimbabwean Perspective

Authors: Tafadzwa Shumba, Rose C. Nyatondo, Regret Sunge

Abstract:

This research analyses the impact of budget deficits on the economic performance of Zimbabwe. The study employs the autoregressive distributed lag (ARDL) bounds testing approach to co-integration and long-run estimation, using time series data from 1980-2018. The Augmented Dickey-Fuller (ADF) test and the Granger approach were used to test for stationarity and causality among the variables. Co-integration test results affirm a long-term association between the GDP growth rate and the explanatory variables. Causality test results show a unidirectional connection from the budget deficit to GDP growth and bi-directional causality between debt and the budget deficit. This study also found unidirectional causality from debt to the GDP growth rate. ARDL estimates indicate a significantly positive long-term and a significantly negative short-term impact of the budget deficit on GDP. This suggests that budget deficits have a short-run growth-retarding effect and a long-run growth-inducing effect. The long-run results follow the Keynesian theory, which posits that fiscal deficits result in an increase in GDP growth; the short-run outcomes follow the neoclassical theory. In light of these findings, the government is recommended to minimize the financing of recurrent expenditure through budget deficits. To achieve sustainable growth and development, the government needs to run an absorbable budget deficit focused on capital projects such as the development of human capital and infrastructure.
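The coexistence of a negative short-run and a positive long-run effect described above can be illustrated with the long-run multiplier of an ARDL(1,1) specification; the coefficient values below are hypothetical, for illustration only, not estimates from the paper:

```python
def long_run_multiplier(phi, betas):
    """Long-run effect of x on y in the ARDL(1,1) model
    y_t = c + phi*y_{t-1} + betas[0]*x_t + betas[1]*x_{t-1} + e_t:
    the distributed-lag coefficients summed and divided by (1 - phi)."""
    return sum(betas) / (1.0 - phi)

# Hypothetical estimates: the contemporaneous (short-run) effect of the
# deficit on growth is negative, but once the lags work through, the
# long-run effect turns positive, as the abstract describes.
phi = 0.6            # autoregressive coefficient on y_{t-1}
betas = [-0.2, 0.5]  # coefficients on x_t and x_{t-1}
short_run = betas[0]                        # growth-retarding on impact
long_run = long_run_multiplier(phi, betas)  # growth-inducing overall
```

Here the impact effect is -0.2, while the long-run multiplier is (-0.2 + 0.5) / (1 - 0.6) = 0.75, reproducing the sign reversal between the short and long run.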

Keywords: ARDL, budget deficit, economic performance, long run

Procedia PDF Downloads 74
866 Sustainable Land Use Evaluation Based on Preservative Approach: Neighborhoods of Susa City

Authors: Somaye Khademi, Elahe Zoghi Hoseini, Mostafa Norouzi

Abstract:

Because it determines both the manner of land-use and the spatial structure of cities, on the one hand, and the economic value of each piece of land, on the other, land-use planning is always considered a main part of urban planning. In this regard, by emphasizing the efficient use of land, the sustainable development approach has presented a new perspective on urban planning and, consequently, on its most important pillar, i.e., land-use planning. To evaluate urban land-use, this paper attempts to select the most significant indicators affecting urban land-use that match sustainable development indicators. Because preserving ancient monuments and their surroundings is one of the main pillars of achieving sustainability, in this research, sustainability indicators have been selected with emphasis on the preservation of the ancient monuments and history of the city of Susa, one of the historical cities of Iran. An attempt has also been made to integrate these criteria with other land-use sustainability indicators. For this purpose, Kernel Density Estimation (KDE) was used to produce spatial density maps, and the AHP model was used to combine layers and produce the final maps. Moreover, the sustainability rating is studied in different districts of Susa in order to evaluate the status of land sustainability in different parts of the city. The results show that the neighborhoods of Susa do not have the same land-use sustainability: neighborhoods located in the eastern half of the city, i.e., the new neighborhoods, are more sustainable than those of the western half. It seems that the allocation of a high percentage of the western areas to arid lands and historical areas is one of the main reasons for their lower sustainability.
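The kernel density estimation step can be sketched in one dimension with a Gaussian kernel; the monument coordinates and bandwidth below are hypothetical placeholders, not data from the study:

```python
import math

def gaussian_kde(points, bandwidth):
    """Return a function f(x) estimating the density of 1-D sample points
    as an equally weighted sum of Gaussian kernels."""
    n = len(points)
    norm = 1.0 / (n * bandwidth * math.sqrt(2.0 * math.pi))
    def density(x):
        return norm * sum(math.exp(-0.5 * ((x - p) / bandwidth) ** 2)
                          for p in points)
    return density

# Hypothetical monument positions along one axis of the city (km).
monuments = [1.0, 1.2, 1.5, 4.0, 4.1]
f = gaussian_kde(monuments, bandwidth=0.5)
```

Evaluating f on a grid over the study area gives the spatial density surface that is then combined with the other indicator layers via AHP weights.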

Keywords: city of Susa, historical heritage, land-use evaluation, urban sustainable development

Procedia PDF Downloads 362
865 Spectroscopic Relation between Open Cluster and Globular Cluster

Authors: Robin Singh, Mayank Nautiyal, Priyank Jain, Vatasta Koul, Vaibhav Sharma

Abstract:

The curiosity to investigate space and its mysteries, which humankind has shared since the beginning of the Universe ("debut de l'Univers") with its other living things, has always been the main impetus of human interest, and the drive to uncover the secrets of stars and their unusual behavior has likewise always ignited stellar research. As humankind lives in civilizations and states, stars also live in communities named 'clusters'. Clusters are separated into two types: open clusters and globular clusters. An open cluster is a gathering of up to a few thousand stars that were formed from the same giant molecular cloud and, for the most part, contains Population I (very metal-rich) and Population II (comparatively metal-poor) stars, whereas a globular cluster is a gathering of more than thirty thousand stars that orbits a galactic center and basically contains Population III (extremely metal-poor) stars. The contribution of this paper lies in the spectroscopic investigation of globular clusters such as M92 and NGC 419 and open clusters such as M34 and IC 2391 in different color bands, using software such as the VIREO virtual observatory, Aladin, CMUNIWIN, and MS Excel. The resulting Hertzsprung-Russell (HR) diagrams are assessed against classical cosmological models such as the Einstein model, the De Sitter model, and the Planck survey for a better age estimation of the respective clusters. Color-magnitude diagrams of these clusters were obtained by photometric analysis in the g and r bands, which were further transformed into B and V bands, revealing the nature of the stars present in the individual clusters.

Keywords: color magnitude diagram, globular clusters, open clusters, Einstein model

Procedia PDF Downloads 213
864 Towards Modern Approaches of Intelligence Measurement for Clinical and Educational Practices

Authors: Alena Kulikova, Tatjana Kanonire

Abstract:

Intelligence research is one of the oldest fields of psychology. Many factors have made research on intelligence, defined as reasoning and problem solving [1, 2], an acute and urgent problem. It has been repeatedly shown that intelligence is a predictor of academic, professional, and social achievement in adulthood (for example, [3]); moreover, intelligence predicts these achievements better than any other trait or ability [4]. At the individual level, a comprehensive assessment of intelligence is a necessary criterion for the diagnosis of various mental conditions. For example, it is a necessary condition for psychological, medical, and pedagogical commissions when deciding on the educational needs and most appropriate educational programs for school children. Assessment of intelligence is crucial in clinical psychodiagnostics and needs high-quality measurement tools. It is therefore not surprising that the development of intelligence tests is an essential part of psychological science and practice. Many modern intelligence tests have a long history and have been used for decades, for example, the Stanford-Binet test or the Wechsler test. However, the vast majority of these tests are based on the classic linear test structure, in which all respondents receive all tasks (see, for example, the critical review in [5]). This understanding of the testing procedure is a legacy of the pre-computer era, in which paper-based testing was the only diagnostic procedure available [6]; it has significant limitations that affect the reliability of the data obtained [7] and increase time costs. Another problem with measuring IQ is that classical linearly structured tests do not fully allow measuring a respondent's intellectual progress [8], which is undoubtedly a critical limitation. Advances in modern psychometrics make it possible to avoid the limitations of existing tools.
However, as in any rapidly developing field, psychometrics does not at the moment offer ready-made, straightforward solutions and requires additional research. In our presentation, we discuss the strengths and weaknesses of the current approaches to intelligence measurement and highlight 'points of growth' for creating a test in accordance with modern psychometrics: whether it is possible to create an instrument that uses all the achievements of modern psychometrics while remaining valid and practically oriented, and what the possible limitations of such an instrument would be. The theoretical framework and study design for creating and validating an original Russian comprehensive computer test measuring the intellectual development of school-age children will be presented.
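One core mechanism behind the computerized adaptive testing mentioned in the keywords is selecting, at each step, the unadministered item that is most informative at the current ability estimate. A minimal sketch under the Rasch (1PL) model follows; the item bank and ability value are hypothetical:

```python
import math

def rasch_p(theta, b):
    """Probability of a correct response under the Rasch (1PL) model,
    for ability theta and item difficulty b (both on the logit scale)."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def item_information(theta, b):
    """Fisher information of a Rasch item: p * (1 - p)."""
    p = rasch_p(theta, b)
    return p * (1.0 - p)

def pick_next_item(theta, item_bank, administered):
    """Index of the unadministered item with maximum information at theta."""
    candidates = [i for i in range(len(item_bank)) if i not in administered]
    return max(candidates, key=lambda i: item_information(theta, item_bank[i]))

# Hypothetical bank of item difficulties.
item_bank = [-2.0, 0.0, 2.0]
first = pick_next_item(0.1, item_bank, set())  # the medium item matches best
```

After each response the ability estimate is updated and the selection repeats, which is exactly what a linear test, giving every respondent every task, cannot do.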

Keywords: intelligence, psychometrics, psychological measurement, computerized adaptive testing, multistage testing

Procedia PDF Downloads 67
863 Doing Bad for a Greater Good: Moral Disengagement in Social and Commercial Entrepreneurial Contexts

Authors: Thorsten Auer, Sumaya Islam, Sabrina Plaß, Colin Wooldridge

Abstract:

Whether individuals are more likely to forgo some ethical values when it serves a "great" social mission remains questionable. Research interest in the mechanism of moral disengagement has risen sharply in the organizational context over the last decades. Moral disengagement provides an explanatory approach to why individuals decide against their moral intent; it describes the tendency to make unethical decisions due to a lack of self-regulation across various actions and their consequences. In our study, we examine the differences in individual decision-making between a commercial and a social entrepreneurial context. We investigate whether individuals in a social entrepreneurial context, characterized by pro-social goals and a purpose beyond profit maximization, tend to make more or fewer "unethical" decisions in trade-off situations than those in a profit-focused commercial entrepreneurial context. While a general priming effect may explain a tendency for individuals to make fewer unethical decisions in a social context, it remains unclear how individuals decide when facing a trade-off in that specific context. The trade-off in our study is characterized by the option to decide (un)ethically to enhance the business purpose (a social purpose in the social context, a profit-maximization purpose in the commercial context). To investigate which characteristics of the context, and specifically of a trade-off, lead individuals to disregard and override their ethical values for a "greater good", we design a conjoint analysis. This approach allows us to vary the attributes and scenarios and to test which attributes of a trade-off increase the probability of making an unethical choice. We add survey data to examine the individual propensity to morally disengage as a factor influencing the preference for certain attributes. Currently, we are in the final stage of designing the conjoint analysis and plan to conduct the study by December 2022. We contribute to a better understanding of the role of moral disengagement in individual decision-making in (social) entrepreneurial trade-offs.

Keywords: moral disengagement, social entrepreneurship, unethical decision, conjoint analysis

Procedia PDF Downloads 75
862 A Bayesian Approach for Analyzing Academic Article Structure

Authors: Jia-Lien Hsu, Chiung-Wen Chang

Abstract:

Research articles may follow a simple and succinct structure of organizational patterns, called moves. For example, an extended abstract usually consists of five moves: Background, Aim, Method, Results, and Conclusion. As another example, when publishing articles in PubMed, authors are encouraged to provide a structured abstract, i.e., an abstract with distinct, labeled sections (e.g., Introduction, Methods, Results, Discussion) for rapid comprehension. This paper introduces a method for the computational analysis of move structures (i.e., Background-Purpose-Method-Result-Conclusion) in the abstracts and introductions of research documents, replacing a manual analysis process that is time-consuming and labor-intensive. In our approach, the sentences in a given abstract and introduction are automatically analyzed and labeled with a specific move (i.e., B-P-M-R-C in this paper) to reveal their rhetorical status. It is expected that an automatic analytical tool for move structures will help non-native speakers or novice writers become aware of appropriate move structures and internalize the relevant knowledge to improve their writing. In this paper, we propose a Bayesian approach to determine move tags for research articles. The approach consists of two phases, a training phase and a testing phase. In the training phase, we build a Bayesian model based on a set of given initial patterns and a corpus, a subset of CiteSeerX. In the beginning, the prior probability of the Bayesian model relies solely on the initial patterns. Subsequently, we process each document of the corpus one by one: extract features, determine tags, and update the Bayesian model iteratively. In the testing phase, we compare our results with tags manually assigned by experts. In our experiments, the accuracy of the proposed approach reaches 56%.
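The tagging step described above can be sketched as a naive Bayes classifier over sentence words; the seed patterns below are illustrative stand-ins, not the paper's actual initial patterns:

```python
import math
from collections import Counter, defaultdict

class MoveTagger:
    """Naive Bayes tagger assigning one of the B-P-M-R-C moves to a sentence."""

    def __init__(self):
        self.word_counts = defaultdict(Counter)  # move -> word frequencies
        self.move_counts = Counter()             # move -> sentence count

    def train(self, sentence, move):
        self.move_counts[move] += 1
        self.word_counts[move].update(sentence.lower().split())

    def tag(self, sentence):
        words = sentence.lower().split()
        total = sum(self.move_counts.values())
        vocab = {w for counts in self.word_counts.values() for w in counts}
        best_move, best_lp = None, float("-inf")
        for move, count in self.move_counts.items():
            lp = math.log(count / total)  # prior from training frequencies
            denom = sum(self.word_counts[move].values()) + len(vocab)
            for w in words:  # Laplace-smoothed word likelihoods
                lp += math.log((self.word_counts[move][w] + 1) / denom)
            if lp > best_lp:
                best_move, best_lp = move, lp
        return best_move

# Illustrative seed patterns standing in for the initial patterns.
tagger = MoveTagger()
for sentence, move in [
    ("previous studies have shown", "Background"),
    ("the aim of this study", "Purpose"),
    ("we propose a method based on", "Method"),
    ("results show an accuracy of", "Result"),
    ("we conclude that", "Conclusion"),
]:
    tagger.train(sentence, move)
```

In the actual approach the model is then updated iteratively over the corpus: each newly tagged document's sentences are fed back through train, refining the counts.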

Keywords: academic English writing, assisted writing, move tag analysis, Bayesian approach

Procedia PDF Downloads 317
861 Modeling and Numerical Simulation of Heat Transfer and Internal Loads at Insulating Glass Units

Authors: Nina Penkova, Kalin Krumov, Liliana Zashcova, Ivan Kassabov

Abstract:

Insulating glass units (IGU) are widely used in new and renovated buildings in order to reduce the energy needed for heating and cooling. Rules for the choice of IGU to ensure energy efficiency and thermal comfort in the indoor space are well known. The existence of internal loads (gauge or vacuum pressure in the hermetically sealed gas space) requires additional attention in the design of the facades. The internal loads appear when the altitude, meteorological pressure, or gas temperature deviates from the values at the time of sealing. The gas temperature depends on the presence of coatings, the coating position in the transparent multi-layer system, the IGU geometry and orientation, and its fixing on the facade, and it varies with the climate conditions. An algorithm for modeling and numerical simulation of the thermal fields and internal pressure in the gas cavity of insulating glass units as a function of the meteorological conditions is developed. It includes models of the radiative heat transfer at solar and infrared wavelengths, indoor and outdoor convective heat transfer, and free convection in the hermetically sealed gas space, treating the gas as compressible. The algorithm allows the prediction of temperature and pressure stratification in the gas domain of the IGU for different fixing systems. The models are validated by comparing the numerical results with experimental data obtained by hot-box testing. Numerical calculations and estimation of the 3D temperature and fluid flow fields, thermal performance, and internal loads of an IGU in a window system are implemented.
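The dependence of the internal load on temperature and ambient pressure follows from the ideal gas law at (approximately) constant cavity volume; a minimal sketch with standard constants, where the sealing conditions are hypothetical:

```python
import math

def cavity_gage_pressure(p_seal, t_seal, t_gas, p_ambient):
    """Gauge pressure load on the panes of a sealed cavity, in Pa.
    Isochoric ideal gas: absolute cavity pressure scales with the ratio of
    the absolute gas temperature to the temperature at sealing."""
    p_gas = p_seal * (t_gas / t_seal)
    return p_gas - p_ambient

def barometric_pressure(p0, altitude_m, t_kelvin=288.15):
    """Ambient pressure at altitude via the isothermal barometric formula."""
    M, g, R = 0.028964, 9.80665, 8.31446  # kg/mol, m/s^2, J/(mol K)
    return p0 * math.exp(-M * g * altitude_m / (R * t_kelvin))

# Unit sealed at 101325 Pa and 20 C; gas later heated to 40 C by solar gain
# at unchanged ambient pressure: roughly a 6.9 kPa outward load.
load = cavity_gage_pressure(101325.0, 293.15, 313.15, 101325.0)
```

The full model replaces this closed-form estimate with CFD, since the cavity temperature itself depends on the radiative and convective exchange, but the sketch shows why heating, or a drop in ambient pressure with altitude, loads the panes outward.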

Keywords: insulating glass units, thermal loads, internal pressure, CFD analysis

Procedia PDF Downloads 260
860 A Contemporary Advertising Strategy on Social Networking Sites

Authors: M. S. Aparna, Pushparaj Shetty D.

Abstract:

Nowadays, social networking sites have become so popular that producers and sellers see them as one of the best options for targeting the right audience to market their products. Several tools are available to monitor and analyze social networks. Our task is to identify the right community web pages, analyze the behavior of their members using these tools, and formulate an appropriate strategy to market the products or services so as to achieve the set goals. Advertising becomes more effective when the information about the product or service comes from a known source; the strategy thus exploits the great buying influence of referral marketing on the audience. Our methodology proceeds with a critical budget analysis and promotes viral influence propagation. In this context, we encompass the vital elements of budget evaluation, such as the number of optimal seed nodes (primary influential users) activated at the onset, an estimate of the coverage spread of nodes, and the maximum influence propagation distance from an initial seed to an end node. Our Buyer Prediction mathematical model arises from the need to perform complex analysis when the probability density estimates of the relevant factors are unknown or difficult to calculate. Order statistics and the Buyer Prediction mapping function guarantee the selection of optimal influential users at each level. We exercise efficient tactics of mining community pages and user behavior to determine the product enthusiasts on social networks. Our approach is promising and should be an elementary choice when there is little or no prior knowledge of the distribution of potential buyers on social networks. In this strategy, product news propagates to influential users on or surrounding networks. By applying the same technique, a user can search for friends who are capable of giving better advice or referrals if a product interests him.
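Viral influence propagation from seed users is commonly estimated by Monte Carlo simulation of the independent cascade model; a minimal sketch follows, where the toy graph and activation probability are hypothetical and the generic cascade model stands in for, rather than reproduces, the paper's Buyer Prediction function:

```python
import random

def independent_cascade(graph, seeds, p, rng):
    """One cascade run: each newly active user gets a single chance to
    activate each inactive neighbour with probability p."""
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        next_frontier = []
        for u in frontier:
            for v in graph.get(u, []):
                if v not in active and rng.random() < p:
                    active.add(v)
                    next_frontier.append(v)
        frontier = next_frontier
    return active

def expected_spread(graph, seeds, p, runs=1000, seed=0):
    """Average number of activated users over repeated cascade runs."""
    rng = random.Random(seed)
    return sum(len(independent_cascade(graph, seeds, p, rng))
               for _ in range(runs)) / runs

# Toy follower graph: an edge u -> [v, ...] means u can influence v.
graph = {1: [2, 3], 2: [4], 3: [4], 4: []}
spread = expected_spread(graph, seeds=[1], p=0.3)
```

Greedy seed selection then repeatedly adds the node whose inclusion raises this expected spread the most, subject to the budget constraint.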

Keywords: viral marketing, social network analysis, community web pages, buyer prediction, influence propagation, budget constraints

Procedia PDF Downloads 246
859 Comparative Study of Mutations Associated with Second Line Drug Resistance and Genetic Background of Mycobacterium tuberculosis Strains

Authors: Syed Beenish Rufai, Sarman Singh

Abstract:

Background: Performance of Genotype MTBDRsl (Hain Life science GmbH Germany) for detection of mutations associated with second-line drug resistance is well known. However, less evidence regarding the association of mutations and genetic background of strains is known which, in the future, is essential for clinical management of anti-tuberculosis drugs in those settings where the probability of particular genotype is predominant. Material and Methods: During this retrospective study, a total of 259 MDR-TB isolates obtained from pulmonary TB patients were tested for second-line drug susceptibility testing (DST) using Genotype MTBDRsl VER 1.0 and compared with BACTEC MGIT-960 as a reference standard. All isolates were further characterized using spoligotyping. The spoligo patterns obtained were compared and analyzed using SITVIT_WEB. Results: Of total 259 MDR-TB isolates which were screened for second-line DST by Genotype MTBDRsl, mutations were found to be associated with gyrA, rrs and emb genes in 82 (31.6%), 2 (0.8%) and 90 (34.7%) isolates respectively. 16 (6.1%) isolates detected mutations associated with both FQ as well as to AG/CP drugs (XDR-TB). No mutations were detected in 159 (61.4%) isolates for corresponding gyrA and rrs genes. Genotype MTBDRsl showed a concordance of 96.4% for detection of sensitive isolates in comparison with second-line DST by BACTEC MGIT-960 and 94.1%, 93.5%, 60.5% and 50% for detection of XDR-TB, FQ, EMB, and AMK/CAP respectively. D94G was the most prevalent mutation found among (38 (46.4%)) OFXR isolates (37 FQ mono-resistant and 1 XDR-TB) followed by A90V (23 (28.1%)) (17 FQ mono-resistant and 6 XDR-TB). Among AG/CP resistant isolates A1401G was the most frequent mutation observed among (11 (61.1%)) isolates (2 AG/CP mono-resistant isolates and 9 XDR-TB isolates) followed by WT+A1401G (6 (33.3%)) and G1484T (1 (5.5%)) respectively. 
On spoligotyping analysis, the Beijing strain (46%) was the most predominant among pre-XDR and XDR TB isolates, followed by CAS (30%), X (6%), Unique (5%), EAI and T (4% each), Manu (3%) and Ural (2%). The Beijing strain was strongly associated with the D94G (47.3%) and A90V (34.8%) mutations, followed by the CAS strain (31.6% and 30.4%, respectively). Among AG/CP-resistant isolates, only the Beijing strain was strongly associated with the A1401G and WT+A1401G mutations, at 54.5% and 50%, respectively. Conclusion: The Beijing strain was strongly associated with the most prevalent mutations among pre-XDR and XDR TB isolates. Acknowledgments: The study was supported by a grant from the All India Institute of Medical Sciences, New Delhi, reference no. P-2012/12452.

Keywords: tuberculosis, line probe assay, XDR TB, drug susceptibility

Procedia PDF Downloads 126
858 Comparative Analysis of the Third Generation of Reanalysis Data for Evaluation of Solar Energy Potential

Authors: Claudineia Brazil, Elison Eduardo Jardim Bierhals, Luciane Teresa Salvi, Rafael Haag

Abstract:

Renewable energy sources are dependent on climatic variability, so for adequate energy planning, observations of the meteorological variables are required, preferably as long-period series. Despite the scientific and technological advances that meteorological measurement systems have undergone in recent decades, there is still a considerable lack of meteorological observations forming long-period series. Reanalysis is a data assimilation system built on general atmospheric circulation models, combining data collected at surface stations, ocean buoys, satellites and radiosondes, which allows the production of long-period data over a wide range of variables. The third generation of reanalysis data emerged in 2010; among them is the Climate Forecast System Reanalysis (CFSR) developed by the National Centers for Environmental Prediction (NCEP), with a spatial resolution of 0.5° x 0.5°. To overcome these observational gaps, this study evaluates the performance of solar radiation estimation from alternative databases, such as reanalysis data and meteorological satellite data, which can satisfactorily compensate for the absence of solar radiation observations at the global and/or regional level. The analysis of the solar radiation data indicated that the reanalysis data of the CFSR model performed well against the observed data, with a coefficient of determination around 0.90. It is therefore concluded that these data have the potential to be used as an alternative source at locations with no stations or no long series of solar radiation observations, which is important for the evaluation of solar energy potential.
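The performance comparison above rests on the coefficient of determination (R²) between observed and reanalysis series. A minimal sketch of that check, using made-up daily radiation values in place of the CFSR and station data:

```python
import numpy as np

# Hypothetical daily solar radiation series (MJ/m^2): ground observations
# vs. reanalysis estimates. Values are illustrative only, not CFSR data.
observed = np.array([18.2, 21.5, 19.8, 14.3, 22.1, 20.4, 16.7, 23.0])
reanalysis = np.array([17.5, 22.0, 19.1, 15.2, 21.3, 19.8, 17.4, 22.2])

# Coefficient of determination: R^2 = 1 - SS_res / SS_tot
ss_res = np.sum((observed - reanalysis) ** 2)
ss_tot = np.sum((observed - observed.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
print(round(r2, 3))
```

With well-matched series this lands near the 0.90 figure reported; a value near 1 means the reanalysis explains almost all the variance of the observations.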

Keywords: climate, reanalysis, renewable energy, solar radiation

Procedia PDF Downloads 196
857 An Approach to Correlate the Statistical-Based Lorenz Method, as a Way of Measuring Heterogeneity, with Kozeny-Carman Equation

Authors: H. Khanfari, M. Johari Fard

Abstract:

Dealing with carbonate reservoirs can be mind-boggling for reservoir engineers due to the various diagenetic processes that produce a variety of properties throughout the reservoir. A good estimation of reservoir heterogeneity, defined as the variation in rock properties with location in a reservoir or formation, helps in modeling the reservoir and thus offers a better understanding of its behavior. Most reservoirs are heterogeneous formations whose mineralogy, organic content, natural fractures, and other properties vary from place to place. Over the years, reservoir engineers have tried to establish methods to describe this heterogeneity, because heterogeneity matters in modeling reservoir flow and in well testing. Geological methods describe the variations in rock properties through the similarities of the environments in which different beds were deposited. To characterize the vertical heterogeneity of a reservoir, two methods are generally used in petroleum work: the Dykstra-Parsons permeability variation (V) and the Lorenz coefficient (L), both reviewed briefly in this paper. The Lorenz concept is based on statistics and has been used in petroleum from that point of view. In this paper, we correlate the statistics-based Lorenz method to a petroleum concept, i.e. the Kozeny-Carman equation, and derive the straight-line Lorenz plot for a homogeneous system. Finally, we apply both methods to a heterogeneous field in South Iran and discuss each separately, with numbers and figures. As expected, these methods show great departure from homogeneity. Therefore, for future investment, the reservoir needs to be treated carefully.
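As a brief illustration of the statistical side of the method, the Lorenz coefficient can be computed from layered flow-capacity and storage-capacity data; the layer properties below are invented for the sketch, not taken from the South Iran field:

```python
import numpy as np

# Hypothetical layered-reservoir data: permeability k (mD), porosity phi,
# layer thickness h (ft). Illustrative values only.
k   = np.array([250.0, 120.0, 60.0, 15.0, 5.0])
phi = np.array([0.22, 0.20, 0.18, 0.15, 0.12])
h   = np.array([10.0, 10.0, 10.0, 10.0, 10.0])

# Sort layers by decreasing k/phi, then accumulate flow capacity (k*h)
# against storage capacity (phi*h) to build the Lorenz curve.
order = np.argsort(-(k / phi))
flow = np.cumsum((k * h)[order]) / np.sum(k * h)
stor = np.cumsum((phi * h)[order]) / np.sum(phi * h)
flow = np.insert(flow, 0, 0.0)
stor = np.insert(stor, 0, 0.0)

# Lorenz coefficient: twice the area between the curve and the diagonal.
area = np.trapz(flow, stor)        # area under the Lorenz curve
lorenz = 2.0 * (area - 0.5)        # 0 = homogeneous, toward 1 = heterogeneous
print(round(lorenz, 3))
```

For a homogeneous system every layer has the same k/phi, the Lorenz curve collapses onto the diagonal, and L = 0, which is the straight-line case derived in the paper.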

Keywords: carbonate reservoirs, heterogeneity, homogeneous system, Dykstra-Parsons permeability variations (V), Lorenz coefficient (L)

Procedia PDF Downloads 204
856 Highly Active, Non-Platinum Metal Catalyst Material as Bi-Functional Air Cathode in Zinc Air Battery

Authors: Thirupathi Thippani, Kothandaraman Ramanujam

Abstract:

Current energy storage research has paid considerable attention to metal-air batteries as an attractive alternative energy source for the future. Metal-air batteries have the potential to significantly increase power density and decrease the cost of energy storage, and they can serve for a long time thanks to their high energy density, low pollution and light weight. The performance of these batteries is mostly restricted by the slow kinetics of the oxygen reduction reaction (ORR) and oxygen evolution reaction (OER) at the cathode during battery discharge and charge. The ORR and OER are conventionally carried out with precious metals (such as Pt) and metal oxides (such as RuO₂ and IrO₂) as separate catalysts. However, these metal-based catalysts regularly suffer from difficulties including high cost, low selectivity, poor stability and unfavorable environmental effects. Therefore, developing an active, stable, corrosion-resistant and inexpensive bi-functional catalyst material is mandatory for the commercialization of rechargeable zinc-air battery technology. We synthesized non-precious metal (NPM) catalysts comprising cobalt and N-doped multiwalled carbon nanotubes (N-MWCNTs-Co) by the solid-state pyrolysis (SSP) of melamine with Co₃O₄. N-MWCNTs-Co acts as an excellent electrocatalyst for both the ORR and the OER, and hence can be used in secondary metal-air batteries and in unitized regenerative fuel cells. It is important to study the OER and ORR at high concentrations of KOH, as most metal-air batteries employ KOH concentrations > 4 M. In the first 16 cycles of the zinc-air battery, N-MWCNTs-Co, 20 wt.% Pt/C or 20 wt.% IrO₂/C were used as air electrodes. In the ORR regime (the discharge profile of the zinc-air battery), the cell voltage exhibited by N-MWCNTs-Co was 44 and 83 mV higher (based on the 5th cycle) than that of 20 wt.% Pt/C and 20 wt.% IrO₂/C, respectively. To demonstrate this promise, a zinc-air battery was assembled and tested at a current density of 0.5 A g⁻¹ for 100 charge-discharge cycles.

Keywords: oxygen reduction reaction (ORR), oxygen evolution reaction (OER), non-platinum, zinc air battery

Procedia PDF Downloads 220
855 Adolescent Sleep Hygiene Scale and Adolescent Sleep Wake Scale: Factorial Analysis and Validation for Indian Population

Authors: Sataroopa Mishra, Mona Basker, Sneha Varkki, Ram Kumar Pandian, Grace Rebekah

Abstract:

Background: Sleep deprivation is a matter of public health importance among adolescents. We used the Adolescent Sleep Wake Scale and the Adolescent Sleep Hygiene Scale to determine the sleep quality and sleep hygiene, respectively, of school-going adolescents in Vellore city, India. The objective of the study was to perform a factorial analysis of the scales and validate them for use in the local population. Methods: An observational, questionnaire-based, cross-sectional study. Setting: A community-based school survey in a semi-urban setting in three schools in Vellore city. Data collection: A non-probability sample was collected from students studying in standards 9 and 11. Students filled in the Adolescent Sleep Wake Scale (ASWS) and the Adolescent Sleep Hygiene Scale (ASHS) translated into the vernacular language. Data analysis: Exploratory factorial analysis was used to examine the factor loading of the various components of the two scales. Results: 557 adolescents aged 12-17 years were included in the study. Exploratory factorial analysis of the ASHS indicated significant factor loading for 18 of the 28 items originally devised by the authors, and the scale was reconstructed into four domains instead of the nine domains of the original scale, namely sleep stability, cognitive-emotional, physiological/bedtime routine/behavioural arousal (activities before and during bedtime), and sleep environment (lighting and bed sharing). Factorial analysis of the ASWS showed factor loading of 18 of the 28 items of the original scale, reconstructed into five aspects of sleep quality. Conclusions: The factorial analysis yields a reconstructed scale useful for the local population. A confirmatory factorial analysis is subsequently planned to determine the internal consistency of the scale for the local population.
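Exploratory factor analysis of the kind described can be sketched with scikit-learn; the simulated responses below (two latent factors driving six items) are a stand-in and assume nothing about the actual ASWS/ASHS data:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n = 300  # respondents

# Simulate questionnaire items driven by two latent factors:
# items 0-2 load on factor 1, items 3-5 on factor 2.
latent = rng.normal(size=(n, 2))
true_loadings = np.array([[0.9, 0.0], [0.8, 0.1], [0.7, 0.0],
                          [0.0, 0.9], [0.1, 0.8], [0.0, 0.7]])
items = latent @ true_loadings.T + 0.3 * rng.normal(size=(n, 6))

# Fit a two-factor model and inspect the estimated loadings
fa = FactorAnalysis(n_components=2, random_state=0)
fa.fit(items)
loadings = fa.components_.T          # shape: (items, factors)
print(np.round(loadings, 2))
```

In practice the loadings would be rotated (e.g. varimax) and items with low loadings dropped, which is essentially how the 28-item scales were reduced to 18 items here.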

Keywords: factorial analysis, sleep hygiene, sleep quality, adolescent sleep scale

Procedia PDF Downloads 267
854 Entrepreneurial Leadership in a Startup Context: A Comparative Study on Two Egyptian Startup Businesses

Authors: Nada Basset

Abstract:

Problem Statement: The study examines the important role of leading change inside start-ups and highlights the challenges faced by an entrepreneur during the startup phase of the business. Research Methods/Procedures/Approaches: A qualitative research approach is taken, using the case study analysis method. A comparative study was made between two day-care nurseries in Greater Cairo. Non-probability purposive sampling was used, and a triangulation of semi-structured interviews, document analysis and participant observation was applied simultaneously. The in-depth case study analysis took place over a longitudinal study of four calendar months. Results/Findings: Findings demonstrated that leading change in an entrepreneurial setup must be initiated by the entrepreneur, who must also be the owner of the change process. Another important finding showed that the culture of change, although created by the entrepreneur, needs the support and engagement of followers, who should share the entrepreneur's value system and vision. Conclusions and Implications: An important implication suggests that during the first year of a start-up lifecycle, special emphasis must be placed on the recruitment and selection of personnel, who play a role in setting the new start-up culture and helping it grow or shrink. Another conclusion drawn is that the success of the change must be measured in both quantitative and qualitative terms. Increasing revenues and decreasing customer attrition rates (as quantitative KPIs) must be aligned with qualitative KPIs like customer satisfaction, employee satisfaction, organizational commitment and business reputation. Originality of Paper: The paper addresses change management in an entrepreneurial context, with an empirical application on an Egyptian start-up model providing a service to both adults and children. This distinguishes the research, as the constructs measured merge the levels of satisfaction of the employees, the decision-makers (parents of the children), and the users (the children).

Keywords: leadership, change management, entrepreneurship, startup business

Procedia PDF Downloads 167
853 Regeneration Nature of Rumex Species Root Fragment as Affected by Desiccation

Authors: Khalid Alshallash

Abstract:

Small fragments of the roots of some Rumex species, including R. obtusifolius and R. crispus, have been found to regenerate readily, contributing to the severity of infestations by these very common, widespread and difficult-to-control perennial weeds of agricultural crops and grasslands. Their root fragments are usually created during routine agricultural practices. We found that fresh root fragments of both species, containing 65-70% moisture, progressively lose their moisture content when desiccated under controlled growth-room conditions matching the summer weather of southeast England, with the greatest reduction occurring in the first 48 hours. The probability of shoot emergence and the time taken for emergence in glasshouse conditions were also reduced significantly by desiccation, with R. obtusifolius least affected up to 48 hours; the effects converged after 120 hours. In contrast, R. obtusifolius was significantly slower to emerge after up to 48 hours of desiccation, with the effects again converging after longer periods; R. crispus entirely failed to emerge at 120 hours. The dry weight of emerged shoots was not significantly different between the species until fragments were desiccated for 96 hours, when that of R. obtusifolius was significantly reduced; at 120 hours, R. obtusifolius did not emerge. In outdoor trials, desiccation for 24 or 48 hours had less effect on emergence when fragments were planted at the soil surface or at up to 10 cm depth, compared to deeper plantings. In both species, emergence was significantly lower when desiccated fragments were planted at 15 or 20 cm. The time taken for emergence was not significantly different between the species until fragments were planted at 15 or 20 cm, when R. obtusifolius was slower than R. crispus and was slowed further by increasing desiccation. Similar variation in the effects of increasing soil depth interacting with increasing desiccation was found in reductions in dry weight, number of tillers and leaf area, with R. obtusifolius generally, but not exclusively, better able to withstand the more extreme trial conditions. Our findings suggest that infestations of these highly troublesome weeds may be partly controlled by appropriate agricultural practices, notably exposing cut fragments to drying environmental conditions followed by deep burial.

Keywords: regeneration, root fragment, Rumex crispus, Rumex obtusifolius

Procedia PDF Downloads 83
852 Bacteriological and Mineral Analyses of Leachate Samples from Erifun Dumpsite, Ado-Ekiti, Ekiti State, Nigeria

Authors: Adebowale T. Odeyemi, Oluwafemi A. Ajenifuja

Abstract:

The leachate samples collected from the Erifun dumpsite along Federal Polytechnic Road, Ado-Ekiti, Ekiti State, were subjected to bacteriological and mineral analyses. Bacteriological estimation and isolation were done using serial dilution and pour-plating techniques. Antibiotic susceptibility testing was done using the agar disc diffusion technique. Atomic absorption spectrophotometry was used to analyze the heavy metal contents of the leachate samples. The bacterial and coliform counts ranged from 4.2 × 10⁵ CFU/ml to 2.97 × 10⁶ CFU/ml and from 5.0 × 10⁴ CFU/ml to 2.45 × 10⁶ CFU/ml, respectively. The isolated bacteria and their percentages of occurrence were Bacillus cereus (22%), Enterobacter aerogenes (18%), Staphylococcus aureus (16%), Proteus vulgaris (14%), Escherichia coli (14%), Bacillus licheniformis (12%) and Klebsiella aerogenes (4%). The mineral values ranged as follows: iron (21.30-25.60 mg/L), zinc (1.80-5.60 mg/L), copper (1.00-2.60 mg/L), chromium (0.50-1.30 mg/L), cadmium (0.20-1.30 mg/L), nickel (0.20-0.80 mg/L), lead (0.05-0.30 mg/L) and cobalt (0.03-0.30 mg/L); manganese was not detected in any sample. All the organisms isolated exhibited a high level of resistance to most of the antibiotics used. There is an urgent need to create awareness of the present situation of the leachate in Erifun and of the need to treat the nearby stream and other water sources before they are used for drinking and other domestic purposes. In conclusion, a good method of waste disposal is required in these communities to prevent leachate formation, percolation, and runoff into water bodies during the rainy season.

Keywords: antibiotic susceptibility, dumpsite, bacteriological analysis, heavy metal

Procedia PDF Downloads 127
851 Estimation of Carbon Uptake of Street Trees in Seoul and Plans to Increase Carbon Uptake by Improving Species

Authors: Min Woo Park, Jin Do Chung, Kyu Yeol Kim, Byoung Uk Im, Jang Woo Kim, Hae Yeul Ryu

Abstract:

Nine representative species among all the street trees were selected to estimate the amount of carbon dioxide absorbed by street trees in Seoul, calculating the biomass, the amount of carbon stored, and the annual absorption of carbon dioxide for each species. As of 2013, the planting distance of street trees in Seoul totalled 1,851,180 m, the number of planting lines was 1,287, the number of planted trees was 284,498, and 46 species of trees were planted. Plugging the numbers of each street tree species in Seoul into the per-species absorption figures yields 120,097 tons of biomass, 60,049.8 tons of carbon stored, and an annual carbon dioxide absorption of 11,294 t CO₂/year. The street ratio given in the 2022 road statistics for Seoul is 23.13%. If the street trees are assumed to increase at the same rate, the number of street trees in Seoul is calculated to be 294,823, the planting distance is estimated at 1,918,360 m, and the annual absorption of carbon dioxide is estimated at 11,704 t CO₂/year. Plans for improving the annual absorption of carbon dioxide by street trees were established based on this expected absorption. The first is to increase the number of planted street trees by adjusting the planting distance: reducing the current planting distance to 6 m would yield an annual absorption of 12,692.7 t CO₂/year. The second is to switch to species with high absorption rates, such as tulip trees: increasing the proportion of tulip trees to 30% by 2022 would raise the annual absorption of carbon dioxide to 17,804.4 t CO₂/year.
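The projected uptake follows from scaling the 2013 uptake by the growth in tree numbers (assuming the species mix, and hence the per-tree uptake, stays constant). A quick check with the abstract's own numbers:

```python
# Back-of-envelope check of the proportional scaling used in the abstract.
trees_2013 = 284_498
uptake_2013 = 11_294.0          # t CO2/year absorbed by street trees in 2013
trees_2022 = 294_823            # projected number of street trees

# Scale annual CO2 uptake in proportion to the number of trees.
uptake_2022 = uptake_2013 * trees_2022 / trees_2013
print(round(uptake_2022))       # reproduces the reported 11,704 t CO2/year
```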

Keywords: absorption of carbon dioxide, source of absorbing carbon dioxide, trees in city, improving species

Procedia PDF Downloads 347
850 Analysis of Earthquake Potential and Shock Level Scenarios in South Sulawesi

Authors: Takhul Bakhtiar

Abstract:

In South Sulawesi Province, the active Walanae Fault causes this area to experience frequent earthquakes. This study aims to determine the seismicity level of the region in order to estimate the potential for future earthquakes. A scenario model is then built from the estimated earthquake potential to determine the estimated shaking levels, as an effort to mitigate earthquake disasters in the region. The method used in this study is the Gutenberg-Richter method through a statistical likelihood approach, applied to earthquake data for the South Sulawesi region from 1972 to 2022. The research area lies at 3.5°-5.5° South Latitude and 119.5°-120.5° East Longitude and is divided into two segments: the northern segment at 3.5°-4.5° South Latitude and 119.5°-120.5° East Longitude, and the southern segment at 4.5°-5.5° South Latitude and 119.5°-120.5° East Longitude. The study uses earthquake parameters with magnitude > 1 and depth < 50 km. The analysis shows that the probability of an earthquake of magnitude M = 7 in the next ten years in the northern segment is estimated at 98.81%, with an estimated shaking level of VI-VII MMI around the cities of Pare-Pare, Barru, Pinrang and Soppeng, and IV-V MMI in the cities of Bulukumba, Selayar, Makassar and Gowa. In the southern segment, the probability of an earthquake of magnitude M = 7 in the next ten years is estimated at 32.89%, with an estimated shaking level of VI-VII MMI in the cities of Bulukumba, Selayar, Makassar and Gowa, and III-IV MMI around the cities of Pare-Pare, Barru, Pinrang and Soppeng.
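A minimal sketch of the Gutenberg-Richter/Poisson reasoning behind such ten-year probabilities; the a- and b-values here are hypothetical, chosen only to illustrate the steps, not the fitted values for either segment:

```python
import math

# Gutenberg-Richter law: log10 N(>=M) = a - b*M over the catalog period.
a, b = 7.5, 0.9          # hypothetical fit for a 50-year catalog
catalog_years = 50.0
M = 7.0

# Annual rate of events with magnitude >= M
n_per_catalog = 10 ** (a - b * M)
rate = n_per_catalog / catalog_years

# Poisson probability of at least one M >= 7 event in the next 10 years
T = 10.0
p = 1.0 - math.exp(-rate * T)
print(round(p, 4))
```

In practice the b-value would be estimated by maximum likelihood from the declustered catalog of each segment, which is the likelihood approach named in the abstract.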

Keywords: Gutenberg Richter, likelihood method, seismicity, shakemap and MMI scale

Procedia PDF Downloads 109
849 Aerodynamic Modeling Using Flight Data at High Angle of Attack

Authors: Rakesh Kumar, A. K. Ghosh

Abstract:

The paper presents the modeling of linear and nonlinear longitudinal aerodynamics using real flight data of the Hansa-3 aircraft gathered at low and high angles of attack. The Neural-Gauss-Newton (NGN) method has been applied to model the linear and nonlinear longitudinal dynamics and to estimate parameters from flight data. Unsteady aerodynamics due to flow separation at high angles of attack near stall has been included in the aerodynamic model using Kirchhoff's quasi-steady stall model. The NGN method is an algorithm that combines a Feed Forward Neural Network (FFNN) with Gauss-Newton optimization to estimate the parameters; it requires neither an a priori postulation of a mathematical model nor the solving of equations of motion. The NGN method was validated on real flight data generated at moderate angles of attack before being applied to the data at high angles of attack. The estimates obtained from compatible flight data using the NGN method were validated by comparison with wind tunnel values and with maximum likelihood estimates. Validation was also carried out by comparing the response of the measured motion variables with the response generated using the estimates for a different control input. Next, the NGN method was applied to real flight data generated by executing a well-designed quasi-steady stall maneuver. The results obtained in terms of stall characteristics and aerodynamic parameters were encouraging and sufficiently accurate to establish NGN as a method for modeling nonlinear aerodynamics from real flight data at high angles of attack.
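The Gauss-Newton half of NGN can be sketched on a toy lift-curve model; the data and coefficients below are synthetic stand-ins (not Hansa-3 values), and the FFNN stage that predicts the motion variables is omitted:

```python
import numpy as np

# Synthetic "flight data": lift coefficient vs. angle of attack in the
# linear regime, CL = CL0 + CL_alpha * alpha, plus measurement noise.
rng = np.random.default_rng(1)
alpha = np.linspace(0.0, 0.4, 40)              # angle of attack, rad
theta_true = np.array([0.2, 4.5])              # [CL0, CL_alpha]
CL = theta_true[0] + theta_true[1] * alpha + 0.01 * rng.normal(size=alpha.size)

def residuals(theta):
    return CL - (theta[0] + theta[1] * alpha)

def jacobian(theta):
    # d(residual)/d(theta) for this model
    return -np.column_stack([np.ones_like(alpha), alpha])

# Gauss-Newton iterations: solve J * step = -r in the least-squares sense.
theta = np.array([0.0, 1.0])                   # initial guess
for _ in range(10):
    r = residuals(theta)
    J = jacobian(theta)
    step, *_ = np.linalg.lstsq(J, -r, rcond=None)
    theta = theta + step
print(theta.round(2))
```

Near stall the model would be replaced by Kirchhoff's expression with the flow-separation point as an extra state, but the Gauss-Newton update itself is unchanged.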

Keywords: parameter estimation, NGN method, linear and nonlinear, aerodynamic modeling

Procedia PDF Downloads 427
848 Impact of Vehicle Travel Characteristics on Level of Service: A Comparative Analysis of Rural and Urban Freeways

Authors: Anwaar Ahmed, Muhammad Bilal Khurshid, Samuel Labi

Abstract:

The effect of trucks on the level of service is determined by considering passenger car equivalents (PCE) of trucks. The current version of the Highway Capacity Manual (HCM) uses a single PCE value for all trucks combined. However, the composition of truck traffic varies from location to location, so a single PCE value for all trucks may not correctly represent the impact of truck traffic at specific locations. Consequently, the present study developed separate PCE values for single-unit and combination trucks, for different freeways, to replace the single value provided in the HCM. Site-specific PCE values were developed using the concept of spatial lagging headways (the distance from the rear bumper of a leading vehicle to the rear bumper of the following vehicle) measured from field traffic data. The study used data from four locations on a single urban freeway and from three different rural freeways in Indiana. Three-stage least squares (3SLS) regression techniques were used to generate models that predict lagging headways for passenger cars, single-unit trucks (SUT), and combination trucks (CT). The estimated PCE values for single-unit and combination trucks on basic urban freeways (level terrain) were 1.35 and 1.60, respectively; for rural freeways they were 1.30 and 1.45, respectively. As expected, traffic variables such as vehicle flow rate and speed have significant impacts on vehicle headways. The results revealed that the use of separate PCE values for different truck classes can have a significant influence on LOS estimation.
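In its simplest headway-based form (the study itself fits 3SLS models with flow and speed as covariates), a truck class's PCE is the ratio of its mean lagging headway to that of passenger cars. The headway values below are hypothetical, chosen to reproduce the reported urban figures:

```python
# Mean spatial lagging headways (ft) by vehicle class - hypothetical values.
h_car = 75.0      # passenger cars
h_sut = 101.0     # single-unit trucks
h_ct  = 120.0     # combination trucks

# A truck's PCE is its headway expressed in passenger-car headways:
# it consumes that many "car lengths" of freeway space.
pce_sut = h_sut / h_car
pce_ct  = h_ct / h_car
print(round(pce_sut, 2), round(pce_ct, 2))
```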

Keywords: level of service, capacity analysis, lagging headway, trucks

Procedia PDF Downloads 341
847 Changes in Geospatial Structure of Households in the Czech Republic: Findings from Population and Housing Census

Authors: Jaroslav Kraus

Abstract:

Spatial information about demographic processes is a standard part of statistical outputs in the Czech Republic. That was also the case for the Population and Housing Census held in 2011, which is the starting point for a follow-up study devoted to two basic types of households: single-person households and households of one complete family. Together they make up more than 80 percent of all households, but their share and spatial structure have been changing over the long term. The increase in single-person households results from the long-term decrease in fertility and increase in divorce, but also from the growing possibility of living separately. There are regions in the Czech Republic with traditional demographic behavior, and regions, such as the capital Prague, with changing patterns. The population census is based, in line with international standards, on the concept of the currently living population. Three types of geospatial approaches are used for the analysis: (i) measures of geographic distribution; (ii) cluster mapping to identify the locations of statistically significant hot spots, cold spots, spatial outliers, and similar features; and (iii) a pattern-analysis approach as a starting point for more in-depth analyses (geospatial regression) in the future. For this type of data, the numbers of households by type are treated as distinct objects, and all events in a meaningfully delimited study region (e.g. municipalities) are included in the analysis. Commonly produced measures of central tendency and spread include identification of the center of the point set (at NUTS3 level) and of the median center and standard distance; weighted standard distance and standard deviational ellipses are also used.
Identifying that clustering exists in the census household datasets does not by itself provide a detailed picture of the nature and pattern of that clustering, so simple hot-spot (and cold-spot) identification techniques are applied to the datasets. Once the spatial structure of households is determined, a particular measure of autocorrelation can be constructed by defining a way of measuring the difference between location attribute values. The most widely used measure is Moran's I, which is applied to the municipal units for which the numerical ratio is calculated. Local statistics arise naturally out of any of the methods for measuring spatial autocorrelation and allow localized variants of almost any standard summary statistic. Local Moran's I will give an indication of the homogeneity and diversity of the household data at the municipal level.
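Global Moran's I, as used here, can be sketched on a toy lattice; the four "municipalities", their adjacency, and the attribute values below are invented:

```python
import numpy as np

# Toy example: attribute x (e.g. share of single-person households) on a
# chain of 4 municipalities with binary rook-contiguity weights.
x = np.array([0.30, 0.32, 0.18, 0.17])
W = np.array([            # symmetric spatial weights matrix
    [0, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [0, 0, 1, 0],
], dtype=float)

# Moran's I = (n / sum(W)) * (z' W z) / (z' z), with z the deviations.
z = x - x.mean()
n = x.size
moran_i = (n / W.sum()) * (z @ W @ z) / (z @ z)
print(round(moran_i, 3))
```

A positive value (here the two high-share municipalities neighbor each other, as do the two low-share ones) indicates spatial clustering; values near zero indicate spatial randomness. Local Moran's I decomposes the same quantity into one term per unit.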

Keywords: census, geo-demography, households, the Czech Republic

Procedia PDF Downloads 87
846 Risk Factors for Determining Anti-HBcore to Hepatitis B Virus Among Blood Donors

Authors: Tatyana Savchuk, Yelena Grinvald, Mohamed Ali, Ramune Sepetiene, Dinara Sadvakassova, Saniya Saussakova, Kuralay Zhangazieva, Dulat Imashpayev

Abstract:

Introduction: The problem of hepatitis B virus (HBV) occupies an important place in global health. The existing risk of HBV transmission through blood transfusion is associated with transfusion of blood taken from infected individuals during the "serological window" period or from patients with latent HBV infection, the marker of which is anti-HBcore. In the absence of information about other markers of hepatitis B, the presence of anti-HBcore suggests that a person may be actively infected or has had hepatitis B in the past and has immunity. Aim: To study the risk factors influencing positive anti-HBcore results among the donor population. Materials and Methods: The study was conducted in 2021 in the Scientific and Production Center of Transfusiology of the Ministry of Healthcare in Kazakhstan. Samples taken from blood donors were tested for anti-HBcore by CLIA on the Architect i2000SR (Abbott). A special questionnaire was developed for the blood donors' socio-demographic characteristics. Statistical analysis was conducted in R (version 4.1.1, USA, 2021). Results: 5,709 people aged 18 to 66 years were included in the study; the proportions of men and women were 68.17% and 31.83%, respectively, and the average age of the participants was 35.7 years. A weighted multivariable mixed-effects logistic regression analysis showed that age (p<0.001), ethnicity (p<0.05), and marital status (p<0.05) were statistically associated with anti-HBcore positivity. In particular, in an analysis adjusting for gender, nationality, education, marital status, family history of hepatitis, blood transfusion, injections, and surgical interventions, a one-year increase in age (adjOR=1.06, 95%CI:1.05-1.07) was associated with a 6% increase in the odds of an anti-HBcore-positive result.
Those of Russian ethnicity (adjOR=0.65, 95%CI:0.46-0.93) and representatives of other nationality groups (adjOR=0.56, 95%CI:0.37-0.85) had lower odds of being anti-HBcore positive than Kazakhs when controlling for the other covariates. Among singles, the odds of a positive anti-HBcore result were 29% lower (adjOR=0.71, 95%CI:0.57-0.89) than among married participants when adjusting for the other variables. Conclusions: Kazakhstan is a country of medium HBV endemicity (prevalence 2%-7%). The results of the study demonstrate the possibility of forming a profile of risk factors (age, nationality, marital status). In light of these data, it is recommended to pay closer attention to donor questionnaires by adding leading questions, and to improve preventive measures against HBV. Funding: This research was supported by a grant from Abbott Laboratories.
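The reported age effect translates directly between the odds-ratio and log-odds scales; a quick numeric illustration using the abstract's adjusted odds ratio:

```python
import math

# adjOR = 1.06 per one-year increase in age (from the abstract).
adj_or_age = 1.06
beta_age = math.log(adj_or_age)      # the logistic-regression coefficient

# A 10-year age difference multiplies the odds by exp(10 * beta) = 1.06**10.
or_10yr = math.exp(10 * beta_age)
print(round(or_10yr, 2))
```

So a donor 10 years older has roughly 1.8 times the odds of being anti-HBcore positive, all other covariates held fixed.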

Keywords: anti-HBcore, blood donor, donation, hepatitis B virus, occult hepatitis

Procedia PDF Downloads 88
845 Artificial Intelligence and Law

Authors: Mehrnoosh Abouzari, Shahrokh Shahraei

Abstract:

With the development of artificial intelligence in the present age, intelligent machines and systems have proven their actual and potential capabilities and are steadily increasing their presence in various fields of human life: industry, financial transactions, marketing, manufacturing, service affairs, politics, economics and various branches of the humanities. Therefore, despite the conservatism and prudence of the legal profession, the traces of artificial intelligence can be seen in various areas of law. Estimating the capability of judicial robotics, intelligent judicial decision-making systems, intelligent adjustment of defense and attorney strategies, and the consolidation and regulation of the different and scattered laws bearing on each case, so as to achieve judicial coherence, reduce divergence of opinion, and shorten prolonged hearings and the discontent they cause under the current legal system, through the design of rule-based, case-based and knowledge-based systems, are all efforts to apply AI in law. In this article, we identify the ways in which AI is applied in laws and regulations, identify the dominant concerns in this area, and outline the relationship between these two fields, in order to answer the question of how artificial intelligence can be used in different areas of law and what the implications of this application will be. The authors believe that the use of artificial intelligence in the three branches of legislative, judicial and executive power can be very effective for government decision-making and smart governance, helping to reach smart communities across human and geographical boundaries and toward humanity's long-held dream of a global village free of violence, partiality and human error. Therefore, in this article, we analyze the dimensions of how to use artificial intelligence in the three legislative, judicial and executive branches of government in order to realize its application.

Keywords: artificial intelligence, law, intelligent system, judge

Procedia PDF Downloads 100
844 Development and Validation of Selective Methods for Estimation of Valaciclovir in Pharmaceutical Dosage Form

Authors: Eman M. Morgan, Hayam M. Lotfy, Yasmin M. Fayez, Mohamed Abdelkawy, Engy Shokry

Abstract:

Two simple, selective, economical, safe, accurate, precise, and environmentally friendly methods were developed and validated for the quantitative determination of valaciclovir (VAL) in the presence of its related substances R1 (acyclovir) and R2 (guanine) in bulk powder and in the commercial pharmaceutical product containing the drug. Method A is a colorimetric method in which VAL selectively reacts with ferric hydroxamate; the developed color was measured at 490 nm over a concentration range of 0.4-2 mg/mL, with a percentage recovery of 100.05 ± 0.58 and a correlation coefficient of 0.9999. Method B is a reversed-phase ultra-performance liquid chromatography (UPLC) method, a technique considered superior to high-performance liquid chromatography with respect to speed, resolution, solvent consumption, time, and cost of analysis. Efficient separation was achieved on an Agilent Zorbax CN column using ammonium acetate (0.1%) and acetonitrile as the mobile phase in a linear gradient program. Elution was complete in less than 5 min, and ultraviolet detection was carried out at 256 nm over a concentration range of 2-50 μg/mL, with a mean percentage recovery of 100.11 ± 0.55 and a correlation coefficient of 0.9999. The proposed methods were fully validated as per International Conference on Harmonization specifications and effectively applied to the analysis of valaciclovir in pure form and in tablet dosage form. Statistical comparison of the results obtained by the proposed and official or reported methods revealed no significant difference in the performance of these methods with regard to accuracy and precision.
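The calibration statistics reported above (linearity over a working range, a correlation coefficient near 0.9999, and percentage recovery near 100%) follow the standard least-squares treatment used in analytical method validation. A minimal sketch in Python of that calculation, using hypothetical absorbance readings chosen only for illustration, not the paper's actual data:

```python
# Least-squares calibration curve and percent recovery, as used in
# spectrophotometric/chromatographic method validation.
# All data points below are hypothetical and only illustrate the math.

def linear_fit(x, y):
    """Ordinary least-squares slope, intercept, and correlation coefficient r."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    syy = sum((yi - my) ** 2 for yi in y)
    slope = sxy / sxx
    intercept = my - slope * mx
    r = sxy / (sxx * syy) ** 0.5
    return slope, intercept, r

# Hypothetical standards spanning a 0.4-2 mg/mL range (method A style).
conc = [0.4, 0.8, 1.2, 1.6, 2.0]             # mg/mL
absorbance = [0.21, 0.41, 0.62, 0.82, 1.03]  # a.u. at 490 nm

slope, intercept, r = linear_fit(conc, absorbance)

# Percent recovery: back-calculate a sample's concentration from the
# calibration line and compare it with its nominal (spiked) value.
sample_abs = 0.52
found = (sample_abs - intercept) / slope     # mg/mL
nominal = 1.0                                # mg/mL
recovery = 100 * found / nominal
```

A method is typically judged linear when r is very close to 1 over the stated range, and accurate when mean recovery across several spiked levels falls close to 100% with a small standard deviation, which is how figures such as 100.05 ± 0.58 arise.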

Keywords: hydroxamic acid, related substances, UPLC, valaciclovir

Procedia PDF Downloads 232