Search results for: binary number system
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 25437

2577 Rapid Building Detection in Population-Dense Regions with Overfitted Machine Learning Models

Authors: V. Mantey, N. Findlay, I. Maddox

Abstract:

The quality and quantity of global satellite data have been increasing exponentially in recent years as spaceborne systems become more affordable and the sensors themselves become more sophisticated. This is a valuable resource for many applications, including disaster management and relief. However, while more information can be valuable, the volume of data available is impossible to manually examine. Therefore, the question becomes how to extract as much information as possible from the data with limited manpower. Buildings are a key feature of interest in satellite imagery with applications including telecommunications, population models, and disaster relief. Machine learning tools are fast becoming one of the key resources to solve this problem, and models have been developed to detect buildings in optical satellite imagery. However, by and large, most models focus on affluent regions where buildings are generally larger and constructed further apart. This work is focused on the more difficult problem of detection in populated regions. The primary challenge with detecting small buildings in densely populated regions is both the spatial and spectral resolution of the optical sensor. Densely packed buildings with similar construction materials will be difficult to separate due to a similarity in color and because the physical separation between structures is either non-existent or smaller than the spatial resolution. This study finds that training models until they are overfitting the input sample can perform better in these areas than a more robust, generalized model. An overfitted model takes less time to fine-tune from a generalized pre-trained model and requires fewer input data. The model developed for this study has also been fine-tuned using existing, open-source, building vector datasets. This is particularly valuable in the context of disaster relief, where information is required in a very short time span. 
Leveraging existing datasets means that little to no manpower or time is required to collect data in the region of interest. The training period itself is also shorter for smaller datasets. Requiring less data means that only a few quality areas are necessary, and so any weaknesses or underpopulated regions in the data can be skipped over in favor of areas with higher quality vectors. In this study, a landcover classification model was developed in conjunction with the building detection tool to provide a secondary source to quality check the detected buildings. This has greatly reduced the false positive rate. The proposed methodologies have been implemented and integrated into a configurable production environment and have been employed for a number of large-scale commercial projects, including continent-wide DEM production, where the extracted building footprints are being used to enhance digital elevation models. Overfitted machine learning models are often considered too specific to have any predictive capacity. However, this study demonstrates that, in cases where input data is scarce, overfitted models can be judiciously applied to solve time-sensitive problems.

Keywords: building detection, disaster relief, mask-RCNN, satellite mapping

Procedia PDF Downloads 159
2576 Associations of the FTO Gene Polymorphism with Obesity and Metabolic Syndrome in Lithuanian Adult Population

Authors: Alina Smalinskiene, Janina Petkeviciene, Jurate Klumbiene, Vilma Kriaucioniene, Vaiva Lesauskaite

Abstract:

The worldwide prevalence of obesity has been increasing dramatically in the last few decades, and Lithuania is no exception. In 2012, every fifth adult (19% of men and 20.5% of women) was obese and every third was overweight. Association studies have highlighted the influence of SNPs on obesity, with particular focus on FTO rs9939609. Thus far, no data on the possible association of this SNP with obesity in the adult Lithuanian population have been reported. Here, for the first time, we demonstrate an association between the FTO rs9939609 homozygous AA genotype and increased BMI when compared to homozygous TT. Furthermore, a positive association was determined between the FTO rs9939609 variant and the risk of metabolic syndrome. Background: This study aimed to examine the associations of the fat mass and obesity-associated (FTO) gene rs9939609 variant with obesity and metabolic syndrome in the Lithuanian adult population. Materials and Methods: A cross-sectional health survey was carried out in randomly selected municipalities of Lithuania. The random sample was obtained from lists of 25-64-year-old inhabitants. The data from 1020 subjects were analysed. The rs9939609 SNP of the FTO gene was assessed using TaqMan assays (Applied Biosystems, Foster City, CA, USA). The Applied Biosystems 7900HT Real-Time Polymerase Chain Reaction System was used for detecting the SNPs. Results: The carriers of the AA genotype had the highest mean values of BMI and waist circumference (WC) and the highest risk of obesity. Interactions ‘genotype x age’ and ‘genotype x physical activity’ in determining BMI and WC were shown. Neither lipid and glucose levels nor blood pressure was associated with rs9939609 independently of BMI. In the age group of 25-44 years, an association between the FTO genotypes and metabolic syndrome was found. Conclusion: The FTO rs9939609 variant was significantly associated with BMI and WC, and with the risk of obesity, in the Lithuanian population. 
The FTO polymorphism might have a greater influence on weight status in younger individuals and in subjects with a low level of physical activity.

Keywords: obesity, metabolic syndrome, FTO gene, polymorphism, Lithuania

Procedia PDF Downloads 415
2575 The Association between IFNAR2 and Dpp9 Genes Single Nucleotide Polymorphisms Frequency with COVID-19 Severity in Iranian Patients

Authors: Sima Parvizi Omran, Rezvan Tavakoli, Mahnaz Safari, Mohammadreza Aghasadeghi, Abolfazl Fateh, Pooneh Rahimi

Abstract:

Background: SARS-CoV-2, a single-stranded RNA betacoronavirus, caused the global outbreak of coronavirus disease 2019 (COVID-19). Several clinical and scientific concerns are raised by this pandemic. Genetic factors can contribute to pathogenesis and disease susceptibility. There are single-nucleotide polymorphisms (SNPs) in many immune-system genes that affect the expression of specific genes or the functions of proteins involved in immune responses against viral infections. In this study, we analyzed the impact of polymorphisms in the interferon alpha and beta receptor subunit 2 (IFNAR2) and dipeptidyl peptidase 9 (Dpp9) genes, together with clinical parameters, on susceptibility and resistance to coronavirus disease (COVID-19). Methods: A total of 330 SARS-CoV-2-positive patients (188 survivors and 142 non-survivors) were included in this study. The SNPs on IFNAR2 (rs2236757) and Dpp9 (rs2109069) were genotyped by the polymerase chain reaction-restriction fragment length polymorphism (PCR-RFLP) method. Results: In surviving patients, the frequency of the favourable IFNAR2 genotype (rs2236757 GC) was significantly higher than in non-survivors, and Dpp9 (rs2109069 AT) genotypes were associated with the severity of COVID-19 infection. Conclusions: This study demonstrated that disease severity in COVID-19 patients was strongly associated with clinical parameters and unfavourable IFNAR2 and Dpp9 SNP genotypes. To establish the relationship between host genetic factors and the severity of COVID-19 infection, further studies are needed in multiple parts of the world.

Keywords: SARS-CoV-2, COVID-19, interferon alpha and beta receptor subunit 2, dipeptidyl peptidase 9, single-nucleotide polymorphisms

Procedia PDF Downloads 143
2574 Systematic Analysis of Immune Response to Biomaterial Surface Characteristics

Authors: Florian Billing, Soren Segan, Meike Jakobi, Elsa Arefaine, Aliki Jerch, Xin Xiong, Matthias Becker, Thomas Joos, Burkhard Schlosshauer, Ulrich Rothbauer, Nicole Schneiderhan-Marra, Hanna Hartmann, Christopher Shipp

Abstract:

The immune response plays a major role in implant biocompatibility, but an understanding of how to design biomaterials for specific immune responses is yet to be achieved. We aimed to better understand how changing certain material properties can drive immune responses. To this end, we tested immune responses to experimental implant coatings that vary in specific characteristics. A layer-by-layer approach was employed to vary surface charge and wettability. Human-based in vitro models (THP-1 macrophages and primary peripheral blood mononuclear cells (PBMCs)) were used to assess immune responses using multiplex cytokine analysis, flow cytometry (CD molecule expression) and microscopy (cell morphology). We observed dramatic differences in immune response due to specific alterations in coating properties. For example, altering the surface charge of coating A from anionic to cationic resulted in substantial elevation of the pro-inflammatory molecules IL-1beta, IL-6, TNF-alpha and MIP-1beta, while the pro-wound healing factor VEGF was significantly down-regulated. We also observed changes in cell surface marker expression in relation to altered coating properties, such as CD16 on NK cells and HLA-DR on monocytes. We furthermore observed changes in the morphology of THP-1 macrophages following cultivation on different coatings. Analysis of the correlation between these morphological changes and the cytokine expression profile is ongoing. Targeted changes in biomaterial properties can produce vast differences in immune response. The properties of the coatings examined here may, therefore, be a method to direct specific biological responses in order to improve implant biocompatibility.

Keywords: biomaterials, coatings, immune system, implants

Procedia PDF Downloads 173
2573 Molecular Detection and Antibiotics Resistance Pattern of Extended-Spectrum Beta-Lactamase Producing Escherichia coli in a Tertiary Hospital in Enugu, Nigeria

Authors: I. N. Nwafia, U. C. Ozumba, M. E. Ohanu, S. O. Ebede

Abstract:

Antibiotic resistance is increasing globally and has become a major health challenge. Extended-spectrum beta-lactamase (ESBL) is clinically important because ESBL genes are mostly plasmid-encoded, and these plasmids frequently carry genes encoding resistance to other classes of antimicrobials, thereby limiting antibiotic options in the treatment of infections caused by these organisms. The specific objectives of this study were to determine the prevalence of ESBL production in Escherichia coli, to determine the antibiotic susceptibility pattern of ESBL-producing Escherichia coli, to detect the TEM, SHV and CTX-M genes, and to identify risk factors for acquisition of ESBL-producing Escherichia coli. The protocol of the study was approved by the Health Research and Ethics Committee of the University of Nigeria Teaching Hospital (UNTH), Enugu. It was a descriptive cross-sectional study involving all hospitalized patients in UNTH from whose specimens Escherichia coli was isolated during the study period. The samples analysed were urine, wound swabs, blood and cerebrospinal fluid. These samples were cultured on 5% sheep blood agar and MacConkey agar (Oxoid Laboratories, Cambridge, UK) and incubated at 35-37°C for 24 hours. Escherichia coli was identified with standard biochemical tests and confirmed using the API 20E auxanogram (bioMerieux, Marcy l'Etoile, France). Antibiotic susceptibility testing was done by the disc diffusion method and interpreted according to the Clinical and Laboratory Standards Institute guideline. ESBL production was confirmed using ESBL Epsilometer test strips (Liofilchem srl, Italy). The ESBL bla genes were detected by polymerase chain reaction after extraction of DNA with a plasmid mini-prep kit (Jena Bioscience, Jena, Germany). Data analysis was with appropriate descriptive and inferential statistics. 
One hundred and six (53.00%) of the 200 isolates were from urine, followed by isolates from swab specimens, 53 (26.50%), and the fewest isolates, 4 (2.00%), were from blood (P = 0.096). Seventy (35.00%) of the 200 isolates were confirmed positive for ESBL production. Forty-two (60.00%) of these isolates were from female patients and 28 (40.00%) from male patients (P = 0.13). Sixty-eight (97.14%) of the isolates were susceptible to imipenem, while all isolates were resistant to ampicillin, chloramphenicol and tetracycline. From the 70 positive isolates, the ESBL genes detected by polymerase chain reaction were blaCTX-M (n=26; 37.14%), blaTEM (n=7; 10.00%), blaSHV (n=2; 2.86%), blaCTX-M/TEM (n=7; 10.0%), blaCTX-M/SHV (n=14; 20.0%) and blaCTX-M/TEM/SHV (n=10; 14.29%). No gene was detected in 4 (5.71%) of the isolates. The risk factor most strongly associated with infection by ESBL-producing Escherichia coli was antibiotic use within the previous 3 months, followed by admission to the intensive care unit, recent surgery, and urinary catheterization. In conclusion, ESBL production was detected in roughly one of every three Escherichia coli isolates (35.00%), with CTX-M the predominant gene detected. This knowledge will enable appropriate measures toward improving patient health care, antibiotic stewardship, research, and infection control in the hospital.

Keywords: antimicrobial, Escherichia coli, extended spectrum beta lactamase, resistance

Procedia PDF Downloads 281
2572 Security of Database Using Chaotic Systems

Authors: Eman W. Boghdady, A. R. Shehata, M. A. Azem

Abstract:

Database (DB) security demands permitting the actions of authorized users on the DB and the objects inside it while prohibiting those of unauthorized users and intruders. Organizations that run successfully demand the confidentiality of their DBs: they do not allow unauthorized access to their data/information, and they demand assurance that their data is protected against any malicious or accidental modification. DB protection and confidentiality are the security concerns. There are four types of controls for DB protection: access control, information flow control, inference control, and cryptographic control. Cryptographic control is considered the backbone of DB security; it secures the DB by encryption during storage and communication. Current cryptographic techniques are classified into two types: traditional classical cryptography using standard algorithms (DES, AES, IDEA, etc.) and chaos cryptography using continuous (Chua, Rössler, Lorenz, etc.) or discrete (Logistic, Henon, etc.) systems. The defining characteristic of chaos is its extreme sensitivity to the initial conditions of the system. In this paper, DB-security systems based on chaotic algorithms are described. Pseudo-Random Number Generators (PRNGs) based on the different chaotic algorithms are implemented in Matlab, and their statistical properties are evaluated using NIST and other statistical test suites. These algorithms are then used to secure a conventional DB (plaintext), where the statistical properties of the ciphertext are also tested. To increase the complexity of the PRNGs and to pass all the NIST statistical tests, we propose two hybrid PRNGs: one based on two chaotic Logistic maps and another based on two chaotic Henon maps, where the two chaotic maps run side by side, starting from random independent initial conditions and parameters (the encryption keys). The resulting hybrid PRNGs passed the NIST statistical test suite.
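As a rough illustration of the hybrid construction described above (a sketch, not the paper's Matlab implementation), the code below runs two independently keyed logistic maps side by side and XORs their quantized outputs into a keystream for encrypting a database field. The parameter r = 3.99, the transient length, and the seed values are illustrative assumptions.

```python
def logistic_stream(x0, r=3.99, skip=100):
    """Yield keystream bytes from one logistic map x -> r*x*(1-x),
    discarding a transient of `skip` iterations first."""
    x = x0
    for _ in range(skip):
        x = r * x * (1 - x)
    while True:
        x = r * x * (1 - x)
        yield int(x * 256) & 0xFF  # quantize the orbit to 8 bits

def hybrid_keystream(key1, key2, n):
    """The hybrid step: combine two independently seeded maps by XOR."""
    s1, s2 = logistic_stream(key1), logistic_stream(key2)
    return [next(s1) ^ next(s2) for _ in range(n)]

def xor_cipher(data: bytes, key1: float, key2: float) -> bytes:
    """Encrypt or decrypt (XOR is its own inverse) a plaintext field."""
    ks = hybrid_keystream(key1, key2, len(data))
    return bytes(b ^ k for b, k in zip(data, ks))
```

Since the maps are deterministic, the same pair of keys regenerates the same keystream, so a second XOR pass recovers the plaintext; the sensitivity to the initial conditions means a slightly perturbed key produces an unrelated keystream.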

Keywords: algorithms and data structure, DB security, encryption, chaotic algorithms, Matlab, NIST

Procedia PDF Downloads 257
2571 Knowledge and Attitude Towards Strabismus Among Adult Residents in Woreta Town, Northwest Ethiopia: A Community-Based Study

Authors: Henok Biruk Alemayehu, Kalkidan Berhane Tsegaye, Fozia Seid Ali, Nebiyat Feleke Adimassu, Getasew Alemu Mersha

Abstract:

Background: Strabismus is a visual disorder in which the eyes are misaligned and point in different directions. Untreated strabismus can lead to amblyopia, loss of binocular vision, and social stigma due to its appearance. Since knowledge is pertinent for early screening and prevention of strabismus, the main objective of this study was to assess knowledge of and attitudes toward strabismus in Woreta town, Northwest Ethiopia. Providing data in this area is important for planning health policies. Methods: A community-based cross-sectional study was done in Woreta town from April–May 2020. The sample size was determined using a single population proportion formula, taking a 50% proportion of good knowledge, a 95% confidence level, a 5% margin of error, and a 10% non-response rate. Accordingly, the final computed sample size was 424. All four kebeles were included in the study. There were 42,595 people in total, with 39,684 adults and 9229 households. A sampling fraction ''k'' was obtained by dividing the number of households by the calculated sample size of 424. Systematic random sampling with proportional allocation was used to select the participating households, with a sampling fraction (k) of 21, i.e., every 21st household was included in the study. Where a household had more than one adult, one individual was selected randomly by the lottery method to obtain the final sample. The data were collected through face-to-face interviews with a pretested, semi-structured questionnaire, which was translated from English to Amharic and back to English to maintain consistency. Data were entered using Epi Data version 3.1, then processed and analyzed with SPSS version 20. Descriptive and analytical statistics were employed to summarize the data. A p-value of less than 0.05 was used to declare statistical significance. Result: A total of 401 individuals aged over 18 years participated, with a response rate of 94.5%. 
Of those who responded, 56.6% were males, and 36.9% of all participants were illiterate. The proportion of people with poor knowledge of strabismus was 45.1%, while 53.9% of respondents had a favorable attitude. Older age, higher educational level, a history of eye examination, and a family history of strabismus were significantly associated with good knowledge of strabismus. A higher educational level, older age, and having heard about strabismus were significantly associated with a favorable attitude toward strabismus. Conclusion and recommendation: The proportions with good knowledge and a favorable attitude towards strabismus were lower than previously reported in Gondar City, Northwest Ethiopia. There is a need for health education and promotion campaigns on strabismus in the community: what strabismus is, its possible treatments, and the need to bring children to the eye care center for early diagnosis and treatment. We advocate that prospective research employ qualitative study designs and explore causal-effect relationships.
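The sampling arithmetic described in the Methods can be sketched as follows. The fixed random seed is an assumption added for reproducibility; the study used a genuinely random start.

```python
import random

HOUSEHOLDS = 9229  # households across the four kebeles
TARGET_N = 424     # sample size from the single population proportion formula

# Sampling interval k: households divided by the target sample size
k = HOUSEHOLDS // TARGET_N

# Systematic random sampling: pick a random start in the first interval,
# then visit every k-th household until the target sample is reached.
rng = random.Random(2020)
start = rng.randint(1, k)
selected = [start + i * k for i in range(TARGET_N)]
```

With 9229 households and a target of 424, the interval works out to 21, matching the sampling fraction reported in the abstract.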

Keywords: strabismus, knowledge, attitude, Woreta

Procedia PDF Downloads 47
2570 Climate Change Adaptation Interventions in Agriculture and Sustainable Development through South-South Cooperation in Sub-Saharan Africa

Authors: Nuhu Mohammed Gali, Kenichi Matsui

Abstract:

Climate change poses a significant threat to agriculture and food security in Africa. The UNFCCC recognized the need to address climate change adaptation in the broader context of sustainable development. African countries have initiated a governance system for adapting and responding to climate change in their Nationally Determined Contributions (NDCs). Despite implementation limitations, Africa’s adaptation initiatives highlight the need to strengthen and expand adaptation responses. This paper examines the extent to which South-South cooperation facilitates the implementation of adaptation actions between nations for agriculture and sustainable development. We conducted a literature review and content analysis of reports prepared by international organizations, reflecting the diversity of adaptation activities taking place in Sub-Saharan Africa. Our analysis of the connection between adaptation and NDCs showed that climate actions are mainstreamed into sustainable development. In many countries, the NDC adaptation actions for agriculture aimed to strengthen the resilience of the poor. We found that climate-smart agriculture is at the core of many countries’ targets to end hunger. South-South Cooperation, in terms of capacity, technology, and financial support, can help countries achieve their climate action priorities and the Sustainable Development Goals (SDGs). We found that inadequate policy and regulatory frameworks between countries, differences in development priorities and strategies, poor communication, inadequate coordination, and a lack of local engagement and advocacy are key barriers to South-South Cooperation in Africa. We recommend a multi-dimensional partnership, provision of financial resources, and a systemic approach to coordination and engagement to promote and achieve the potential of SSC in Africa.

Keywords: climate change, adaptation, food security, sustainable development goals

Procedia PDF Downloads 111
2569 Comparative Assessment of Geocell and Geogrid Reinforcement for Flexible Pavement: Numerical Parametric Study

Authors: Anjana R. Menon, Anjana Bhasi

Abstract:

The development of highways and railways plays a crucial role in a nation’s economic growth. While rigid concrete pavements are durable with high load-bearing characteristics, growing economies mostly rely on flexible pavements, which are easier to construct and more economical. The strength of a flexible pavement is based on the strength of the subgrade and the load distribution characteristics of the intermediate granular layers. In this scenario, to meet economy and strength criteria simultaneously, it is imperative to strengthen and stabilize the load-transferring layers, namely the subbase and base. Geosynthetic reinforcement in planar and cellular forms has proven effective in improving soil stiffness and providing a stable load transfer platform. Studies have shown the relative superiority of the cellular form (geocells) over planar geosynthetic forms like geogrids, owing to the additional confinement of infill material and the pocket effect arising from vertical deformation. Hence, the present study investigates the efficiency of geocells over single- and multiple-layer geogrid reinforcements through a series of three-dimensional model analyses of a flexible pavement section under a standard repetitive wheel load. The stress transfer mechanisms and deformation profiles under various reinforcement configurations are also studied. Geocell reinforcement is observed to take up a higher proportion of the stress caused by traffic loads compared to single- and double-layer geogrid reinforcements. The efficiency of single geogrid reinforcement reduces with an increase in embedment depth, and the contribution of the lower geogrid is insignificant in the double-geogrid reinforced system.

Keywords: geocell, geogrid, flexible pavement, repetitive wheel load, numerical analysis

Procedia PDF Downloads 63
2568 The Effect of Antibiotic Use on Blood Cultures: Implications for Future Policy

Authors: Avirup Chowdhury, Angus K. McFadyen, Linsey Batchelor

Abstract:

Blood cultures (BCs) are an important aspect of the management of the septic patient, identifying the underlying pathogen and its antibiotic sensitivities. However, while the current literature outlines indications for initial BCs, there is little guidance on repeat sampling in the following 5-day period and little information on how antibiotic use can affect the usefulness of this investigation. A retrospective cohort study was conducted of inpatients who had undergone 2 or more BCs within 5 days between April 2016 and April 2017 at a 400-bed hospital in the west of Scotland and had received antibiotic therapy between the first and second BCs. The data for BC sampling were collected from the electronic microbiology database and cross-referenced with data from the hospital electronic prescribing system. Overall, 283 BCs were included in the study, taken from 92 patients (mean 3.08 cultures per patient, range 2-10). All 92 patients had initial BCs, of which 83 were positive (90%). 65 had a further sample within 24 hours of commencement of antibiotics, with 35 positive (54%). 23 had samples within 24-48 hours, with 4 (17%) positive; 12 patients had sampling at 48-72 hours, 12 at 72-96 hours, and 10 at 96-120 hours, with none positive. McNemar’s Exact Test was used to calculate statistical significance for patients who received blood cultures in multiple time blocks (initial, < 24h, 24-120h, > 120h). For initial vs. < 24h-post BCs (53 patients tested), the proportion of positives fell from 46/53 to 29/53 (one-tailed P=0.002, OR 3.43, 95% CI 1.48-7.96). For initial vs. 24-120h (n=42), the proportions were 38/42 and 4/42, respectively (P < 0.001, OR 35.0, 95% CI 4.79-255.48). For initial vs. > 120h (n=36), these were 33/36 and 2/36 (P < 0.001, OR ∞). These were also calculated for a positive in initial or < 24h vs. 24-120h (n=42), with proportions of 41/42 and 4/42 (P < 0.001, OR 38.0, 95% CI 5.22-276.78); and for initial or < 24h vs. > 120h (n=36), with proportions of 35/36 and 2/36, respectively (P < 0.001, OR ∞). These data suggest that taking an initial BC followed by a BC within 24 hours of antibiotic commencement would maximise blood culture yield while minimising the risk of false negative results. This could potentially remove the need for as many as 46% of BC samples without adversely affecting patient care. BC yield decreases sharply after 48 hours of antibiotic use and may not provide any clinically useful information after this time. Further multi-centre studies would validate these findings and provide a foundation for future health policy generation.
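The paired comparison above can be reproduced with an exact (binomial) McNemar test, sketched below. The discordant-pair counts (b = 24 pairs positive initially but negative at < 24h, c = 7 the reverse) are not given in the abstract; they are hypothetical values chosen to be consistent with the reported OR of 3.43 and one-tailed P = 0.002 for the initial vs. < 24h comparison (46/53 vs. 29/53).

```python
from math import comb

def mcnemar_exact_one_tailed(b, c):
    """One-tailed exact McNemar test: under H0, the b discordant
    'switches' out of n = b + c pairs follow Binomial(n, 0.5).
    Assumes b >= c (the observed direction of effect)."""
    n = b + c
    return sum(comb(n, k) for k in range(b, n + 1)) / 2 ** n

# Hypothetical discordant pairs for initial vs. <24h (see lead-in)
b, c = 24, 7
p = mcnemar_exact_one_tailed(b, c)
odds_ratio = b / c  # McNemar's paired odds ratio is b/c
```

With these counts the exact test returns P ≈ 0.002 and an odds ratio of 3.43, matching the figures reported for the first comparison.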

Keywords: antibiotics, blood culture, efficacy, inpatient

Procedia PDF Downloads 161
2567 Participatory Monitoring Strategy to Address Stakeholder Engagement Impact in Co-creation of NBS Related Project: The OPERANDUM Case

Authors: Teresa Carlone, Matteo Mannocchi

Abstract:

In the last decade, a growing number of international organizations have pushed toward green solutions for adaptation to climate change. This is particularly true in the fields of Disaster Risk Reduction (DRR) and land planning, where Nature-Based Solutions (NBS) have been sponsored through funding programs and planning tools. Stakeholder engagement and co-creation of NBS are growing as a practice and research field in environmental projects, fostering the consolidation of a multidisciplinary socio-ecological approach to addressing hydro-meteorological risk. Even though research and financial interest are spreading steadily, the NBS mainstreaming process is still at an early stage, as innovative concepts and practices are difficult for a multitude of different actors to fully accept and adopt in a way that produces wide-scale societal change. The monitoring and impact evaluation of stakeholders’ participation in these processes represent a crucial aspect and should be seen as a continuous, integral element of the co-creation approach. However, setting up a fit-for-purpose monitoring strategy for different contexts is not an easy task, and multiple challenges emerge. In this scenario, the Horizon 2020 OPERANDUM project, designed to address the major hydro-meteorological risks that negatively affect European rural and natural territories through the co-design, co-deployment, and assessment of Nature-Based Solutions, represents a valid case study for testing a monitoring strategy from which to build a broader, general, and scalable monitoring framework. Applying a participative monitoring methodology based on a selected list of indicators combining quantitative and qualitative data developed within the project's activities, the paper proposes an experimental in-depth analysis of the impact of stakeholder engagement in the co-creation process of NBS. 
The main focus will be to identify and analyze which factors increase knowledge, social acceptance, and mainstreaming of NBS, also providing an experience-based guideline that could be integrated into the stakeholder engagement strategy of current and future strongly collaborative environmental projects such as OPERANDUM. Measurement will be carried out through surveys administered at different times to the same sample of stakeholders (policy makers, businesses, researchers, interest groups). Changes will be recorded and analyzed through focus groups in order to highlight causal explanations and to assess the proposed list of indicators, so as to steer the conduct of similar activities in other projects and/or contexts. The aim of the paper is to contribute to the construction of a more structured and shared corpus of indicators that can support the evaluation of activities for involving various levels of stakeholders in the co-production, planning, and implementation of NBS to address climate change challenges.

Keywords: co-creation and collaborative planning, monitoring, nature-based solution, participation & inclusion, stakeholder engagement

Procedia PDF Downloads 98
2566 Studies on the Recovery of Calcium and Magnesium from Red Seawater by Nanofiltration Membrane

Authors: Mohamed H. Sorour, Hayam F. Shaalan, Heba A. Hani, Mahmoud A. El-Toukhy

Abstract:

This paper reports results for a polymeric nanofiltration (NF) membrane used to recover divalent ions (calcium and magnesium) from Red Sea water. Pilot plant experiments were carried out using an Alfa-Laval (NF 2517/48) membrane module. The system was operated in both total recirculation mode (permeate and brine) and brine recirculation mode under a hydraulic pressure of 15 bar. The impacts of some chelating agents on both flux and rejection were also investigated. Results indicated that pure water permeability ranges from 17 to 85.5 L/m²h at 2-15 bar. Comparison with seawater permeability under the same operating pressures reveals lower values of 8.9-31 L/m²h, manifesting the effect of the osmotic pressure of seawater. Overall total dissolved solids (TDS) reduction was almost constant without chelating agents. Contrary to expectations, the use of the chelating agents N-(2-hydroxyethyl) ethylenediamine-N,N´,N´-triacetic acid (HEDTA) and ethylene glycol bis(2-aminoethyl ether)-N,N,N´,N´-tetraacetic acid (EGTA) caused a flux decline of about 3-15%. Analysis of rejection data in total recirculation mode showed reasonable rejection values of 35%, 59% and 90% for Ca, Mg and SO₄, respectively. Operating in brine recirculation mode decreased rejection to 33%, 56% and 86% for Ca, Mg and SO₄, respectively. The chelating agents had no substantial effect on NF membrane performance except for increasing total Ca rejection to 48% and 65% for EGTA and HEDTA, respectively. Results, in general, confirmed the powerful separation capability of NF technology for softening and recovery of divalent ions from seawater. It is anticipated that increasing the operating pressure beyond the limits of our investigations would improve rejection and flux values. A trade-off should be considered between operating cost (due to higher pressure) and the marginal benefits manifested by the expected improved performance. 
The experimental results fit well with the formulated rejection empirical correlations and the published ones.
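The rejection figures quoted above follow the standard observed-rejection definition R = 1 − Cp/Cf. A short illustrative sketch, with hypothetical feed and permeate concentrations chosen only to reproduce the reported percentages for total recirculation mode:

```python
# Observed rejection of a membrane: R = 1 - C_permeate / C_feed.
# The feed/permeate concentrations below are hypothetical values chosen
# to reproduce the rejections reported in the abstract (35% Ca, 59% Mg,
# 90% SO4 in total recirculation mode); they are not measured data.

def observed_rejection(c_feed, c_permeate):
    """Return fractional observed rejection R = 1 - Cp/Cf."""
    return 1.0 - c_permeate / c_feed

samples = {  # (feed mg/L, permeate mg/L) -- hypothetical
    "Ca":  (400.0, 260.0),
    "Mg":  (1300.0, 533.0),
    "SO4": (2700.0, 270.0),
}

for ion, (cf, cp) in samples.items():
    print(f"{ion}: R = {observed_rejection(cf, cp):.0%}")
```

The same one-liner applied to the brine recirculation data gives the lower 33%/56%/86% figures, since recirculating only the brine raises the effective feed concentration.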

Keywords: nanofiltration, seawater, recovery, calcium, magnesium

Procedia PDF Downloads 149
2565 Classifying Turbomachinery Blade Mode Shapes Using Artificial Neural Networks

Authors: Ismail Abubakar, Hamid Mehrabi, Reg Morton

Abstract:

Currently, extensive signal analysis is performed in order to evaluate the structural health of turbomachinery blades. This approach is constrained by time and by the availability of qualified personnel; thus, new approaches to blade dynamics identification that provide faster and more accurate results are sought. Generally, modal analysis is employed to acquire the dynamic properties of a vibrating turbomachinery blade and is widely adopted in the condition monitoring of blades. The analysis provides useful information on the different modes of vibration and their natural frequencies by exploring the different shapes a blade can take up during vibration, since every mode shape has a corresponding natural frequency. Experimental modal testing and finite element analysis are the traditional methods used to evaluate mode shapes, but they have limited applicability to real-life scenarios and so cannot easily support a robust condition monitoring scheme. Real-time mode shape evaluation requires rapid assessment at low computational cost, for which the traditional techniques are unsuitable. In this study, an artificial neural network (ANN) is developed to evaluate the mode shape of a lab-scale rotating blade assembly, using results from finite element modal analysis as training data. The network performance evaluation shows that the ANN is capable of mapping the correlation between natural frequencies and mode shapes, without the need for extensive signal analysis. The approach offers the advantages of simplicity of implementation, accuracy of prediction, and the ability to classify mode shapes in real time. The work paves the way for further development of a robust condition monitoring system that incorporates real-time mode shape evaluation.
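As an illustration of the frequency-to-mode-shape mapping described above, the following is a minimal sketch (not the authors' network): a single-layer softmax classifier trained on synthetic natural-frequency vectors standing in for the finite element modal results. The class count, frequency values, and noise level are all hypothetical.

```python
import numpy as np

# Hypothetical: 3 mode-shape classes, each with a characteristic set of
# 4 natural frequencies (Hz); FE "training data" is simulated with noise.
rng = np.random.default_rng(0)
centers = np.array([[120., 340., 760., 1500.],
                    [150., 300., 800., 1400.],
                    [100., 420., 700., 1650.]])
X = np.vstack([c + rng.normal(0, 5.0, size=(50, 4)) for c in centers])
y = np.repeat(np.arange(3), 50)

# Standardize features, then train a single-layer softmax classifier by
# gradient descent (a minimal stand-in for the ANN in the study).
X = (X - X.mean(0)) / X.std(0)
W = np.zeros((4, 3))
b = np.zeros(3)
Y = np.eye(3)[y]                      # one-hot targets
for _ in range(500):
    logits = X @ W + b
    p = np.exp(logits - logits.max(1, keepdims=True))
    p /= p.sum(1, keepdims=True)      # softmax probabilities
    grad = (p - Y) / len(X)           # cross-entropy gradient
    W -= 0.5 * X.T @ grad
    b -= 0.5 * grad.sum(0)

acc = (np.argmax(X @ W + b, 1) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

Because the natural-frequency vectors of distinct mode shapes are well separated, even this linear classifier recovers the mapping; the study's network would of course be trained on real FE modal results rather than synthetic clusters.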

Keywords: modal analysis, artificial neural network, mode shape, natural frequencies, pattern recognition

Procedia PDF Downloads 141
2564 Nitrous Oxide Wastage: Putting Strategies “In the Pipeline” to Reduce Carbon Emissions from Nitrous Oxide

Authors: F. Gallop, C. Ward, M. Zaky, M. Vaghela, R. Sabaratnam

Abstract:

Nitrous oxide (N₂O) has been used in anaesthesia for over 150 years owing to its advantageous physical and pharmacological properties. However, with a global warming potential of 310, there is an urgent responsibility to reduce its usage and emission. Anecdotal evidence in our hospital trust suggested minimal N₂O usage, yet our theatres receive a staggering supply; this warranted further investigation. We used a data collection tool to prospectively capture quantitative and qualitative data on N₂O cases during one week, recording demographics, N₂O indications, clinical management, and total N₂O consumption in litres. In addition, N₂O usage in the dental sedation suites and paediatric theatres was separately quantified. Pipeline supply data were acquired from British Oxygen Company accounts. We captured 490 cases. 4% (n=19) used N₂O, 63% (n=12) of these in dental theatres. Common N₂O indications were induction speed (37%) and rapidly increasing anaesthetic depth (32%). In adult cases, N₂O was always used intraoperatively rather than solely at induction. 74% (n=14) of anaesthetists reported environmental concern over using N₂O. The week's total N₂O usage was 8,109 litres, amounting to 421,668 litres annually. However, the annual N₂O pipeline supply is 2,997,000 litres, equivalent to an enormous 1.8 million kg of CO₂. Our results demonstrate that the N₂O pipeline supply greatly exceeds its clinical use. Acknowledging the clinical areas not audited, the discrepancy between supply and usage suggests approximately 2.5 million litres of wastage per year. We consequently recommend terminating the N₂O pipeline supply in minimally used areas, eliminating 1.5 million kg of CO₂ emissions. High-usage clinical areas could consider portable N₂O cylinders as an alternative. In Sweden, N₂O destruction technology is routinely used to minimise CO₂ emissions; our results support National Health Service investment in similar infrastructure.
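The CO₂-equivalent figure above can be reproduced from the litre volumes with an ideal-gas conversion. A rough sketch, assuming a molar volume of 22.4 L/mol, an N₂O molar mass of 44 g/mol, and the GWP of 310 used in the abstract:

```python
# Rough conversion from litres of N2O gas to CO2-equivalent mass.
MOLAR_VOLUME_L = 22.4      # L/mol at 0 degC, 1 atm (ideal gas assumption)
N2O_MOLAR_MASS_KG = 0.044  # kg/mol
GWP_N2O = 310              # 100-year global warming potential (per abstract)

def n2o_litres_to_co2e_kg(litres):
    """Litres of N2O gas -> kg CO2 equivalent."""
    return litres / MOLAR_VOLUME_L * N2O_MOLAR_MASS_KG * GWP_N2O

annual_supply_L = 2_997_000        # reported pipeline supply
annual_usage_L = 8_109 * 52        # one audited week, annualized
print(f"supply: {n2o_litres_to_co2e_kg(annual_supply_L):,.0f} kg CO2e")
print(f"usage:  {n2o_litres_to_co2e_kg(annual_usage_L):,.0f} kg CO2e")
```

The supply figure comes out at roughly 1.8 million kg CO₂e, matching the abstract, with audited clinical usage accounting for only a small fraction of it.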

Keywords: anaesthesia, environment, medical gases, nitrous oxide, sustainability

Procedia PDF Downloads 122
2563 Characterization, Classification and Fertility Capability Classification of Three Rice Zones of Ebonyi State, Southeastern Nigeria

Authors: Sunday Nathaniel Obasi, Chiamak Chinasa Obasi

Abstract:

Soil characterization and classification provide the basic information necessary to create functional soil evaluation and classification schemes. Fertility capability classification (FCC), on the other hand, is a technical system that groups soils according to the kinds of problems they present for the management of their physical and chemical properties. This research was carried out in Ebonyi State, an agrarian state and a leading rice-producing part of southeastern Nigeria. In order to make the best use of the soils and enhance rice productivity in Ebonyi, soil classification and fertility classification information needs to be supplied. The state was grouped into three locations according to its agricultural zones, namely Ebonyi North, Ebonyi Central, and Ebonyi South, represented by the Abakaliki, Ikwo, and Ivo locations, respectively. Major rice-growing areas were located, and two profile pits were sunk in each of the studied zones, from which the soils were characterized, classified, and a fertility capability classification (FCC) developed. Soil classification was done using the United States Department of Agriculture (USDA) Soil Taxonomy and correlated with the World Reference Base for Soil Resources. The results classified Abakaliki 1 and Abakaliki 2 as Typic Fluvaquents (Ochric Fluvisols). Ikwo 1 was classified as Vertic Eutrudepts (Eutric Vertisols), while Ikwo 2 was classified as Typic Eutrudepts (Eutric Cambisols). Ivo 1 and Ivo 2 were both classified as Aquic Eutrudepts (Gleyic Leptosols). The fertility capability classification revealed that all the studied soils had mostly loamy topsoils and subsoils, except Ikwo 1 with a clayey topsoil. Limitations encountered in the studied soils include dryness (d), low ECEC (e), low nutrient capital reserve (k), and waterlogging/anaerobic conditions (gley). Thus, the FCC classifications were Ldek for Abakaliki 1 and 2, Ckv for Ikwo 1, and LCk for Ikwo 2, while Ivo 1 and 2 were Legk and Lgk, respectively.

Keywords: soil classification, soil fertility, limitations, modifiers, Southeastern Nigeria

Procedia PDF Downloads 120
2562 Leveraging on Application of Customer Relationship Management Strategy as Business Driving Force: A Case Study of Major Industries

Authors: Odunayo S. Faluse, Roger Telfer

Abstract:

Customer relationship management (CRM) is a business strategy centred on the idea that the customer is the driving force of any business, i.e., the customer is placed in a central position. However, this belief, coupled with the advancement of information technology over the past twenty years, has undergone a change. In any form of business today, customers are the modern dictators to whom industry adjusts its operations, owing to the increased availability of information, intense market competition, and the ever-growing negotiating power of customers in the process of buying and selling. The most vital role of any organization is to satisfy or meet customers' needs and demands, which ultimately determines a customer's long-term value to the industry. Therefore, this paper analyses and describes the application of CRM operational strategies in some major industries. Both established and up-and-coming companies nowadays value the quality of customer service and client loyalty; they also recognize the customers who are not very price-sensitive, and thereby realize that attracting new customers is more demanding and expensive than retaining existing ones. Research shows that several factors have recently contributed to the sudden rise in the execution of CRM strategies in the marketplace, such as organizations diverting their attention towards retaining existing customers rather than attracting new ones, the gathering of data about customers through internal database systems and the acquisition of external syndicated data, and the exponential increase in technological intelligence. Despite these developments in business operations, CRM research in academia remains nascent; hence this paper gives a detailed critical analysis of recent advancements in the use of CRM and identifies key research opportunities for future work on CRM implementation as a determinant of successful business optimization.

Keywords: agriculture, banking, business strategies, CRM, education, healthcare

Procedia PDF Downloads 211
2561 Impact of Water Storage Structures on Groundwater Recharge in Jeloula Basin, Central Tunisia

Authors: I. Farid, K. Zouari

Abstract:

An attempt has been made to examine the effect of water storage structures on groundwater recharge in a semi-arid agroclimatic setting in the Jeloula Basin (Central Tunisia). In this area, surface water in rivers is seasonal, and groundwater is therefore the perennial source of water supply for domestic and agricultural purposes. Three pumped storage water power plants (PSWPP) have been built to increase the overall water availability in the basin and support the agricultural livelihoods of rural smallholders. The scale and geographical dispersion of these multiple lakes restrict the understanding of these coupled human-water systems and the identification of adequate strategies to support riparian farmers. In the present study, hydrochemical and isotopic tools were combined to gain insight into the processes controlling mineralization and recharge conditions in the investigated aquifer system. The study showed a slight increase in the groundwater level, especially after the artificial recharge operations, and a decline when the water volume drops during drought periods. Chemical data indicate that the main sources of salinity in the waters are related to water-rock interactions. Data inferred from stable isotopes in groundwater samples indicated recharge by modern rainfall. The surface water samples collected from the PSWPP are affected by significant evaporation and reveal large seasonal variations, which could be controlled by the water volume changes in the open surface reservoirs and the meteorological conditions during evaporation, condensation, and precipitation. The geochemical information is consistent with the isotopic results and illustrates that the chemical and isotopic signatures of the reservoir waters differ clearly from those of the groundwaters. These data confirm that the contribution of the artificial recharge operations from the PSWPP is very limited.

Keywords: Jeloula basin, recharge, hydrochemistry, isotopes

Procedia PDF Downloads 132
2560 Technological Developments to Reduce Wind Blade Turbine Levelized Cost of Energy

Authors: Pedro Miguel Cardoso Carneiro, Ricardo André Nunes Borges, João Pedro Soares Loureiro, Hermínio Maio Graça Fernandes

Abstract:

Wind energy has been growing exponentially over recent years and will help countries progress towards the decarbonization objective. In parallel, maintenance activities have also been increasing as a consequence of the ageing and deterioration of wind farms. The time available for wind blade maintenance is given by the weather window, which depends on weather conditions; most wind blade repair and maintenance activities require a narrow window of temperature and humidity. Due to this limitation, under current weather windows only approximately 35% of days per year can be used for maintenance, mostly during summertime. This limitation creates large economic losses in the energy production of the wind towers, since they can be inoperative, or have their energy production output reduced, for days or weeks due to existing damage. Another important aspect is that maintenance costs are higher due to the high standby time and the seasonality imposed on the technicians. To reduce these maintenance costs and energy losses, a series of technological developments was carried out to significantly improve this reality. The focus of this activity was to develop the key components so that, in the near future, a suspended access platform can operate in harsh conditions: wind, rain, and cold or hot environments. To this end, we identified the key areas that need to be revised and require new solutions: a habitat system, a multi-configurable roof and floor, roof and floor interfaces to the blade, and secondary attachment solutions to the blade and to the tower. In this paper we describe the advances produced during a national R&D project carried out in partnership with an end-user (Onrope) and a test center (ISQ).

Keywords: wind turbine maintenance, cost reduction, technological innovations, wind turbine blade

Procedia PDF Downloads 76
2559 Numerical Investigation on Design Method of Timber Structures Exposed to Parametric Fire

Authors: Robert Pečenko, Karin Tomažič, Igor Planinc, Sabina Huč, Tomaž Hozjan

Abstract:

Timber is a favourable structural material due to its high strength-to-weight ratio, recycling possibilities, and green credentials. Despite being a flammable material, it has relatively high fire resistance. Everyday engineering practice around the world is based on the design of timber structures for standard fire exposure, while modern principles of performance-based design enable the use of advanced non-standard fire curves. In Europe, the standard for the fire design of timber structures, EN 1995-1-2 (Eurocode 5), gives two methods: the reduced material properties method and the reduced cross-section method. In the latter, the fire resistance of structural elements depends on the effective cross-section, i.e. the residual cross-section of uncharred timber reduced additionally by a so-called zero-strength layer. For standard fire exposure, Eurocode 5 gives a fixed value for the zero-strength layer, namely 7 mm, while for non-standard parametric fires no additional comments or recommendations are given. Designers therefore often adopt the 7 mm rule for parametric fire exposure as well. Since the latest scientific evidence suggests that this value of the zero-strength layer can be on the unsafe side even for standard fire exposure, its use in the case of a parametric fire is highly questionable, and more numerical and experimental research in this field is needed. The purpose of the presented study is therefore to use advanced calculation methods to investigate the thickness of the zero-strength layer and the parametric charring rates used in the effective cross-section method in the case of parametric fire. Parametric studies are carried out on a simple solid timber beam exposed to a large number of parametric fire curves. The zero-strength layer and charring rates are determined from numerical simulations performed with a recently developed advanced two-step computational model. The first step comprises a hygro-thermal model, which predicts the temperature, moisture, and char depth development and takes into account different initial moisture states of the timber. In the second step, the response of the timber beam simultaneously exposed to mechanical and fire load is determined. The mechanical model is based on Reissner's kinematically exact beam model and accounts for the membrane, shear, and flexural deformations of the beam. Furthermore, materially non-linear and temperature-dependent behaviour is considered. In the two-step model, the char front is, in accordance with Eurocode 5, assumed to occur at a fixed temperature of around 300 °C. Based on the performed study and observations, improved charring rates and a new thickness of the zero-strength layer for parametric fires are determined. The reduced cross-section method is thus substantially improved to offer practical recommendations for designing the fire resistance of timber structures. Furthermore, correlations between the zero-strength layer thickness and the key input parameters of the parametric fire curve (for instance, opening factor, fire load, etc.) are given, representing a guideline for more detailed numerical and experimental research in the future.
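For reference, the standard-exposure version of the reduced cross-section method that the study sets out to extend can be sketched as follows. The notional charring rate of 0.8 mm/min is an illustrative Eurocode 5 value for solid softwood; the parametric-fire values derived in the study are not reproduced here.

```python
# Eurocode 5 reduced cross-section method for STANDARD fire exposure:
# effective charring depth d_ef = beta_n * t + k0 * d0, with the fixed
# zero-strength layer d0 = 7 mm and k0 ramping linearly to 1.0 at 20 min.

def effective_char_depth_mm(t_min, beta_n=0.8, d0=7.0):
    """beta_n: notional charring rate in mm/min (0.8 for solid softwood)."""
    k0 = min(t_min / 20.0, 1.0)
    return beta_n * t_min + k0 * d0

# Residual width of a 200 mm wide solid beam charred on both sides
# for 30 minutes of standard fire exposure:
b_res = 200.0 - 2 * effective_char_depth_mm(30)
print(b_res)  # 200 - 2*(24 + 7) = 138 mm
```

The study's contribution is precisely that, for parametric fires, neither beta_n nor the fixed 7 mm zero-strength layer in this formula can be assumed to hold; both are recalibrated from the two-step simulations.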

Keywords: advanced numerical modelling, parametric fire exposure, timber structures, zero strength layer

Procedia PDF Downloads 153
2558 The Biosphere as a Supercomputer Directing and Controlling Evolutionary Processes

Authors: Igor A. Krichtafovitch

Abstract:

The evolutionary processes are not linear. Long periods of quiet and slow development turn into rather rapid emergences of new species and even phyla: during the Cambrian explosion, 22 new phyla were added to the 3 previously existing phyla. Contrary to common belief, natural selection, or survival of the fittest, cannot account for the dominant evolutionary vector, which is the steady and accelerating advent of more complex and more intelligent living organisms. Neither Darwinism nor alternative concepts, including panspermia and intelligent design, propose a satisfactory explanation for these phenomena. The proposed hypothesis offers a logical and plausible explanation of evolutionary processes in general. It is based on two postulates: a) the Biosphere is a single living organism, all parts of which are interconnected, and b) the Biosphere acts as a giant biological supercomputer, storing and processing information in digital and analog forms. Such a supercomputer surpasses all human-made computers by many orders of magnitude. Living organisms are the product of the intelligent creative action of the biosphere supercomputer. Biological evolution is driven by the growing amount of information stored in living organisms and the increasing complexity of the biosphere as a single organism. The main evolutionary vector is not survival of the fittest but the accelerated growth of the computational complexity of living organisms. The following postulates summarize the proposed hypothesis: biological evolution, as natural life origin and development, is a reality; evolution is a coordinated and controlled process; one of evolution's main development vectors is the growing computational complexity of living organisms and of the biosphere's intelligence; the intelligent matter which conducts and controls global evolution is a gigantic bio-computer combining all living organisms on Earth; and the information acts like software stored in and controlled by the biosphere. Random mutations trigger this software, as stipulated by Darwinian evolutionary theory, and it is further stimulated by the growing demand for the Biosphere's global memory storage and computational complexity. A greater memory volume requires a greater number of more intellectually advanced organisms to store and handle it, and more intricate organisms in turn require greater computational complexity of the biosphere to keep control over the living world. This is an endless recursive process with accelerating evolutionary dynamics. New species emerge when two conditions are met: a) crucial environmental changes occur and/or the global memory storage volume reaches its limit, and b) the biosphere's computational complexity reaches a critical mass capable of producing more advanced creatures. The hypothesis presented here is a naturalistic concept of life creation and evolution. It logically resolves many puzzling problems in the current state of evolutionary theory: speciation, as a result of purposeful design; the evolutionary development vector, as a need for growing global intelligence; punctuated equilibrium, occurring when the two conditions a) and b) above are met; the Cambrian explosion; and mass extinctions, occurring when more intelligent species replace outdated creatures.

Keywords: supercomputer, biological evolution, Darwinism, speciation

Procedia PDF Downloads 148
2557 A Continuous Real-Time Analytic for Predicting Instability in Acute Care Rapid Response Team Activations

Authors: Ashwin Belle, Bryce Benson, Mark Salamango, Fadi Islim, Rodney Daniels, Kevin Ward

Abstract:

A reliable, real-time, and non-invasive system that can identify patients at risk of hemodynamic instability is needed to aid clinicians in their efforts to anticipate patient deterioration and initiate early interventions. The purpose of this pilot study was to explore the clinical capability of a real-time analytic computed from a single lead of an electrocardiograph to correctly distinguish between rapid response team (RRT) activations due to hemodynamic (H-RRT) and non-hemodynamic (NH-RRT) causes, as well as to predict H-RRT cases with actionable lead times. The study consisted of a single-center, retrospective cohort of 21 patients with RRT activations from step-down and telemetry units. Through electronic health record review, and blinded to the analytic's output, clinicians categorized each patient as an H-RRT or NH-RRT case; the analytic's output was then compared against this categorization, and the prediction lead time prior to the RRT call was calculated. The analytic correctly distinguished between H-RRT and NH-RRT cases with 100% accuracy, demonstrating 100% positive and negative predictive values and 100% sensitivity and specificity. In H-RRT cases, the analytic detected hemodynamic deterioration with a median lead time of 9.5 hours prior to the RRT call (range 14 minutes to 52 hours). The study demonstrates that an electrocardiogram (ECG)-based analytic has the potential to provide clinical decision and monitoring support, helping caregivers identify at-risk patients within a clinically relevant timeframe and allowing increased vigilance and early interventional support to reduce the chances of continued patient deterioration.
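The four reported metrics all derive from a 2×2 confusion matrix; a minimal sketch is below. The H-RRT/NH-RRT split is hypothetical, since the abstract gives the cohort size (n=21) but not the per-group counts; any perfect classification yields 1.0 for all four metrics regardless of the split.

```python
# Standard binary-classification metrics from a 2x2 confusion matrix.
def binary_metrics(tp, fp, tn, fn):
    return {
        "sensitivity": tp / (tp + fn),   # true positive rate
        "specificity": tn / (tn + fp),   # true negative rate
        "ppv":         tp / (tp + fp),   # positive predictive value
        "npv":         tn / (tn + fn),   # negative predictive value
    }

# Hypothetical split of the 21-patient cohort: 12 H-RRT, 9 NH-RRT,
# all classified correctly (no false positives or false negatives).
m = binary_metrics(tp=12, fp=0, tn=9, fn=0)
print(m)  # all four metrics equal 1.0, matching the 100% figures
```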

Keywords: critical care, early warning systems, emergency medicine, heart rate variability, hemodynamic instability, rapid response team

Procedia PDF Downloads 136
2556 Bio-Psycho-Social Consequences and Effects in Fall-Efficacy Scale in Seniors Using Exercise Intervention of Motor Learning According to Yoga Techniques

Authors: Milada Krejci, Martin Hill, Vaclav Hosek, Dobroslava Jandova, Jiri Kajzar, Pavel Blaha

Abstract:

The paper reports the effects of the exercise intervention in the research project "Basic research of balance changes in seniors", funded by the Czech Science Foundation. The objective of the presented study is to define the predictors which influence the bio-psycho-social consequences and effects of balance ability in seniors aged 65 years and above, with a focus on evaluating changes in the Fall-Efficacy Scale. The comprehensive hypothesis of the project is that movement uncertainty (dyskinesia) can negatively affect the well-being of a senior in a bio-psycho-social context. In total, 100 randomly selected seniors (30 males, 70 females) from Prague and the Central Bohemian region were tested. The sample was divided by stratified random selection into experimental and control groups, who underwent input and output testing. For diagnostics, medical anamnesis, functional anthropological examinations, the Tinetti Balance Assessment Tool, the SF-36 Health Survey, and an anamnestic comparative self-assessment scale were used. The intervention method, called "Life in Balance" and based on yoga techniques, was applied in a four-week cycle. Results of multivariate regression were verified by repeated-measures ANOVA involving the factors of subject, experimental/control group, and phase of intervention (independent variable), together with the group × phase interaction, followed by Bonferroni multiple-comparison tests with a test power of at least 0.8 at the probability level p < 0.05. This paper analyses the results of the first-year investigation of the three-year project. Results of the balance tests confirmed no significant difference between females and males in the pre-test. Significant improvements in balance and walking ability were observed in the experimental group, with females improving more than males (F = 128.4, p < 0.001). In the female control group, there was no significant change in the post-test, while in the female experimental group positive changes in posture and spine flexibility were found in the post-tests. It seems that females, even at senior age, respond better to the intervention's stimuli in balance and spine flexibility. On the basis of the analyses, we can report a significant improvement in social balance markers after the intervention in the experimental group (F = 10.5, p < 0.001). On average, the seniors took four drugs daily; the number of drugs can contribute to allergy symptoms and balance problems. It can be concluded that the static balance and walking ability of seniors according to the Tinetti Balance scale correlate significantly with the monitored psychological and social markers.

Keywords: exercises, balance, seniors 65+, health, mental and social balance

Procedia PDF Downloads 123
2555 Phytoplankton Structure and Invasive Cyanobacterial Species of Polish Temperate Lakes: Their Associations with Environmental Parameters and Findings About Their Toxic Properties

Authors: Tumer Orhun Aykut, Robin Michael Crucitti-Thoo, Agnieszka Rudak, Iwona Jasser

Abstract:

Due to eutrophication linked to the growing human population, intensive agriculture, industrialization, and intensifying global warming, freshwater resources are changing negatively in every region of the world. This change also concerns the replacement of native species by invasive ones, which can spread in many ways. Biological invasions are a growing threat to ecosystem continuity, and they are most common in freshwater bodies. The occurrence and invasion potential of a species depends on associations between abiotic and biotic variables. Due to climate change, many species can extend their distribution from low to high latitudes and thus change their geographic ranges. In addition, hydrological conditions strongly influence physicochemical parameters and biological processes, especially the growth rates of species and bloom formation in Cyanobacteria. Among the tropical invasive species recorded in temperate Europe, Raphidiopsis raciborskii, Chrysosporum bergii, and Sphaerospermopsis aphanizomenoides are considered a serious threat. R. raciborskii, the most important of these as it is already known to be highly invasive almost all around the world, is a freshwater, planktonic, filamentous, potentially toxic, nitrogen-fixing cyanobacterium. This study aimed to investigate the presence of invasive cyanobacterial species in temperate lakes in northeastern Poland, reveal the composition of the phytoplankton communities, determine the effect of environmental variables, and identify the toxic properties of the invasive Cyanobacteria and other phytoplankton groups. Our study was conducted in twenty-five lakes in August 2023; the lakes represent a geographical gradient from central Poland to the northeast and have different depths, sizes, and trophic statuses. According to the performed analyses, the presence of R. raciborskii was recorded in five lakes: Szczęśliwickie (Warsaw), Mikołajskie, Rekąty, and Sztynorckie (Masurian Lakeland), and, further east, Pobondzie (Suwałki Lakeland). C. bergii was found in three lakes: Rekąty (Masurian Lakeland), Żabinki, and Pobondzie (Suwałki Lakeland), while S. aphanizomenoides was found only in Pobondzie (Suwałki Lakeland). Maximum phytoplankton diversity was found in Lake Rekąty, the small and shallow lake mentioned above. The highest phytoplankton biomass was detected in the highly eutrophic Lake Suskie, followed by Lake Sztynorckie; in the latter, which is also strongly eutrophic, the highest biomass of R. raciborskii was found. Cyanophyceae had the highest biovolume across the entire study, followed by Chlorophyceae. Numerous environmental parameters, including nutrients, were studied, and their relationships with the invasive species and the whole phytoplankton community will be presented, together with the toxicity results derived from environmental DNA for each lake. In conclusion, the investigated invasive cyanobacterial species were found in a few northeastern Polish temperate lakes, but the number of individuals, and hence the biomass, was low. The structure of the phytoplankton was observed to vary with the lakes and environmental parameters.

Keywords: biological invasion, cyanobacteria, cyanotoxins, phytoplankton ecology, sanger sequencing

Procedia PDF Downloads 17
2554 The Impact of Board Characteristics on Firm Performance: Evidence from Banking Industry in India

Authors: Manmeet Kaur, Madhu Vij

Abstract:

The board of directors performs the primary role of an internal control mechanism in a firm. This study seeks to understand the relationship between internal governance and the performance of banks in India. The paper investigates the effect of board structure (proportion of non-executive directors, gender diversity, board size, and meetings per year) on firm performance, evaluating the impact of corporate governance mechanisms on banks' financial performance using panel data for 28 banks listed on the National Stock Exchange of India over the period 2008-2014. Return on assets, return on equity, Tobin's Q, and net interest margin were used as the financial performance indicators. To estimate the relationship between governance and bank performance, the study initially uses pooled ordinary least squares (OLS) estimation and generalized least squares (GLS) estimation; a panel generalized method of moments (GMM) estimator is then developed to investigate the dynamic nature of the performance-governance relationship. The study empirically confirms that the two-step system GMM approach controls for the problems of unobserved heterogeneity and endogeneity, unlike the OLS and GLS approaches. The results suggest that banks with small boards, boards with female members, and boards that meet more frequently tend to be more efficient, which in turn has a positive impact on bank performance. The study offers insights to policy makers interested in enhancing the quality of bank governance in India. The findings also suggest that board structure plays a vital role in improving corporate governance mechanisms for financial institutions: efficient boards are needed in banks to improve the overall health of the financial institutions and the economic development of the country.
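The first estimation step described above (pooled OLS of a performance measure on board characteristics) can be sketched with plain numpy on simulated panel data; the two-step system GMM itself requires a specialized dynamic-panel estimator and is not reproduced here. All variable names, coefficient values, and data are illustrative, not the study's.

```python
import numpy as np

# Simulated panel: 28 banks x 7 years, matching the study's dimensions.
rng = np.random.default_rng(1)
n_banks, n_years = 28, 7
N = n_banks * n_years

# Hypothetical board characteristics as regressors.
board_size = rng.integers(6, 16, N).astype(float)
women = rng.integers(0, 4, N).astype(float)
meetings = rng.integers(4, 13, N).astype(float)

# Synthetic ROA generated from illustrative "true" coefficients:
# intercept 2.0, board size -0.05, women +0.10, meetings +0.03.
roa = (2.0 - 0.05 * board_size + 0.10 * women + 0.03 * meetings
       + rng.normal(0, 0.01, N))

# Pooled OLS via least squares on the stacked panel.
X = np.column_stack([np.ones(N), board_size, women, meetings])
beta, *_ = np.linalg.lstsq(X, roa, rcond=None)
print(np.round(beta, 3))  # approximately [2.0, -0.05, 0.10, 0.03]
```

Pooled OLS treats all bank-year observations as independent, which is exactly the limitation (unobserved heterogeneity, endogeneity of lagged performance) that motivates the study's move to a two-step system GMM estimator.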

Keywords: board of directors, corporate governance, GMM estimation, Indian banking

Procedia PDF Downloads 244
2553 Study of the Anaerobic Degradation Potential of High Strength Molasses Wastewater

Authors: M. Mischopoulou, P. Naidis, S. Kalamaras, T. Kotsopoulos, P. Samaras

Abstract:

The treatment of high strength wastewater by an Upflow Anaerobic Sludge Blanket (UASB) reactor has several benefits, such as high organic removal efficiency, short hydraulic retention time along with low operating costs. In addition, high volumes of biogas are released in these reactors, which can be utilized in several industrial facilities for energy production. This study aims at the examination of the application potential of anaerobic treatment of wastewater, with high molasses content derived from yeast manufacturing, by a lab-scale UASB reactor. The molasses wastewater and the sludge used in the experiments were collected from the wastewater treatment plant of a baker’s yeast manufacturing company. The experimental set-up consisted of a 15 L thermostated UASB reactor at 37 ◦C. Before the reactor start-up, the reactor was filled with sludge and molasses wastewater at a ratio 1:1 v/v. Influent was fed to the reactor at a flowrate of 12 L/d, corresponding to a hydraulic residence time of about 30 h. Effluents were collected from the system outlet and were analyzed for the determination of the following parameters: COD, pH, total solids, volatile solids, ammonium, phosphates and total nitrogen according to the standard methods of analysis. In addition, volatile fatty acid (VFA) composition of the effluent was determined by a gas chromatograph equipped with a flame ionization detector (FID), as an indicator to evaluate the process efficiency. The volume of biogas generated in the reactor was daily measured by the water displacement method, while gas composition was analyzed by a gas chromatograph equipped with a thermal conductivity detector (TCD). The effluent quality was greatly enhanced due to the use of the UASB reactor and high rate of biogas production was observed. 
The anaerobic treatment of the molasses wastewater by the UASB reactor improved the biodegradation potential of the influent, resulting in high methane yields and an effluent of better quality than the raw wastewater.
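As a quick consistency check, the reported 30 h retention time follows directly from HRT = V/Q; a minimal sketch using only the reactor parameters quoted in the abstract:

```python
# Hydraulic retention time (HRT) check from the abstract's figures:
# a 15 L reactor fed at 12 L/d gives HRT = V / Q.
reactor_volume_l = 15.0    # working volume of the lab-scale UASB reactor (L)
flowrate_l_per_d = 12.0    # influent flowrate (L/d)

hrt_hours = reactor_volume_l / flowrate_l_per_d * 24
print(f"HRT = {hrt_hours:.0f} h")  # 15 / 12 = 1.25 d = 30 h
```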

Keywords: anaerobic digestion, biogas production, molasses wastewater, UASB reactor

Procedia PDF Downloads 261
2552 Maintaining Energy Security in Natural Gas Pipeline Operations by Empowering Process Safety Principles Through Alarm Management Applications

Authors: Huseyin Sinan Gunesli

Abstract:

Process Safety Management is a disciplined framework for managing the integrity of systems and processes that handle hazardous substances. It relies on good design principles, well-implemented automation systems, and sound operating and maintenance practices. Alarm management systems play a critically important role in the safe and efficient operation of modern industrial plants; effective alarm management is one of the critical factors underpinning the application of process safety principles. The Trans Anatolian Natural Gas Pipeline (TANAP) is part of the Southern Gas Corridor, which extends from the Caspian Sea to Italy. TANAP transports natural gas from the Shah Deniz gas field of Azerbaijan, and possibly from other neighboring countries, to Turkey and, through the Trans Adriatic Pipeline (TAP), onward to Europe. TANAP plays a crucial role in maintaining energy security for the region and for Europe, so the application of process safety principles is vital to delivering safe, reliable, and efficient natural gas to shippers. An alarm philosophy was designed and implemented for the TANAP pipeline according to the relevant standards. However, the alarms received in the control room must also be managed effectively to maintain safe operations. TANAP therefore commenced an alarm management and rationalization program in February 2022, after transitioning to the plateau regime and reaching design parameters. When alarm rationalization started, circa 2,300 alarms per hour were being received from one of the compressor stations alone.
After applying alarm management principles, such as reviewing and removing bad actors and standing, stale, chattering, and fleeting alarms; comprehensively reviewing and revising alarm set points under change management; and conducting alarm audits and design verification, the rate was reduced to circa 40 alarms per hour, in line with industry standards. This significantly improved operator vigilance, allowing operators to focus on important and critical alarms and to avoid excursions beyond safe operating limits that could lead to process safety events. Following the ‟What Gets Measured, Gets Managed” principle, TANAP identified Key Performance Indicators (KPIs) to manage process safety principles effectively, with alarm management forming one of the key parameters of those KPIs. However, review and analysis of the alarms were initially performed manually, and without alarm management software, achieving full compliance with international standards is almost infeasible. TANAP therefore adopted a widely known alarm management application to maintain full review and analysis of alarms and to define actions as required, which significantly strengthened TANAP’s process safety principles in terms of alarm management.
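The scale of the rationalization result can be sketched with simple arithmetic from the figures quoted above; note that the ~6 alarms/hour benchmark used for comparison is the commonly cited EEMUA 191 steady-state target (about one alarm per 10 minutes), an assumption here rather than a figure from the abstract:

```python
# Alarm rationalization result using the abstract's figures.
before_per_hr = 2300   # alarms/hour before rationalization (one compressor station)
after_per_hr = 40      # alarms/hour after rationalization

reduction_pct = 100 * (before_per_hr - after_per_hr) / before_per_hr
print(f"reduction = {reduction_pct:.1f}%")  # ~98.3%

# Commonly cited EEMUA 191 steady-state target: ~1 alarm per 10 minutes.
eemua_target_per_hr = 6
print(f"remaining rate vs. target: {after_per_hr / eemua_target_per_hr:.1f}x")
```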

Keywords: process safety principles, energy security, natural gas pipeline operations, alarm rationalization, alarm management, alarm management application

Procedia PDF Downloads 83
2551 Improving Search Engine Performance by Removing Indexes to Malicious URLs

Authors: Durga Toshniwal, Lokesh Agrawal

Abstract:

As the web continues to play an increasing role in information exchange and daily activities, computer users have become the target of miscreants who infect hosts with malware or adware for financial gain. Unfortunately, even a single visit to a compromised web site enables an attacker to detect vulnerabilities in the user’s applications and force the download of a multitude of malware binaries. We provide an approach to effectively scan for so-called drive-by downloads on the Internet. Drive-by downloads result from URLs that attempt to exploit their visitors and cause malware to be installed and run automatically. To scan the web for malicious pages, the first step is to use a crawler to collect URLs that live on the Internet, and then to apply fast prefiltering techniques to reduce the number of pages that need to be examined by precise but slower analysis tools (such as honeyclients or antivirus programs). Although this technique is effective, it requires a substantial amount of resources, chiefly because the crawler encounters many legitimate pages that need to be filtered out. In this paper, to characterize the nature of this rising threat, we present an implementation of a web crawler in Python: an approach to search the web more efficiently for pages that are likely to be malicious, filtering out benign pages and passing the remaining pages to an antivirus program for malware detection. Our approach starts from an initial seed of known malicious web pages. Using these seeds, our system generates search engine queries to identify other malicious pages similar to those in the initial seed. By doing so, it leverages the crawling infrastructure of search engines to retrieve URLs that are much more likely to be malicious than a random page on the web. The results show that this guided approach identifies malicious web pages more efficiently than random crawling-based approaches.
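The guided-search pipeline described above (seed pages → search-engine queries → cheap prefilter → precise analysis) can be sketched as follows; the term extraction and the prefilter indicators are illustrative assumptions, not the authors' implementation:

```python
from collections import Counter

def generate_queries(seed_pages, terms_per_query=3, max_queries=5):
    """Build search-engine queries from the most frequent terms of known
    malicious seed pages (a simplified stand-in for the paper's
    similarity-based query generation)."""
    counts = Counter(word for page in seed_pages for word in page.lower().split())
    common = [term for term, _ in counts.most_common(terms_per_query * max_queries)]
    return [" ".join(common[i:i + terms_per_query])
            for i in range(0, len(common) - terms_per_query + 1, terms_per_query)][:max_queries]

def prefilter(url, indicators=("exploit", "crack", "keygen")):
    """Fast prefilter: keep a URL for the slow, precise stage (honeyclient
    or antivirus scan) only if it matches cheap heuristic indicators."""
    return any(token in url.lower() for token in indicators)

seeds = ["free codec exploit download install now",
         "crack keygen exploit free serial download"]
queries = generate_queries(seeds)           # queries to submit to a search engine
candidates = ["http://example.com/free-keygen", "http://example.org/news"]
suspicious = [u for u in candidates if prefilter(u)]
print(suspicious)  # only the keygen URL survives the prefilter
```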

Keywords: web crawler, malware, seeds, drive-by downloads, security

Procedia PDF Downloads 218
2550 Performance and Limitations of Likelihood Based Information Criteria and Leave-One-Out Cross-Validation Approximation Methods

Authors: M. A. C. S. Sampath Fernando, James M. Curran, Renate Meyer

Abstract:

Model assessment, in the Bayesian context, involves evaluating goodness-of-fit and comparing several alternative candidate models for predictive accuracy and improvements. In posterior predictive checks, data simulated under the fitted model are compared with the actual data. Predictive model accuracy is estimated using information criteria such as the Akaike information criterion (AIC), the Bayesian information criterion (BIC), the deviance information criterion (DIC), and the Watanabe-Akaike information criterion (WAIC). The goal of an information criterion is to obtain an unbiased measure of out-of-sample prediction error. Since posterior checks use the data twice, once for model estimation and once for testing, these criteria incorporate a bias correction that penalises model complexity. Cross-validation (CV) is another method for examining out-of-sample prediction accuracy. Leave-one-out cross-validation (LOO-CV) is the most computationally expensive CV variant, as it fits as many models as there are observations. Importance sampling (IS), truncated importance sampling (TIS), and Pareto-smoothed importance sampling (PSIS) are generally used as approximations to exact LOO-CV; they reuse the existing MCMC results and thereby avoid this computational expense. The reciprocals of the predictive densities, calculated over posterior draws for each observation, are treated as the raw importance weights. These are in turn used to calculate the approximate LOO-CV of the observation as a weighted average of posterior densities. In IS-LOO, the raw weights are used directly; in TIS-LOO and PSIS-LOO, the larger weights are replaced by modified, truncated weights. Although information criteria and LOO-CV cannot reflect goodness-of-fit in an absolute sense, their differences can be used to measure the relative performance of the models of interest.
However, the use of these measures is only valid under specific circumstances. This study developed 11 models using normal, log-normal, gamma, and Student’s t distributions to improve PCR stutter prediction with forensic data. These models comprise four with profile-wide variances, four with locus-specific variances, and three two-component mixture models. The mean stutter ratio in each model is modeled as a locus-specific simple linear regression on a feature of the alleles under study known as the longest uninterrupted sequence (LUS). The use of AIC, BIC, DIC, and WAIC in model comparison has some practical limitations. Even though IS-LOO, TIS-LOO, and PSIS-LOO are considered approximations of exact LOO-CV, the study observed some drastic deviations in their results. However, there are some interesting relationships among the logarithms of the pointwise predictive densities (lppd) calculated under WAIC and the LOO approximation methods. The estimated overall lppd is a relative measure that reflects the overall goodness-of-fit of the model. Parallel log-likelihood profiles were observed for models whose posterior variances in the lppds were equal. This study illustrates the limitations of information criteria in practical model comparison problems, discusses the relationships among the LOO-CV approximation methods and WAIC together with their limitations, and provides useful recommendations that may help in practical model comparisons with these methods.
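The connection between the WAIC lppd and the importance-sampling LOO estimate discussed above can be made concrete with a short sketch; this is a generic computation from a matrix of pointwise log-likelihoods, not the study's forensic stutter models, and it omits PSIS's Pareto smoothing of the weights:

```python
import numpy as np

def waic_and_is_loo(log_lik):
    """log_lik: (S, N) matrix of pointwise log-likelihoods over S posterior
    draws and N observations. Returns (lppd, WAIC, elpd_IS-LOO)."""
    S = log_lik.shape[0]
    # lppd_i = log( (1/S) * sum_s exp(log_lik[s, i]) )
    lppd_i = np.logaddexp.reduce(log_lik, axis=0) - np.log(S)
    p_waic_i = np.var(log_lik, axis=0, ddof=1)      # effective-parameter penalty
    # IS-LOO uses reciprocal predictive densities as raw importance weights,
    # i.e. p(y_i | y_-i) is approximated by a harmonic mean over draws.
    elpd_loo_i = -(np.logaddexp.reduce(-log_lik, axis=0) - np.log(S))
    return lppd_i.sum(), -2 * (lppd_i - p_waic_i).sum(), elpd_loo_i.sum()

rng = np.random.default_rng(0)
log_lik = rng.normal(-1.0, 0.1, size=(4000, 50))    # toy posterior log-likelihoods
lppd, waic, elpd_loo = waic_and_is_loo(log_lik)
```

With no posterior variation the penalty vanishes and the two elpd estimates coincide, which is the kind of lppd relationship the abstract alludes to.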

Keywords: cross-validation, importance sampling, information criteria, predictive accuracy

Procedia PDF Downloads 378
2549 The Analysis of Underground Economy Transaction Existence of Junk Night Market (JNM) in Malang City

Authors: Sebastiana Viphindratin, Silvi Asna

Abstract:

The underground economy phenomenon exists in Indonesia, and several factors affect the existence of this activity. One of them is the hierarchical power structure that handles underground economy activities; an example is the informal markets that arise in Indonesia. Malang is one of the cities with this kind of market: the junk night market (JNM), an underground economy activity, has arisen there, located on the sidewalks of Gatot Subroto Street. The JNM is an illegal market selling thrift, antique, imitation, and black-market goods. The JNM is an interesting topic to discuss because this market has been running for a long time without any policy from the local government, and its actors have their own “power” that sets the market’s rules. Thus, it is important to analyze the existence and power structure of the JNM actors’ community in Malang city. This research uses a qualitative method with a phenomenological approach, seeking a deep understanding of the phenomenon and the related actors. The aim of this research is to understand the existence and power structure of the JNM actors’ community in Malang. In the JNM, there are no entry barriers and no tax is charged by the Malang government. Price competition also occurs, because buyers can bargain with sellers. To maintain buyer loyalty, the JNM actors also operate a pre-order system. Even though the JNM is an illegal market, its actors provide goods guarantees (without legal contracts), as a formal market would. In the JNM actors’ community, there is no formal hierarchical power structure; roles are managed by informal leaders who emerged from trading-activity problems, namely the division of sidewalk and parking areas. It can therefore be concluded that, although the JNM is an illegal market, it survives through a natural market pattern. As the JNM develops, it has both positive and negative impacts on Malang city.
The positive impact of the JNM is that the market creates new employment, while the negative impact is that it generates no tax income. It is therefore suggested that the government of Malang city manage the market and issue appropriate policies in this case.

Keywords: junk night market (JNM), Malang city, underground economy, illegal

Procedia PDF Downloads 394
2548 Comparative Assessment of Microplastic Pollution in Surface Water and Sediment of the Gomati and Saryu Rivers, India

Authors: Amit K. Mishra, Jaswant Singh

Abstract:

The menace of plastic, which significantly pollutes the aquatic environment, has emerged as a global problem, and there is growing concern about the accumulation of microplastics (MPs) in aquatic ecosystems. It is well known that the ultimate destination of most plastic debris is the ocean. Rivers are efficient carriers, transferring MPs from terrestrial to aquatic environments, from upstream to downstream areas, and ultimately to the oceans. Studying root causes can provide effective solutions to a problem; hence, tracing MPs in riverine systems can illustrate long-term microplastic pollution. This study aimed to investigate the occurrence and distribution of microplastic contamination in the surface water and sediment of two major river systems of Uttar Pradesh, India: the Gomati River at Lucknow, a tributary of the Ganga, and the Saryu River, the lower part of the Ghagra River, which flows through the city of Ayodhya. The distribution and abundance of MPs in the surface water and sediments of the two rivers were compared. Water and sediment samples were collected from four sampling stations in the catchment of each river. Plastic particles were classified according to type, shape, and color. In total, 1523 microplastics (average abundance 254) were identified across the studied sites in the Gomati River and 143 (average abundance 26) in the Saryu River. The water samples showed average MP concentrations of 392 (±69.6) and 63 (±18.9) particles per 50 L of water, whereas the sediment samples showed average MP concentrations of 116 (±42.9) and 46 (±12.5) particles per 250 g of dry sediment in the Gomati and Saryu Rivers, respectively. The high concentration of microplastics in the Lucknow area can be attributed to human activities, population density, and the entry of various effluents into the river.
Fibrous microplastics dominated, followed by fragments, in all the samples. The present study is a pioneering effort to count MPs in the Gomati and Saryu River systems.

Keywords: freshwater, Gomati, microplastics, Saryu, sediment

Procedia PDF Downloads 65