Search results for: channel estimation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3132


582 Characterization of Transmembrane Proteins with Five Alpha-Helical Regions

Authors: Misty Attwood, Helgi Schioth

Abstract:

Transmembrane proteins are important components in many essential cell processes such as signal transduction, cell-cell signalling, transport of solutes, structural adhesion activities, and protein trafficking. Due to their involvement in diverse critical activities, transmembrane proteins are implicated in different disease pathways and hence are the focus of intense interest in understanding their functional activities, their pathogenesis in disease, and their potential as pharmaceutical targets. Further, as the structure and function of proteins are correlated, investigating a group of proteins with the same tertiary structure, i.e., the same number of transmembrane regions, may provide insight into their functional roles and potential as therapeutic targets. In this in silico bioinformatics analysis, we identify and comprehensively characterize the previously unstudied group of proteins with five transmembrane-spanning regions (5TM). We classify nearly 60 5TM proteins, of which 31 are members of ten families that contain two or more family members, all of which are predicted to contain the 5TM architecture. Furthermore, nine singlet proteins that contain the 5TM architecture without paralogues detected in humans were also identified, indicating the evolution of single unique proteins with the 5TM structure. Interestingly, more than half of these proteins function in localization activities through movement or tethering of cell components, and more than one-third are involved in transport activities, particularly in the mitochondria. Surprisingly, no receptor activity was identified within this group, in sharp contrast with other TM families. Three major 5TM families were identified: the Tweety family, whose members are pore-forming subunits of the swelling-dependent volume-regulated anion channel in astrocytes; the sideroflexin family, which acts as mitochondrial amino acid transporters; and the Yip1 domain family, engaged in vesicle budding and intra-Golgi transport. About 30% of the proteins have enhanced expression in the brain, liver, or testis. Importantly, 60% of these proteins are identified as cancer prognostic markers, where they are associated with clinical outcomes of various tumour types, indicating that further investigation into the function and expression of these proteins is important. This study provides the first comprehensive analysis of proteins with 5TM regions and details their unique characteristics and potential applications in pharmaceutical development.

Keywords: 5TM, cancer prognostic marker, drug targets, transmembrane protein

Procedia PDF Downloads 109
581 A Bayesian Classification System for Facilitating an Institutional Risk Profile Definition

Authors: Roman Graf, Sergiu Gordea, Heather M. Ryan

Abstract:

This paper presents an approach for the easy creation and classification of institutional risk profiles supporting endangerment analysis of file formats. The main contribution of this work is the employment of data mining techniques to support the identification of the most important risk factors. Subsequently, risk profiles employ a risk factor classifier and associated configurations to support digital preservation experts with a semi-automatic estimation of the endangerment group for file format risk profiles. Our goal is to make use of an expert knowledge base, acquired through a digital preservation survey, in order to detect preservation risks for a particular institution. Another contribution is support for the visualisation of risk factors for a required dimension of analysis. Using the naive Bayes method, the decision support system recommends to an expert the matching risk profile group for the previously selected institutional risk profile. The proposed methods improve the visibility of risk factor values and the quality of the digital preservation process. The presented approach is designed to facilitate decision making for the preservation of digital content in libraries and archives using domain expert knowledge and values of file format risk profiles. To facilitate decision-making, the aggregated information about the risk factors is presented as a multidimensional vector. The goal is to visualise particular dimensions of this vector for analysis by an expert and to define its profile group. A sample risk profile calculation and the visualisation of some risk factor dimensions are presented in the evaluation section.
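As an illustration of the classification step described above, the sketch below shows how a naive Bayes model could map aggregated risk-factor values to an endangerment group. It is a minimal example: the feature names, scaling and training data are hypothetical, not the survey data or configuration used by the authors.

import numpy as np
from sklearn.naive_bayes import GaussianNB

# Each row: aggregated risk-factor values for one institutional profile
# (e.g. format obsolescence, software support, community adoption), scaled 0..1.
X_train = np.array([
    [0.9, 0.2, 0.1],
    [0.8, 0.3, 0.2],
    [0.2, 0.8, 0.9],
    [0.1, 0.9, 0.8],
])
y_train = np.array(["high_endangerment", "high_endangerment",
                    "low_endangerment", "low_endangerment"])

model = GaussianNB().fit(X_train, y_train)

# A new institutional risk profile, represented as a multidimensional vector.
new_profile = np.array([[0.7, 0.4, 0.3]])
print(model.predict(new_profile))        # recommended endangerment group
print(model.predict_proba(new_profile))  # class probabilities shown to the expert

In the paper's workflow such a prediction is only a recommendation that the digital preservation expert can confirm or override.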

Keywords: linked open data, information integration, digital libraries, data mining

Procedia PDF Downloads 426
580 Advancing Customer Service Management Platform: Case Study of Social Media Applications

Authors: Iseoluwa Bukunmi Kolawole, Omowunmi Precious Isreal

Abstract:

Social media has completely revolutionized the ways communication used to take place even a decade ago. It makes use of computer-mediated technologies that help in the creation and sharing of information. Social media may be defined as the production, consumption and exchange of information across platforms for social interaction. Social media has become a forum in which customers look for information about companies to do business with and request answers to questions about their products and services. Customer service may be termed as a process of ensuring customers' satisfaction by meeting and exceeding their wants. In delivering excellent customer service, knowing customers' expectations and where they are reaching out is important in meeting and exceeding their wants. Facebook is one of the most used social media platforms, among others that include Twitter, Instagram, WhatsApp and LinkedIn. This indicates customers are spending more time on social media platforms, which therefore calls for improvement in customer service delivery over social media pages. Millions of people channel their issues, complaints, compliments and inquiries through social media. This study has been able to identify what social media customers want, their expectations and how they want to be responded to by brands and companies. The applied research methodology used in this paper was a mixed-methods approach. The authors used qualitative methods, gathering the critical views of experts on social media and customer relationship management through interviews, to analyse the impact of social media on customer satisfaction. The authors also used quantitative methods, such as online surveys, to address issues at different stages and to gain insight into different aspects of the platforms, i.e., customers' and companies' perceptions of the effects of social media, thereby exploring and gaining a better understanding of how brands make use of social media as a customer relationship management tool. An exploratory research strategy was applied to analyse how companies need to create good customer support using social media in order to improve customer service delivery, customer retention and referrals. Many companies have therefore preferred social media platform applications as a medium for handling customers' queries and ensuring their satisfaction, because social media tools are considered more transparent and effective in their operations when dealing with customer relationship management.

Keywords: brands, customer service, information, social media

Procedia PDF Downloads 268
579 Belief-Based Games: An Appropriate Tool for Uncertain Strategic Situation

Authors: Saied Farham-Nia, Alireza Ghaffari-Hadigheh

Abstract:

Game theory is a mathematical tool to study the behaviors of rational and strategic decision-makers; it analyzes the equilibria existing in situations of conflicting interests and provides appropriate mechanisms for cooperation between two or more players. Game theory is applicable to any strategic or interest-conflict situation in politics, management, economics, sociology, etc. Real-world decisions are usually made in a state of indeterminacy, and the players often lack information about the other players' payoffs or even their own, which leads to games in uncertain environments. When historical data for estimating the distribution of decision parameters are unavailable, we may have no choice but to use expert belief degrees, which represent the strength with which we believe an event will happen. To deal with belief degrees, we use uncertainty theory, which was introduced and developed by Liu on the basis of the normality, duality, subadditivity and product axioms to model personal belief degrees. As we know, the personal belief degree heavily depends on the personal knowledge concerning the event, and when that knowledge changes, the belief degree changes too. Uncertainty theory is not only theoretically self-consistent but also among the best-suited theories for modeling belief degrees in practical problems. In this attempt, we first reintroduce the expected utility function in an uncertain environment according to the axioms of uncertainty theory in order to extract payoffs. Then, we employ Nash equilibrium to investigate the solutions. For more practical issues, the Stackelberg leader-follower game and the Bertrand game are discussed as benchmark models. Compared to existing articles on similar topics, the game models and solution concepts introduced in this article can serve as a framework for problems in an uncertain competitive situation based on experienced experts' belief degrees.
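For readers unfamiliar with the solution concept used above, the sketch below enumerates the pure-strategy Nash equilibria of a small bimatrix game once payoffs have been reduced to expected values. The payoff numbers are illustrative, and the sketch does not reproduce the paper's uncertain expected utility operator.

import numpy as np

# Payoff matrices: A[i, j] for player 1, B[i, j] for player 2 (illustrative values).
A = np.array([[3, 1],
              [0, 2]])
B = np.array([[2, 1],
              [0, 3]])

equilibria = []
for i in range(A.shape[0]):
    for j in range(A.shape[1]):
        best_for_1 = A[i, j] >= A[:, j].max()   # player 1 cannot gain by deviating
        best_for_2 = B[i, j] >= B[i, :].max()   # player 2 cannot gain by deviating
        if best_for_1 and best_for_2:
            equilibria.append((i, j))

print(equilibria)   # [(0, 0), (1, 1)] for these payoffs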

Keywords: game theory, uncertainty theory, belief degree, uncertain expected value, Nash equilibrium

Procedia PDF Downloads 415
578 A Conceptual Framework for Vulnerability Assessment of Climate Change Impact on Oil and Gas Critical Infrastructures in the Niger Delta

Authors: Justin A. Udie, Subhes C. Bhatthacharyya, Leticia Ozawa-Meida

Abstract:

The impact of climate change is severe in the Niger Delta, and critical oil and gas infrastructures are vulnerable. This is partly due to the lack of a specific impact assessment framework to assess impact indices on both existing and new infrastructures. The purpose of this paper is to develop a framework for the assessment of climate change impact on critical oil and gas infrastructure in the region. Comparative and documentary methods, as well as analysis of existing frameworks, were used to develop a flexible, integrated and conceptual four-dimensional framework underpinning: 1. Scoping – the theoretical identification of inherent climate burdens, review of exposure and adaptive capacities, and delineation of critical infrastructure; 2. Vulnerability assessment – a systematic procedure for the assessment of infrastructure vulnerability, providing real-time re-scoping and the practical need for data collection, analysis and review; physical examination of systems is encouraged to complement the scoped data and ascertain the level of exposure to relevant climate risks in the area; 3. New infrastructure – considers infrastructures that are still at the development stage and seeks to suggest the inclusion of flexible adaptive capacities in the original design of infrastructures in line with climate threats and projections; 4. Mainstreaming – integrating climate impact assessment into the government's environmental decision-making approach. Though this framework is designed specifically for the estimation of the exposure, adaptive capacities and criticality of vulnerable oil and gas infrastructures in the Niger Delta to climate burdens, it is recommended to researchers and experts as a first-hand, generic and practicable tool which can be used for the assessment of other infrastructures perceived as critical and vulnerable. The paper does not provide further tools that tie into the methodological approach but presents pointers upon which a pragmatic methodology can be developed.

Keywords: adaptation, assessment, conceptual, climate, change, framework, vulnerability

Procedia PDF Downloads 317
577 Cadaveric Assessment of Kidney Dimensions Among Nigerians - A Preliminary Report

Authors: Rotimi Sunday Ajani, Omowumi Femi-Akinlosotu

Abstract:

Background: The usually paired human kidneys are retroperitoneal urinary organs with some endocrine functions. Standard textbooks of anatomy ascribe a single value to each of the dimensions of length, width and thickness. Research questions: These values do not give consideration to racial and genetic variability in human morphology. They may thus be misleading to students and clinicians working on Nigerians. Objectives: The study aimed at establishing reference values of kidney length, width and thickness for Nigerians using the cadaveric model. Methodology: The length, width, thickness and weight of sixty kidneys harvested from cadavers of thirty adult Nigerians (Male: Female; 27: 3) were measured. The respective volume was calculated using the ellipsoid formula. Results: The mean length of the kidney was 9.84±0.89 cm (9.63±0.88 {right}; 10.06±0.86 {left}), width 5.18±0.70 cm (5.21±0.72 {right}; 5.14±0.70 {left}), thickness 3.45±0.56 cm (3.36±0.58 {right}, 3.53±0.55 {left}), weight 125.06±22.34 g (122.36±21.70 {right}; 127.76±24.02 {left}) and volume 95.45±24.40 cm3 (91.73±26.84 {right}; 99.17±25.75 {left}). Discussion: Though the values of the parameters measured were higher for the left kidney (except for the width), the differences were not statistically significant. The various parameters obtained by this study differ from those of similar studies from other continents. Conclusion: Stating a single value for each of the parameters of length, width and thickness of the kidney, as currently obtained in textbooks of anatomy, may be incomplete information and hence misleading. Thus, there is a need to emphasize racial differences when stating the normal values of kidney dimensions in textbooks of anatomy. Implication for Research and Innovation: The results of the study showed that the dimensions of the kidney (length, width and thickness) vary between races, as they differed from those of similar studies and from the values stated in standard textbooks of human anatomy. Future direction: This is a preliminary report, and the study will continue so that more data can be obtained.
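The ellipsoid approximation mentioned in the methodology, V = (π/6) × length × width × thickness, can be checked in a few lines applied to the reported mean dimensions (the product of the mean dimensions will not exactly reproduce the reported mean volume of 95.45 cm3, since the mean of products differs from the product of means):

import math

length, width, thickness = 9.84, 5.18, 3.45   # cm, overall means from the study

volume = math.pi / 6.0 * length * width * thickness
print(f"approximate kidney volume: {volume:.1f} cm^3")   # ~92 cm^3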

Keywords: kidney dimensions, cadaveric estimation, adult Nigerians, racial differences

Procedia PDF Downloads 99
576 Waters Colloidal Phase Extraction and Preconcentration: Method Comparison

Authors: Emmanuelle Maria, Pierre Crançon, Gaëtane Lespes

Abstract:

Colloids are ubiquitous in the environment and are known to play a major role in enhancing the transport of trace elements, thus being an important vector for contaminant dispersion. The study and characterization of colloids are necessary to improve our understanding of the fate of pollutants in the environment. However, in stream water and groundwater, colloids are often very poorly concentrated. It is therefore necessary to pre-concentrate colloids in order to get enough material for analysis, while preserving their initial structure. Many techniques are used to extract and/or pre-concentrate the colloidal phase from the bulk aqueous phase, yet there is neither a reference method nor an estimation of the impact of these different techniques on colloid structure, or of the bias introduced by the separation method. In the present work, we have tested and compared several methods of colloidal phase extraction/pre-concentration and their impact on colloid properties, particularly their size distribution and their elementary composition. Ultrafiltration methods (frontal, tangential and centrifugal) have been considered since they are widely used for the extraction of colloids in natural waters. To compare these methods, a 'synthetic groundwater' was used as a reference. The size distribution (obtained by Field-Flow Fractionation (FFF)) and the chemical composition of the colloidal phase (obtained by Inductively Coupled Plasma Mass Spectrometry (ICPMS) and Total Organic Carbon analysis (TOC)) were chosen as comparison factors. In this way, it is possible to estimate the impact of pre-concentration on the preservation of the colloidal phase. It appears that some of these methods preserve the colloidal phase composition more efficiently, while others are easier or faster to use. The choice of the extraction/pre-concentration method is therefore a compromise between efficiency (including speed and ease of use) and impact on the structural and chemical composition of the colloidal phase. In perspective, the use of these methods should enhance the consideration of the colloidal phase in the transport of pollutants in environmental assessment studies and forensics.

Keywords: chemical composition, colloids, extraction, preconcentration methods, size distribution

Procedia PDF Downloads 215
575 Algorithm for Predicting Cognitive Exertion and Cognitive Fatigue Using a Portable EEG Headset for Concussion Rehabilitation

Authors: Lou J. Pino, Mark Campbell, Matthew J. Kennedy, Ashleigh C. Kennedy

Abstract:

A concussion is complex and nuanced, with cognitive rest being a key component of recovery. Cognitive overexertion during rehabilitation from a concussion is associated with delayed recovery. However, daily living imposes cognitive demands that may be unavoidable and difficult to quantify. Therefore, a portable tool capable of alerting patients before cognitive overexertion occurs could allow patients to maintain their quality of life while preventing symptoms and recovery setbacks. EEG allows for a sensitive measure of cognitive exertion. Clinical 32-lead EEG headsets are not practical for day-to-day concussion rehabilitation management. However, there are now commercially available and affordable portable EEG headsets. Thus, these headsets can potentially be used to continuously monitor cognitive exertion during mental tasks to alert the wearer of overexertion, with the aim of preventing the occurrence of symptoms to speed recovery times. The objective of this study was to test an algorithm for predicting cognitive exertion from EEG data collected from a portable headset. EEG data were acquired from 10 participants (5 males, 5 females). Each participant wore a portable 4 channel EEG headband while completing 10 tasks: rest (eyes closed), rest (eyes open), three levels of the increasing difficulty of logic puzzles, three levels of increasing difficulty in multiplication questions, rest (eyes open), and rest (eyes closed). After each task, the participant was asked to report their perceived level of cognitive exertion using the NASA Task Load Index (TLX). Each participant then completed a second session on a different day. A customized machine learning model was created using data from the first session. The performance of each model was then tested using data from the second session. The mean correlation coefficient between TLX scores and predicted cognitive exertion was 0.75 ± 0.16. The results support the efficacy of the algorithm for predicting cognitive exertion. This demonstrates that the algorithms developed in this study used with portable EEG devices have the potential to aid in the concussion recovery process by monitoring and warning patients of cognitive overexertion. Preventing cognitive overexertion during recovery may reduce the number of symptoms a patient experiences and may help speed the recovery process.
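A minimal sketch of the personalised workflow described above: fit a regression model on session-1 EEG features against NASA-TLX scores, then correlate its session-2 predictions with the reported TLX scores. The band-power features and the choice of ridge regression are assumptions for illustration, not the authors' customized model, and the data here are random placeholders.

import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# 10 tasks x 8 hypothetical features (e.g. theta/alpha/beta/gamma band power
# on each of the 4 channels of the headband).
X_session1 = rng.random((10, 8))
tlx_session1 = rng.uniform(0, 100, 10)
X_session2 = rng.random((10, 8))
tlx_session2 = rng.uniform(0, 100, 10)

# Personalised model: trained on session 1 only, tested on session 2.
model = Ridge(alpha=1.0).fit(X_session1, tlx_session1)
predicted_exertion = model.predict(X_session2)

# The paper reports a mean correlation of 0.75 +/- 0.16 across participants;
# random placeholder data will of course not reproduce that.
r, _ = pearsonr(predicted_exertion, tlx_session2)
print(f"correlation between predicted exertion and TLX: {r:.2f}")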

Keywords: cognitive activity, EEG, machine learning, personalized recovery

Procedia PDF Downloads 220
574 Economics of Sugandhakokila (Cinnamomum Glaucescens (Nees) Dury) in Dang District of Nepal: A Value Chain Perspective

Authors: Keshav Raj Acharya, Prabina Sharma

Abstract:

Sugandhakokila (Cinnamomum glaucescens Nees. Dury) is a large evergreen native tree species, mostly confined naturally to the mid-hills of the Rapti Zone of Nepal. The species has been prioritized for agro-technology development as well as for research and development by the Department of Plant Resources. The government of Nepal has banned the export of this species outside the country without processing, to encourage value addition within the country. The present study was carried out in Chillikot village of Dang district to find out the economic contribution of C. glaucescens to the local economy and to document the major conservation threats to this species. Participatory Rural Appraisal (PRA) tools such as household surveys, key informant interviews and focus group discussions were carried out to collect the data. The present study reveals that about 1.7 million Nepalese rupees (NPR) have been contributed annually to the local economy of 29 households from the collection of C. glaucescens berries in the study area. The average annual income of each family was around NPR 67,165.38 (US$ 569.19) from the sale of the berries, which contributes about 53% of the total household income. Six different value chain actors are involved in the C. glaucescens business. The maximum profit margin was taken by collectors, followed by producers, exporters and processors. The profit margin was found to be lowest for regional and village traders. The total profit margin for producers was NPR 138.86/kg, and regional traders gained NPR 17/kg. However, there is a possibility of increasing the profit of producers by a further NPR 8.00 per kg of berries through the initiation of a community forest user group and village cooperatives in the area. Open-access resources, insect infestation of over-mature trees and browsing by goats were identified as the major conservation threats to this species. Handing over the national forest as a community forest, linking the producers with the processor through an organized market channel and replacing the old trees through new plantation are recommended for the future.

Keywords: community forest, conservation threats, C. glaucescens, value chain analysis

Procedia PDF Downloads 139
573 Portable, Noninvasive and Wireless Near Infrared Spectroscopy Device to Monitor Skeletal Muscle Metabolism during Exercise

Authors: Adkham Paiziev, Fikrat Kerimov

Abstract:

Near Infrared Spectroscopy (NIRS) is one of the biophotonic techniques which can be used to monitor oxygenation and hemodynamics in a variety of human tissues, including skeletal muscle. In the present work, we offer a tissue oximeter (OxyPrem) to measure hemodynamic parameters of skeletal muscles at rest and during exercise. Purpose: - To develop a new wireless, portable, noninvasive, wearable NIRS device to measure skeletal muscle oxygenation during exercise. - To test this device on the brachioradialis muscle of wrestler volunteers by using the combined method of arterial occlusion (AO) and NIRS (AO+NIRS). Methods: The OxyPrem NIRS device was used together with the AO test. AO test and isometric brachioradialis muscle contraction experiments were performed on one group of wrestler volunteers. An 'Accu-Measure' caliper (USA) was used to measure skinfold thickness (SFT). Results: The device consists of a power supply box, a sensor head and installed 'Tubis' software for data acquisition and for computing deoxyhemoglobin ([HHb]), oxyhemoglobin ([O2Hb]), tissue oxygenation (StO2) and muscle tissue oxygen consumption (mVO2). The sensor head consists of four light sources, each with three light-emitting diodes of nominal wavelengths 760 nm, 805 nm, and 870 nm, and two detectors. AO and isometric voluntary forearm muscle contraction (IVFMC) were measured on five healthy male subjects (23.2±0.84 years of age, 0.43±0.05 cm SFT) and four female subjects (22.0±1.0 years of age, 0.24±0.04 cm SFT). mVO2 for the control group was calculated as -0.65±0.07 %/s for male and -0.69±0.19 %/s for female subjects. The tissue oxygenation index for wrestlers averaged about 75%, whereas for the control group StO2 = 63%. The second experiment concerned monitoring muscle activity during IVFMC at 10%, 30% and 50% of MVC. It was shown that the concentration changes of HbO2 and HHb correlated positively with the contraction intensity. Conclusion: We have presented a portable multi-channel wireless NIRS device for real-time monitoring of muscle activity. The miniaturized NIRS sensor and the use of wireless communication make the whole device compact, so it can be used in muscle monitoring.
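Two of the quantities reported above can be illustrated with standard NIRS relations: tissue oxygenation StO2 = [O2Hb] / ([O2Hb] + [HHb]), and a muscle oxygen consumption index taken from the rate of change of oxygenation during arterial occlusion. The sketch below uses illustrative concentration values and a simple linear-slope estimate; it is not output from the OxyPrem device or the 'Tubis' software.

import numpy as np

o2hb = np.array([60.0, 59.0, 58.1, 57.2, 56.4])   # uM, during occlusion
hhb = np.array([20.0, 21.0, 21.9, 22.8, 23.6])    # uM
t = np.arange(len(o2hb))                           # s

sto2 = 100.0 * o2hb / (o2hb + hhb)                 # tissue oxygenation, %
slope = np.polyfit(t, sto2, 1)[0]                  # desaturation rate during occlusion
print(f"StO2 at start {sto2[0]:.1f}%, consumption index ~ {slope:.2f} %/s")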

Keywords: skeletal muscle, oxygenation, instrumentation, near infrared spectroscopy

Procedia PDF Downloads 275
572 Power Quality Modeling Using Recognition Learning Methods for Waveform Disturbances

Authors: Sang-Keun Moon, Hong-Rok Lim, Jin-O Kim

Abstract:

This paper presents Power Quality (PQ) modeling and filtering processes for distribution system disturbances using recognition learning methods. Typical PQ waveforms with mathematical applications and gathered field data are applied to the proposed models. The objective of this paper is to analyze PQ data with respect to monitoring, discriminating, and evaluating the waveforms of power disturbances, to support preventive protection against system failures and the estimation of complex system problems. The examined signal filtering techniques are used for field waveform noise removal and feature extraction. Using extraction and learning classification techniques, the efficiency of recognizing PQ disturbances was verified, with a focus on interactive modeling methods. The waveforms of 8 selected disturbances are modeled with randomized parameters within the IEEE 1159 PQ ranges. The ranges, parameters, and weights are updated according to the field waveforms obtained. Along with voltages, currents go through the same process to obtain waveform features, apart from some of the ratings and filters. Changing loads cause distortion in the voltage waveform by drawing different patterns of current variation. In conclusion, PQ disturbances in the voltage and current waveforms exhibit different types of variation and disturbance patterns, and a modified technique based on symmetrical components in the time domain is proposed in this paper for PQ disturbance detection and subsequent classification. Our method is based on the fact that waveforms obtained from the suggested trigger conditions contain information useful for abnormality detection. The extracted features are sequentially applied to estimation and recognition learning modules for further studies.

Keywords: power quality recognition, PQ modeling, waveform feature extraction, disturbance trigger condition, PQ signal filtering

Procedia PDF Downloads 186
571 Remote Observation of Environmental Parameters on the Surface of the Maricunga Salt Flat, Atacama Region, Chile

Authors: Lican Guzmán, José Manuel Lattus, Mariana Cervetto, Mauricio Calderón

Abstract:

Today, the estimation of the effects produced by climate change on high Andean wetland environments faces big challenges. This study provides an analysis, by remote sensing, of how some environmental aspects of the Maricunga salt flat have evolved over the last 30 years, divided into the summer and winter seasons, and of whether global warming is conditioning these changes. The first step to achieve this goal was the compilation of geological, hydrological, and morphometric antecedents to ensure an adequate contextualization of its environmental parameters. After this, software processing and analysis of Landsat 5, 7 and 8 satellite imagery was required to obtain the vegetation, water, surface temperature, and soil moisture indexes (NDVI, NDWI, LST, and SMI) in order to see how their spatio-temporal conditions have evolved in the study area during recent decades. Results show a tendency of regular increase in surface temperature and availability of water during both seasons, but with slight drought periods during summer. The soil moisture factor behaves as a constant during the dry season and tends to increase during wintertime. Vegetation analysis shows a sustained increase in the area and quality of the vegetated surface over time, consistent with the increase in water supply and temperature in the basin mentioned before. Roughly, the effects of climate change can be described as positive for the Maricunga salt flat; however, the lack of exact correlation between the dates of the imagery available for remote sensing analysis could be a source of misleading interpretation of the results.
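Two of the indices used in the analysis can be computed directly from surface-reflectance bands. The sketch below assumes Landsat 8 band numbering (band 3 green, band 4 red, band 5 NIR) and the McFeeters form of NDWI, with small illustrative arrays standing in for the study's imagery.

import numpy as np

red = np.array([[0.12, 0.15], [0.10, 0.20]])    # band 4 reflectance
nir = np.array([[0.35, 0.30], [0.40, 0.22]])    # band 5 reflectance
green = np.array([[0.10, 0.11], [0.09, 0.14]])  # band 3 reflectance

ndvi = (nir - red) / (nir + red)      # vegetation vigour
ndwi = (green - nir) / (green + nir)  # open-water / moisture signal (McFeeters form)

print(np.round(ndvi, 2))
print(np.round(ndwi, 2))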

Keywords: global warming, geology, GIS, Atacama Desert, Salar de Maricunga, environmental geology, NDVI, SMI, LST, NDWI, Landsat

Procedia PDF Downloads 78
570 Formulating Anti-Insurgency Curriculum Conceptual and Design Principles for Translation into Anti-Terrorist Curriculum Framework for Muslim Secondary Schools

Authors: Saheed Ahmad Rufai

Abstract:

The growing nature of insurgencies in their various forms in the Muslim world is now of great concern to both the leadership and the citizenry. The high sense of insecurity occasioned by this unpleasant experience has in fact reached an alarming level in the estimation of Muslims and non-Muslims alike. Consequently, the situation began to attract contributions from scholars and researchers in security-related fields of the humanities and social sciences. However, there is little evidence of contributions to the discourse and the scholarship involved from scholars in the field of education. The purpose of this proposed study is to contribute an education dimension to the growing scholarship on the subject. The study, which is situated in the broad scholarship of curriculum making and grounded in both the philosophical and sociological foundations of the curriculum, employs a combination of curriculum criticism and creative synthesis, as methods, in reconstructing Muslim schools' educational blueprint. The significance of the proposed study lies in its potential to contribute a useful addition to the scholarship of curriculum construction in the context of the Muslim world. The significance also lies in its potential to offer an ameliorative proposal against insurgency and militancy, thereby paving the way for the enthronement of a regime characterized by peaceful, harmonious and tranquil co-existence among people of diverse orientations and ideological persuasions in the Muslim world. The study is restricted to only the first two stages of curriculum making, namely the formulation of philosophy, which concerns the articulation of objectives, aims, purposes, goals, and principles, as well as the second stage, which covers the translation of such principles into an anti-insurgency secondary school curriculum for the Muslim world.

Keywords: education for conflict resolution, anti-insurgency curriculum principles, peace education, anti-terrorist curriculum framework, curriculum for Muslim secondary schools

Procedia PDF Downloads 224
569 Deterministic and Stochastic Modeling of a Micro-Grid Management for Optimal Power Self-Consumption

Authors: D. Calogine, O. Chau, S. Dotti, O. Ramiarinjanahary, P. Rasoavonjy, F. Tovondahiniriko

Abstract:

Mafate is a natural cirque in the north-western part of Reunion Island, without an electrical grid or road network. A micro-grid concept is being experimented with in this area, composed of photovoltaic production combined with electrochemical batteries, in order to meet the local population's electricity demands through self-consumption. This work develops a discrete model as well as a stochastic model in order to reach an optimal equilibrium between production and consumption for a cluster of houses. The management of the energy flows leads to a large linear programming system, where the time interval of interest is 24 hours. The experimental data are the solar production, the stored energy, and the parameters of the different electrical devices and batteries. The unknown variables to evaluate are the consumption of the various electrical services, the energy drawn from and stored in the batteries, and the inhabitants' planning wishes. The objective is to fit the solar production to the electrical consumption of the inhabitants, with an optimal use of the energy in the batteries, while satisfying as widely as possible the users' planning requirements. In the discrete model, the different parameters and solutions of the linear programming system are deterministic scalars, whereas in the stochastic approach, the data parameters and the linear programming solutions become random variables, whose distributions can be imposed or established by estimation from samples of real observations or from samples of optimal discrete equilibrium solutions.
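A minimal sketch of the kind of linear-programming balance described above, reduced to a three-hour horizon with a single battery and solved with scipy.optimize.linprog. The solar, demand and battery numbers are illustrative, and the sketch omits the inhabitants' planning preferences and the stochastic layer.

import numpy as np
from scipy.optimize import linprog

solar = np.array([0.0, 2.0, 1.0])     # kWh available per hour
demand = np.array([1.0, 1.0, 1.5])    # kWh requested per hour
cap = 2.0                             # battery capacity, kWh
soc0 = 1.0                            # initial state of charge, kWh
n = len(solar)

# Decision variables per hour: battery discharge d_t >= 0 and charge c_t >= 0.
# x = [d_0, d_1, d_2, c_0, c_1, c_2]; hourly balance: solar + d - c = demand.
A_eq = np.hstack([np.eye(n), -np.eye(n)])
b_eq = demand - solar

# State of charge stays within [0, cap]: 0 <= soc0 - cum(d) + cum(c) <= cap.
L = np.tril(np.ones((n, n)))
A_ub = np.vstack([np.hstack([L, -L]),      # cum(d) - cum(c) <= soc0
                  np.hstack([-L, L])])     # cum(c) - cum(d) <= cap - soc0
b_ub = np.concatenate([np.full(n, soc0), np.full(n, cap - soc0)])

cost = np.ones(2 * n)                 # minimise total battery throughput
res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
print(res.x[:n], res.x[n:])           # discharge and charge schedules per hour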

Keywords: photovoltaic production, power consumption, battery storage resources, random variables, stochastic modeling, estimations of probability distributions, mixed integer linear programming, smart micro-grid, self-consumption of electricity.

Procedia PDF Downloads 110
568 Seismicity and Ground Response Analysis for MP Tourism Office in Indore, India

Authors: Deepshikha Shukla, C. H. Solanki, Mayank Desai

Abstract:

In the last few years, it has been observed that earthquakes are proving to be a threat of concern to scientists across the world. With a large number of earthquakes affecting day-to-day life, the threat to life and property has increased manifold, which calls for the urgent attention of all researchers globally to carry out research in the field of earthquake engineering. Any hazard related to earthquakes and seismicity is considered a seismic hazard. The common forms of seismic hazards are ground shaking, structural damage, liquefaction, landslides, and tsunamis, to name a few. Among all the natural hazards, the most devastating and damaging is the earthquake, as all the other hazards listed are triggered only after the occurrence of an earthquake. In order to quantify and estimate seismicity and seismic hazards, many methods and approaches have been proposed in the past few years. Such approaches are mathematical, conventional and computational. Convex set theory and empirical Green's functions are some of the mathematical approaches, whereas the deterministic and probabilistic approaches are the conventional approaches for the estimation of seismic hazards. The ground response and ground shaking of a particular area or region play an important role in the damage caused by an earthquake. In this paper, a seismic study using the deterministic approach and 1D ground response analysis has been carried out for the Madhya Pradesh Tourism Office in the Indore region of Madhya Pradesh in Central India. Indore lies in seismic zone III (IS: 1893, 2002) of the seismic zoning map of India. There are various faults and lineaments in this area, and the Narmada-Son fault and the Gavilgadh fault are the active sources of earthquakes in the study area. Deepsoil v6.1.7 has been used to perform the 1D linear ground response analysis for the study area. The Peak Ground Acceleration (PGA) of the city ranges from 0.1g to 0.56g.

Keywords: seismicity, seismic hazards, deterministic, probabilistic methods, ground response analysis

Procedia PDF Downloads 165
567 Investigating a Deterrence Function for Work Trips for Perth Metropolitan Area

Authors: Ali Raouli, Amin Chegenizadeh, Hamid Nikraz

Abstract:

The Perth metropolitan area and its surrounding regions have been expanding rapidly in recent decades, and it is expected that this growth will continue in the years to come. With this rapid growth and the resulting increase in population, consideration should be given to strategic planning and modelling for the future expansion of Perth. The accurate estimation of projected traffic volumes has always been a major concern for transport modellers and planners. The development of a reliable strategic transport model depends significantly on the input data to the model and on the calibrated parameters that make the model reflect the existing situation. Trip distribution is the second step in four-step modelling (FSM), and it is complex due to its behavioural nature. The gravity model is the most common method for trip distribution. The spatial separation between the Origin and Destination (OD) zones is reflected in the gravity model by applying deterrence functions, which provide an opportunity to include people's behaviour in choosing their destinations based on the distance, time and cost of their journeys. Deterrence functions play an important role in the distribution of trips within a study area and simulate trip distances, and they should therefore be calibrated for any particular strategic transport model to correctly reflect the trip behaviour within the modelling area. This paper aims to review the most common deterrence functions and propose a calibrated deterrence function for work trips within the Perth Metropolitan Area based on information obtained from the latest available household data and Perth and Region Travel Survey (PARTS) data. As part of this study, a four-step transport model using EMME software has been developed for the Perth Metropolitan Area to assist with the analysis and findings.
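As background to the calibration discussed above, the sketch below distributes work trips between two illustrative zones with a doubly constrained gravity model and a negative-exponential deterrence function f(c) = exp(-βc), balanced by Furness iterations. The zone totals, costs and β value are placeholders, not the Perth model inputs.

import numpy as np

productions = np.array([100.0, 200.0])        # work-trip origins per zone
attractions = np.array([150.0, 150.0])        # work-trip destinations per zone
cost = np.array([[5.0, 20.0],
                 [15.0, 5.0]])                # generalised cost (e.g. minutes)
beta = 0.1                                    # calibrated deterrence parameter

f = np.exp(-beta * cost)                      # deterrence function
trips = np.outer(productions, attractions) * f

for _ in range(50):                           # Furness (iterative proportional) fitting
    trips *= (productions / trips.sum(axis=1))[:, None]
    trips *= (attractions / trips.sum(axis=0))[None, :]

print(np.round(trips, 1))                     # balanced origin-destination matrix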

Keywords: deterrence function, four-step modelling, origin destination, transport model

Procedia PDF Downloads 168
566 Exploring the Visual Representations of Neon Signs and Its Vernacular Tacit Knowledge of Neon Making

Authors: Brian Kwok

Abstract:

Hong Kong is well known as 'the Pearl of the Orient' due to its spectacular night view, with a vast number of decorative neon lights on the streets. Neon signs were first used as a pervasive medium of communication for all kinds of commercial advertising, ranging from movie theatres to nightclubs and department stores, and were later appropriated by artists as a medium of artwork. As a well-established visual language, neon signs display texts in bilingual format due to British colonial influence, sometimes arranged in opposite reading orders. Research on neon signs as a visual representation is rare but significant, because they are part of people's collective memories of the unique cityscapes associated with the shifting values of people's daily lives and cultural identity. Nevertheless, with the current policy of removing abandoned neon signs, their total number has declined dramatically in recent years. The Buildings Department estimated that there were 120,000 unauthorized signboards (including neon signs) in Hong Kong in 2013, and such signboards have been removed at an estimated rate of 1,600 per year since 2006. In other words, the vernacular cultural values and historical continuity of neon signs will gradually vanish if no immediate action is taken to document them for the purpose of research and cultural preservation. Therefore, the Hong Kong Neon Signs Archive project was established in June 2015, and over 100 neon signs have been photo-documented so far. By content analysis, this project will explore the two components of neon signs – the use of visual languages and the vernacular tacit knowledge of neon makers. It attempts to answer these questions about Hong Kong's neon signs: 'What are the ways in which visual representations are used to produce our cityscapes and streetscapes?'; 'What are the visual languages and their conventions of usage in different business types?'; 'What tacit knowledge is applied when producing these visual forms of neon signs?'

Keywords: cityscapes, neon signs, tacit knowledge, visual representation

Procedia PDF Downloads 301
565 The On-Board Critical Message Transmission Design for Navigation Satellite Delay/Disruption Tolerant Network

Authors: Ji-yang Yu, Dan Huang, Guo-ping Feng, Xin Li, Lu-yuan Wang

Abstract:

The navigation satellite network, especially the Beidou MEO constellation, can relay data effectively with wide coverage and is widely applied in navigation, detection, and positioning. But the constellation has not been completed, and the number of satellites in orbit is not enough to cover the earth, which makes the data relay disrupted or delayed during the transition process. The data-relay function needs to tolerate the delay or disruption to some extent, which makes the Beidou MEO constellation a delay/disruption-tolerant network (DTN). Traditional DTN designs mainly employ the relay table as the basis of data path schedule computing. But in practical application, especially in critical conditions such as wartime or heavy losses inflicted on the constellation, parts of the nodes may become invalid, and then the traditional DTN design could be useless. Furthermore, when transmitting a critical message in the navigation system, the maximum-priority strategy is used, but the nodes still query the relay table to design the path, which makes the delay longer than minutes. Under these circumstances, a function is needed that can compute the optimum data path on board in real time according to the constellation states. An on-board critical message transmission design for the navigation satellite delay/disruption-tolerant network (DTN) is proposed, according to the characteristics of the navigation satellite network. With real-time computation of the parameters of the network links, the least-delay transition path is deduced to retransmit the critical message in urgent conditions. First, the DTN model for the constellation is established based on a time-varying matrix (TVM) instead of a time-varying graph (TVG); then, the least-transition-delay data path is deduced with the parameters of the current node; at last, the critical message transits to the next best node. With on-board real-time computing, the time delays and misjudgements of constellation states in ground stations are eliminated, and the residual information channel of each node can be used flexibly. Compared with the minutes of delay of a traditional DTN, the proposed design transmits the critical message in seconds, which improves the retransmission efficiency. The hardware is implemented in an FPGA based on the proposed model, and tests prove its validity.
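The least-delay path selection can be illustrated on a static snapshot of the link-state matrix: the sketch below runs a shortest-path search over current link delays. It is only a simplified stand-in for the on-board TVM-based computation described above, and the delays and topology are invented for illustration.

import heapq
import math

# delay[i][j]: current link delay in seconds between satellites i and j
# (math.inf marks a link that is not visible in this snapshot).
delay = [
    [0.0, 0.12, math.inf, 0.40],
    [0.12, 0.0, 0.10, math.inf],
    [math.inf, 0.10, 0.0, 0.08],
    [0.40, math.inf, 0.08, 0.0],
]

def least_delay_path(delay, src, dst):
    n = len(delay)
    dist = [math.inf] * n
    prev = [None] * n
    dist[src] = 0.0
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue
        for v in range(n):
            nd = d + delay[u][v]
            if nd < dist[v]:
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [], dst
    while node is not None:
        path.append(node)
        node = prev[node]
    return path[::-1], dist[dst]

print(least_delay_path(delay, 0, 3))   # relay route [0, 1, 2, 3], ~0.30 s total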

Keywords: critical message, DTN, navigation satellite, on-board, real-time

Procedia PDF Downloads 343
564 High-Resolution Flood Hazard Mapping Using Two-Dimensional Hydrodynamic Model Anuga: Case Study of Jakarta, Indonesia

Authors: Hengki Eko Putra, Dennish Ari Putro, Tri Wahyu Hadi, Edi Riawan, Junnaedhi Dewa Gede, Aditia Rojali, Fariza Dian Prasetyo, Yudhistira Satya Pribadi, Dita Fatria Andarini, Mila Khaerunisa, Raditya Hanung Prakoswa

Abstract:

Catastrophe risk management can only be done if we are able to calculate the exposed risks. Jakarta is an important city economically, socially, and politically, and at the same time it is exposed to severe floods. On the other hand, flood risk calculation is still very limited in the area. This study has calculated the risk of flooding for Jakarta using the 2-dimensional model ANUGA. The 2-dimensional model ANUGA and the 1-dimensional model HEC-RAS are used to calculate the risk of flooding from 13 major rivers in Jakarta. ANUGA can simulate the physical and dynamical processes between the streamflow and the river geometry and land cover to produce a 1-meter resolution inundation map. The streamflow values used as input for the model were obtained from hydrological analysis of rainfall data using the hydrologic model HEC-HMS. The probabilistic streamflow was derived from probabilistic rainfall using the Log-Pearson III, Normal and Gumbel statistical distributions, with compatibility tested using the Chi-Square and Kolmogorov-Smirnov tests. The flood event of 2007 is used as a comparison to evaluate the accuracy of the model output. Property damage estimates were calculated based on flood depth for the 1, 5, 10, 25, 50, and 100-year return periods against housing value data from BPS-Statistics Indonesia and the Centre for Research and Development of Housing and Settlements, Ministry of Public Works, Indonesia. The vulnerability factor was derived from flood insurance claims. Jakarta's flood loss estimates for the return periods of 1, 5, 10, 25, 50, and 100 years are, respectively, Rp 1.30 t; Rp 16.18 t; Rp 16.85 t; Rp 21.21 t; Rp 24.32 t; and Rp 24.67 t, against a total building value of Rp 434.43 t.
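One ingredient of the workflow above, fitting a probability distribution to annual maxima and reading off design values for given return periods, can be sketched with a Gumbel fit as follows. The series is synthetic, not the Jakarta rainfall record, and the 1-year return period is omitted because it is degenerate for an annual-maximum fit.

from scipy import stats

# Synthetic 40-year annual-maximum daily rainfall series (mm/day).
annual_max_rain = stats.gumbel_r.rvs(loc=120.0, scale=35.0, size=40, random_state=1)

loc, scale = stats.gumbel_r.fit(annual_max_rain)

for T in (5, 10, 25, 50, 100):                      # return periods, years
    p_non_exceedance = 1.0 - 1.0 / T
    design_value = stats.gumbel_r.ppf(p_non_exceedance, loc=loc, scale=scale)
    print(f"T = {T:>3} yr -> {design_value:6.1f} mm/day")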

Keywords: 2D hydrodynamic model, ANUGA, flood, flood modeling

Procedia PDF Downloads 275
563 Modeling of Glycine Transporters in Mammalian Using the Probability Approach

Authors: K. S. Zaytsev, Y. R. Nartsissov

Abstract:

Glycine is one of the key inhibitory neurotransmitters in the central nervous system (CNS), and glycinergic transmission is highly dependent on the appropriate reuptake of glycine from the synaptic cleft. Glycine transporters (GlyT) of types 1 and 2 are the enzymes providing glycine transport back into neuronal and glial cells along with Na⁺ and Cl⁻ co-transport. The distribution and stoichiometry of GlyT1 and GlyT2 differ in detail, and GlyT2 is the more interesting for this research as it takes glycine back up into neurons, whereas GlyT1 is located in glial cells. During GlyT2 activity, the translocation of the amino acid is accompanied by the consecutive binding of one chloride and three sodium ions (two sodium ions for GlyT1). In the present study, we developed a computer simulator of GlyT2 and GlyT1 activity based on known experimental data for the quantitative estimation of membrane glycine transport. The functioning of a single protein was described using the probability approach, where each enzyme state was considered separately. The resulting scheme of transporter functioning, realized as a sequence of elementary steps, made it possible to take into account each event of substrate association and dissociation. Computer experiments using up-to-date kinetic parameters yielded the number of translocated glycine molecules and Na⁺ and Cl⁻ ions per time period. The flexibility of the developed software makes it possible to evaluate the glycine reuptake pattern over time under different internal characteristics of the enzyme conformational transitions. We investigated the behavior of the system over a wide range of the equilibrium constant (from 0.2 to 100), which has not been determined experimentally. A significant influence of the equilibrium constant in the range from 0.2 to 10 on the glycine transfer process is shown. Environmental conditions such as ion and glycine concentrations are decisive if the values of the constant are outside the specified range.

Keywords: glycine, inhibitory neurotransmitters, probability approach, single protein functioning

Procedia PDF Downloads 119
562 Use of the Budyko Framework to Estimate the Virtual Water Content in Shijiazhuang Plain, North China

Authors: Enze Zhang

Abstract:

One of the most challenging steps in implementing virtual water content (VWC) analysis of crops is to properly obtain the total volume of consumptive water use (CWU), and therefore the choice of a reliable crop CWU estimation method. In practice, many previous studies obtaining the CWU of crops follow a classical procedure for calculating crop evapotranspiration, which is determined by multiplying reference evapotranspiration by appropriate coefficients, such as the crop coefficient and water stress coefficients. However, this manner of calculation requires a lot of field experimental data at the point scale and, more seriously, when current growing conditions differ from the standard conditions, it may easily produce deviations between the calculated CWU and the actual CWU. Since evapotranspiration caused by crop planting always plays a vital role in the surface water-energy balance of an agricultural region, this study instead estimates crop evapotranspiration with the Budyko framework. After briefly introducing the development of the Budyko framework, we choose a modified Budyko framework under unsteady-state conditions to better evaluate the actual CWU and apply it to an agricultural irrigation area in the North China Plain that relies on groundwater for irrigation. With agricultural statistical data, the calculated CWU was further converted into the VWC of crops and its subdivisions at the annual scale. Results show that the average values of VWC, VWC_blue and VWC_green all show a downward trend with increased agricultural production and improved acreage. In comparison with previous research, the VWC calculated by the Budyko framework agrees well with part of the previous research, while for some other studies the value is greater. Our research also suggests that this methodology and these findings may be reliable and convenient for the investigation of virtual water throughout various agricultural regions of the world.
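For orientation, the sketch below applies the steady-state Budyko relationship in Fu's common parameterisation, E/P = 1 + PET/P − (1 + (PET/P)^ω)^(1/ω), and converts the resulting evapotranspiration into a virtual water content. The climate inputs, ω and yield are illustrative, and the paper itself uses a modified, unsteady-state form of the framework.

P = 480.0      # annual precipitation, mm
PET = 1050.0   # annual potential evapotranspiration, mm
w = 2.6        # Fu's shape parameter (catchment-specific)

aridity = PET / P
E = P * (1.0 + aridity - (1.0 + aridity**w) ** (1.0 / w))   # actual ET, mm

yield_t_ha = 6.0                          # crop yield, t/ha (illustrative)
cwu_m3_ha = E * 10.0                      # 1 mm of water over 1 ha = 10 m^3
vwc = cwu_m3_ha / yield_t_ha              # virtual water content, m^3/t
print(f"E = {E:.0f} mm, VWC = {vwc:.0f} m^3/t")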

Keywords: virtual water content, Budyko framework, consumptive water use, crop evapotranspiration

Procedia PDF Downloads 333
561 3D Modeling for Frequency and Time-Domain Airborne EM Systems with Topography

Authors: C. Yin, B. Zhang, Y. Liu, J. Cai

Abstract:

Airborne EM (AEM) is an effective geophysical exploration tool, especially suitable for rugged mountain areas. In these areas, topography has serious effects on AEM system responses. However, until now, little research has been reported on topographic effects in airborne EM systems. In this paper, an edge-based unstructured finite-element (FE) method is developed for 3D topographic modeling for both frequency- and time-domain airborne EM systems. Starting from the frequency-domain Maxwell equations, a vector Helmholtz equation is derived to obtain a stable and accurate solution. Considering that the AEM transmitter and receiver are both located in the air, the scattered field method is used in our modeling. The Galerkin method is applied to discretize the Helmholtz equation for the final FE equations. Solving the FE equations, the frequency-domain AEM responses are obtained. To accelerate the calculation speed, the response of the source in free space is used as the primary field, and the PARDISO direct solver is used to deal with the problem of multiple transmitting sources. After calculating the frequency-domain AEM responses, a Hankel transform is applied to obtain the time-domain AEM responses. To check the accuracy of the present algorithm and to analyze the characteristics of the topographic effect on airborne EM systems, both the frequency- and time-domain AEM responses for 3 model groups are simulated: 1) a flat half-space model that has a semi-analytical solution for the EM response; 2) a valley or hill earth model; 3) a valley or hill earth with an abnormal body embedded. Numerical experiments show that close to the node points of the topography, AEM responses demonstrate sharp changes. Special attention needs to be paid to the topographic effects when interpreting AEM survey data over rugged topographic areas. Besides, the profile of the AEM responses presents a mirror relation with the topographic earth surface. In comparison to the topographic effect, which mainly occurs at the high-frequency end and in early time channels, the EM responses of underground conductors mainly occur at low frequencies and in later time channels. For the signal of the same time channel, the dB/dt field reflects the change of conductivity better than the B-field. The research in this paper will serve airborne EM in the identification and correction of topographic effects.

Keywords: 3D, Airborne EM, forward modeling, topographic effect

Procedia PDF Downloads 317
560 Development of Immuno-Modulators: Application of Molecular Dynamics Simulation

Authors: Ruqaiya Khalil, Saman Usmani, Zaheer Ul-Haq

Abstract:

The accurate characterization of ligand binding affinity is indispensable for designing molecules with optimized binding affinity. Computational tools help in many ways to predict quantitative correlations between protein-ligand structures and their binding affinities. Molecular dynamics (MD) simulation is a modern state-of-the-art technique to evaluate the underlying basis of ligand-protein interactions by characterizing their dynamic and energetic properties during the binding event. Autoimmune diseases arise from an abnormal immune response of the body against its own tissues. The current regimen for the described condition is limited to immuno-modulators with compromised pharmacodynamic and pharmacokinetic profiles. One of the key players mediating immunity and tolerance, and thus invoking autoimmunity, is interleukin-2, a cytokine influencing the growth of T cells. Molecular dynamics simulation techniques are applied to seek insight into the inhibitory mechanisms of newly synthesized compounds that manifested immunosuppressant potential in an in silico pipeline. In addition to the estimation of the free energies associated with ligand binding, MD simulation yielded a great deal of information about ligand-macromolecule interactions for evaluating the pattern of interactions and the molecular basis of inhibition. The present study is a continuation of our efforts to identify interleukin-2 inhibitors of both natural and synthetic origin. Herein, we report molecular dynamics simulation studies of interleukin-2 complexed with different antagonists previously reported by our group. The study of protein-ligand dynamics enabled us to gain a better understanding of the contribution of different active site residues to ligand binding. The results of the study will be used as a guide to rationalize the fragment-based synthesis of drug-like interleukin-2 inhibitors as immuno-modulators.

Keywords: immuno-modulators, MD simulation, protein-ligand interaction, structure-based drug design

Procedia PDF Downloads 262
559 Is Sodium Channel Nav1.7 an Ideal Therapeutically Analgesic Target? A Systematic Review

Authors: Yutong Wan, John N. Wood

Abstract:

Introduction: SCN9A-encoded Nav1.7 is considered an ideal therapeutic target with minimal side effects for the pharmaceutical industry, because SCN9A variants cause both gain-of-function pain-related mutations and loss-of-function pain-free mutations in humans. This study reviews the clinical effectiveness of existing Nav1.7 inhibitors, which theoretically should be powerful analgesics. Methods: A systematic review was conducted on the effectiveness of current Nav1.7 blockers undergoing clinical trials. Studies were mainly extracted from PubMed, the U.S. National Library of Medicine Clinical Trials database, the World Health Organization International Clinical Trials Registry, the ISRCTN registry platform, and the Integrated Research Approval System of the NHS. Only studies with full text available, conducted using double-blinded, placebo-controlled, randomised designs, and reporting at least one analgesic measurement were included. Results: Overall, 61 trials were screened, and eight studies covering PF-05089771 (Pfizer), TV-45070 (Teva & Xenon), and BIIB074 (Biogen) met the inclusion criteria. Most studies were excluded because their results were not published. All three compounds demonstrated insignificant analgesic effects, and the comparison between PF-05089771 and pregabalin/ibuprofen showed that PF-05089771 was a much weaker analgesic. All three drug candidates have only mild side effects, indicating the potential for further investigation of Nav1.7 antagonists. Discussion: The failure of current Nav1.7 small-molecule inhibitors might be attributed to neglect of the key role of endogenous systems in Nav1.7 null mutants, a lack of selectivity and blocking potency, and central impermeability. The synergistic combination of analgesic drugs, a recent UCL patent combining a small dose of Nav1.7 blockers with opioids or enkephalinase inhibitors, dramatically enhanced the analgesic effects. Conclusion: The Nav1.7 blockers currently in clinical testing are generally disappointing. However, the newer generation of Nav1.7-targeting analgesics has overcome the major constraints of its predecessors.

Keywords: chronic pain, Nav1.7 blockers, SCN9A, systematic review

Procedia PDF Downloads 131
558 Estimation of Microbial-N Supply to Small Intestine in Angora Goats Fed by Different Roughage Sources

Authors: Nurcan Cetinkaya

Abstract:

The aim of the study was to estimate the microbial-N flow to the small intestine based on the daily urinary excretion of purine derivatives (PD), mainly xanthine, hypoxanthine, uric acid and allantoin, in Angora goats fed grass hay and concentrate (Period I) or barley straw and concentrate (Period II). Daily urine samples were collected during the last 3 days of each period from 10 individually penned Angora bucks (LW 30-35 kg, 2-3 years old) receiving ad libitum grass hay or barley straw and 300 g/d concentrate. Fresh water was always available. 4N H2SO4 was added to the collected daily urine samples to keep the pH under 3 to avoid uric acid precipitation. Diluted urine samples were stored at -20°C until analysis. Urine samples were analyzed for xanthine, hypoxanthine, uric acid, allantoin and creatinine by high-performance liquid chromatography (HPLC). Urine was diluted 1:15 with water, and duplicate samples were prepared for HPLC analysis. Calculated mean levels (n=60) of urinary xanthine, hypoxanthine, uric acid, allantoin, total PD and creatinine excretion were 0.39±0.02, 0.26±0.03, 0.59±0.06, 5.91±0.50, 7.15±0.57 and 3.75±0.40 mmol/L respectively for Period I, and 0.35±0.03, 0.21±0.02, 0.55±0.05, 5.60±0.47, 6.71±0.46 and 3.73±0.41 mmol/L respectively for Period II. The mean values of Periods I and II were significantly different (P<0.05) except for creatinine excretion. The estimated mean microbial-N supply to the small intestine for Periods I and II in Angora goats was 5.72±0.46 and 5.41±0.61 g N/d, respectively. The effects of grass hay and barley straw feeding on microbial-N supply to the small intestine were significantly different (P<0.05). In conclusion, grass hay showed a better effect on ruminal microbial protein synthesis than barley straw; therefore, grass hay is suggested as a roughage source in Angora goat feeding.

Keywords: angora goat, HPLC method, microbial-N supply to small intestine, urinary purine derivatives

Procedia PDF Downloads 223
557 Method for Improving ICESAT-2 ATL13 Altimetry Data Utility on Rivers

Authors: Yun Chen, Qihang Liu, Catherine Ticehurst, Chandrama Sarker, Fazlul Karim, Dave Penton, Ashmita Sengupta

Abstract:

The application of ICESAT-2 altimetry data in river hydrology critically depends on the accuracy of the mean water surface elevation (WSE) at a virtual station (VS) where satellite observations intersect with water. An ICESAT-2 track generates multiple VSs as it crosses different water bodies. The difficulties are particularly pronounced in large river basins, where many tributaries and meanders are often adjacent to each other. One challenge is to split the photon segments along a beam and accurately partition them so that only the true representative water heights of individual water bodies are extracted. As far as we can establish, there is no automated procedure for making this distinction; earlier studies have relied on human intervention or river masks. Both approaches are unsatisfactory where the number of intersections is large and the river width/extent changes over time. We describe here an automated approach called "auto-segmentation". The accuracy of our method was assessed by comparison with river water level observations at 10 different stations on 37 different dates along the Lower Murray River, Australia. The agreement is very high and without detectable bias. In addition, we compared different outlier removal methods for the mean WSE calculation at VSs after the auto-segmentation process. All four outlier removal methods perform almost equally well, with the same R² value (0.998) and only subtle variations in RMSE (0.181–0.189 m) and MAE (0.130–0.142 m). Overall, the auto-segmentation method developed here is an effective and efficient approach for deriving accurate mean WSE at river VSs. It facilitates the application of ICESAT-2 ATL13 altimetry to rivers much better than previously reported approaches. The findings of our study will therefore make a significant contribution towards the retrieval of hydraulic parameters, such as the water surface slope along the river, water depth at cross sections, and river channel bathymetry, for calculating flow velocity and discharge from remotely sensed imagery at large spatial scales.
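
The exact auto-segmentation rules are not described in the abstract; the Python sketch below only illustrates the general idea of splitting ATL13-like water-surface points into virtual-station segments at along-track gaps and height jumps, then computing a mean WSE per segment after a simple MAD-based outlier filter (the thresholds and the outlier scheme are assumptions, not the authors' settings):

```python
import numpy as np

def auto_segment(along_track_m, height_m, gap_m=200.0, jump_m=0.5):
    """Split water-surface points along a beam into segments (candidate
    virtual stations) wherever the along-track gap or the height jump
    between consecutive points exceeds a threshold (illustrative values)."""
    order = np.argsort(along_track_m)
    x, h = along_track_m[order], height_m[order]
    breaks = np.where((np.diff(x) > gap_m) | (np.abs(np.diff(h)) > jump_m))[0] + 1
    return list(zip(np.split(x, breaks), np.split(h, breaks)))

def mean_wse(heights, k=3.0):
    """Mean water-surface elevation for one segment after a simple
    median-absolute-deviation (MAD) outlier filter -- one of several
    equally plausible outlier-removal schemes."""
    h = np.asarray(heights, dtype=float)
    med = np.median(h)
    mad = max(np.median(np.abs(h - med)), 1e-9)
    keep = np.abs(h - med) <= k * 1.4826 * mad
    return float(np.mean(h[keep]))

# Toy example: two water bodies separated by a 1 km dry gap.
rng = np.random.default_rng(0)
x = np.concatenate([np.linspace(0, 500, 50), np.linspace(1500, 1900, 40)])
h = np.concatenate([rng.normal(12.3, 0.05, 50), rng.normal(9.8, 0.05, 40)])
for seg_x, seg_h in auto_segment(x, h):
    print(f"VS at {seg_x.mean():7.1f} m: mean WSE = {mean_wse(seg_h):.2f} m")
```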

Keywords: lidar sensor, virtual station, cross section, mean water surface elevation, beam/track segmentation

Procedia PDF Downloads 62
556 Deflagration and Detonation Simulation in Hydrogen-Air Mixtures

Authors: Belyayev P. E., Makeyeva I. R., Mastyuk D. A., Pigasov E. E.

Abstract:

Previously, the term "hydrogen safety" was used mostly in the context of NPP safety. Due to the rising interest in "green" and, particularly, hydrogen power engineering, the problem of hydrogen safety at industrial facilities has become ever more urgent. In Russia, industrial production of hydrogen is intended to be carried out by placing a chemical engineering plant near an NPP, which supplies the plant with the necessary energy. In this approach, the production of hydrogen involves a wide range of combustible gases, such as methane, carbon monoxide, and hydrogen itself. Considering probable incidents, a sudden release of combustible gas into open space with subsequent ignition is by itself less dangerous than ignition of the combustible mixture in the presence of numerous pipelines, reactor vessels, and fitting frames. Even ignition of 2100 cubic meters of a hydrogen-air mixture in open space produces velocities and pressures that are much lower than those of the Chapman-Jouguet condition and do not exceed 80 m/s and 6 kPa, respectively. However, blockage of the space, significant changes of channel diameter along the path of flame propagation, and the presence of gas suspensions lead to significant deflagration acceleration and to its transition into detonation or quasi-detonation. At the same time, process parameters acquired from experiments at specific experimental facilities are not general, and their application to other facilities can only be conventional and qualitative. Yet conducting experimental deflagration and detonation investigations for each specific industrial facility project, in order to determine safe placement of infrastructure units, is not feasible due to the high cost and hazard, whereas numerical experiments are significantly cheaper and safer. Hence, the development of a numerical method that allows the description of reacting flows in domains with complex geometry is promising. The basis of this method is a modification of the Kuropatenko method for calculating shock waves, recently developed by the authors, which allows its use in Eulerian coordinates. The current work presents the results of this development. In addition, numerical simulation results are compared with experimental series on flame propagation in shock tubes with orifice plates.
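
The authors' Eulerian reformulation of the Kuropatenko method is not detailed in the abstract; as a rough illustration of the kind of shock-capturing closure involved, the Python sketch below evaluates one common textbook form of the Kuropatenko artificial-viscosity pressure for cells in compression (the formula choice, gas properties, and numbers are assumptions for illustration only, not the authors' scheme):

```python
import numpy as np

def kuropatenko_q(rho, c_s, du, gamma=1.4):
    """Kuropatenko-type artificial-viscosity pressure for cells in
    compression (du = velocity jump across the cell < 0); zero in
    expansion. A common textbook form of the closure, shown only as a
    sketch of the shock-capturing term the method builds on."""
    a = (gamma + 1.0) / 4.0
    q = rho * (a * np.abs(du) + np.sqrt((a * du) ** 2 + c_s ** 2)) * np.abs(du)
    return np.where(du < 0.0, q, 0.0)

# Illustrative use inside a 1-D hydro step: du is the jump of node
# velocities across each cell of a shocked combustible mixture.
rho = np.full(5, 0.9)                              # kg/m^3 (illustrative)
c_s = np.full(5, 400.0)                            # m/s (illustrative sound speed)
du = np.array([-50.0, -10.0, 0.0, 5.0, -200.0])    # m/s
print(kuropatenko_q(rho, c_s, du))                 # Pa, nonzero only where du < 0
```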

Keywords: CFD, reacting flow, DDT, gas explosion

Procedia PDF Downloads 90
555 Cardiac Protective Effect of Olive Oil against Ischemia Reperfusion-Induced Cardiac Arrhythmias in Isolated Diabetic Rat Hearts

Authors: Ishfaq A. Bukhari, Bassem Yousef Sheikh, Abdulrahman Almotrefi, Osama Yousaf, Amer Mahmood

Abstract:

Olive oil is the primary source of fat in the Mediterranean diet, which is associated with low mortality from cardiovascular disease. Olive oil is rich in monounsaturated fatty acids and has been reported to have a variety of beneficial cardiovascular effects, including blood pressure lowering and anti-platelet, anti-diabetic, and anti-inflammatory actions. A growing body of evidence from preclinical and clinical studies has shown that olive oil improves insulin resistance, decreases vessel stiffness, and prevents thromboembolism. We evaluated the effects of olive oil against streptozotocin-induced diabetes and ischemia-reperfusion (I/R)-induced cardiac arrhythmias in isolated rat hearts. Diabetes was induced in male rats with a single intraperitoneal injection of streptozotocin (60 mg/kg), and the rats were treated for two months with olive oil (1 ml/kg p.o.). Control animals received saline. Blood glucose and body weight were monitored every 14 days. At the end of the treatment, the rats were sacrificed, and the hearts were isolated and mounted on a Langendorff apparatus. Blood glucose and body weight were not significantly different between the control and olive oil-treated animals. The control diabetic animals exhibited a 100% incidence of I/R-induced ventricular fibrillation, which was reduced to 0% with olive oil treatment. The duration of ventricular fibrillation was reduced from 98.8±2.3 seconds (control) to 0 seconds in the olive oil-treated group. Diltiazem, a calcium channel blocker (1 µmol/L), showed similar results and protected against I/R-induced cardiac disorders. Biochemical analysis of the cardiac tissues showed that diabetes and I/R produce marked pathological changes in the cardiomyocytes, including decreased glutathione (GSH) and increased oxidative stress (malondialdehyde; MDA). Pretreatment of the animals with olive oil (1 ml/kg p.o.) increased GSH and MDA levels. Olive oil also improved the diabetes-induced histopathological changes in the cardiomyocytes. These findings indicate that olive oil possesses cardioprotective properties. Further studies are under way in our laboratory to explore the mechanism of the cardioprotective effect of olive oil.

Keywords: diabetes, ischemia-reperfusion, olive oil, rat heart

Procedia PDF Downloads 463
554 Comparison of Marital Conflict Resolution Procedures and Parenting Styles between Nurses with Fixed and Rotating Shifts in Public Hospitals of Bandar Abbas, Iran

Authors: S. Abdolvahab Samavi, Kobra Hajializadeh, S. Abdolhadi Samavi

Abstract:

Nursing is critical work that can affect the health of society. A parenting style is a psychological construct representing the standard strategies that parents use in child rearing; the quality of parenting is more critical than the quantity of time spent with the child. Marital conflict resolution is conceptualized as the methods and processes involved in facilitating a peaceful resolution of conflict between couples. Both of these variables may be affected by nurses' job status. The aim of this study was to compare marital conflict resolution procedures and parenting styles between nurses with fixed and rotating shifts in public hospitals of Bandar Abbas, Iran. The statistical population included all married nurses in the hospitals of Bandar Abbas (900 persons). For sample size estimation, the Morgan table was used, and 270 people were selected by random sampling. The conflict resolution styles and Baumrind parenting styles questionnaires were used to collect data on the study variables. Descriptive and inferential statistics were used for data analysis. The results showed a significant difference between the two groups in conflict resolution styles, with nurses on fixed shifts showing more effective conflict resolution styles. There was also a significant difference between the two groups in parenting styles, with nurses on fixed shifts showing more effective parenting styles. Overall, the results of this study showed that nurses' job status affected their marital conflict resolution and parenting styles. Health system managers should consider these issues in organising nurses' work and, if possible, employ married nurses on fixed day (vs. rotating) shifts.

Keywords: marital conflict resolution procedures, parenting styles, nurses with fixed and rotating shifts, public hospitals

Procedia PDF Downloads 424
553 A Segmentation Method for Grayscale Images Based on the Firefly Algorithm and the Gaussian Mixture Model

Authors: Donatella Giuliani

Abstract:

In this research, we propose an unsupervised grayscale image segmentation method based on a combination of the Firefly Algorithm and the Gaussian Mixture Model. First, the Firefly Algorithm is applied in a histogram-based search for cluster means. The Firefly Algorithm is a stochastic global optimization technique inspired by the flashing behaviour of fireflies; in this context, it is used to determine the number of clusters and the corresponding cluster means in a histogram-based segmentation approach. These means are then used in the initialization step of the parameter estimation of a Gaussian Mixture Model. The parametric probability density function of a Gaussian Mixture Model is represented as a weighted sum of Gaussian component densities, whose parameters are estimated with the iterative Expectation-Maximization technique. The coefficients of the linear superposition of Gaussians can be interpreted as the prior probabilities of each component. Applying Bayes' rule, the posterior probabilities of the grayscale intensities are evaluated, and their maxima are used to assign each pixel to a cluster according to its gray-level value. The proposed approach appears fairly solid and reliable even when applied to complex grayscale images. Validation was performed using several standard measures: the Root Mean Square Error (RMSE), the Structural Content (SC), the Normalized Correlation Coefficient (NK), and the Davies-Bouldin (DB) index. The results achieved strongly confirm the robustness of this grayscale segmentation method based on a metaheuristic algorithm. Another noteworthy advantage of this methodology is the use of the maxima of the responsibilities for pixel assignment, which implies a considerable reduction in computational cost.
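
As a minimal sketch of the described pipeline, the Python example below uses a small firefly search over the gray-level histogram to find cluster means, feeds them to scikit-learn's GaussianMixture as the initial means, and assigns each pixel to the component of maximum posterior responsibility. The number of clusters k is fixed here for simplicity (whereas the paper also uses the firefly search to determine it), and all parameter values are illustrative assumptions:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def firefly_means(hist, k, n_fireflies=20, n_iter=60,
                  beta0=1.0, gamma=0.01, alpha=5.0, seed=0):
    """Histogram-based search for k cluster means with a minimal firefly
    algorithm. Fitness: negative histogram-weighted squared distance of
    each gray level to its nearest candidate mean."""
    rng = np.random.default_rng(seed)
    levels = np.arange(256.0)

    def fitness(means):
        d = np.abs(levels[:, None] - means[None, :])       # 256 x k distances
        return -np.sum(hist * d.min(axis=1) ** 2)

    pop = rng.uniform(0, 255, size=(n_fireflies, k))        # firefly positions
    light = np.array([fitness(p) for p in pop])              # brightness
    for _ in range(n_iter):
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if light[j] > light[i]:                       # move i toward brighter j
                    r2 = np.sum((pop[i] - pop[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)
                    pop[i] += beta * (pop[j] - pop[i]) + alpha * (rng.random(k) - 0.5)
                    pop[i] = np.clip(pop[i], 0, 255)
                    light[i] = fitness(pop[i])
        alpha *= 0.97                                         # cool the random walk
    return np.sort(pop[np.argmax(light)])

def segment(image, k):
    """Fit a GMM to the gray levels, initialised with the firefly means,
    and assign each pixel to the component of maximum posterior."""
    pixels = image.reshape(-1, 1).astype(float)
    hist = np.bincount(image.ravel().astype(np.uint8), minlength=256).astype(float)
    means0 = firefly_means(hist, k).reshape(-1, 1)
    gmm = GaussianMixture(n_components=k, means_init=means0, random_state=0).fit(pixels)
    return gmm.predict(pixels).reshape(image.shape)           # argmax responsibility

# Toy example: a synthetic grayscale image with three intensity regions.
rng = np.random.default_rng(1)
img = np.concatenate([rng.normal(m, 8, (40, 120)) for m in (60, 130, 200)]).clip(0, 255)
labels = segment(img, k=3)
print(np.unique(labels, return_counts=True))
```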

Keywords: clustering images, firefly algorithm, Gaussian mixture model, metaheuristic algorithm, image segmentation

Procedia PDF Downloads 217