Search results for: damage prediction models
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 10121

3821 Designing Equivalent Model of Floating Gate Transistor

Authors: Birinderjit Singh Kalyan, Inderpreet Kaur, Balwinder Singh Sohi

Abstract:

In this paper, an equivalent model for the floating gate transistor has been proposed. Using the floating gate voltage value, capacitive coupling coefficients have been found at different bias conditions. The amount of charge present on the gate has then been calculated using the transient models of hot electron programming and Fowler-Nordheim tunnelling. The proposed model can be extended to transient conditions as well. The SPICE equivalent model is designed, and the current-voltage and transfer characteristics are comparatively analysed. The dc current-voltage characteristics, as well as the dc transfer characteristics, have been plotted for an FGMOS with W/L=0.25μm/0.375μm and an inter-poly capacitance of 0.8fF for both programmed and erased states. A comparative analysis has been made between the present model and previously available capacitive coupling coefficient methods.
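As a rough illustration of the coupling-coefficient relation underlying such a model, a minimal sketch follows; the capacitance values, bias voltages and stored charge are assumed for illustration and are not the parameters extracted in the paper.

```python
# Sketch: floating-gate voltage of an FGMOS from capacitive coupling coefficients.
# All numerical values below are illustrative assumptions, not the extracted parameters.

def floating_gate_voltage(v_cg, v_d, v_s, v_b, q_fg, c_cg, c_d, c_s, c_b):
    """V_FG = (C_cg*V_cg + C_d*V_d + C_s*V_s + C_b*V_b + Q_FG) / C_T."""
    c_total = c_cg + c_d + c_s + c_b
    k_cg, k_d, k_s, k_b = (c / c_total for c in (c_cg, c_d, c_s, c_b))  # coupling coefficients
    return k_cg * v_cg + k_d * v_d + k_s * v_s + k_b * v_b + q_fg / c_total

# Example: erased vs. programmed state differ only in the stored charge Q_FG.
c_cg, c_d, c_s, c_b = 0.8e-15, 0.1e-15, 0.1e-15, 0.05e-15   # farads (assumed)
print(floating_gate_voltage(2.5, 1.0, 0.0, 0.0, 0.0,    c_cg, c_d, c_s, c_b))  # erased
print(floating_gate_voltage(2.5, 1.0, 0.0, 0.0, -2e-15, c_cg, c_d, c_s, c_b))  # programmed
```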

Keywords: FGMOS, floating gate transistor, capacitive coupling coefficient, SPICE model

Procedia PDF Downloads 531
3820 Understanding the Dynamics of Linker Histone Using Mathematical Modeling and FRAP Experiments

Authors: G. Carrero, C. Contreras, M. J. Hendzel

Abstract:

Linker histones or histones H1 are highly mobile nuclear proteins that regulate the organization of chromatin and limit DNA accessibility by binding to the chromatin structure (DNA and associated proteins). It is known that this binding process is driven by both slow (strong binding) and rapid (weak binding) interactions. However, the exact binding mechanism has not been fully described. Moreover, the existing models account for only one type of bound population and do not distinguish explicitly between weakly and strongly bound proteins. Thus, we propose different systems of reaction-diffusion equations to describe explicitly the rapid and slow interactions during a FRAP (Fluorescence Recovery After Photobleaching) experiment. We perform a model comparison analysis to characterize the binding mechanism of histone H1 and provide new meaningful biophysical information on the kinetics of histone H1.
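As a point of reference, a minimal sketch of a reaction-dominant two-binding-state FRAP recovery curve is given below; it neglects diffusion (a well-mixed simplification of the full reaction-diffusion systems proposed here), and the rate constants are assumed for illustration rather than fitted values.

```python
# Sketch: reaction-dominant FRAP recovery with two binding states (weak and strong),
# assuming the free pool equilibrates instantaneously (diffusion neglected).
# Rate constants are illustrative assumptions, not the paper's fitted values.
import numpy as np

k_on_w, k_off_w = 0.5, 0.2    # weak (rapid) interaction, 1/s
k_on_s, k_off_s = 0.05, 0.01  # strong (slow) interaction, 1/s

# Equilibrium fractions of free, weakly bound and strongly bound H1.
denom = 1.0 + k_on_w / k_off_w + k_on_s / k_off_s
f_free = 1.0 / denom
f_weak = (k_on_w / k_off_w) / denom
f_strong = (k_on_s / k_off_s) / denom

def frap_recovery(t):
    """Normalized fluorescence in the bleached spot after photobleaching."""
    return 1.0 - f_weak * np.exp(-k_off_w * t) - f_strong * np.exp(-k_off_s * t)

t = np.linspace(0.0, 300.0, 301)          # seconds
curve = frap_recovery(t)                  # candidate model to fit against FRAP data
```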

Keywords: FRAP (Fluorescence Recovery After Photobleaching), histone H1, histone H1 binding kinetics, linker histone, reaction-diffusion equation

Procedia PDF Downloads 419
3819 Implementation of Cloud Customer Relationship Management in Banking Sector: Strategies, Benefits and Challenges

Authors: Ngoc Dang Khoa Nguyen, Imran Ali

Abstract:

Cloud customer relationship management (CRM) has emerged as an innovative tool to augment customer satisfaction and the performance of banking systems. Cloud CRM allows banks to collect, analyze and utilize customer-associated information and update their systems, thereby offering superior customer service. Cloud technologies have invaluable potential to ensure innovative customer experiences, successful collaboration, enhanced speed to marketplace and IT effectiveness. As such, many leading banks have been attracted towards the adoption of such innovative and customer-driven solutions to revolutionize their existing business models. Many Chief Information Officers (CIOs) have already implemented cloud CRM or are in the process of implementing it. However, many organizations are still reluctant to take such an initiative due to the lack of information on the factors influencing its implementation. This paper, therefore, aims to delve into the strategies, benefits and challenges intertwined in the implementation of cloud CRM in the banking sector and provide reliable solutions.

Keywords: banking sector, cloud computing, cloud CRM, strategy

Procedia PDF Downloads 149
3818 Timed and Colored Petri Nets for Modeling and Verifying Cloud System Elasticity

Authors: Walid Louhichi, Mouhebeddine Berrima, Narjes Ben Rajed

Abstract:

Elasticity is an essential property of cloud computing. As the name suggests, it constitutes the ability of a cloud system to adjust resource provisioning in relation to a fluctuating workload. There are two types of elasticity operations, vertical and horizontal. In this work, we are interested in horizontal scaling, which is ensured by two mechanisms: scaling in and scaling out. Following the sizing of the system, we can adopt scaling in in the event of over-supply and scaling out in the event of under-supply. In this paper, we propose a formal model, based on colored and timed Petri nets, for the modeling of the duplication and the removal of a virtual machine from a server. The model is expressed in the formal Petri net modeling language. The proposed models are edited, verified, and simulated with two examples implemented in CPN Tools, a modeling tool for colored and timed Petri nets.
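For orientation, the scale-in/scale-out decisions that the Petri net models capture can be sketched as a simple threshold controller; the thresholds, VM limits and utilization values below are assumptions for illustration, not part of the formal model.

```python
# Sketch: the horizontal-scaling decisions modeled by the Petri nets, written as a
# simple threshold controller (thresholds and VM granularity are assumed).
def elasticity_controller(load_per_vm, n_vms, high=0.8, low=0.3, n_min=1, n_max=10):
    """Return the new number of virtual machines for the observed utilization."""
    if load_per_vm > high and n_vms < n_max:
        return n_vms + 1        # scaling out: duplicate a VM (under-supply)
    if load_per_vm < low and n_vms > n_min:
        return n_vms - 1        # scaling in: remove a VM (over-supply)
    return n_vms                # system already correctly sized

n = 2
for load in [0.9, 0.95, 0.6, 0.2, 0.1]:   # observed utilization per VM
    n = elasticity_controller(load, n)
    print(load, "->", n, "VMs")
```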

Keywords: cloud computing, elasticity, elasticity controller, petri nets, scaling in, scaling out

Procedia PDF Downloads 138
3817 Molecular Characterization of Major Isolated Organism Involved in Bovine Subclinical Mastitis

Authors: H. K. Ratre, M. Roy, S. Roy, M. S. Parmar, V. Bhagat

Abstract:

Mastitis is a common problem of dairy industries. Reduction in milk production and irreparable damage to the udder associated with the disease are common causes of culling of dairy cows. Milk from infected animals is not suitable for drinking or for making different milk products, so the disease has major economic importance in dairy cattle. The aims of this study were to investigate the bacteriological panorama in milk from udder quarters with subclinical mastitis and to carry out molecular characterization of the major isolated organisms from subclinical mastitis-affected cows in and around the Durg and Rajnandgaon districts of Chhattisgarh. Isolation and identification of bacteria from the milk samples of subclinical mastitis-affected cows were done by standard and routine culture procedures. A total of 78 isolates were obtained from cows and, among the various bacteria isolated, Staphylococcus spp. occupied the prime position with an occurrence rate of 51.282%. The other bacteria isolated include Streptococcus spp. (20.512%), Micrococcus spp. (14.102%), E. coli (8.974%), Klebsiella spp. (2.564%), Salmonella spp. (1.282%) and Proteus spp. (1.282%). Staphylococcus spp. was thus isolated as the major causative agent of subclinical mastitis in the studied area. Molecular characterization of Staphylococcus aureus isolates was done for genetic expression of the virulence genes 'nuc' (encoding the thermonuclease exoenzyme), coa and spa by PCR amplification of the respective genes in 25 Staphylococcus isolates. In the present study, 15 isolates (77.27%) out of 20 coagulase-positive isolates were found to be genotypically positive for 'nuc', whereas 20 isolates (52.63%) out of 38 CNS expressed the presence of the same virulence gene. Three Staphylococcus isolates were found to be genotypically positive for the coa gene; amplification of the coa gene yielded two different products of 627 and 710 bp. Amplification of the gene segment encoding the IgG binding region of protein A (spa) revealed sizes of 220 and 253 bp in two Staphylococcus isolates. The X-region binding of the spa gene produced an amplicon of 315 bp in one Staphylococcus isolate. Staphylococcus aureus was found to be the major isolate (51.28%) responsible for causing subclinical mastitis in cows and also showed expression of the virulence genes nuc, coa and spa.

Keywords: mastitis, bacteria, characterization, expression, gene

Procedia PDF Downloads 202
3816 Efficacy and Safety of Inhaled Nebulized Chemotherapy in Treatment of Patients with Newly Diagnosed Pulmonary Tuberculosis in Comparison to Standard Antimycobacterial Therapy

Authors: M. Kuzhko, M. Gumeniuk, D. Butov, T. Tlustova, O. Denysov, T. Sprynsian

Abstract:

The objective of this work was to study the efficacy and safety of inhaled nebulized chemotherapy in the treatment of patients with newly diagnosed pulmonary tuberculosis in comparison with standard antimycobacterial therapy. Materials and methods: The study involved 68 patients aged between 20 and 70 years with newly diagnosed pulmonary tuberculosis. Patients were allocated to two groups. The first (main, n=21) group of patients received standard chemotherapy plus 0.15 g of isoniazid and 0.15 g of rifampicin inhaled through a nebulizer; they also received salmeterol 50 mcg + fluticasone propionate 250 mcg at 2 breaths twice a day for 2 months. The second (control, n=47) group of patients received standard chemotherapy, consisting of orally administered isoniazid (0.3 g), rifampicin (0.6 g), pyrazinamide (2 g) and ethambutol (1.2 g), with a dose reduction after the intensive phase of the therapy. The anti-TB drugs were procured through Ukraine’s centralized national supply system. Results: Intoxication symptoms in the first group resolved after 1.39±0.18 months, whereas in the second group they resolved after 2.7±0.1 months, p<0.001. Moreover, regression of respiratory symptoms in the first group was observed after 1.6±0.2 months, compared with 2.5±0.2 months in the second group, p<0.05. The bacillary excretion period was shortened: conversion within 1 month was observed in 66.6±10.5% of the main group compared to 27.6±6.5% of the control group, p<0.05. In addition, the period of cavity healing was reduced to 2.9±0.2 months in the main group compared to 3.7±0.1 months in the control group, p<0.05. Residual radiological lung damage findings (large residual changes) were observed in 22 (23.8±9.5%) patients of the main group versus 24 (51.0±7.2%) patients in the control group, p<0.05. After completion of treatment, grade II-III bronchial scar stenosis was diagnosed in 3 (14.2±7.8%) patients in the main group and 17 (68.0±6.8%) in the control group, p<0.05. The duration of hospital treatment was 2.4±0.4 months in the main group and 4.1±0.4 months in the control group, p<0.05. Conclusion: Administration of inhaled nebulized chemotherapy in patients with newly diagnosed pulmonary tuberculosis resulted in a comparatively quick reduction of disease manifestations.

Keywords: inhaled nebulized chemotherapy, pulmonary tuberculosis, tuberculosis, treatment of tuberculosis

Procedia PDF Downloads 181
3815 Frequency Selective Filters for Estimating the Equivalent Circuit Parameters of Li-Ion Battery

Authors: Arpita Mondal, Aurobinda Routray, Sreeraj Puravankara, Rajashree Biswas

Abstract:

The most difficult part of designing a battery management system (BMS) is battery modeling. A good battery model can capture the dynamics, which helps in energy management through accurate model-based state estimation algorithms. So far, the most suitable and fruitful model is the equivalent circuit model (ECM). However, in real-time applications the model parameters are time-varying, changing with current, temperature, state of charge (SOC) and aging of the battery, and this has a great impact on the performance of the model. Therefore, to increase the equivalent circuit model performance, the parameter estimation has been carried out in the frequency domain. The battery is a very complex system, associated with various chemical reactions and heat generation, so it is very difficult to select the optimal model structure. In general, increasing the model order improves the model accuracy; however, a higher order model tends towards over-parameterization and unfavorable prediction capability, while the model complexity increases enormously. In the time domain, it becomes difficult to solve higher order differential equations as the model order increases. This problem can be resolved by frequency domain analysis, where the overall computational problems due to ill-conditioning are reduced. In the frequency domain, several dominating frequencies can be found in the input as well as the output data. The selective frequency domain estimation has been carried out, first by estimating the frequencies of the input and output by subspace decomposition, then by choosing specific bands from the most dominating to the least, while carrying out least-squares, recursive least-squares and Kalman filter based parameter estimation. In this paper, a second order battery model consisting of three resistors, two capacitors, and one SOC-controlled voltage source has been chosen. For model identification and validation, hybrid pulse power characterization (HPPC) tests have been carried out on a 2.6 Ah LiFePO₄ battery.
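A minimal sketch of the second-order ECM impedance and a frequency-domain least-squares fit is given below; the parameter values, frequency band and noise level are assumed for illustration and are not the HPPC-identified values.

```python
# Sketch: impedance of a second-order (2RC) equivalent circuit model and a
# least-squares fit at a selected frequency band (all values are assumed).
import numpy as np
from scipy.optimize import least_squares

def ecm_impedance(params, w):
    """Z(jw) = R0 + R1/(1 + jw*R1*C1) + R2/(1 + jw*R2*C2)."""
    r0, r1, c1, r2, c2 = params
    return r0 + r1 / (1 + 1j * w * r1 * c1) + r2 / (1 + 1j * w * r2 * c2)

true = np.array([0.05, 0.02, 50.0, 0.03, 800.0])          # R0, R1, C1, R2, C2 (assumed)
w = 2 * np.pi * np.logspace(-3, 1, 20)                    # selected frequency band, rad/s
z_meas = ecm_impedance(true, w) + 1e-4 * np.random.randn(len(w))  # stand-in "measured" data

def residuals(p):
    z = ecm_impedance(p, w)
    return np.concatenate([(z - z_meas).real, (z - z_meas).imag])

fit = least_squares(residuals, x0=[0.1, 0.01, 10.0, 0.01, 100.0], bounds=(0, np.inf))
print(fit.x)   # estimated R0, R1, C1, R2, C2
```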

Keywords: equivalent circuit model, frequency estimation, parameter estimation, subspace decomposition

Procedia PDF Downloads 131
3814 A Novel Approach of Power Transformer Diagnostic Using 3D FEM Parametrical Model

Authors: M. Brandt, A. Peniak, J. Makarovič, P. Rafajdus

Abstract:

This paper deals with a novel approach to power transformer diagnostics. The approach identifies the exact location and the range of a fault in the transformer and helps to reduce operation costs related to handling of the faulty transformer, its disassembly and repair. The advantage of the approach is the possibility to simulate a healthy transformer and also all faults that can occur in the transformer during its operation, without disassembling it, which is very expensive in practice. The approach is based on creating the frequency dependent impedance of the transformer by sweep frequency response analysis measurements and by 3D FE parametric modeling of the fault in the transformer. The parameters of the 3D FE model are the position and the range of the axial short circuit. Then, by comparing the frequency dependent impedances of the parametric models with the measured ones, the location and the range of the fault are identified. The approach was tested on a real transformer and showed high coincidence between the real fault and the simulated one.
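A minimal sketch of the comparison step is shown below; the impedance function, candidate positions and fault ranges are placeholders standing in for the SFRA measurements and the 3D FE parametric simulations.

```python
# Sketch of the comparison step: pick the parametric model (fault position, fault range)
# whose simulated frequency response best matches the measured one.
import numpy as np

def simulated_impedance(position, extent, freqs):
    """Placeholder for |Z(f)| from a 3D FE parametric model of an axial short circuit."""
    return 1.0 + 0.1 * position * np.log10(freqs) + 0.05 * extent * np.sin(freqs / 1e4)

freqs = np.logspace(1, 6, 200)                                        # Hz
z_measured = simulated_impedance(position=4, extent=2, freqs=freqs)   # stand-in for SFRA data

best, best_err = None, np.inf
for position in range(1, 11):            # candidate axial positions (e.g. winding discs)
    for extent in range(1, 6):           # candidate fault ranges
        err = np.sqrt(np.mean((simulated_impedance(position, extent, freqs) - z_measured) ** 2))
        if err < best_err:
            best, best_err = (position, extent), err

print("identified fault (position, range):", best)
```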

Keywords: transformer, parametrical model of transformer, fault, sweep frequency response analysis, finite element method

Procedia PDF Downloads 469
3813 The Extent of Land Use Externalities in the Fringe of Jakarta Metropolitan: An Application of Spatial Panel Dynamic Land Value Model

Authors: Rahma Fitriani, Eni Sumarminingsih, Suci Astutik

Abstract:

In a fast growing region, conversion of agricultural lands which are surrounded by new development sites will occur sooner than expected. This phenomenon has been experienced by many regions in Indonesia, especially the fringe of Jakarta (BoDeTaBek). Given its proximity to Indonesia’s capital city, rapid conversion of land in this area is an unavoidable process. The land conversion expands spatially into the fringe regions, which were initially dominated by agricultural land or conservation sites. Without proper control or growth management, this activity will invite greater costs than benefits. The current land use is the use which maximizes its value. In order to maintain land for agricultural activity or conservation, some efforts are needed to keep the land value of this activity as high as possible. In this case, knowledge regarding the functional relationship between land value and its driving forces is necessary. In a fast growing region, development externalities are assumed to be the dominant driving force. Land value is the product of past decisions about its use, and it is also affected by the local characteristics and the observed surrounding land use (externalities) from the previous period. The effect of each factor on land value has dynamic and spatial dimensions; an empirical spatial dynamic land value model is therefore more useful to capture them. The model will be useful to test and to estimate the extent of land use externalities on land value in the short run as well as in the long run, and it serves as a basis to formulate an effective urban growth management policy. This study applies the model to the case of land value in the fringe of Jakarta Metropolitan. The model is used further to predict the effect of externalities on land value, in the form of a prediction map. For the case of Jakarta’s fringe, there is some evidence of the significance of neighborhood urban activity (negative externalities), the previous land value and local accessibility on land value. The effects accumulate dynamically over the years, but they only fully affect the land value after six years.
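For concreteness, a generic dynamic spatial panel land-value specification of the kind described can be written as follows; the notation is assumed here and is not necessarily the authors' exact estimating equation.

```latex
% Generic dynamic spatial panel land-value model (notation assumed):
% V_{it}: land value of parcel i at time t, w_{ij}: spatial weights,
% X_{it}: local characteristics, the W-terms: surrounding land use (externalities).
\begin{equation}
V_{it} = \tau V_{i,t-1} + \rho \sum_{j} w_{ij} V_{jt}
       + \eta \sum_{j} w_{ij} V_{j,t-1}
       + X_{it}\beta + \mu_i + \varepsilon_{it}
\end{equation}
```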

Keywords: growth management, land use externalities, land value, spatial panel dynamic

Procedia PDF Downloads 241
3812 Reconstruction of Age-Related Generations of Siberian Larch to Quantify the Climatogenic Dynamics of Woody Vegetation Close to the Upper Limit of Its Growth

Authors: A. P. Mikhailovich, V. V. Fomin, E. M. Agapitov, V. E. Rogachev, E. A. Kostousova, E. S. Perekhodova

Abstract:

Woody vegetation near the upper limit of its habitat is a sensitive indicator of the reaction of biota to regional climate changes. Quantitative assessment of temporal and spatial changes in the distribution of trees and plant biocenoses calls for the development of new modeling approaches based upon selected data from ground-level measurements and ultra-resolution aerial photography. Statistical models were developed for the study area located in the Polar Urals. These models allow obtaining probabilistic estimates for placing Siberian larch trees into one of three age intervals, namely 1-10, 11-40 and over 40 years, based on the Weibull distribution of the maximum horizontal crown projection. The authors developed the distribution map for larch trees with crown diameters exceeding twenty centimeters by deciphering aerial photographs made by a UAV from an altitude of fifty meters. The total number of larches was 88608, distributed across the abovementioned intervals as follows: 16980, 51740, and 19889 trees. The results demonstrate that two processes can be observed over recent decades: first, the intensive forestation of previously barren or lightly wooded fragments of the study area located within the patches of wood, woodlands, and sparse stands, and second, expansion into the mountain tundra. The current expansion of the Siberian larch in the region replaced the depopulation process that occurred in the course of the Little Ice Age from the late 13ᵗʰ to the end of the 20ᵗʰ century. Using data from field measurements of Siberian larch biometric parameters (including height, diameter at the root collar and at 1.3 meters, and maximum projection of the crown in two orthogonal directions) and data on tree ages obtained at nine circular test sites, the authors developed an artificial neural network model comprising two layers with three and two neurons, respectively. The model allows quantitative assessment of a specimen's age based on its height and maximum crown projection. Tree height and crown diameters can in turn be quantitatively assessed using data from aerial photographs and lidar scans. The resulting model can thus be used to assess the age of all Siberian larch trees in the area. The proposed approach, after validation, can be applied to assessing the age of other tree species growing near the upper tree boundary in other mountainous regions. This research was collaboratively funded by the Russian Ministry for Science and Education (project No. FEUG-2023-0002) and the Russian Science Foundation (project No. 24-24-00235) in the field of data modeling on the basis of artificial neural networks.
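A minimal sketch of such a small network, assuming synthetic placeholder data in place of the field measurements from the nine test sites, could look as follows.

```python
# Sketch: an MLP with two hidden layers (3 and 2 neurons) estimating larch age from
# tree height and maximum crown projection. The training data are synthetic
# placeholders standing in for the field measurements.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
height = rng.uniform(0.3, 12.0, 300)                       # m
crown = 0.4 + 0.25 * height + rng.normal(0, 0.3, 300)      # m, max crown projection
age = 5.0 * height + 8.0 * crown + rng.normal(0, 5, 300)   # years (toy relationship)

X, y = np.column_stack([height, crown]), age
model = MLPRegressor(hidden_layer_sizes=(3, 2), activation="tanh",
                     max_iter=5000, random_state=0).fit(X, y)

# Predict the age of a tree whose height and crown diameter come from UAV/lidar data.
print(model.predict([[4.5, 1.6]]))
```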

Keywords: treeline, dynamic, climate, modeling

Procedia PDF Downloads 45
3811 Evaluation of Compatibility between Produced and Injected Waters and Identification of the Causes of Well Plugging in a Southern Tunisian Oilfield

Authors: Sonia Barbouchi, Meriem Samcha

Abstract:

Scale deposition during water injection into the aquifer of oil reservoirs is a serious problem experienced in the oil production industry. One of the primary causes of scale formation and injection well plugging is the mixing of two incompatible waters. Considered individually, the waters may be quite stable at system conditions and present no scale problems. However, once they are mixed, reactions between ions dissolved in the individual waters may form insoluble products. The purpose of this study is to identify the causes of well plugging in a southern Tunisian oilfield, where fresh water has been injected into the producing wells to counteract the salinity of the formation waters and inhibit the deposition of halite. X-ray diffraction (XRD) mineralogical analysis has been carried out on scale samples collected from the blocked well. Two samples, collected from the formation water and the injected water, were analysed using inductively coupled plasma atomic emission spectroscopy, ion chromatography and other standard laboratory techniques. The results of the complete water analyses were the typical input parameters used to determine scaling tendency. Saturation index values for CaCO3, CaSO4, BaSO4 and SrSO4 scales were calculated for the water mixtures at different shares, under various temperature conditions, using a computerized scale prediction model. The compatibility study results showed that mixing the two waters tends to increase the probability of barite deposition. XRD analysis confirmed the compatibility study results, since it proved that the analysed deposits consisted predominantly of barite with minor galena. At the studied temperature conditions, the tendency for barite scale increases significantly with the increase of the fresh water share in the mixture. The future scale inhibition and removal strategies to be implemented in the concerned oilfield are derived in large part from the results of the present study.
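As a simplified illustration of the scaling-tendency calculation, the sketch below computes a barite saturation index for produced/injection water mixtures; it ignores activity corrections and ionic-strength effects, and the concentrations and solubility product are assumed values rather than the analysed waters.

```python
# Sketch: barite saturation index for a formation/injection water mixture, ignoring
# activity corrections (a simplification of a full scale-prediction model).
import math

KSP_BARITE = 10 ** -9.97          # approximate solubility product of BaSO4 at 25 degC

def barite_si(ba_mg_l, so4_mg_l):
    """SI = log10(IAP / Ksp), with concentrations converted from mg/L to mol/L."""
    ba = ba_mg_l / 137330.0       # Ba molar mass 137.33 g/mol
    so4 = so4_mg_l / 96060.0      # SO4 molar mass 96.06 g/mol
    return math.log10((ba * so4) / KSP_BARITE)

def mix(c_formation, c_injection, share_injection):
    return (1 - share_injection) * c_formation + share_injection * c_injection

# Illustrative concentrations (not the analysed waters): Ba-rich formation water,
# sulphate-bearing fresh injection water.
for share in (0.2, 0.5, 0.8):
    si = barite_si(mix(80.0, 0.5, share), mix(5.0, 40.0, share))
    print(f"injection share {share:.0%}: SI(barite) = {si:.2f}")   # SI > 0 => scaling tendency
```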

Keywords: compatibility study, produced water, scaling, water injection

Procedia PDF Downloads 155
3810 Experimental Investigation on Tsunami Acting on Bridges

Authors: Iman Mazinani, Zubaidah Ismail, Ahmad Mustafa Hashim, Amir Reza Saba

Abstract:

Two tragic tsunamis, which devastated the west coast of Sumatra Island, Indonesia in 2004 and North East Japan in 2011, damaged bridges to various extents. Tsunamis have resulted in the catastrophic deterioration of infrastructure, i.e., coastal structures, utilities and transportation facilities. A bridge structure performs a vital role in enabling people to carry out their daily activities and in supporting development, so a damaged bridge needs to be repaired expeditiously. In order to understand the effects of tsunami forces on bridges, experimental tests are carried out to measure the characteristics of the hydrodynamic force at various wave heights. Coastal bridge models designed at a 1:40 scale are used in a 24.0 m long hydraulic flume with a cross section of 1.5 m by 2.0 m. The horizontal forces and uplift forces in all cases show that forces increase nonlinearly with increasing wave amplitude.

Keywords: tsunami, bridge, horizontal force, uplift force

Procedia PDF Downloads 285
3809 Development of a Steam or Microwave-Assisted Sequential Salt-Alkali Pretreatment for Sugarcane Leaf Waste

Authors: Preshanthan Moodley

Abstract:

This study compares two different pretreatments for sugarcane leaf waste (SLW): steam salt-alkali (SSA) and microwave salt-alkali (MSA). The two pretreatment types were modelled, optimized, and validated with R² > 0.97. Reducing sugar yields of 1.21 g/g were obtained with optimized SSA pretreatment using 1.73 M ZnCl₂, 1.36 M NaOH and 9.69% solid loading, and 1.17 g/g with optimized MSA pretreatment using 1.67 M ZnCl₂ and 1.52 M NaOH at 400 W for 10 min. A lower pretreatment time (10 min, 83% lower) was required for the MSA model. The structure of pretreated SLW was assessed using scanning electron microscopy (SEM) and Fourier transform infrared (FTIR) analysis. The optimized SSA and MSA models showed lignin removal of 80.5% and 73%, respectively. The MSA pretreatment was further examined on sorghum leaves and Napier grass and showed yield improvements of 1.9- and 2.8-fold compared to recent reports. The developed pretreatment methods demonstrated high efficiency at enhancing enzymatic hydrolysis of various lignocellulosic substrates.

Keywords: lignocellulosic biomass, pretreatment, salt, sugarcane leaves

Procedia PDF Downloads 247
3808 Improvement in Blast Furnace Performance Using Softening - Melting Zone Profile Prediction Model at G Blast Furnace, Tata Steel Jamshedpur

Authors: Shoumodip Roy, Ankit Singhania, K. R. K. Rao, Ravi Shankar, M. K. Agarwal, R. V. Ramna, Uttam Singh

Abstract:

The productivity of a blast furnace and the quality of the hot metal produced are significantly dependent on the smoothness and stability of furnace operation. The permeability of the furnace bed, as well as the gas flow pattern, influences the steady control of process parameters. The softening-melting zone that is formed inside the furnace contributes largely to the distribution of the gas flow and the bed permeability. A better shape of the softening-melting zone enhances the performance of the blast furnace, thereby reducing fuel rates and improving furnace life. Therefore, a predictive model of the softening-melting zone profile can be utilized to control and improve the furnace operation. The shape of the softening-melting zone depends upon the physical and chemical properties of the agglomerates and iron ore charged in the furnace. Variations in the agglomerate proportion in the burden at G Blast Furnace disturbed the furnace stability. During such circumstances, it was found that a W-shaped softening-melting zone profile had formed inside the furnace. The formation of the W-shaped zone resulted in poor bed permeability and non-uniform gas flow. There was a significant increase in the heat loss at the lower zone of the furnace, the fuel demand increased, and a huge production loss was incurred. Therefore, visibility of the softening-melting zone profile was necessary in order to proactively optimize the process parameters and thereby operate the furnace smoothly. Using stave temperatures, a model was developed that predicted the shape of the softening-melting zone inside the furnace. It was observed that the furnace operated smoothly when the zone had an inverse V-shape and unstably when it had a W-shape. This model helped to control the heat loss, optimize the burden distribution and lower the fuel rate at G Blast Furnace, TSL Jamshedpur. As a result of furnace stabilization, productivity increased by 10% and the fuel rate was reduced by 80 kg/thm. Details of the process are discussed in this paper.

Keywords: agglomerate, blast furnace, permeability, softening-melting

Procedia PDF Downloads 238
3807 Influence of a High-Resolution Land Cover Classification on Air Quality Modelling

Authors: C. Silveira, A. Ascenso, J. Ferreira, A. I. Miranda, P. Tuccella, G. Curci

Abstract:

Poor air quality is one of the main environmental causes of premature deaths worldwide, particularly in cities, where the majority of the population lives. It is a consequence of successive land cover (LC) and land use changes resulting from the intensification of human activities. Knowing these landscape modifications in a comprehensive spatiotemporal dimension is, therefore, essential for understanding variations in air pollutant concentrations. In this sense, air quality models are very useful to simulate the physical and chemical processes that affect the dispersion and reaction of chemical species in the atmosphere. However, the modelling performance should always be evaluated, since the resolution of the input datasets largely dictates the reliability of the air quality outcomes. Among these data, an updated LC is an important parameter to be considered in atmospheric models, since it takes into account the Earth’s surface changes due to natural and anthropic actions and regulates the exchanges of fluxes (emissions, heat, moisture, etc.) between the soil and the air. This work aims to evaluate the performance of the Weather Research and Forecasting model coupled with Chemistry (WRF-Chem) when different LC classifications are used as input. The influence of two LC classifications was tested: i) the 24-class USGS (United States Geological Survey) LC database included by default in the model, and ii) the CLC (Corine Land Cover) and specific high-resolution LC data for Portugal, reclassified according to the new USGS nomenclature (33 classes). Two distinct WRF-Chem simulations were carried out to assess the influence of the LC on air quality over Europe and Portugal, as a case study, for the year 2015, using the nesting technique over three simulation domains (25 km2, 5 km2 and 1 km2 horizontal resolution). Based on the 33-class LC approach, particular emphasis was placed on Portugal, given the detail and higher LC spatial resolution (100 m x 100 m) compared to the CLC data (5000 m x 5000 m). As regards air quality, only the LC impacts on tropospheric ozone concentrations were evaluated, because ozone pollution episodes typically occur in Portugal, in particular during spring/summer, and there are few research works relating this pollutant to LC changes. The WRF-Chem results were validated by season and station typology using background measurements from the Portuguese air quality monitoring network. As expected, a better model performance was achieved at rural stations: moderate correlation (0.4 – 0.7), BIAS (10 – 21 µg.m-3) and RMSE (20 – 30 µg.m-3), and where higher average ozone concentrations were estimated. Comparing both simulations, small differences grounded on the Leaf Area Index and air temperature values were found, although the high-resolution LC approach shows a slight enhancement in the model evaluation. This highlights the role of the LC in the exchange of atmospheric fluxes and stresses the need to consider a high-resolution LC characterization combined with other detailed model inputs, such as the emission inventory, to improve air quality assessment.

Keywords: land use, spatial resolution, WRF-Chem, air quality assessment

Procedia PDF Downloads 138
3806 Hemispheric Locus and Gender Predict the Delay between the Moment of Stroke and Hospitalization

Authors: D. Anderlini, G. Wallis

Abstract:

Background: The number of people experiencing stroke is steadily increasing due to changes in diet and lifestyle, to longer life expectancy resulting in an older population, and to higher survival rates as a consequence of improvements in acute-phase care. This study considers what risk factors might contribute to delayed entry to hospital for treatment. Methods: We analyzed data from 2472 patients admitted to the Stroke Unit of the Royal Brisbane Women's Hospital, Australia, between 2002 and 2011. Results: Previous studies have reported that factors which can contribute to delay include the patient’s age, the time of day, physical location, visiting the GP instead of going to the emergency department, means of transport, severity of symptoms and type of stroke. Contrary to the findings of other studies, we found a strong correlation between side of lesion and delay in admission: patients with right hemisphere lesions had an average delay of 3.78 days, while patients with left hemisphere lesions had an average delay of 1.49 days. Damage to the right hemisphere generally results in motor impairment of the non-dominant hand and no speech impediment. In contrast, left hemisphere lesions can result in deficits to dominant hand function and aphasia, which will be noticed even if their impact on performance is relatively minor. A finding which goes against many previous studies is the fact that women get to the hospital much sooner than men, with an average delay of 0.92 days in women vs. 3.36 days in men. Conclusion: Acute surgical-pharmacological therapies are most effective if applied immediately after stroke, hence delays to admission can be crucial to the degree of recovery. The tendency of patients to overlook symptoms of right hemisphere lesions should be the target of information campaigns both for the general public and GPs. Why do men go to hospital so late? We don't know yet. Nevertheless, an awareness plan specifically directed at the male population should be on the agenda of Health Departments.

Keywords: gender, admission delay, stroke location, bioinformatics, biomedicine

Procedia PDF Downloads 216
3805 Stream Extraction from 1m-DTM Using ArcGIS

Authors: Jerald Ruta, Ricardo Villar, Jojemar Bantugan, Nycel Barbadillo, Jigg Pelayo

Abstract:

Streams are important in providing water supply for industrial, agricultural and human consumption; in short, where there are streams, there is life. Identifying streams is essential since many developed cities are situated in the vicinity of these bodies of water, and in flood management they serve as basins for surface runoff within the area. This study aims to process and generate stream features from a high-resolution digital terrain model (DTM) with 1-meter resolution using the Hydrology Tools of ArcGIS. The raster was filled, flow direction and flow accumulation were processed, stream order was then derived with the raster calculator, the result was converted to vector, and undesirable features were removed using ancillary data or Google Earth. During field validation, streams were classified as perennial, intermittent or ephemeral. Results show that more than 90% of the extracted features were accurate when assessed through field validation.
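A minimal sketch of this Hydrology Tools workflow in arcpy is given below; the workspace path, layer names and flow-accumulation threshold are assumptions, and the Spatial Analyst extension is required.

```python
# Sketch of the described ArcGIS Hydrology Tools workflow (paths, names and the
# accumulation threshold are assumptions; requires the Spatial Analyst extension).
import arcpy
from arcpy.sa import Fill, FlowDirection, FlowAccumulation, Con, StreamOrder, StreamToFeature

arcpy.CheckOutExtension("Spatial")
arcpy.env.workspace = r"C:\data\streams.gdb"    # assumed workspace

dtm = "dtm_1m"                                   # 1 m LiDAR-derived DTM
filled = Fill(dtm)                               # remove sinks
fdr = FlowDirection(filled)                      # D8 flow direction
facc = FlowAccumulation(fdr)                     # contributing cells per cell

streams = Con(facc > 10000, 1)                   # raster-calculator step: threshold (assumed)
order = StreamOrder(streams, fdr, "STRAHLER")    # Strahler stream order
StreamToFeature(order, fdr, "streams_vector", "NO_SIMPLIFY")  # convert to vector for cleaning/validation
```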

Keywords: digital terrain models, hydrology tools, strahler method, stream classification

Procedia PDF Downloads 257
3804 Testing the Capital Structure Behavior of Malaysian Firms: Shariah vs. Non-Shariah Compliant

Authors: Asyraf Abdul Halim, Mohd Edil Abd Sukor, Obiyathulla Ismath Bacha

Abstract:

This paper attempts to investigate the capital structure behavior of Shariah compliant firms of various levels, as well as firms that are consistently Shariah non-compliant, in Malaysia. The paper utilizes a unique dataset of firms with heterogeneous levels of Shariah-compliance status over a 20-year period from 1997 to 2016. The paper focuses on the effects of the dynamic forces behind capital structure variation, such as the optimal capital structure behavior based on the trade-off, pecking order, market timing and firm fixed-effect models of capital structure. This study documents significant evidence in support of the trade-off theory with a high speed of adjustment (SOA), as well as for time-invariant firm fixed effects, across all Shariah compliance groups.
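For reference, a sketch of the standard partial-adjustment form behind the speed-of-adjustment (SOA) estimate is given below; the notation is assumed and is not necessarily the exact estimating equation used in the paper.

```latex
% Standard partial-adjustment sketch: Lev* is the target leverage predicted from firm
% characteristics X, and lambda is the speed of adjustment toward that target.
\begin{align}
Lev^{*}_{it} &= \beta X_{i,t-1} \\
Lev_{it} - Lev_{i,t-1} &= \lambda \left( Lev^{*}_{it} - Lev_{i,t-1} \right) + \varepsilon_{it}
\end{align}
```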

Keywords: capital structure, market timing, trade-off theory, equity risk premium, Shariah-compliant firms

Procedia PDF Downloads 297
3803 Role of P53, KI67 and Cyclin A Immunohistochemical Assay in Predicting Wilms’ Tumor Mortality

Authors: Ahmed Atwa, Ashraf Hafez, Mohamed Abdelhameed, Adel Nabeeh, Mohamed Dawaba, Tamer Helmy

Abstract:

Introduction and Objective: Tumour staging and grading do not usually reflect the future behavior of Wilms' tumor (WT) with regard to mortality. Therefore, in this study, P53, Ki67 and cyclin A immunohistochemistry (IHC) were used in a trial to predict WT cancer-specific survival (CSS). Methods: In this nonconcurrent cohort study, patients' archived data, including age at presentation, gender, history, clinical examination and radiological investigations, were retrieved, and the patients were then reviewed at the outpatient clinic of a tertiary care center by history-taking, clinical examination and radiological investigations to detect the oncological outcome. Cases that received preoperative chemotherapy or died of causes other than WT were excluded. Formalin-fixed, paraffin-embedded specimens obtained from the previously preserved blocks at the pathology laboratory were taken on positively charged slides for IHC with p53, Ki67 and cyclin A. All specimens were examined by an experienced histopathologist devoted to urological practice and blinded to the patients' clinical findings. P53 and cyclin A staining were scored as 0 (no nuclear staining), 1 (<10% nuclear staining), 2 (10-50% nuclear staining) and 3 (>50% nuclear staining). The Ki67 proliferation index (PI) was graded as low, borderline or high. Results: Of the 75 cases, 40 (53.3%) were males and 35 (46.7%) were females, and the median age was 36 months (range 2-216). With a mean follow-up of 78.6±31 months, cancer-specific mortality (CSM) occurred in 15 (20%) and 11 (14.7%) patients, respectively. The Kaplan-Meier curve was used for survival analysis, and groups were compared using the log-rank test. Multivariate logistic regression and Cox regression were not used because only one variable (cyclin A) had shown statistical significance (P=.02), whereas the other significant factor (residual tumor) had few cases. Conclusions: Cyclin A IHC should be considered as a marker for the prediction of WT CSS. Prospective studies with a larger sample size are needed.
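A minimal sketch of this survival comparison, using the lifelines package and a small placeholder data frame in place of the 75 archived cases, could look as follows.

```python
# Sketch: Kaplan-Meier curves stratified by cyclin A score with a log-rank comparison.
# The data frame is a small placeholder, not the study's archived WT cases.
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.DataFrame({
    "months":  [24, 60, 96, 110, 48, 30, 84, 72, 55, 100],   # follow-up time
    "died":    [1, 0, 0, 0, 1, 1, 0, 0, 1, 0],               # cancer-specific death
    "cyclinA": [3, 1, 0, 1, 3, 2, 0, 1, 3, 0],               # IHC score 0-3
})

high = df["cyclinA"] >= 2
kmf = KaplanMeierFitter()
for label, grp in (("cyclin A 2-3", df[high]), ("cyclin A 0-1", df[~high])):
    kmf.fit(grp["months"], grp["died"], label=label)
    print(label, "median CSS:", kmf.median_survival_time_)

res = logrank_test(df[high]["months"], df[~high]["months"],
                   df[high]["died"], df[~high]["died"])
print("log-rank p =", res.p_value)
```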

Keywords: wilms’ tumour, nephroblastoma, urology, survival

Procedia PDF Downloads 55
3802 The Use of Bleomycin and Analogues to Probe the Chromatin Structure of Human Genes

Authors: Vincent Murray

Abstract:

The chromatin structure at the transcription start sites (TSSs) of genes is very important in the control of gene expression. In order for gene expression to occur, the chromatin structure at the TSS has to be altered so that the transcriptional machinery can be assembled and RNA transcripts can be produced. In particular, the nucleosome structure and positioning around the TSS have to be changed. Bleomycin is utilized as an anti-tumor agent to treat Hodgkin's lymphoma, squamous cell carcinoma, and testicular cancer. Bleomycin produces DNA damage in human cells, and DNA strand breaks, especially double-strand breaks, are thought to be responsible for the cancer chemotherapeutic activity of bleomycin. Bleomycin is a large glycopeptide with a molecular weight of approximately 1500 Daltons, and hence its DNA strand cleavage activity can be utilized as a probe of chromatin structure. In this project, Illumina next-generation DNA sequencing technology was used to determine the position of DNA double-strand breaks at the TSSs of genes in intact cells. In this genome-wide study, it was found that bleomycin cleavage preferentially occurred at the TSSs of actively transcribed human genes in comparison with non-transcribed genes. There was a correlation between the level of enhanced bleomycin cleavage at TSSs and the degree of transcriptional activity. In addition, bleomycin was able to determine the position of nucleosomes at the TSSs of human genes. Bleomycin analogues were also utilized as probes of chromatin structure at the TSSs of human genes. In a similar manner to bleomycin, the bleomycin analogues 6′-deoxy-BLM Z and zorbamycin preferentially cleaved at the TSSs of human genes. Interestingly, this degree of enhanced TSS cleavage inversely correlated with the cytotoxicity (IC50 values) of the BLM analogues. This indicated that the degree of cleavage by bleomycin analogues at the TSSs of human genes was very important in the cytotoxicity of bleomycin and its analogues. It also provided a deeper insight into the mechanism of action of this cancer chemotherapeutic agent, since actively transcribed genes were preferentially targeted.

Keywords: anti-cancer activity, chromatin structure, cytotoxicity, gene expression, next-generation DNA sequencing

Procedia PDF Downloads 105
3801 Popular eReaders

Authors: Tom D. Gedeon, Ujala Rampaul

Abstract:

The evaluation of electronic consumer goods is most often done from the perspective of analysing the latest models, comparing their advantages and disadvantages with respect to price. This style of evaluation is often performed by one or a few product experts on a wide range of features that may not be applicable to each user. We instead used a scenario-based approach to evaluate a number of e-readers. The setting is similar to that of a user who is interested in a new product or technology and has allocated a limited budget. We evaluate the quality and usability of e-readers available within that budget range. This is based on the assumption of a rational market which prices older second-hand devices the same as functionally equivalent new devices. We describe our evaluation and comparison of four branded eReaders, as the initial stage of a larger project. The scenario has a range of tasks approximating a busy person who does not bother to read the manual. We found navigation within books to be the most significant differentiator between the eReaders in our scenario-based evaluation process.

Keywords: eReader, scenario based, price comparison, Kindle, Kobo, Nook, Sony, technology adoption

Procedia PDF Downloads 511
3800 Microfluidic Based High Throughput Screening System for Photodynamic Therapy against Cancer Cells

Authors: Rina Lee, Chung-Hun Oh, Eunjin Lee, Jeongyun Kim

Abstract:

Photodynamic therapy (PDT) is a treatment that uses a photosensitizer as a drug to damage and kill cancer cells. After injecting the photosensitizer into the bloodstream, the drug is selectively absorbed by cancer cells. The area to be treated is then exposed to specific wavelengths of light, and the photosensitizer produces a form of oxygen that kills nearby cancer cells. PDT has the advantage of destroying the tumor with minimal side effects on normal cells. However, PDT is not yet a fully established method for cancer therapy, because its mechanism is not entirely clear and parameters such as the intensity of light and the dose of photosensitizer are not optimized for different types of cancers. To optimize these parameters, we suggest a novel microfluidic system to automatically control the intensity of light exposure with a personal computer (PC). A polydimethylsiloxane (PDMS) microfluidic chip is composed of (1) a cell culture channel layer, in which cancer cells were trapped to be tested with dosed photofrin (1 μg/ml used for the test) as the photosensitizer, and (2) a color dye layer acting as a neutral density (ND) filter to reduce the intensity of light reaching the cell culture channels. Eight different light intensities (10%, 20%, …, 100%) are generated through various concentrations of blue dye filling the ND filter. As a light source, a light emitting diode (LED) with a 635 nm wavelength was placed above the developed PDMS microfluidic chip. The total light exposure time was 30 minutes, and the HeLa and PC3 cancer cell lines were tested. Cell viability was evaluated with a Live/Dead assay kit (L-3224, Invitrogen, USA). The stronger the light intensity, the lower the observed cell viability, and vice versa. Therefore, the system was demonstrated by investigating PDT against cancer cells to optimize parameters such as the critical light intensity and dose of photosensitizer. Our results suggest that the system can be used for optimizing the combined parameters of light intensity and photosensitizer dose against diverse cancer cell types.

Keywords: photodynamic therapy, photofrin, high throughput screening, HeLa

Procedia PDF Downloads 373
3799 Applications of Analytical Probabilistic Approach in Urban Stormwater Modeling in New Zealand

Authors: Asaad Y. Shamseldin

Abstract:

The analytical probabilistic approach is an innovative approach for urban stormwater modeling. It can provide information about the long-term performance of a stormwater management facility without being computationally very demanding. This paper explores the application of the analytical probabilistic approach in New Zealand. The paper presents the results of a case study aimed at developing an objective way of identifying what constitutes a rainfall storm event and at estimating the corresponding statistical properties of storms, using two selected automatic rainfall stations located in the Auckland region of New Zealand. The storm identification and the estimation of the storm statistical properties are regarded as the first step in the development of the analytical probabilistic models. The paper provides a recommendation about the definition of the storm inter-event time to be used in conjunction with the analytical probabilistic approach.
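A minimal sketch of the storm-identification step, assuming a minimum inter-event time (IETD) criterion and a synthetic hourly record in place of the Auckland rain-gauge series, is given below.

```python
# Sketch: split an hourly rainfall record into independent storm events given a
# minimum inter-event time (IETD). The record is a synthetic placeholder.
def identify_storms(rain_mm, ietd_hours=6):
    """Return a list of storms as (start_index, duration_h, volume_mm)."""
    storms, start, dry, in_storm = [], None, 0, False
    for i, r in enumerate(rain_mm + [0.0] * ietd_hours):   # pad to flush the last event
        if r > 0:
            if not in_storm:
                start, in_storm = i, True
            dry = 0
        elif in_storm:
            dry += 1
            if dry >= ietd_hours:                          # dry spell long enough: storm ends
                end = i - dry + 1
                storms.append((start, end - start, sum(rain_mm[start:end])))
                in_storm = False
    return storms

hourly = [0, 0, 2, 5, 1, 0, 0, 0, 0, 0, 0, 3, 4, 0, 0.5, 0, 0, 0, 0, 0, 0, 1]
print(identify_storms(hourly, ietd_hours=6))
```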

Keywords: hydrology, rainfall storm, storm inter-event time, New Zealand, stormwater management

Procedia PDF Downloads 327
3798 Modelling the Impact of Installation of Heat Cost Allocators in District Heating Systems Using Machine Learning

Authors: Danica Maljkovic, Igor Balen, Bojana Dalbelo Basic

Abstract:

Following the EU Directive on Energy Efficiency, specifically Article 9, individual metering in district heating systems had to be introduced by the end of 2016. These provisions have been implemented in member states' legal frameworks; Croatia is one of these states. The directive allows the installation of both heat metering devices and heat cost allocators. Mainly due to poor communication and PR, a false image was created among the general public that heat cost allocators are devices that save energy. Since this notion is wrong, the aim of this work is to develop a model that would precisely express the influence of installing heat cost allocators on potential energy savings in each unit within multifamily buildings. At the same time, in recent years machine learning has gained wider application in various fields, as it has proven to give good results in cases where large amounts of data are to be processed with the aim of recognizing patterns and the correlation of each relevant parameter, as well as in cases where the problem is too complex for human intelligence to solve. A particular machine learning method, the decision tree method, has achieved an accuracy of over 92% in predicting general building consumption. In this paper, machine learning algorithms are used to isolate the sole impact of the installation of heat cost allocators on a single building in multifamily houses connected to district heating systems. Special emphasis is given to regression analysis, logistic regression, support vector machines, decision trees and the random forest method.
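A minimal sketch of a decision-tree regression with allocator installation as one explanatory feature is given below; the features, data and assumed saving effect are illustrative placeholders rather than the Croatian billing data or the paper's final model.

```python
# Sketch: decision-tree regression of unit-level heat consumption, with heat cost
# allocator installation as one feature so its isolated effect can be inspected.
# Features and data are illustrative placeholders.
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
n = 500
area = rng.uniform(40, 120, n)                     # m2
degree_days = rng.uniform(1800, 2600, n)           # heating degree days
allocators = rng.integers(0, 2, n)                 # 1 = heat cost allocators installed
consumption = (0.12 * area * degree_days / 1000
               * (1 - 0.10 * allocators)           # assumed behavioural saving
               + rng.normal(0, 0.5, n))            # MWh/yr (toy scale)

X = np.column_stack([area, degree_days, allocators])
tree = DecisionTreeRegressor(max_depth=4, random_state=0).fit(X, consumption)

imp = permutation_importance(tree, X, consumption, n_repeats=20, random_state=0)
for name, score in zip(["area", "degree_days", "allocators"], imp.importances_mean):
    print(f"{name:12s} importance = {score:.3f}")
```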

Keywords: district heating, heat cost allocator, energy efficiency, machine learning, decision tree model, regression analysis, logistic regression, support vector machines, decision trees and random forest method

Procedia PDF Downloads 232
3797 Effect of Stiffeners on the Behavior of Slender Built up Steel I-Beams

Authors: M. E. Abou-Hashem El Dib, M. K. Swailem, M. M. Metwally, A. I. El Awady

Abstract:

This paper presents the effect of stiffeners on the behavior of slender steel I-beams. Nonlinear three-dimensional finite element models are developed to represent the stiffened steel I-beams. The well-established finite element program ANSYS 13.0 is used to simulate the geometric and material nonlinear nature of the problem. Verification is achieved by comparing the obtained numerical results with the results of previously published experimental work. The parameters considered in the analysis are the horizontal stiffener's position and dimensions, as well as the number of vertical stiffeners. The studied dimensions of the horizontal stiffeners include the stiffener width, thickness and length. The results of the numerical parametric study on slender steel I-beams show the significant effect of stiffeners on the beam behavior and its failure load.

Keywords: beams, local buckling, slender, stiffener, thin walled section

Procedia PDF Downloads 268
3796 Examining Predictive Coding in the Hierarchy of Visual Perception in the Autism Spectrum Using Fast Periodic Visual Stimulation

Authors: Min L. Stewart, Patrick Johnston

Abstract:

Predictive coding has been proposed as a general explanatory framework for understanding the neural mechanisms of perception. As such, an underweighting of perceptual priors has been hypothesised to underpin a range of differences in inferential and sensory processing in autism spectrum disorders. However, empirical evidence to support this has not been well established. The present study uses an electroencephalography paradigm involving changes of facial identity and person category (actors etc.) to explore how levels of autistic traits (AT) affect predictive coding at multiple stages in the visual processing hierarchy. The study uses a rapid serial presentation of faces, with hierarchically structured sequences involving both periodic and aperiodic repetitions of different stimulus attributes (i.e., person identity and person category) in order to induce contextual expectations relating to these attributes. It investigates two main predictions: (1) significantly larger and later neural responses to changes of expected visual sequences in high- relative to low-AT, and (2) significantly reduced neural responses to violations of contextually induced expectation in high- relative to low-AT. Preliminary frequency analysis data comparing high- and low-AT show greater and later event-related potentials (ERPs) in occipitotemporal and prefrontal areas in high-AT than in low-AT for periodic changes of facial identity and person category, but smaller ERPs over the same areas in response to aperiodic changes of identity and category. The research advances our understanding of how abnormalities in predictive coding might underpin aberrant perceptual experience in the autism spectrum. This is the first stage of a research project that will inform clinical practitioners in developing better diagnostic tests and interventions for people with autism.

Keywords: hierarchical visual processing, face processing, perceptual hierarchy, prediction error, predictive coding

Procedia PDF Downloads 97
3795 Sea-Spray Calculations Using the MESO-NH Model

Authors: Alix Limoges, William Bruch, Christophe Yohia, Jacques Piazzola

Abstract:

A number of questions arise concerning the long-term impact of marine aerosol fluxes generated at the air-sea interface on the occurrence of intense events (storms, floods, etc.) in the coastal environment. To this end, knowledge is needed of sea-spray emission rates and the atmospheric dynamics of the corresponding particles. Our aim is to implement the mesoscale model MESO-NH over the study area using an accurate sea-spray source function to estimate heat fluxes and their impact on precipitation. Based on an original and complete sea-spray source function, which covers a large size spectrum since it takes into consideration the sea spray produced by both the bubble-bursting and surface-tearing processes, we propose a comparison between model simulations and experimental data obtained during an oceanographic scientific cruise on board the navy ship Atalante. The results show the relevance of the sea-spray flux calculations as well as their impact on the heat fluxes and AOD.

Keywords: atmospheric models, sea-spray source, sea-spray dynamics, aerosols

Procedia PDF Downloads 136
3794 Handling Missing Data by Using Expectation-Maximization and Expectation-Maximization with Bootstrapping for Linear Functional Relationship Model

Authors: Adilah Abdul Ghapor, Yong Zulina Zubairi, A. H. M. R. Imon

Abstract:

The missing value problem is common in statistics and has been of interest for years. This article considers two modern techniques for handling missing data in the linear functional relationship model (LFRM), namely the Expectation-Maximization (EM) algorithm and the Expectation-Maximization with Bootstrapping (EMB) algorithm, using three performance indicators: the mean absolute error (MAE), root mean square error (RMSE) and estimated bias (EB). In this study, we applied the methods of imputing missing values in two types of LFRM, namely the full LFRM and the LFRM in which the slope is estimated using a nonparametric method. Results of the simulation study suggest that the EMB algorithm performs much better than the EM algorithm in both models. We also illustrate the applicability of the approach on a real data set.
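A minimal sketch of the evaluation loop is given below; scikit-learn's IterativeImputer is used as an EM-style stand-in rather than the authors' exact EM and EMB implementations, and the simulated data and missingness rate are assumptions.

```python
# Sketch of the evaluation loop: impute artificially-deleted values and score the
# imputations with MAE, RMSE and estimated bias. IterativeImputer is an EM-style
# stand-in, not the authors' exact EM/EMB algorithms.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(0)
x = rng.normal(10, 2, 200)
y = 1.5 * x + rng.normal(0, 1, 200)          # toy linear functional relationship
data = np.column_stack([x, y])

mask = rng.random(data.shape) < 0.10         # delete 10% of values at random
incomplete = data.copy()
incomplete[mask] = np.nan

imputed = IterativeImputer(max_iter=50, random_state=0).fit_transform(incomplete)

err = imputed[mask] - data[mask]
print("MAE :", np.mean(np.abs(err)))
print("RMSE:", np.sqrt(np.mean(err ** 2)))
print("EB  :", np.mean(err))                 # estimated bias
```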

Keywords: expectation-maximization, expectation-maximization with bootstrapping, linear functional relationship model, performance indicators

Procedia PDF Downloads 440
3793 Multi-Objective Simulated Annealing Algorithms for Scheduling Just-In-Time Assembly Lines

Authors: Ghorbanali Mohammadi

Abstract:

New approaches to sequencing mixed-model manufacturing systems are presented. These approaches have attracted considerable attention due to their potential to deal with difficult optimization problems. This paper presents Multi-Objective Simulated Annealing Algorithm (MOSAA) approaches to the Just-In-Time (JIT) sequencing problem, where workload smoothing (WL) and the number of set-ups (St) are to be optimized simultaneously. Mixed-model assembly lines are production lines where a variety of product models similar in product characteristics are assembled; this type of sequencing problem is NP-hard. Two annealing methods are proposed to solve the multi-objective problem and find an efficient frontier of all design configurations. The performance of the two methods is tested on several problems from the literature. Experimentation demonstrates the relatively desirable performance of the presented methodology.
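A minimal sketch of a simulated annealing run on a small mixed-model instance is given below; the two objectives are combined with assumed weights for simplicity, whereas the MOSAA approaches maintain an efficient frontier, and the demand data are illustrative.

```python
# Sketch: simulated annealing for JIT mixed-model sequencing with the two objectives
# (workload smoothing and number of set-ups) combined by assumed weights.
import math, random
random.seed(0)

demand = {"A": 4, "B": 3, "C": 2}                 # units of each model to sequence (assumed)
total = sum(demand.values())

def objectives(seq):
    smooth, setups = 0.0, 0
    produced = {m: 0 for m in demand}
    for k, model in enumerate(seq, start=1):
        produced[model] += 1
        smooth += sum((produced[m] - k * demand[m] / total) ** 2 for m in demand)
        if k > 1 and model != seq[k - 2]:
            setups += 1
    return smooth, setups

def cost(seq, w=(1.0, 0.5)):
    wl, st = objectives(seq)
    return w[0] * wl + w[1] * st

seq = [m for m, d in demand.items() for _ in range(d)]
best, temp = list(seq), 10.0
for _ in range(5000):
    i, j = random.sample(range(total), 2)
    cand = list(seq); cand[i], cand[j] = cand[j], cand[i]     # swap neighbourhood
    delta = cost(cand) - cost(seq)
    if delta < 0 or random.random() < math.exp(-delta / temp):
        seq = cand
        if cost(seq) < cost(best):
            best = list(seq)
    temp *= 0.999                                             # geometric cooling

print(best, objectives(best))
```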

Keywords: scheduling, just-in-time, mixed-model assembly line, sequencing, simulated annealing

Procedia PDF Downloads 112
3792 Development and Adaptation of a LGBM Machine Learning Model, with a Suitable Concept Drift Detection and Adaptation Technique, for Barcelona Household Electric Load Forecasting During Covid-19 Pandemic Periods (Pre-Pandemic and Strict Lockdown)

Authors: Eric Pla Erra, Mariana Jimenez Martinez

Abstract:

While aggregated loads at the community level tend to be easier to predict, individual household load forecasting presents more challenges, with higher volatility and uncertainty. Furthermore, the drastic changes that our behavior patterns have undergone due to the COVID-19 pandemic have modified our daily electrical consumption curves and, therefore, further complicated the forecasting methods used to predict short-term electric load. Load forecasting is vital for the smooth and optimized planning and operation of our electric grids, but it also plays a crucial role for individual domestic consumers that rely on a HEMS (Home Energy Management System) to optimize their energy usage through self-generation, storage, or smart appliance management. Accurate forecasting leads to higher energy savings and overall energy efficiency of the household when paired with a proper HEMS. In order to study how COVID-19 has affected the accuracy of forecasting methods, an evaluation of the performance of a state-of-the-art LGBM (Light Gradient Boosting Model) will be conducted across the transition between the pre-pandemic and lockdown periods, considering day-ahead electric load forecasting. LGBM improves on standard decision tree models in both speed and memory consumption while still offering high accuracy. Even though LGBM has complex non-linear modelling capabilities, it has proven to be a competitive method under challenging forecasting scenarios such as short series, heterogeneous series, or data patterns with minimal prior knowledge. An adaptation of the LGBM model – called “resilient LGBM” – will also be tested, incorporating a concept drift detection technique for time series analysis, with the purpose of evaluating its capability to improve the model’s accuracy during extreme events such as COVID-19 lockdowns. The results for the LGBM and resilient LGBM will be compared using the standard RMSE (Root Mean Squared Error) as the main performance metric. The models’ performance will be evaluated over a set of real households’ hourly electricity consumption data measured before and during the COVID-19 pandemic. All households are located in the city of Barcelona, Spain, and present different consumption profiles. This study is carried out under the ComMit-20 project, financed by AGAUR (Agència de Gestió d’Ajuts Universitaris), which aims to determine the short and long-term impacts of the COVID-19 pandemic on building energy consumption, increasing the resilience of electrical systems through the use of tools such as HEMS and artificial intelligence.
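A minimal sketch of a day-ahead LGBM forecaster with a simple rolling-RMSE drift trigger is given below; the synthetic household series, features, threshold and retraining window are assumptions for illustration and do not reproduce the project's resilient LGBM.

```python
# Sketch: LGBM load forecaster with a simple rolling-RMSE drift trigger that forces
# retraining (an illustration of the "resilient" idea, not the project's exact model).
import numpy as np
import lightgbm as lgb

rng = np.random.default_rng(0)
hours = np.arange(24 * 400)
load = 0.4 + 0.3 * np.sin(2 * np.pi * hours / 24) + 0.05 * rng.normal(size=hours.size)
load[24 * 300:] += 0.4 * np.sin(2 * np.pi * hours[24 * 300:] / 24 + 2)   # lockdown-like shift

def features(t):
    # hour of day, day of week, lagged loads at 24 h and 168 h
    return np.column_stack([t % 24, (t // 24) % 7, load[t - 24], load[t - 168]])

train_idx = np.arange(168, 24 * 250)
model = lgb.LGBMRegressor(n_estimators=300, learning_rate=0.05)
model.fit(features(train_idx), load[train_idx])

window, threshold, errors = 24 * 7, 0.15, []
for t in range(24 * 250, hours.size):
    pred = model.predict(features(np.array([t])))[0]
    errors.append((pred - load[t]) ** 2)
    if len(errors) >= window:
        rmse = np.sqrt(np.mean(errors[-window:]))
        if rmse > threshold:                       # drift detected: retrain on recent data
            recent = np.arange(max(168, t - 24 * 60), t)
            model.fit(features(recent), load[recent])
            errors.clear()
```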

Keywords: concept drift, forecasting, home energy management system (HEMS), light gradient boosting model (LGBM)

Procedia PDF Downloads 93