Search results for: skewed generalized error distribution
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7409

2579 Aerial Photogrammetry-Based Techniques to Rebuild the 30-Year Landform Changes of a Landslide-Dominated Watershed in Taiwan

Authors: Yichin Chen

Abstract:

Taiwan is an island characterized by active tectonics and high erosion rates. Monitoring the dynamic landscape of Taiwan is an important issue for disaster mitigation, geomorphological research, and watershed management. Long-term landform data with high spatiotemporal resolution are essential for quantifying and simulating geomorphological processes and for developing warning systems. Recently, advances in unmanned aerial vehicle (UAV) and computational photogrammetry technology have provided an effective way to rebuild and monitor topographic changes at high spatio-temporal resolution. This study rebuilds the 30-year landform changes in the Aiyuzi watershed from 1986 to 2017 using aerial photogrammetry-based techniques. The Aiyuzi watershed, located in central Taiwan with an area of 3.99 km², is known for its frequent landslide and debris flow disasters. This study took aerial photos using a UAV and collected multi-temporal historical stereo photographs taken by the Aerial Survey Office of Taiwan’s Forestry Bureau. Orthoimages and digital surface models (DSMs) were rebuilt with Pix4DMapper, a photogrammetry software package. Furthermore, to control model accuracy, a set of ground control points was surveyed using eGPS. The results show that the generated DSMs have ground sampling distances (GSD) of ~10 cm and ~0.3 cm from the UAV and historical photographs, respectively, and a vertical error of ~1 m. Comparison of the DSMs shows that many deep-seated landslides (with depths over 20 m) occurred upstream in the Aiyuzi watershed. Even though a large amount of sediment is delivered from the landslides, the steep main channel has sufficient capacity to transport sediment out of the channel and to erode the river bed to ~20 m in depth. Most sediment is transported to the watershed outlet and deposited in the downstream channel. This case study shows that UAV and photogrammetry technology are effective for monitoring topographic change.
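As an editorial illustration (not part of the original study), the DSM-differencing step described above can be sketched with open-source tools; the file names, the use of the ~1 m vertical error as a change-detection cutoff, and the rasterio library are assumptions.

```python
import numpy as np
import rasterio

# Hypothetical file names for two co-registered DSMs (e.g., 1986 and 2017 epochs).
DSM_OLD = "dsm_1986.tif"
DSM_NEW = "dsm_2017.tif"

with rasterio.open(DSM_OLD) as old, rasterio.open(DSM_NEW) as new:
    z_old = old.read(1, masked=True).astype(float)
    z_new = new.read(1, masked=True).astype(float)
    profile = new.profile

# DSM of difference: positive = deposition, negative = erosion (e.g., landslide scars).
dod = z_new - z_old

# Ignore changes smaller than the ~1 m vertical error reported for the DSMs.
significant = np.ma.masked_inside(dod, -1.0, 1.0)

print("Max erosion depth (m):", float(significant.min()))
print("Max deposition    (m):", float(significant.max()))

# Write the thresholded DSM of difference for mapping deep-seated landslides.
profile.update(dtype="float32", count=1)
with rasterio.open("dod_1986_2017.tif", "w", **profile) as dst:
    dst.write(significant.filled(np.nan).astype("float32"), 1)
```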

Keywords: aerial photogrammetry, landslide, landform change, Taiwan

Procedia PDF Downloads 153
2578 Diffusion Treatment of Niobium and Molybdenum on Pure Titanium and Titanium Alloy Ti-64Al and Their Properties

Authors: Kaouka Alaeddine, K. Benarous

Abstract:

This study aims to obtain high-performance surfaces of pure titanium and the titanium alloy Ti-64Al by a diffusion process. Two alloying agents, niobium (Nb) and molybdenum (Mo), were used in this treatment and spread on elemental titanium and the Ti-64Al alloy. Nb and Mo were used in powder form to increase the contact surface and to improve the distribution. Both Mo and Nb were distributed on samples of Ti and Ti-64Al at 1100 °C and 1200 °C for 3 h, and different experiments were performed to address different objectives. This work was carried out to improve selected properties and the microstructure of the Ti and Ti-64Al surfaces, examined using optical microscopy and SEM, and to study some mechanical properties. The effects of temperature and powder content on the microstructure, the phases present, and the hardness of Ti and the Ti-64Al alloy were determined. Experimental results indicate that increasing the powder content and/or the temperature changes the α + β phases to an equiaxed β lamellar structure. In particular, diffusion at 1200 °C produced both an equiaxed lamellar β phase and an α + β phase, thus meeting the objectives established in this work. In addition, simulation results obtained with the DICTRA software are used for comparison with the experimental results.

Keywords: diffusion, powder metallurgy, titanium alloy, molybdenum, niobium

Procedia PDF Downloads 142
2577 A Smart Contract Project: Peer-to-Peer Energy Trading with Price Forecasting in Microgrid

Authors: Şakir Bingöl, Abdullah Emre Aydemir, Abdullah Saado, Ahmet Akıl, Elif Canbaz, Feyza Nur Bulgurcu, Gizem Uzun, Günsu Bilge Dal, Muhammedcan Pirinççi

Abstract:

Smart contracts, which can be applied in many different areas, from financial applications to the Internet of Things, come to the fore with their security, low cost, and self-executing features. This paper focuses on peer-to-peer (P2P) energy trading and the implementation of a smart contract on the Ethereum blockchain. It is assumed that a microgrid consists of consumers and prosumers that can produce solar and wind energy. The proposed architecture is a system in which the prosumer submits a purchase or sale request to the smart contract, while the maximum price is obtained through the distribution system operator (DSO) by forecasting. The aim is to forecast the hourly maximum unit price of energy using deep learning instead of fixed pricing. In this way, the system becomes more reliable, as pricing is more dynamic and accurate. For this purpose, Istanbul's energy generation, energy consumption, and market clearing price data were used. The consistency of the available data and the forecasting results is examined and discussed with graphs.
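As a hedged sketch of the forecasting idea (the paper uses deep learning; here a small scikit-learn multilayer perceptron stands in), the snippet below builds lagged features from hourly market clearing prices and predicts the next hour's price as a stand-in for the maximum unit price. The file name and column names are assumptions.

```python
import numpy as np
import pandas as pd
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error

# Hypothetical hourly dataset with generation, consumption and market clearing price (MCP).
df = pd.read_csv("istanbul_hourly.csv", parse_dates=["timestamp"])

# Build lagged features: the previous 24 hourly MCP values help predict the next hour's price.
LAGS = 24
for lag in range(1, LAGS + 1):
    df[f"mcp_lag_{lag}"] = df["mcp"].shift(lag)
df = df.dropna()

features = [f"mcp_lag_{lag}" for lag in range(1, LAGS + 1)] + ["generation", "consumption"]
X, y = df[features].to_numpy(), df["mcp"].to_numpy()

# Chronological split: train on the first 80%, test on the most recent 20%.
split = int(0.8 * len(df))
X_train, X_test, y_train, y_test = X[:split], X[split:], y[:split], y[split:]

model = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0)
model.fit(X_train, y_train)

forecast = model.predict(X_test)
print("MAE on held-out hours:", mean_absolute_error(y_test, forecast))
```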

Keywords: energy trading smart contract, deep learning, microgrid, forecasting, Ethereum, peer to peer

Procedia PDF Downloads 130
2576 Optimal Allocation of Multiple Emergency Resources for a Single Potential Accident Node: A Mixed Integer Linear Program

Authors: Yongjian Du, Jinhua Sun, Kim M. Liew, Huahua Xiao

Abstract:

Optimal allocation of emergency resources before a disaster is of great importance for emergency response. In reality, pre-protection for a single critical node where accidents may occur is common. In this study, a model is developed to determine the location and inventory decisions for multiple emergency resources among a set of candidate stations so as to minimize the total cost, subject to budget and capacity constraints. The total cost includes the economic accident loss, which follows a probability distribution over time, and the warehousing cost of resources, which increases over time. A ratio is defined to measure the degree to which a storage station serves only the target node; it becomes larger as the distance between them decreases. To keep the program linear, it is assumed that the travel time of emergency resources to the accident scene has a linear relationship with the economic accident loss. A computational experiment is conducted to illustrate how the proposed model works, and the results indicate its effectiveness and practicability.
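A minimal sketch of this kind of mixed integer linear program, written with the PuLP library; the stations, costs, capacities, and the simplified linear accident-loss term are illustrative assumptions rather than the paper's exact formulation.

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, PULP_CBC_CMD

stations = ["S1", "S2", "S3"]           # candidate storage stations
resources = ["pump", "foam", "medkit"]  # emergency resource types

# Hypothetical data: distance to the single accident node, unit costs and capacities.
dist = {"S1": 2.0, "S2": 5.0, "S3": 8.0}                     # km
loss_per_km = {"pump": 30.0, "foam": 20.0, "medkit": 10.0}   # loss grows linearly with travel distance
hold_cost = {"pump": 4.0, "foam": 2.0, "medkit": 1.0}        # warehousing cost per unit
demand = {"pump": 3, "foam": 5, "medkit": 10}
capacity = {"S1": 8, "S2": 12, "S3": 20}
open_cost = {"S1": 50.0, "S2": 30.0, "S3": 20.0}
budget = 400.0

prob = LpProblem("emergency_pre_allocation", LpMinimize)
x = {(s, r): LpVariable(f"x_{s}_{r}", lowBound=0, cat="Integer") for s in stations for r in resources}
y = {s: LpVariable(f"open_{s}", cat=LpBinary) for s in stations}

# Objective: expected accident loss (linear in travel distance) + warehousing + opening costs.
prob += lpSum(dist[s] * loss_per_km[r] * x[s, r] + hold_cost[r] * x[s, r]
              for s in stations for r in resources) + lpSum(open_cost[s] * y[s] for s in stations)

for r in resources:                      # cover the accident node's demand for each resource
    prob += lpSum(x[s, r] for s in stations) >= demand[r]
for s in stations:                       # respect station capacity, only if the station is opened
    prob += lpSum(x[s, r] for r in resources) <= capacity[s] * y[s]
prob += lpSum(hold_cost[r] * x[s, r] for s in stations for r in resources) \
        + lpSum(open_cost[s] * y[s] for s in stations) <= budget   # budget constraint

prob.solve(PULP_CBC_CMD(msg=False))
for (s, r), var in x.items():
    if var.value() and var.value() > 0:
        print(f"store {int(var.value())} x {r} at {s}")
```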

Keywords: emergency response, integer linear program, multiple emergency resources, pre-allocation decisions, single potential accident node

Procedia PDF Downloads 148
2575 Partially Accelerated Life Test Planning with Competing Risks and a Linear Degradation Path under a Tampered Failure Rate Model

Authors: Fariba Azizi, Firoozeh Haghighi, Viliam Makis

Abstract:

In this paper, we propose a method to model the relationship between failure time and degradation for a simple step-stress test in which the underlying degradation path is linear and different causes of failure are possible. It is assumed that the intensity function depends only on the degradation value. No assumptions are made about the distribution of the failure times. A simple step-stress test is used to shorten the failure time of products, and a tampered failure rate (TFR) model is proposed to describe the effect of the changing stress on the intensities. We assume that some of the products that fail during the test have a cause of failure that is only known to belong to a certain subset of all possible failures. This case is known as masking. In the presence of masking, the maximum likelihood estimates (MLEs) of the model parameters are obtained through an expectation-maximization (EM) algorithm by treating the causes of failure as missing values. The effect of incomplete information on the estimation of parameters is studied through a Monte Carlo simulation. Finally, a real example is analyzed to illustrate the application of the proposed methods.
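The EM treatment of masked causes can be illustrated with a deliberately simplified sketch: constant (exponential) cause-specific intensities replace the paper's degradation-dependent TFR intensities, and the E-step assigns masked failures to causes in proportion to their hazards. All data below are simulated.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate two competing failure causes with true rates lam1, lam2 (exponential intensities).
lam_true = np.array([0.02, 0.05])
n = 500
t_by_cause = rng.exponential(1.0 / lam_true, size=(n, 2))
time = t_by_cause.min(axis=1)
cause = t_by_cause.argmin(axis=1)            # 0 or 1
masked = rng.random(n) < 0.3                 # 30% of units have a masked cause of failure

lam = np.array([0.01, 0.01])                 # initial guess
for _ in range(200):
    # E-step: for masked units, the posterior probability that cause k was responsible
    # is proportional to its hazard, lam_k / (lam_1 + lam_2), under the exponential model.
    w = np.zeros((n, 2))
    w[~masked, cause[~masked]] = 1.0
    w[masked] = lam / lam.sum()
    # M-step: rate_k = expected number of failures from cause k / total time at risk.
    lam = w.sum(axis=0) / time.sum()

print("true rates  :", lam_true)
print("EM estimates:", np.round(lam, 4))
```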

Keywords: cause of failure, linear degradation path, reliability function, expectation-maximization algorithm, intensity, masked data

Procedia PDF Downloads 327
2574 Electrokinetics and Stability of Solder Powders in Aqueous Media

Authors: Terence Lucero F. Menor, Manolo G. Mena, Herman D. Mendoza

Abstract:

Solder pastes are widely used to create mechanical, thermal, and electrical connections between electronic components. Continued miniaturization of consumer electronics drives manufacturers to achieve smaller, lighter, and faster electronic packages at low cost. This presents them with the difficult challenge of dispensing solder pastes in an extremely precise and repeatable manner. The most common problem in solder paste dispensing is the clogging of dispensers, which results from agglomeration and settling of solder powders, leading to an increase in the effective particle size and an uneven distribution of particles in the mixture. In this work, microelectrophoresis was employed to investigate the effect of pH and KNO₃ concentration on the electrokinetic behavior and stability of SAC305, PbSn5Ag2.5, and Sn powders in aqueous media. Results revealed that the electrokinetic behavior of the three types of solder powders is similar, which was attributed to the high SnO₂ content on the surface of the particles. Electrokinetic measurements showed that the zeta potentials of the solder powders are highly dependent on pH and KNO₃ concentration, with isoelectric points ranging from 3.5 to 5.5. The results were verified using stability tests.

Keywords: electrokinetic behavior, isoelectric point, solder powder, stability, surface analysis

Procedia PDF Downloads 227
2573 Transport of Analytes under Mixed Electroosmotic and Pressure Driven Flow of Power Law Fluid

Authors: Naren Bag, S. Bhattacharyya, Partha P. Gopmandal

Abstract:

In this study, we analyze the transport of analytes under a two-dimensional, steady, incompressible flow of power-law fluids through a rectangular nanochannel. A mathematical model based on the Cauchy momentum-Nernst-Planck-Poisson equations is considered to study the combined effect of mixed electroosmotic (EO) and pressure-driven (PD) flow. The coupled governing equations are solved numerically by the finite volume method. We study extensively the effect of key parameters, e.g., the flow behavior index, the concentration of the electrolyte, the surface potential, the imposed pressure gradient, and the imposed electric field strength, on the net average flow across the channel. In addition, to study the effect of mixed EO and PD flow on the analyte distribution across the channel, we consider a nonlinear model based on the general convection-diffusion-electromigration equation. We also present the retention factor for various values of electrolyte concentration and flow behavior index.

Keywords: electric double layer, finite volume method, flow behavior index, mixed electroosmotic/pressure driven flow, non-Newtonian power-law fluids, numerical simulation

Procedia PDF Downloads 307
2572 Multi-Objective Electric Vehicle Charge Coordination for Economic Network Management under Uncertainty

Authors: Ridoy Das, Myriam Neaimeh, Yue Wang, Ghanim Putrus

Abstract:

Electric vehicles are a popular transportation medium renowned for their potential environmental benefits. However, large and uncontrolled charging volumes can impact distribution networks negatively. Smart charging is widely recognized as an efficient solution to achieve both improved renewable energy integration and grid relief. Nevertheless, different decision-makers may pursue diverse and conflicting objectives. In this context, this paper proposes a multi-objective optimization framework to control electric vehicle charging to achieve both energy cost reduction and peak shaving. A weighted-sum method is adopted due to its intuitiveness and efficiency. Monte Carlo simulations are implemented to investigate the impact of uncertain electric vehicle driving patterns and provide decision-makers with a robust outcome in terms of prospective cost and network loading. The results demonstrate that there is a conflict between energy cost efficiency and peak shaving, with the decision-makers needing to make a collaborative decision.
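A minimal weighted-sum sketch of the charge-coordination idea, using PuLP: one aggregated charger schedule trades off energy cost against the feeder peak, which is linearised with an auxiliary peak variable. Tariffs, loads, and weights are placeholders, and in practice the two objectives would be normalised before weighting.

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum

T = 24
price = [0.10] * 7 + [0.25] * 10 + [0.18] * 7      # hypothetical hourly tariff (currency/kWh)
base_load = [2.0] * 7 + [4.5] * 10 + [3.0] * 7     # hypothetical non-EV feeder load (kW)
energy_needed = 30.0                               # kWh to deliver to the EVs over the day
max_charge = 7.0                                   # charger power limit (kW)
w_cost, w_peak = 0.5, 0.5                          # weighted-sum trade-off between objectives

prob = LpProblem("ev_charge_coordination", LpMinimize)
p = [LpVariable(f"p_{t}", lowBound=0, upBound=max_charge) for t in range(T)]
peak = LpVariable("peak", lowBound=0)              # auxiliary variable for the network peak

# Objective 1: energy cost; Objective 2: peak of total feeder load.
prob += w_cost * lpSum(price[t] * p[t] for t in range(T)) + w_peak * peak
prob += lpSum(p) == energy_needed                  # deliver the required energy
for t in range(T):
    prob += base_load[t] + p[t] <= peak            # the peak bounds the load in every hour

prob.solve()
schedule = [round(v.value(), 2) for v in p]
print("charging schedule (kW):", schedule)
print("resulting peak (kW):", round(peak.value(), 2))
```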

Keywords: electric vehicles, multi-objective optimization, uncertainty, mixed integer linear programming

Procedia PDF Downloads 177
2571 Numerical Simulation of Supersonic Gas Jet Flows and Acoustic Fields

Authors: Lei Zhang, Wen-jun Ruan, Hao Wang, Peng-Xin Wang

Abstract:

Jet noise is generated by the rocket exhaust plume during rocket engine testing. A domain decomposition approach is applied to jet noise prediction in this paper. The aeroacoustic coupling is based on splitting the problem into acoustic source generation and sound propagation in separate physical domains. Large eddy simulation (LES) is used to simulate the supersonic jet flow. Based on the simulated flow fields, the sound pressure level distribution of the jet noise is obtained by applying the Ffowcs Williams-Hawkings (FW-H) acoustic equation and the Fourier transform. The calculation results show that complex structures of expansion waves, compression waves, and the turbulent boundary layer can occur due to the strong interaction between the gas jet and the ambient air. In addition, the jet core region, the shock cells, and the sound pressure level of the gas jet increase with increasing nozzle size. Importantly, the numerical simulation results for the far-field sound are in good agreement with the experimental measurements in directivity.

Keywords: supersonic gas jet, large eddy simulation (LES), acoustic noise, Ffowcs Williams-Hawkings (FW-H) equations, nozzle size

Procedia PDF Downloads 408
2570 Study of Radioactivity of Oil and Gas

Authors: Harish Aryal, Thalia Balderas, Alondra Rodriguez

Abstract:

Radioactivity present in nature poses a major challenge to public health and occupational safety. Even at low doses, NORM can cause radiation-induced cancers, heritable diseases, genetic defects, etc. There have not been enough radiological studies, and consequently there is a lack of supporting data. In addition, there is no universal medical surveillance program for low-level doses, and there is a need for NORM management guidelines for appropriate control. Naturally occurring radioactive material (NORM) is present everywhere during oil and gas exploration. Currently, there is limited data available to quantify radioactivity. This research presents a study of radioactivity in different areas of the United States, intended to encourage further study in Texas or similar areas within the oil and gas industry. Many materials found in the oil and gas industry are NORM, which includes various radionuclides such as radium-226, radium-228, and radon-222. Efforts to characterize the geographic distribution of NORM have been limited by poor statistical representation in this area of study. In addition, the fate of NORM in the environment has not been fully defined, and few human health risk assessments have been conducted. To further comprehend how to measure radioactivity in oil and gas, it will be essential to understand the amount and type of radioactivity released into the water and soil of the industry.

Keywords: NORM, radium 226, radon 222, radionuclides, geological formations

Procedia PDF Downloads 84
2569 Critical Evaluation of Groundwater Monitoring Networks for Machine Learning Applications

Authors: Pedro Martinez-Santos, Víctor Gómez-Escalonilla, Silvia Díaz-Alcaide, Esperanza Montero, Miguel Martín-Loeches

Abstract:

Groundwater monitoring networks are critical in evaluating the vulnerability of groundwater resources to depletion and contamination, both in space and time. Groundwater monitoring networks typically grow over decades, often in an organic fashion, with relatively little overall planning. The groundwater monitoring networks in the Madrid area, Spain, were reviewed for the purpose of identifying gaps and opportunities for improvement. Spatial analysis reveals the presence of various monitoring networks belonging to different institutions, with several hundred observation wells in an area of approximately 4,000 km². This represents several thousand individual data entries, some going back to the early 1970s. Major issues included overlap between the networks, unknown screen depth/vertical distribution for many observation boreholes, uneven time series, unevenly monitored species, and potentially suboptimal locations. Results also reveal that there is sufficient information to carry out a spatial and temporal analysis of groundwater vulnerability based on machine learning applications, which can contribute to improving the overall planning of the networks' future expansion.

Keywords: groundwater monitoring, observation networks, machine learning, Madrid

Procedia PDF Downloads 72
2568 DNA Methylation Score Development for In utero Exposure to Paternal Smoking Using a Supervised Machine Learning Approach

Authors: Cristy Stagnar, Nina Hubig, Diana Ivankovic

Abstract:

The epigenome is a compelling candidate for mediating long-term responses to environmental effects that modify disease risk. The main goal of this research is to develop a machine learning-based DNA methylation score, which will be valuable in delineating the unique contribution of paternal epigenetic modifications to the germline impacting childhood health outcomes. It will also be a useful tool for validating self-reports of non-smoking and for adjusting epigenome-wide DNA methylation association studies for this early-life exposure. Using secondary data from two population-based methylation profiling studies, our DNA methylation score is based on CpG DNA methylation measurements from cord blood gathered from children whose fathers smoked pre- and peri-conceptually. Each child's mother and father fell into one of three class labels in the accompanying questionnaires: never smoker, former smoker, or current smoker. By applying different machine learning algorithms to the Accessible Resource for Integrated Epigenomic Studies (ARIES) sub-study of the Avon Longitudinal Study of Parents and Children (ALSPAC) data set, which we used for training and testing our model, the best-performing algorithm for classifying the father-smoker, mother-never-smoker group was selected based on Cohen's κ. Model error was identified and the model optimized. The final DNA methylation score was further tested and validated in an independent data set. This resulted in a linear combination of the methylation values of selected probes, via a logistic link function, that accurately classified each group; the selected probes contributed the most towards classification. The result is a unique, robust DNA methylation score which combines information on DNA methylation and early-life exposure of offspring to paternal smoking during pregnancy and which may be used to examine the paternal contribution to offspring health outcomes.
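A hedged sketch of the scoring approach: an L1-penalised logistic regression selects CpG probes and combines their methylation values through a logistic link, and performance is summarised with Cohen's κ. The methylation matrix and labels below are random placeholders, not the ARIES/ALSPAC data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegressionCV
from sklearn.model_selection import train_test_split
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)

# Placeholder cord-blood methylation matrix: rows = children, columns = CpG beta values (0-1).
n_samples, n_cpgs = 300, 2000
X = rng.beta(2.0, 2.0, size=(n_samples, n_cpgs))
# Placeholder labels: 1 = father smoked pre-/peri-conceptually (mother never smoker), 0 = otherwise.
y = rng.integers(0, 2, size=n_samples)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)

# L1-penalised logistic regression selects informative probes; the fitted linear combination of
# their methylation values, passed through the logistic link, is the DNA methylation score.
clf = LogisticRegressionCV(Cs=5, penalty="l1", solver="liblinear", cv=5, max_iter=5000)
clf.fit(X_train, y_train)

score = clf.predict_proba(X_test)[:, 1]          # the methylation score for each child
kappa = cohen_kappa_score(y_test, (score > 0.5).astype(int))
print("selected probes:", int(np.sum(clf.coef_ != 0)))
print("Cohen's kappa  :", round(kappa, 3))
```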

Keywords: epigenome, health outcomes, paternal preconception environmental exposures, supervised machine learning

Procedia PDF Downloads 181
2567 Resisting Adversarial Assaults: A Model-Agnostic Autoencoder Solution

Authors: Massimo Miccoli, Luca Marangoni, Alberto Aniello Scaringi, Alessandro Marceddu, Alessandro Amicone

Abstract:

The susceptibility of deep neural networks (DNNs) to adversarial manipulations is a recognized challenge within the computer vision domain. Adversarial examples, crafted by adding subtle yet malicious alterations to benign images, exploit this vulnerability. Various defense strategies have been proposed to safeguard DNNs against such attacks, stemming from diverse research hypotheses. Building upon prior work, our approach involves the utilization of autoencoder models. Autoencoders, a type of neural network, are trained to learn representations of training data and reconstruct inputs from these representations, typically minimizing reconstruction errors such as the mean squared error (MSE). Our autoencoder was trained on a dataset of benign examples, learning features specific to them. Consequently, when presented with significantly perturbed adversarial examples, the autoencoder exhibited high reconstruction errors. The architecture of the autoencoder was tailored to the dimensions of the images under evaluation. We considered various image sizes, constructing models differently for 256x256 and 512x512 images. Moreover, the choice of the computer vision model is crucial, as most adversarial attacks are designed with specific AI structures in mind. To mitigate this, we proposed a method to replace image-specific dimensions with a structure independent of both dimensions and neural network models, thereby enhancing robustness. Our multi-modal autoencoder reconstructs the spectral representation of images across the red-green-blue (RGB) color channels. To validate our approach, we conducted experiments using diverse datasets and subjected them to adversarial attacks using models such as ResNet50 and ViT_L_16 from the torchvision library. The autoencoder extracted features used in a classification model, resulting in an MSE (RGB) of 0.014, a classification accuracy of 97.33%, and a precision of 99%.
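A minimal PyTorch sketch of the detection principle: an autoencoder trained on benign images reconstructs them well, so inputs with unusually high reconstruction MSE are flagged. The architecture, the 256x256 input size, and the mean-plus-three-standard-deviations threshold are simplifying assumptions; the paper's multi-modal spectral autoencoder is more elaborate.

```python
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    """Small convolutional autoencoder; reconstruction MSE is used as an adversarial-input score."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 3, stride=2, padding=1, output_padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def reconstruction_error(model, batch):
    with torch.no_grad():
        recon = model(batch)
    return torch.mean((recon - batch) ** 2, dim=(1, 2, 3))   # per-image MSE

model = ConvAutoencoder()
# ... train on benign images only, minimising MSE ...
benign = torch.rand(8, 3, 256, 256)      # placeholder benign batch
suspect = torch.rand(8, 3, 256, 256)     # placeholder possibly-adversarial batch

# Flag inputs whose reconstruction error exceeds a threshold calibrated on benign data
# (here: mean + 3 std of benign errors; the exact rule is an assumption).
err_benign = reconstruction_error(model, benign)
threshold = err_benign.mean() + 3 * err_benign.std()
flags = reconstruction_error(model, suspect) > threshold
print("flagged as adversarial:", flags.tolist())
```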

Keywords: adversarial attacks, malicious images detector, binary classifier, multimodal transformer autoencoder

Procedia PDF Downloads 108
2566 Active Development of Tacit Knowledge: Knowledge Management, High Impact Practices and Experiential Learning

Authors: John Zanetich

Abstract:

Due to their positive associations with student learning and retention, certain undergraduate opportunities are designated ‘high-impact.’ High-Impact Practices (HIPs) such as learning communities, community-based projects, research, internships, study abroad, and culminating senior experiences share several traits in common: they demand considerable time and effort, learning occurs outside of the classroom, they require meaningful interactions between faculty and students, they encourage collaboration with diverse others, and they provide frequent and substantive feedback. As a result of the experiential learning in these practices, participation can be life-changing. High-impact learning helps individuals locate tacit knowledge and build mental models that support the accumulation of knowledge. Ongoing learning from experience and knowledge conversion provides the individual with a way to implicitly organize knowledge and share knowledge over a lifetime. Knowledge conversion is a knowledge management component which focuses on the explication of the tacit knowledge that exists in the minds of students and the knowledge which is embedded in the processes and relationships of the classroom educational experience. Knowledge conversion is required when working with tacit knowledge and the demand for a learner to align deeply held beliefs with the cognitive dissonance created by new information. Knowledge conversion and tacit knowledge result from the fact that an individual's way of knowing, that is, their core belief structure, is considered generalized and tacit instead of explicit and specific. As a phenomenon, tacit knowledge is not readily available to the learner for explicit description unless evoked by an external source. The development of knowledge-related capabilities such as Aggressive Development of Tacit Knowledge (ADTK) can be used in experiential educational programs to enhance knowledge, foster behavioral change, improve decision making, and improve overall performance. ADTK allows the student in HIPs to use their existing knowledge in a way that allows them to evaluate and make any necessary modifications to their core construct of reality in order to amalgamate new information. Based on the Lewin/Schein Change Theory, the learner will reach for tacit knowledge as a stabilizing mechanism when they are challenged by new information that puts them slightly off balance. As in word association drills, the important concept is the first thought. The reactionary outpouring to an experience is the programmed or tacit memory and knowledge of their core belief structure. ADTK is a way to help teachers design their own methods and activities to unfreeze, create new learning, and then refreeze the core constructs upon which future learning in a subject area is built. This paper will explore the use of ADTK as a technique for knowledge conversion in the classroom in general and in HIP programs specifically. It will focus on knowledge conversion in curriculum development and propose the use of one-time educational experiences, multi-session experiences, and sequential program experiences focusing on tacit knowledge in educational programs.

Keywords: tacit knowledge, knowledge management, college programs, experiential learning

Procedia PDF Downloads 260
2565 The Effect of Information Technology on the Quality of Accounting Information

Authors: Mohammad Hadi Khorashadi Zadeh, Amin Karkon, Hamid Golnari

Abstract:

This study, conducted in 2014, aimed to investigate the impact of information technology on the quality of accounting information. From a population of 425 executives of companies listed on the Tehran Stock Exchange, a sample of 84 managers was selected using the Cochran formula and simple random sampling. Data were collected with a questionnaire on information technology; some of the questions on the impact of information technology were taken from standardized questionnaires, and the remaining questions were designed according to the existing components. After the distribution and collection of the questionnaires, data analysis and hypothesis testing were conducted in two parts using structural equation modeling with the SmartPLS 2 software: the measurement model and the structural model. In the first part, the technical characteristics of the questionnaire, including reliability and convergent and divergent validity, were checked for PLS; in the second part, significance coefficients were used to examine the research hypotheses. The results showed that information technology and its dimensions (timeliness, relevance, accuracy, adequacy, and actual transfer rate) affect the quality of accounting information of companies listed on the Tehran Stock Exchange.

Keywords: information technology, information quality, accounting, transfer speed

Procedia PDF Downloads 275
2564 Age and Population Structure of the Goby Parapocryptes Serperaster in the Mekong Delta, Vietnam, Based on Length-Frequency and Otolith Analyses

Authors: Quang Minh Dinh, Jian Guang Qin, Sabine Dittmann, Dinh Dac Tran

Abstract:

The age and population structure of the goby Parapocryptes serperaster were studied using length distributions, otoliths, and the von Bertalanffy growth model in the Mekong Delta over a whole year through monthly sampling. The sex ratio of P. serperaster was near 1:1, and the von Bertalanffy growth parameters were L∞ = 25.2 cm, K = 0.74 yr⁻¹, and t₀ = -0.22 yr. Fish size at first entry to the fishery was 14.6 cm, and fishing mortality (1.57 yr⁻¹) and natural mortality (1.51 yr⁻¹) accounted for 51% and 49% of the total mortality (3.07 yr⁻¹), respectively. Relative yield-per-recruit and biomass-per-recruit analyses revealed the levels of maximum exploitation yield (Emax = 0.83), maximum economic yield (E0.1 = 0.71), and the yield at 50% reduction of exploitation (E0.5 = 0.37). Otoliths from 164 female and 196 male gobies were readable, and the otolith morphometry data were used for age identification. The mean age estimated by reading otolith annual rings and by analysing the length-frequency distribution was consistent. This study shows that otolith morphometry is a reliable method for ageing this goby and is possibly also applicable to other tropical gobies. The fishery analysis indicates that this goby stock has not been overexploited in the Mekong Delta.
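The growth and mortality figures quoted above can be turned into a small worked example; the von Bertalanffy function and the exploitation rate E = F/Z below use only the parameters reported in the abstract.

```python
import numpy as np

# Von Bertalanffy growth parameters reported for P. serperaster.
L_inf, K, t0 = 25.2, 0.74, -0.22     # cm, yr^-1, yr

def length_at_age(t):
    """Von Bertalanffy growth function L(t) = L_inf * (1 - exp(-K (t - t0)))."""
    return L_inf * (1.0 - np.exp(-K * (t - t0)))

ages = np.arange(0, 5, 0.5)
print("age (yr) -> length (cm):")
for t, L in zip(ages, length_at_age(ages)):
    print(f"  {t:3.1f}      {L:5.1f}")

# Mortality components from the abstract: total Z = fishing F + natural M.
F, M = 1.57, 1.51
Z = F + M
print("exploitation rate E = F / Z =", round(F / Z, 2))   # ~0.51, i.e. 51% of total mortality
```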

Keywords: Parapocryptes serperaster, otolith, age, population structure, Vietnam

Procedia PDF Downloads 650
2563 Photosynthesis Metabolism Affects Yield Potentials in Jatropha curcas L.: A Transcriptomic and Physiological Data Analysis

Authors: Nisha Govender, Siju Senan, Zeti-Azura Hussein, Wickneswari Ratnam

Abstract:

Jatropha curcas, a well-described bioenergy crop, has been widely accepted as a future fuel source, especially in tropical regions. Ideal planting material required for large-scale plantation is still lacking. Breeding programmes for improved J. curcas varieties are rendered difficult by limitations in genetic diversity. Using combined transcriptome and physiological data, we investigated the molecular and physiological differences between high and low yielding Jatropha curcas to address plausible heritable variations underpinning these differences with regard to photosynthesis, a key metabolism affecting yield potential. A total of six individual Jatropha plants from four accessions described as high and low yielding planting materials were selected from Experimental Plot A, Universiti Kebangsaan Malaysia (UKM), Bangi. The inflorescences and shoots were collected for the transcriptome study. For the physiological study, each individual plant (n=10) from the high and low yielding populations was screened for agronomic traits, chlorophyll content, and stomatal patterning. The J. curcas transcriptomes are available under BioProject PRJNA338924 and BioSample SAMN05827448-65. Each transcriptome was subjected to functional annotation analysis using the BLAST2GO suite: BLASTing, mapping, annotation, statistical analysis, and visualization. Large-scale phenotyping of the number of fruits per plant (NFPP) and fruits per inflorescence (FPI) classified the high yielding Jatropha accessions with an average NFPP = 60 and FPI > 10, whereas the low yielding accessions yielded an average NFPP = 10 and FPI < 5. Next generation sequencing revealed genes with differential expression in the high yielding Jatropha relative to the low yielding plants. Distinct differences were observed in transcript levels associated with photosynthesis metabolism. The DEG collection in the low yielding population showed comparable CAM photosynthetic metabolism and photorespiration, evident as follows: phosphoenolpyruvate phosphate translocator chloroplastic-like isoform with 2.5 fold change (FC) and malate dehydrogenase (2.03 FC). Green leaves have the most pronounced photosynthetic activity in a plant body due to the significant accumulation of chloroplasts. In most plants, the leaf is the dominant photosynthesizing organ of the plant body. A large number of the DEGs in the high-yielding population were found attributable to the chloroplast and chloroplast-associated events: STAY-GREEN chloroplastic, chlorophyllase-1-like (5.08 FC), beta-amylase (3.66 FC), chlorophyllase-chloroplastic-like (3.1 FC), thiamine thiazole chloroplastic-like (2.8 FC), 1-4,alpha glucan branching enzyme chloroplastic/amyloplastic (2.6 FC), photosynthetic NDH subunit (2.1 FC), and protochlorophyllide chloroplastic (2 FC). These results were parallel to a significant increase in chlorophyll a content in the high yielding population. In addition to the chloroplast-associated transcript abundance, TOO MANY MOUTHS (TMM) at 2.9 FC, which codes for distant stomatal distribution and patterning in the high-yielding population, may explain a high concentration of CO2. The results are in agreement with the role of TMM: clustered stomata cause back diffusion in the presence of gaps localized closely to one another. We conclude that the high yielding Jatropha population corresponds to a collective function of C3 metabolism with a low degree of CAM photosynthetic fixation. From the physiological descriptions, high chlorophyll a content and an even distribution of stomata in the leaf contribute to better photosynthetic efficiency in the high yielding Jatropha compared to the low yielding population.

Keywords: chlorophyll, gene expression, genetic variation, stomata

Procedia PDF Downloads 236
2562 Levels of CTX1 in Premenopausal Osteoporotic Women: A Study Conducted in Khyber Pakhtunkhwa Province, Pakistan

Authors: Mehwish Durrani, Rubina Nazli, Muhammad Abubakr, Muhammad Shafiq

Abstract:

Objectives: To evaluate whether high socio-economic status, urbanization, and decreased ambulation can lead to early osteoporosis in women reporting from the Peshawar region. Study Design: A descriptive cross-sectional study was done. The sample size was 100 subjects, using a 30% proportion of osteoporosis, a 95% confidence level, and a 9% margin of error under the WHO software for sample size determination. Place and Duration of Study: This study was carried out in the tertiary referral health care facilities of Peshawar, viz. PGMI Hayatabad Medical Complex, Peshawar, Khyber Pakhtunkhwa Province, Pakistan. Ethical approval for the study was taken from the Institutional Ethical Research Board (IERD) at the Post Graduate Medical Institute, Hayatabad Medical Complex, Peshawar. The study was done over a six-month period. Patients and Methods: Levels of CTX1 as a marker of bone degradation were determined in radiographically assessed perimenopausal women. These females were randomly selected and screened for osteoporosis. The parameters recorded were hemoglobin (g/dl), ESR by the Westergren method (mm in 1 hour), serum Ca (mg/dl), serum alkaline phosphatase (IU/l), radiographic grade of osteoporosis according to the Singh index (1-6), and CTX1 level (pg/ml). Results: High levels of CTX1 were observed in perimenopausal women radiographically diagnosed as osteoporotic. High socio-economic class also predisposed to osteoporosis. Decreased ambulation, another risk factor, showed a significant association with increased levels of CTX1. Conclusion: The results of this study suggest that minimal ambulation and high socioeconomic class both had a significant association with increased levels of serum CTX1, which in turn will lead to osteoporosis and its complications.

Keywords: osteoporosis, CTX1, perimenopausal women, Hayatabad Medical Complex, Khyber Pakhtunkhwa

Procedia PDF Downloads 329
2561 Decomposition of the Customer-Server Interaction in Grocery Shops

Authors: Andreas Ahrens, Ojaras Purvinis, Jelena Zascerinska

Abstract:

A successful shopping experience without overcrowded shops and long waiting times undoubtedly leads to the release of happiness hormones and is generally considered the goal of any optimization. Factors influencing the shopping experience can be divided into internal and external ones. External factors relate, e.g., to the arrival of customers at the shop, whereas internal factors are linked to the service process itself when checking out (waiting in the queue at the cash register, the scanning of the goods, and the payment process) or to any other unexpected delay when changing status from visitor to buyer by choosing goods or items. This paper divides the customer-server interaction into five phases: the customer's arrival at the shop, the selection of goods, waiting in the queue at the cash register, the payment process, and the customer's departure. Our simulation results show how the five phases are intertwined and influence the overall shopping experience. Parameters for measuring the shopping experience are estimated based on the burstiness level in each of the five phases of the customer-server interaction.
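A toy Monte Carlo sketch of the five phases, with assumed exponential phase durations and the burstiness coefficient B = (sigma - mu) / (sigma + mu) as the per-phase measure; the paper's actual gap distribution functions are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(42)
N_CUSTOMERS = 10_000

# Assumed mean durations (minutes) for the five phases of the customer-server interaction:
# arrival gap, selection of goods, waiting in the checkout queue, payment, departure.
phase_means = {"arrival_gap": 1.0, "selection": 12.0, "queue_wait": 4.0, "payment": 1.5, "departure": 0.5}

samples = {name: rng.exponential(mean, N_CUSTOMERS) for name, mean in phase_means.items()}
total_in_shop = sum(samples[name] for name in ("selection", "queue_wait", "payment", "departure"))

def burstiness(x):
    """Burstiness coefficient B = (sigma - mu) / (sigma + mu); B = 0 for exponential (Poisson-like) phases."""
    mu, sigma = x.mean(), x.std()
    return (sigma - mu) / (sigma + mu)

for name, x in samples.items():
    print(f"{name:12s} mean={x.mean():5.2f} min  burstiness={burstiness(x):+.2f}")
print(f"mean time in shop: {total_in_shop.mean():.1f} min")
```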

Keywords: customers’ burstiness, cash register, customers’ waiting time, gap distribution function

Procedia PDF Downloads 146
2560 Introduction of Robust Multivariate Process Capability Indices

Authors: Behrooz Khalilloo, Hamid Shahriari, Emad Roghanian

Abstract:

Process capability indices (PCIs) are important concepts in statistical quality control; they measure the capability of processes and how well processes meet certain specifications. An important issue in statistical quality control is parameter estimation. Under the assumption of multivariate normality, the distribution parameters, the mean vector and the variance-covariance matrix, must be estimated when they are unknown. Classic estimation methods such as the method of moments (MME) or maximum likelihood estimation (MLE) give good estimates of the population parameters when the data are not contaminated. But when outliers exist in the data, MME and MLE are weak estimators of the population parameters, so estimators that perform well in the presence of outliers are needed. In this work, robust M-estimators are used to estimate these parameters, and based on the robust parameter estimates, robust process capability indices are introduced. The performance of these robust estimators in the presence of outliers and their effects on the process capability indices are evaluated using real and simulated multivariate data. The results indicate that the proposed robust capability indices perform much better than the existing process capability indices.
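A hedged sketch of the robust-estimation idea using the MCD estimator from scikit-learn (a robust alternative, not necessarily the authors' M-estimators): contaminated bivariate data are fitted classically and robustly, and a simplified component-wise capability index is computed from each scale estimate. The specification limits and data are made up.

```python
import numpy as np
from sklearn.covariance import MinCovDet

rng = np.random.default_rng(7)

# Simulated bivariate process data with 5% gross outliers contaminating the sample.
clean = rng.multivariate_normal(mean=[10.0, 5.0], cov=[[0.04, 0.01], [0.01, 0.09]], size=190)
outliers = rng.multivariate_normal(mean=[11.5, 7.0], cov=[[0.2, 0.0], [0.0, 0.2]], size=10)
X = np.vstack([clean, outliers])

# Classical (MLE-type) estimates are pulled by the outliers; the robust MCD estimates are not.
mean_mle, cov_mle = X.mean(axis=0), np.cov(X, rowvar=False)
mcd = MinCovDet(random_state=0).fit(X)
mean_rob, cov_rob = mcd.location_, mcd.covariance_

# Simplified component-wise capability index Cp = (USL - LSL) / (6 sigma) computed with
# the classical and the robust scale estimates (specification limits are assumptions).
LSL = np.array([9.2, 4.0])
USL = np.array([10.8, 6.0])
cp_mle = (USL - LSL) / (6 * np.sqrt(np.diag(cov_mle)))
cp_rob = (USL - LSL) / (6 * np.sqrt(np.diag(cov_rob)))

print("classical mean:", np.round(mean_mle, 3), " robust mean:", np.round(mean_rob, 3))
print("Cp (classical):", np.round(cp_mle, 2))
print("Cp (robust)   :", np.round(cp_rob, 2))
```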

Keywords: multivariate process capability indices, robust M-estimator, outlier, multivariate quality control, statistical quality control

Procedia PDF Downloads 280
2559 Deciphering Tumor Stroma Interactions in Retinoblastoma

Authors: Rajeswari Raguraman, Sowmya Parameswaran, Krishnakumar Subramanian, Jagat Kanwar, Rupinder Kanwar

Abstract:

Background: The tumor microenvironment has been implicated in several cancers in regulating cell growth, invasion, and metastasis, culminating in the outcome of therapy. The tumor stroma consists of multiple cell types that are in constant cross-talk with the tumor cells to favour a pro-tumorigenic environment. Not much is known about the existence of a tumor microenvironment in the pediatric intraocular malignancy retinoblastoma (RB). In the present study, we aim to understand the multiple stromal cell subtypes and tumor-stroma interactions expressed in RB tumors. Materials and Methods: Immunohistochemistry for the stromal cell markers CD31, CD68, alpha-smooth muscle actin (α-SMA), vimentin, and glial fibrillary acidic protein (GFAP) was performed on formalin-fixed paraffin-embedded tissue sections of RB (n=12). The differential expression of the stromal target molecules fibroblast activation protein (FAP), tenascin-C (TNC), osteopontin (SPP1), bone marrow stromal antigen 2 (BST2), and stromal derived factors 2 and 4 (SDF2 and SDF4) in primary RB tumors (n=20) and normal retina (n=5) was studied by quantitative reverse transcriptase polymerase chain reaction (qRT-PCR) and Western blotting. The differential expression was correlated with the histopathological features of RB. The interaction between RB cell lines (Weri-Rb-1, NCC-RbC-51) and bone marrow stromal cells (BMSC) was also studied using direct and indirect co-culture methods. The functional effect of the co-culture methods on the RB cells was evaluated by invasion and proliferation assays. Global gene expression was studied using an Affymetrix 3’ IVT microarray. Pathway prediction was performed using KEGG, and the key molecules were validated using qRT-PCR. Results: Immunohistochemistry revealed the presence of several stromal cell types such as endothelial cells (CD31+; Vim+/-), macrophages (CD68+; Vim+/-), fibroblasts (Vim+; CD31-; CD68-), myofibroblasts (α-SMA+/Vim+), and invading retinal astrocytes/differentiated retinal glia (GFAP+; Vim+). A characteristic distribution of these stromal cell types was observed in the tumor microenvironment, with endothelial cells predominantly seen in blood vessels and macrophages near actively proliferating tumor or necrotic areas. Retinal astrocytes and glia were predominant near the optic nerve regions in invasive tumors, with sparse distribution in tumor foci. Fibroblasts were widely distributed, with rare evidence of myofibroblasts in the tumor. Both gene and protein expression revealed statistically significant (P<0.05) up-regulation of FAP, TNC, and BST2 in primary RB tumors compared to the normal retina. Co-culture of BMSC with RB cells promoted invasion and proliferation of RB cells in the direct and indirect contact methods, respectively. Direct co-culture of RB cell lines with BMSC resulted in gene expression changes in the ECM-receptor interaction, focal adhesion, IL-8, and TGF-β signaling pathways associated with cancer. In contrast, various metabolic pathways such as glucose, fructose, and amino acid metabolism were significantly altered under the indirect co-culture condition. Conclusion: The study suggests that the close interaction between RB cells and the stroma might be involved in RB tumor invasion and progression, which is likely to be mediated by ECM-receptor interactions and secretory factors. Targeting the tumor stroma would be an attractive option for redesigning treatment strategies for RB.

Keywords: gene expression profiles, retinoblastoma, stromal cells, tumor microenvironment

Procedia PDF Downloads 382
2558 Preparation and Characterization of Diclofenac Sodium Loaded Solid Lipid Nanoparticle

Authors: Oktavia Eka Puspita

Abstract:

The possibility of using solid lipid nanoparticles (SLN) for topical use is interesting because this system has occlusive properties on the skin surface and therefore enhances the penetration of drugs through the stratum corneum by increased hydration. This advantage can be used to enhance the penetration of topically delivered drugs such as diclofenac sodium, used for the relief of the signs and symptoms of osteoarthritis, rheumatoid arthritis, and ankylosing spondylitis. The purpose of this study was the preparation and physical characterization of diclofenac sodium-loaded SLN (D-SLN). D-SLN were prepared by hot homogenization followed by ultrasonication. Since the occlusion factor of SLN is related to particle size, two formulations differing in their surfactant contents were prepared in the present study to investigate the resulting difference in particle size. The surfactants selected for the preparation of formulation A (FA) were soya lecithin and Tween 80, whereas formulation B (FB) used soya lecithin, Tween 80, and sodium lauryl sulphate. D-SLN were characterized for particle size and distribution, polydispersity index (PI), and zeta potential using a Beckman-Coulter Delsa™ Nano. Overall, the particle size obtained from FA was larger than that from FB: 90% of FA particles were above 1000 nm, while 90% of FB particles were below 100 nm.

Keywords: solid lipid nanoparticles, hot homogenization technique, particle size analysis, topical administration

Procedia PDF Downloads 494
2557 Raman and Dielectric Relaxation Investigations of Polyester-CoFe₂O₄ Nanocomposites

Authors: Alhulw H. Alshammari, Ahmed Iraqi, S. A. Saad, T. A. Taha

Abstract:

In this work, we present for the first time a study of the Raman spectra and dielectric relaxation of polyester polymer-CoFe₂O₄ (5.0, 10.0, 15.0, and 20.0 wt%) nanocomposites. Raman spectroscopy was applied as a sensitive structural identification technique to characterize the polyester-CoFe₂O₄ nanocomposites. AFM images confirmed the uniform distribution of CoFe₂O₄ inside the polymer matrix. Dielectric relaxation was employed as an important analytical technique to obtain information about the ability of the polymer nanocomposites to store and filter electrical signals. The dielectric relaxation analyses were carried out on the polyester-CoFe₂O₄ nanocomposites at different temperatures. An increase in the dielectric constant ε₁ was observed for all samples with increasing temperature due to the alignment of the electric dipoles with the applied electric field. In contrast, ε₁ decreased with increasing frequency; this is attributed to the difficulty the electric dipoles have in following the electric field. The α relaxation peak that appeared at high frequency shifted to higher frequencies with increasing temperature. The activation energies for the Maxwell-Wagner-Sillars (MWS) relaxation changed from 0.84 to 1.01 eV, while the activation energies for the α relaxation were 0.54-0.94 eV. The conduction mechanism of the polyester-CoFe₂O₄ nanocomposites follows the correlated barrier hopping (CBH) model.
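The activation energies quoted above come from Arrhenius-type fits of the relaxation against inverse temperature; a minimal sketch with hypothetical peak frequencies is shown below.

```python
import numpy as np

k_B = 8.617e-5  # Boltzmann constant in eV/K

# Hypothetical peak relaxation frequencies f_max (Hz) of the alpha process at several temperatures (K).
T = np.array([300.0, 320.0, 340.0, 360.0, 380.0])
f_max = np.array([1.2e3, 8.5e3, 4.6e4, 2.0e5, 7.4e5])

# Arrhenius law: f_max = f0 * exp(-Ea / (k_B T))  =>  ln f_max = ln f0 - Ea / (k_B T).
slope, intercept = np.polyfit(1.0 / T, np.log(f_max), 1)
Ea = -slope * k_B

print(f"activation energy Ea = {Ea:.2f} eV")       # falls in the same order as the values reported above
print(f"pre-exponential f0  = {np.exp(intercept):.2e} Hz")
```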

Keywords: AC conductivity, activation energy, dielectric permittivity, polyester nanocomposites

Procedia PDF Downloads 107
2556 Understanding the Impact of Spatial Light Distribution on Object Identification in Low Vision: A Pilot Psychophysical Study

Authors: Alexandre Faure, Yoko Mizokami, Éric Dinet

Abstract:

In recent years, the potential of light to assist visually impaired people in their indoor mobility has been demonstrated by different studies. Implementing smart lighting systems for selective visual enhancement, especially designed for low-vision people, is an approach that breaks with existing visual aids. The appearance of the surface of an object is significantly influenced by the lighting conditions and the constituent materials of the object, and may differ from expectation. Lighting conditions therefore play an important part in accurate material recognition. The main objective of this work was to investigate the effect of the spatial distribution of light on object identification in the context of low vision. The purpose was to determine whether, and which, specific lighting approaches should be preferred for visually impaired people. A psychophysical experiment was designed to study the ability of individuals to identify the smaller cube of a pair under different lighting diffusion conditions. Participants were divided into two distinct groups: a reference group of observers with normal or corrected-to-normal visual acuity and a test group, in which observers were required to wear visual impairment simulation glasses. All participants were presented with pairs of cubes in a "miniature room" and were instructed to estimate the relative size of the two cubes. The miniature room replicates real-life settings, adorned with decorations and separated from external light sources by black curtains. The correlated color temperature was set to 6000 K, and the horizontal illuminance at the object level to approximately 240 lux. The objects presented for comparison consisted of 11 white cubes and 11 black cubes of different sizes manufactured with a 3D printer. Participants were seated 60 cm away from the objects. Two different levels of light diffuseness were implemented. After receiving instructions, participants were asked to judge whether the two presented cubes were the same size or if one was smaller. They provided one of five possible answers: "Left one is smaller," "Left one is smaller but unsure," "Same size," "Right one is smaller," or "Right one is smaller but unsure." The method of constant stimuli was used, presenting stimulus pairs in a random order to prevent learning and expectation biases. Each pair consisted of a comparison stimulus and a reference cube. A psychometric function was constructed to link stimulus value with the frequency of correct detection, aiming to determine the 50% correct detection threshold. The collected data were analyzed through graphs illustrating participants' responses to stimuli, with accuracy increasing as the size difference between cubes grew. Statistical analyses, including two-way ANOVA tests, showed that light diffuseness had no significant impact on the difference threshold, whereas object color had a significant influence in low vision scenarios. The first results and trends derived from this pilot experiment strongly suggest that future investigations could explore extreme diffusion conditions to comprehensively assess the impact of diffusion on object identification. For example, the first findings related to light diffuseness may be attributed to the range of manipulation, emphasizing the need to explore how other lighting-related factors interact with diffuseness.
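A small sketch of the psychometric-function step: a logistic function is fitted to proportion-correct data with SciPy and the 50% correct-detection threshold is read off the fit. The size differences and proportions below are placeholder pilot-style data, not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

# Size difference between the comparison and reference cubes (mm) and the observed
# proportion of correct "smaller" judgements at each level (placeholder pilot data).
size_diff = np.array([0.5, 1.0, 2.0, 3.0, 4.0, 6.0, 8.0])
p_correct = np.array([0.12, 0.22, 0.41, 0.55, 0.71, 0.88, 0.97])

def logistic(x, x50, slope):
    """Logistic psychometric function; x50 is the 50% correct-detection threshold."""
    return 1.0 / (1.0 + np.exp(-slope * (x - x50)))

params, _ = curve_fit(logistic, size_diff, p_correct, p0=[3.0, 1.0])
x50, slope = params
print(f"50% detection threshold = {x50:.2f} mm (slope {slope:.2f})")
```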

Keywords: lighting, low vision, visual aid, object identification, psychophysical experiment

Procedia PDF Downloads 60
2555 Modeling Competition Between Subpopulations with Variable DNA Content in Resource-Limited Microenvironments

Authors: Parag Katira, Frederika Rentzeperis, Zuzanna Nowicka, Giada Fiandaca, Thomas Veith, Jack Farinhas, Noemi Andor

Abstract:

Resource limitations shape the outcome of competition between genetically heterogeneous pre-malignant cells. One example of such heterogeneity is the ploidy (DNA content) of pre-malignant cells. A whole-genome duplication (WGD) transforms a diploid cell into a tetraploid one and has been detected in 28-56% of human cancers. If a tetraploid subclone expands, it consistently does so early in tumor evolution, when cell density is still low and competition for nutrients is comparatively weak, an observation confirmed for several tumor types. WGD+ cells need more resources to synthesize increasing amounts of DNA, RNA, and proteins. To quantify resource limitations and how they relate to ploidy, we performed a pan-cancer analysis of WGD, PET/CT, and MRI scans. Segmentation of >20 different organs from >900 PET/CT scans was performed with MOOSE. We observed a strong correlation between organ-wide population-average estimates of oxygen and the average ploidy of cancers growing in the respective organ (Pearson R = 0.66; P = 0.001). In-vitro experiments using near-diploid and near-tetraploid lineages derived from a breast cancer cell line supported the hypothesis that DNA content influences glucose- and oxygen-dependent proliferation, death, and migration rates. To model how subpopulations with variable DNA content compete in the resource-limited environment of the human brain, we developed a stochastic state-space model of the brain (S3MB). The model discretizes the brain into voxels, whereby the state of each voxel is defined by 8+ variables that are updated over time: stiffness, oxygen, phosphate, glucose, vasculature, dead cells, migrating cells and proliferating cells of various DNA content, and treatment conditions such as radiotherapy and chemotherapy. Well-established Fokker-Planck partial differential equations govern the distribution of resources and cells across voxels. We applied S3MB to sequencing and imaging data obtained from a primary GBM patient. We performed whole genome sequencing (WGS) of four surgical specimens collected during the first and second surgeries of the GBM and used HATCHET to quantify its clonal composition and how it changed between the two surgeries. HATCHET identified two aneuploid subpopulations of ploidy 1.98 and 2.29, respectively. The low-ploidy clone was dominant at the time of the first surgery and became even more dominant upon recurrence. MRI images were available before and after each surgery and registered to MNI space. The S3MB domain was initiated from 4 mm³ voxels of the MNI space. T1 post-contrast and T2 FLAIR scans acquired after the first surgery informed tumor cell densities per voxel. Magnetic resonance elastography scans and PET/CT scans informed stiffness and glucose access per voxel. We performed a parameter search to recapitulate the GBM's tumor cell density and ploidy composition before the second surgery. Results suggest that the high-ploidy subpopulation had a higher glucose-dependent proliferation rate (0.70 vs. 0.49) but a lower glucose-dependent death rate (0.47 vs. 1.42). These differences resulted in spatial differences in the distribution of the two subpopulations. Our results contribute to a better understanding of how genomics and microenvironments interact to shape cell fate decisions and could help pave the way to therapeutic strategies that mimic prognostically favorable environments.
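As a loose illustration only, the quoted glucose-dependent proliferation (0.70 vs. 0.49) and death (0.47 vs. 1.42) rates can be dropped into a toy shared-resource competition; the functional forms and glucose dynamics are assumptions, and the sketch is not calibrated to reproduce the patient's clonal composition or the full S3MB voxel model.

```python
import numpy as np

# Rates quoted in the abstract: glucose-dependent proliferation and death for the
# high-ploidy (2.29) and low-ploidy (1.98) subclones.
birth = {"high_ploidy": 0.70, "low_ploidy": 0.49}
death = {"high_ploidy": 0.47, "low_ploidy": 1.42}

# Toy shared-resource competition (an assumption, not the S3MB formulation):
# proliferation scales with available glucose g in [0, 1], death with its scarcity (1 - g).
def simulate(days=100.0, dt=0.01, supply=0.4, uptake=1e-4):
    N = {"high_ploidy": 100.0, "low_ploidy": 100.0}
    g = 1.0
    for _ in range(int(days / dt)):
        for clone in N:
            growth = birth[clone] * g - death[clone] * (1.0 - g)
            N[clone] = max(N[clone] + dt * growth * N[clone], 0.0)
        consumption = uptake * sum(N.values())
        g = float(np.clip(g + dt * (supply * (1.0 - g) - consumption), 0.0, 1.0))
    return N, g

N_final, g_final = simulate()
total = sum(N_final.values())
for clone, n in N_final.items():
    print(f"{clone:11s}: {n:10.1f} cells ({100 * n / total:.1f}% of the population)")
print(f"remaining glucose fraction: {g_final:.2f}")
```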

Keywords: tumor evolution, intra-tumor heterogeneity, whole-genome doubling, mathematical modeling

Procedia PDF Downloads 68
2554 Influence of Shield Positions on Thermo/Fluid Performance of Pin Fin Heat Sink

Authors: Ramy H. Mohammed

Abstract:

In heat sinks, the flow within the core exhibits separation and hence does not lend itself to simple analytical boundary layer or duct flow analysis of the wall friction. In this paper, I present some findings from an experimental and numerical study aimed at obtaining physical insight into the influence of the presence of a shield and its position on the hydraulic and thermal performance of a square pin fin heat sink without top bypass. The variations of the Nusselt number and friction factor are obtained under varied parameters, such as the Reynolds number and the shield position. The numerical code is validated by comparing the numerical results with the available experimental data. It is shown that there is good agreement between the temperature predictions based on the model and the experimental data. Results show that, with the shield present, the heat transfer of the fin array is enhanced and the flow resistance is increased. The surface temperature distribution of the heat sink base is more uniform when the dimensionless shield position equals 1/3 or 2/3. A comprehensive performance evaluation approach based on an identical pumping power criterion is adopted and shows that the optimum shield position is at x/l = 0.43, where energy is saved.
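The abstract does not spell out the evaluation formula; a commonly used identical-pumping-power criterion is the thermal performance factor PEC = (Nu/Nu0)/(f/f0)^(1/3), sketched below with placeholder values for several shield positions (chosen so that x/l = 0.43 comes out best, mirroring the stated result).

```python
# Thermal performance factor under the identical-pumping-power constraint,
# commonly written as PEC = (Nu / Nu0) / (f / f0)**(1/3); the exact criterion used in the
# paper is not stated here, and the numbers below are placeholders for several shield positions.
baseline_Nu, baseline_f = 45.0, 0.18            # unshielded heat sink (hypothetical)
cases = {                                       # x/l : (Nu, f) with the shield installed (hypothetical)
    0.25: (52.0, 0.23),
    0.33: (56.0, 0.24),
    0.43: (60.0, 0.25),
    0.67: (55.0, 0.24),
}

for position, (Nu, f) in cases.items():
    pec = (Nu / baseline_Nu) / (f / baseline_f) ** (1.0 / 3.0)
    print(f"x/l = {position:.2f}: Nu/Nu0 = {Nu / baseline_Nu:.2f}, f/f0 = {f / baseline_f:.2f}, PEC = {pec:.2f}")
```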

Keywords: shield, fin array, performance evaluation, heat transfer, energy

Procedia PDF Downloads 304
2553 Transnationalization Strategies of Danish Cinema: Susanne Bier, Lone Scherfig

Authors: Ebru Thwaites Diken

Abstract:

This article analyzes the works of certain directors in Danish cinema, namely Susanne Bier and Lone Scherfig, in the context of the transnationalization of Danish cinema. It looks at how the films' narratives negotiate and reconstruct the local/national/regional and the global. Scholars such as Nestingen & Elkington (2005), Hjort (2010), Higbee and Lim (2010), and Bondebjerg and Redvall (2011) address the transnationalism of Danish cinema in terms of production and distribution processes and how filmmaking transcends national boundaries. This paper employs a particular understanding of transnationalism, in terms of how ideas and characters travel, to analyze how storytelling and style have evolved to connect the national, the regional, and the global on the basis of the works of these two directors. Strategies such as Hollywoodization (i.e., a focus on stardom and classical narration), adherence to conventional European genre formulas, and the production of Danish films in English have been identifiable strategies in Danish cinema in the period after the 2000s. Susanne Bier and Lone Scherfig are significant for employing some of these strategies simultaneously. For this reason, this article will look at how these two directors have employed these strategies and negotiated cultural boundaries and exchanges.

Keywords: Danish cinema, transnational cinema, Susanne Bier, Lone Scherfig, national cinema

Procedia PDF Downloads 66
2552 The Applicability of Western Environmental Criminology Theories to the Arabic Context

Authors: Nawaf Alotaibi, Andy Evans, Alison Heppenstall, Nick Malleson

Abstract:

Throughout the last two decades, motor vehicle theft (MVT) has accounted for the largest proportion of property crime incidents in Saudi Arabia (SA). However, to date, few studies have investigated SA's MVT problem. Those that have are primarily focused on the characteristics of car thieves, and most have overlooked the spatial-temporal distribution of MVT incidents and the characteristics of victims. This paper represents the first step in understanding this problem by reviewing the existing MVT studies, contextualised within the theoretical frameworks developed in environmental criminology theories, which originated in the West, and exploring to what extent they are relevant to the SA context. To achieve this, the paper has identified a range of key features of SA that are different from typical Western contexts and that could limit the appropriateness and capability of applying existing environmental criminology theories. Furthermore, although the Western studies reviewed so far have introduced a number of explanatory variables for MVT rates, a range of significant elements is apparently absent from the current literature, and this requires further analysis. For example, almost no attempts have been made to quantify the associations between the locations of vehicle theft, the recovery of stolen vehicles, joyriding, and traffic volume.

Keywords: environmental criminology theories, motor vehicle theft, Saudi Arabia, spatial analysis

Procedia PDF Downloads 294
2551 Fault Tolerant and Testable Designs of Reversible Sequential Building Blocks

Authors: Vishal Pareek, Shubham Gupta, Sushil Chandra Jain

Abstract:

With increasing demand for high-speed computation, power consumption, heat dissipation, and chip size issues pose challenges for logic design with conventional technologies. Recovery from bit loss and bit errors is another issue that requires reversibility and fault tolerance in computation. Reversible computing is emerging as an alternative to conventional technologies to overcome the above problems and is helpful in diverse areas such as low-power design, nanotechnology, and quantum computing. The bit loss issue can be solved through a unique input-output mapping, which requires reversibility, while the bit error issue requires the capability of fault tolerance in the design. In order to incorporate reversibility, a number of combinational reversible-logic-based circuits have been developed. However, very few sequential reversible circuits have been reported in the literature. To make circuits fault tolerant, a number of fault models and test approaches have been proposed for reversible logic. In this paper, we have attempted to incorporate fault tolerance in sequential reversible building blocks such as the D flip-flop, T flip-flop, JK flip-flop, R-S flip-flop, master-slave D flip-flop, and double edge triggered D flip-flop by making them parity preserving. The importance of this proposed work lies in the fact that it provides designs of reversible sequential circuits that are completely testable for any stuck-at fault and single-bit fault. In our opinion, our designs of reversible building blocks are superior to existing designs in terms of quantum cost, hardware complexity, constant inputs, garbage outputs, and number of gates, and a design of an online testable D flip-flop has been proposed for the first time. We hope our work can be extended to building complex reversible sequential circuits.
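The parity-preserving property can be illustrated on a standard reversible gate; the sketch below exhaustively checks reversibility and parity preservation for the Fredkin (controlled-swap) gate. The paper's specific flip-flop constructions are not reproduced here.

```python
from itertools import product

def fredkin(a, b, c):
    """Fredkin (controlled-swap) gate: if a == 1, swap b and c; a well-known parity-preserving reversible gate."""
    return (a, c, b) if a == 1 else (a, b, c)

def parity(bits):
    return sum(bits) % 2

outputs = set()
for inputs in product((0, 1), repeat=3):
    out = fredkin(*inputs)
    outputs.add(out)
    # Parity preservation: the XOR of the inputs equals the XOR of the outputs for every
    # input vector, which is what allows single-bit faults to be detected online.
    assert parity(inputs) == parity(out)

# Reversibility: the mapping is a bijection, so all 8 output vectors are distinct.
assert len(outputs) == 8
print("Fredkin gate is reversible and parity preserving for all 8 input combinations.")
```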

Keywords: parity preserving gate, quantum computing, fault tolerance, flip-flop, sequential reversible logic

Procedia PDF Downloads 542
2550 Study of Strontium Sorption onto Indian Bentonite

Authors: Pankaj Pathak, Susmita Sharma

Abstract:

Incessant industrial growth fulfills the energy demand of present-day society; at the same time, it produces huge amounts of waste, which can be hazardous or non-hazardous in nature. These wastes come from different sources, viz. nuclear power, thermal power, and coal mines, which contain different types of contaminants, and one of the emerging contaminants is strontium, used in the present study. The strontium isotope Sr-90 is radioactive, with a half-life of 28.8 years, and the permissible limit of strontium in drinking water is 1.5 ppm; concentrations above this limit cause several types of diseases in human beings. Therefore, the safe disposal of strontium into the ground is a major challenge for researchers. In this context, bentonite is used as an efficient material to retain strontium in the ground due to its specific physical, chemical, and mineralogical properties, which include a high cation exchange capacity and specific surface area. These properties influence the interaction between strontium and bentonite, which is quantified by employing a parameter known as the distribution coefficient. Batch tests were conducted, and sorption isotherms were modelled at different interaction times. The pseudo-first-order and pseudo-second-order kinetic models were used to fit the experimental data, which helps to determine the sorption rate and mechanism.
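A small sketch of the kinetic-model fitting step: pseudo-first-order and pseudo-second-order models are fitted to placeholder batch data with SciPy, returning the equilibrium uptake qe and the rate constant for each model. The numbers are illustrative, not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

# Placeholder batch-test data: contact time (min) and strontium uptake q_t (mg/g) on bentonite.
t = np.array([5, 10, 20, 40, 60, 120, 240, 480], dtype=float)
q_t = np.array([2.1, 3.4, 4.9, 6.2, 6.9, 7.6, 7.9, 8.0])

def pseudo_first_order(t, qe, k1):
    """q_t = qe * (1 - exp(-k1 t))"""
    return qe * (1.0 - np.exp(-k1 * t))

def pseudo_second_order(t, qe, k2):
    """q_t = (k2 qe^2 t) / (1 + k2 qe t)"""
    return (k2 * qe**2 * t) / (1.0 + k2 * qe * t)

for name, model in [("pseudo-first-order", pseudo_first_order),
                    ("pseudo-second-order", pseudo_second_order)]:
    params, _ = curve_fit(model, t, q_t, p0=[8.0, 0.01])
    residuals = q_t - model(t, *params)
    r2 = 1.0 - np.sum(residuals**2) / np.sum((q_t - q_t.mean())**2)
    print(f"{name}: qe = {params[0]:.2f} mg/g, k = {params[1]:.4f}, R^2 = {r2:.3f}")
```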

Keywords: bentonite, interaction time, sorption, strontium

Procedia PDF Downloads 298