Search results for: melodic models
1875 Behavior of the RC Slab Subjected to Impact Loading According to the DIF
Authors: Yong Jae Yu, Jae-Yeol Cho
Abstract:
In the design of structural concrete for impact loading, design or model codes often employ a dynamic increase factor (DIF) to impose the dynamic effect on the static response. Dynamic increase factors that are obtained from laboratory material test results, and that are commonly given as a function of strain rate only, differ considerably from each other depending on the design concept of codes such as ACI 349M-06, fib Model Code 2010 and ACI 370R-14. Because the dynamic increase factors currently adopted in the codes are too simple and limited to cover a variety of material strengths, their application in practical design is questionable. In this study, the dynamic increase factors used in the three codes were validated through finite element analysis of reinforced concrete slab elements that were tested and reported by other researchers. The test was intended to simulate a wall element of the containment building in nuclear power plants, assumed to be subjected to an impact scenario like the one the Pentagon experienced on September 11, 2001. The finite element analysis was performed using ABAQUS 6.10, and plasticity models were employed for the concrete and reinforcement. The dynamic increase factors given in the three codes were applied to the stress-strain curves of the materials. To estimate the dynamic increase factors, strain rate was adopted as a parameter. The test and analysis were compared with regard to perforation depth, maximum deflection, and surface crack area of the slab. Consequently, it was found that the DIF has so great an effect on the behavior of reinforced concrete structures that it should be selected very carefully. The result implies that the DIF should be provided in design codes in a more refined format that considers various influence factors. Keywords: impact, strain rate, DIF, slab elements
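A minimal sketch of how a strain-rate-dependent DIF might be applied to scale a static stress-strain curve before it enters the FE model. The power-law form, the reference strain rate, and the exponent used here are illustrative placeholders, not the expressions prescribed by ACI 349M-06, fib Model Code 2010, or ACI 370R-14.

```python
import numpy as np

def dif_power_law(strain_rate, ref_rate=30e-6, exponent=0.014):
    # Illustrative power-law DIF; exponent and reference rate are placeholders,
    # not the values prescribed by any particular design code.
    return (strain_rate / ref_rate) ** exponent

def scale_stress_strain(strains, static_stresses, strain_rate):
    # Scale a static stress-strain curve by the DIF for a given strain rate.
    dif = dif_power_law(strain_rate)
    return strains, np.asarray(static_stresses) * dif

# Example: hypothetical parabolic concrete curve (Pa) scaled for a strain rate of 10 /s
strains = np.linspace(0.0, 0.0035, 50)
static = 40e6 * (strains / 0.002) * (2 - strains / 0.002)
_, dynamic = scale_stress_strain(strains, static, strain_rate=10.0)
```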
Procedia PDF Downloads 294
1874 Evaluation of Initial Graft Tension during ACL Reconstruction Using a Three-Dimensional Computational Finite Element Simulation: Effect of the Combination of a Band of Gracilis with the Former Graft
Authors: S. Alireza Mirghasemi, Javad Parvizi, Narges R. Gabaran, Shervin Rashidinia, Mahdi M. Bijanabadi, Dariush G. Savadkoohi
Abstract:
Background: The anterior cruciate ligament is one of the most frequently disrupted ligaments. Surgical reconstruction of the anterior cruciate ligament is a common practice to treat the disability or chronic instability of the knee. Several factors are associated with the success or failure of ACL reconstruction, including preoperative laxity of the knee, selection of the graft material, surgical technique, graft tension, and postoperative rehabilitation. We aimed to examine the biomechanical properties of each graft type and initial graft tensioning during ACL reconstruction using a three-dimensional computational finite element simulation. Methods: In this paper, a 3-dimensional model of the knee was constructed to investigate the effect of graft tensioning on knee joint biomechanics. Four different grafts were compared: 1) bone-patellar tendon-bone graft (BPTB), 2) hamstring tendon, 3) BPTB and a band of gracilis, 4) hamstring and a band of gracilis. The initial graft tension was set to 0, 20, 40, or 60 N. The anterior loading was set to 134 N. Findings: The resulting stress pattern and deflection in each of these models were compared to those of the intact knee. The obtained results showed that combining a band of gracilis with the former graft (BPTB or hamstring) increases the structural stiffness of the knee. Conclusion: The pretension required during surgery decreases significantly by adding a band of gracilis to the proper graft. Keywords: ACL reconstruction, deflection, finite element simulation, stress pattern
Procedia PDF Downloads 300
1873 Analysis of the Unmanned Aerial Vehicles’ Incidents and Accidents: The Role of Human Factors
Authors: Jacob J. Shila, Xiaoyu O. Wu
Abstract:
As the applications of unmanned aerial vehicles (UAV) continue to increase across the world, it is critical to understand the factors that contribute to incidents and accidents associated with these systems. Given the variety of daily applications that could utilize UAV operations (e.g., medical, security operations, construction activities, landscape activities), the main discussion has been how to safely incorporate the UAV into the national airspace system. The types of UAV incidents being reported range from near sightings by other pilots to actual collisions with aircraft or other UAVs. These incidents have the potential to impact the rest of aviation operations in a variety of ways, including human lives, liability costs, and delay costs. One of the largest cited causes of these incidents is the human factor; other cited causes include maintenance and aircraft issues. This work investigates the key human factors associated with UAV incidents. To that end, data related to UAV incidents that have occurred in the United States are reviewed and analyzed to identify key human factors related to UAV incidents. The data utilized in this work are gathered from the Federal Aviation Administration (FAA) drone database. This study adopts the human factors analysis and classification system (HFACS) to identify key human factors that have contributed to some of the UAV failures to date. The uniqueness of this work is the incorporation of UAV incident data from a variety of applications and not just military data. In addition, identifying the specific human factors is crucial towards developing safety operational models and human factors guidelines for the UAV. The identified common human factors are also compared to similar studies in other countries to determine whether these factors are common internationally. Keywords: human factors, incidents and accidents, safety, UAS, UAV
Procedia PDF Downloads 243
1872 Efficacy of Technology for Successful Learning Experience; Technology Supported Model for Distance Learning: Case Study of Botho University, Botswana
Authors: Ivy Rose Mathew
Abstract:
The purpose of this study is to outline the efficacy of technology and the opportunities it can bring to implementing a successful delivery model in distance learning. Distance learning has proliferated over the past few years across the world. Some of the challenges faced by current students of distance education include lack of motivation, a sense of isolation, and a need for greater and improved communication. Hence the author proposes a creative technology-supported model for distance learning, closely mirroring traditional face-to-face learning, that can be adopted by distance learning providers. This model suggests the usage of a range of technologies and social networking facilities, with the aim of creating a more engaging and sustaining learning environment to help overcome the isolation often noted by distance learners. While discussing the possibilities, the author also highlights the complexity and practical challenges of implementing such a model. Design/methodology/approach: Theoretical issues from previous research related to successful models for distance learning providers will be considered, together with the analysis of a case study from one of the largest private tertiary institutions in Botswana, Botho University. This case study illustrates important aspects of the distance learning delivery model and provides insights into how curriculum development is planned, quality assurance is done, and learner support is assured for a successful distance learning experience. Research limitations/implications: While some aspects of this study may not be applicable to other contexts, a number of new providers of distance learning can adapt the key principles of this delivery model. Keywords: distance learning, efficacy, learning experience, technology supported model
Procedia PDF Downloads 247
1871 Higher Consumption of White Rice Increases the Risk of Metabolic Syndrome in Adults with Abdominal Obesity
Authors: Zahra Bahadoran, Parvin Mirmiran, Fereidoun Azizi
Abstract:
Background: Higher consumption of white rice has been suggested as a risk factor for the development of metabolic abnormalities. In this study, we investigated the association between consumption of white rice and the 3-year occurrence of metabolic syndrome (MetS) in adults with and without abdominal obesity. Methods: This longitudinal study was conducted within the framework of the Tehran Lipid and Glucose Study on 1476 adults, aged 19-70 years. Dietary intakes were measured using a 168-item validated semi-quantitative food frequency questionnaire at baseline. Biochemical and anthropometric measurements were evaluated at both baseline (2006-2008) and after the 3-year follow-up (2009-2011). MetS and its components were defined according to the diagnostic criteria proposed by NCEP ATP III and the new cutoff points of waist circumference for Iranian adults. Multiple logistic regression models were used to estimate the occurrence of MetS in each quartile of white rice consumption. Results: The mean age of participants was 37.8±12.3 y, and the mean BMI was 26.0±4.5 kg/m2 at baseline. The prevalence of MetS was significantly higher in subjects with abdominal obesity (40.9 vs. 16.2%, P<0.01). There was no significant difference in white rice consumption between the two groups. Mean daily intake of white rice was 93±59, 209±58, 262±60 and 432±224 g/d in the first to fourth quartiles of white rice, respectively. Stratified analysis by categories of waist circumference showed that higher consumption of white rice was more strongly related to the risk of metabolic syndrome in participants who had abdominal obesity (OR: 2.34, 95% CI: 1.14-4.41 vs. OR: 0.99, 95% CI: 0.60-1.65). Conclusion: We demonstrated that higher consumption of white rice may be a risk factor for the development of metabolic syndrome in adults with abdominal obesity. Keywords: white rice, abdominal obesity, metabolic syndrome, food science, triglycerides
Procedia PDF Downloads 446
1870 Jurisdictional Issues between Competition Law and Data Protection Law in Protection of Privacy of Online Consumers
Authors: Pankhudi Khandelwal
Abstract:
The revenue models of digital giants such as Facebook and Google use targeted advertising for revenues. Such a model requires huge amounts of consumer data. While data protection law deals with the protection of personal data, this data is acquired by the companies on the basis of consent, performance of a contract, or legitimate interests. This paper analyses the role that competition law can play in closing these loopholes for the protection of data and the privacy of online consumers. Digital markets have certain distinctive features, such as network effects and feedback loops, which give incumbents of these markets a first-mover advantage. This creates a situation where the winner takes it all, thus creating entry barriers and concentration in the market. It has also been seen that this dominant position is then used by the undertakings for leveraging in other markets. This can be harmful to consumers in the form of less privacy, less choice, and stifled innovation, as seen in the cases of Facebook Cambridge Analytica, Google Shopping, and Google Android. Therefore, the article aims to provide a legal framework wherein data protection law and competition law can come together to provide a balance in regulating digital markets. The issue has become more relevant in light of the Facebook decision by the German competition authority, where it was held that Facebook had abused its dominant position by not complying with data protection rules, which constituted an exploitative practice. The paper looks into the jurisdictional boundaries that the data protection and competition authorities can work from and suggests ex ante regulation through data protection law and ex post regulation through competition law. It further suggests a change in the consumer welfare standard, where harm to privacy should be considered an indicator of low quality. Keywords: data protection, dominance, ex ante regulation, ex post regulation
Procedia PDF Downloads 183
1869 Membrane Distillation Process Modeling: Dynamical Approach
Authors: Fadi Eleiwi, Taous Meriem Laleg-Kirati
Abstract:
This paper presents a complete dynamic model of a membrane distillation process. The model contains two consistent dynamic sub-models: a 2D advection-diffusion equation for modeling the whole process and a modified heat equation for modeling the membrane itself. The complete model describes the temperature diffusion phenomenon across the feed, membrane, permeate containers and boundary layers of the membrane. It gives an online and complete temperature profile for each point in the domain. It explains the heat conduction and convection mechanisms that take place inside the process in terms of mathematical parameters and justifies the process behavior during transient and steady state phases. The process is monitored for any sudden change in performance at any instant of time. In addition, the model assists in maintaining production rates as desired and gives recommendations during membrane fabrication stages. System performance and parameters can be optimized and controlled using this complete dynamic model. The evolution of membrane boundary temperature with time, vapor mass transfer along the process, and the temperature difference between membrane boundary layers are depicted and included. Simulations were performed over the complete model with real membrane specifications. The plots show consistency between the 2D advection-diffusion model and the expected behavior of the system as well as the literature. The evolution of heat inside the membrane, from the transient response until the steady state response is reached, is illustrated for fixed and varying times. Keywords: membrane distillation, dynamical modeling, advection-diffusion equation, thermal equilibrium, heat equation
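The 2D advection-diffusion equation underlying the process model is not written out in the abstract; a generic form, with assumed notation for temperature T, velocity components u_x and u_y, and thermal diffusivity α, is:

\[
\frac{\partial T}{\partial t} + u_x \frac{\partial T}{\partial x} + u_y \frac{\partial T}{\partial y} = \alpha \left( \frac{\partial^2 T}{\partial x^2} + \frac{\partial^2 T}{\partial y^2} \right)
\]

The advective terms capture the convection mechanisms mentioned above, while the diffusive right-hand side captures conduction across the feed, membrane and permeate regions.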
Procedia PDF Downloads 272
1868 The Impact of Prior Cancer History on the Prognosis of Salivary Gland Cancer Patients: A Population-based Study from the Surveillance, Epidemiology, and End Results (SEER) Database
Authors: Junhong Li, Danni Cheng, Yaxin Luo, Xiaowei Yi, Ke Qiu, Wendu Pang, Minzi Mao, Yufang Rao, Yao Song, Jianjun Ren, Yu Zhao
Abstract:
Background: The number of patients with multiple cancers is increasing, and the impact of prior cancer history on salivary gland cancer patients remains unclear. Methods: Clinical, demographic and pathological information on salivary gland cancer patients was retrospectively collected from the Surveillance, Epidemiology, and End Results (SEER) database from 2004 to 2017, and the characteristics and prognosis of patients with a prior cancer were compared with those of patients without a prior cancer. Univariate and multivariate Cox proportional regression models were used for the analysis of prognosis. A risk score model was established to examine the impact of treatment on patients with a prior cancer in different risk groups. Results: A total of 9098 salivary gland cancer patients were identified, and 1635 of them had a prior cancer history. Salivary gland cancer patients with a prior cancer had worse survival than those without a prior cancer (p<0.001). Patients with different types of first cancer had distinct prognoses (p<0.001), and a longer latent time was associated with better survival (p=0.006) in the univariate model, although both became nonsignificant in the multivariate model. Salivary gland cancer patients with a prior cancer were divided into low-risk (n=321), intermediate-risk (n=223), and high-risk (n=62) groups, and the results showed that patients at high risk could benefit from surgery, radiation therapy, and chemotherapy, and those at intermediate risk could benefit from surgery. Conclusion: Prior cancer history had an adverse impact on the survival of salivary gland cancer patients, and individualized treatment should be seriously considered for them. Keywords: prior cancer history, prognosis, salivary gland cancer, SEER
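A minimal sketch of how the multivariate Cox proportional hazards analysis described above might be set up. The lifelines package, the toy data, the column names, and the prior-cancer indicator are illustrative assumptions rather than the authors' actual code or variables.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical SEER-style extract: follow-up time, death event, prior-cancer flag, age
df = pd.DataFrame({
    "survival_months": [12, 60, 34, 7, 90, 45],
    "death":           [1, 0, 1, 1, 0, 0],
    "prior_cancer":    [1, 0, 1, 0, 0, 1],
    "age":             [62, 55, 70, 48, 66, 59],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="survival_months", event_col="death")
print(cph.summary[["coef", "exp(coef)", "p"]])  # hazard ratios for each covariate
```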
Procedia PDF Downloads 146
1867 Modeling Sediment Transports under Extreme Storm Situation along Persian Gulf North Coast
Authors: Majid Samiee Zenoozian
Abstract:
The Persian Gulf is a bordering sea with an average depth of 35 m and a maximum depth of about 100 m near its narrow entrance. Its elongated bathymetric axis separates two main geological provinces — the stable Arabian Foreland and the unstable Iranian Fold Belt — which are reflected in the contrasting coastal and bathymetric morphologies of Arabia and Iran. Sediments were sampled from 72 offshore positions during an oceanographic cruise in the winter of 2018. During the observation period, several storms and river discharge events occurred, including the largest flood on record since 1982. Suspended-sediment concentration at all three sites varied in response to both wave resuspension and advection of river-derived sediments. We used hydrological models to evaluate the wave height and inundation distance required to transport the rocks inland. Our results establish that no known or plausible storm occurring on the Makran coast is capable of detaching and transporting the boulders. The fluid mud is consequently conveyed seaward due to gravitational forcing. The measured sediment concentration and velocity profiles on the shelf provide strong evidence to support this assumption. The sediment model is coupled with a 3D hydrodynamic module in the Environmental Fluid Dynamics Code (EFDC) model that provides data on estuarine circulation and salinity transport under normal temperature conditions. 3-D sediment transport results from the model simulations indicate dynamic sediment resuspension and transport near zones of highly productive oyster beds. Keywords: sediment transport, storm, coast, fluid dynamics
Procedia PDF Downloads 115
1866 Customer Involvement in the Development of New Sustainable Products: A Review of the Literature
Authors: Natalia Moreira, Trevor Wood-Harper
Abstract:
The acceptance of sustainable products by the final consumer is still one of the challenges for the industry, which constantly seeks alternative approaches to be successfully accepted in the global market. A large set of methods and approaches has been discussed and analysed throughout the literature. Considering the current need for sustainable development and the current pace of consumption, the need for a combined solution towards the development of new products became clear, forcing researchers in product development to propose alternatives to the previous standard product development models. This paper presents, through a systemic analysis of the literature on product development, eco-design and consumer involvement, a set of alternatives regarding consumer involvement towards the development of sustainable products and how these approaches could help improve the sustainable industry’s establishment in the general market. The initial findings of the research show that understanding the benefits of sustainable behaviour leads to a more conscious acquisition and eventually to the implementation of sustainable change in the consumer. This paper is thus an initial approach towards the development of new sustainable products, using the fashion industry as an example of practical implementation and acceptance by consumers. By comparing and critically analysing the existing literature, this paper concludes that consumer involvement is strategic to improving the general understanding of sustainability and its features. The use of consumers and communities has been studied since the early 90s in order to exemplify uses and to guarantee a fast comprehension. The analysis also includes the importance of this approach for the increase of innovation and groundbreaking developments, thus requiring further research and practical implementation in order to better understand the implications and limitations of this methodology. Keywords: consumer involvement, products development, sustainability, eco-design
Procedia PDF Downloads 594
1865 Molecular Dynamics Studies of Main Factors Affecting Mass Transport Phenomena on Cathode of Polymer Electrolyte Membrane Fuel Cell
Authors: Jingjing Huang, Nengwei Li, Guanghua Wei, Jiabin You, Chao Wang, Junliang Zhang
Abstract:
Mass transport is one of the key issues in the study of proton exchange membrane fuel cells (PEMFCs). In this work, molecular dynamics (MD) simulation is applied to analyze the mass transport process in the cathode of the PEMFC, in which all types of molecules present in the cathode are considered. A reasonable and effective MD simulation process is provided, and models are built and compared using both Materials Studio and LAMMPS. In particular, the simulations analyze the influence of the Nafion ionomer distribution and the Pt nano-particle size on the mass transport process in the cathode. The diffusion coefficient calculations indicate that a larger quantity of Nafion, as well as a higher equivalent weight (EW) value, will hinder the transport of oxygen. In addition, medium-sized Pt nano-particles (1.5~2 nm) are more advantageous in terms of proton transport compared with other particle sizes (0.94~2.55 nm) when the center-to-center distance between two Pt nano-particles is around 5 nm. Mass transport channels are found to form between the hydrophobic backbone and the hydrophilic side chains of the Nafion ionomer according to the radial distribution function (RDF) curves, and the morphology of these channels, affected by the Pt size, is believed to influence the transport of hydronium ions and, consequently, the performance of the PEMFC. Keywords: cathode catalytic layer, mass transport, molecular dynamics, proton exchange membrane fuel cell
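The diffusion coefficients referred to above are typically obtained in MD from the mean-squared displacement via the Einstein relation; in the assumed notation, for a species with position vector r_i(t):

\[
D = \lim_{t \to \infty} \frac{1}{6t} \left\langle \left| \mathbf{r}_i(t) - \mathbf{r}_i(0) \right|^2 \right\rangle
\]

The abstract does not state which estimator was used, so this formula is given only as the conventional route from MD trajectories to the diffusion coefficients discussed.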
Procedia PDF Downloads 243
1864 Evaluation of Insulin Sensitizing Effects of Different Fractions from Total Alcoholic Extract of Moringa oleifera Lam. Bark in Dexamethasone-Induced Insulin Resistant Rats
Authors: Hasanpasha N. Sholapur, Basanagouda M. Patil
Abstract:
The alcoholic extract of the bark of Moringa oleifera Lam. (MO) (Moringaceae) has been evaluated experimentally in the past for its insulin sensitizing potential. In order to explore which class of phytochemical(s) is responsible for this experimental claim, the alcoholic extract was fractionated into non-polar [petroleum ether (PEF)], moderately non-polar [ethyl acetate (EAF)] and polar [aqueous (AQF)] fractions. All the fractions, with pioglitazone (PIO) as standard (10 mg/kg p.o., once daily for 11 d), were investigated for their chronic effect on fasting plasma glucose, triglycerides, total cholesterol, insulin and oral glucose tolerance, and for their acute effect on oral glucose tolerance, in the dexamethasone-induced (1 mg/kg s.c., once daily for 11 d) chronic model and the acute model (1 mg/kg i.p., for 4 h) of insulin resistance (IR) in rats, respectively. Among all the fractions tested, chronic treatment with EAF (140 mg/kg) and PIO (10 mg/kg) prevented dexamethasone-induced IR, indicated by prevention of hypertriglyceridemia, hyperinsulinemia and oral glucose intolerance, whereas treatment with AQF (95 mg/kg) prevented hepatic IR but not peripheral IR. In the acute study, single-dose treatment with EAF (140 mg/kg) and PIO (10 mg/kg) prevented dexamethasone-induced oral glucose intolerance; fraction PEF did not show any effect on these parameters in either model. The present study indicates that the triterpenoid and phenolic classes of phytochemicals detected in the EAF of the alcoholic extract of MO bark may be responsible for the prevention of dexamethasone-induced insulin resistance in rats. Keywords: Moringa oleifera, insulin resistance, dexamethasone, serum triglyceride, insulin, oral glucose tolerance test
Procedia PDF Downloads 372
1863 Applying the Regression Technique for Prediction of the Acute Heart Attack
Authors: Paria Soleimani, Arezoo Neshati
Abstract:
Myocardial infarction is one of the leading causes of death in the world. Some of these deaths occur even before the patient reaches the hospital. Myocardial infarction occurs as a result of impaired blood supply. Because most of these deaths are due to coronary artery disease, awareness of the warning signs of a heart attack is essential. Some heart attacks are sudden and intense, but most of them start slowly, with mild pain or discomfort, so early detection and successful treatment of these symptoms is vital. Therefore, the importance and usefulness of a system designed to assist physicians in the early diagnosis of acute heart attacks is obvious. The purpose of this study is to determine how well a predictive model would perform based only on patient-reportable clinical history factors, without using diagnostic tests or physical exams. This type of prediction model might have application outside of the hospital setting to give accurate advice to patients and influence them to seek care in appropriate situations. For this purpose, data were collected on 711 heart patients in Iranian hospitals, and 28 clinical attributes that can be reported by patients were studied. Three logistic regression models were built on the basis of the 28 features to predict the risk of heart attacks. The best logistic regression model in terms of performance had a C-index of 0.955 and an accuracy of 94.9%. The variables severe chest pain, back pain, cold sweats, shortness of breath, nausea, and vomiting were selected as the main features. Keywords: coronary heart disease, acute heart attacks, prediction, logistic regression
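A minimal sketch of fitting and evaluating a logistic regression model of the kind described, using scikit-learn. The file name, the feature subset, and the train/test split are illustrative assumptions, and the C-index for a binary outcome is computed here as the ROC AUC.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, accuracy_score

# Hypothetical patient-reportable features (0/1 flags); target = acute heart attack
df = pd.read_csv("heart_history.csv")   # assumed file containing the 28 reportable attributes
X = df[["severe_chest_pain", "back_pain", "cold_sweats",
        "shortness_of_breath", "nausea", "vomiting"]]
y = df["heart_attack"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

prob = model.predict_proba(X_te)[:, 1]
print("C-index (AUC):", roc_auc_score(y_te, prob))
print("Accuracy:", accuracy_score(y_te, model.predict(X_te)))
```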
Procedia PDF Downloads 449
1862 Household Wealth and Portfolio Choice When Tail Events Are Salient
Authors: Carlson Murray, Ali Lazrak
Abstract:
Robust experimental evidence of systematic violations of expected utility (EU) establishes that individuals facing risk overweight utility from low probability gains and losses when making choices. These findings motivated the development of models of preferences with probability weighting functions, such as rank dependent utility (RDU). We solve for the optimal investing strategy of an RDU investor in a dynamic binomial setting, from which we derive implications for investing behavior. We show that, relative to EU investors with constant relative risk aversion, commonly measured probability weighting functions produce optimal RDU terminal wealth with significant downside protection and upside exposure. We additionally find that, in contrast to EU investors, RDU investors optimally choose a portfolio that contains fair bets providing payoffs that can be interpreted as lottery outcomes or exposure to idiosyncratic returns. In a calibrated version of the model, we calculate that RDU investors would be willing to pay 5% of their initial wealth for the freedom to trade away from an optimal EU wealth allocation. The dynamic trading strategy that supports the optimal wealth allocation implies portfolio weights that are independent of initial wealth but requires a higher risky share after good stock return histories. Optimal trading also implies the possibility of non-participation when historical returns are poor. Our model fills a gap in the literature by providing new quantitative and qualitative predictions that can be tested experimentally or using data on household wealth and portfolio choice. Keywords: behavioral finance, probability weighting, portfolio choice
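The rank-dependent evaluation that drives these results is not spelled out in the abstract; in a generic form, for outcomes ordered from worst to best x_1 ≤ … ≤ x_n with probabilities p_i, utility u, and a probability weighting function w (often taken to be inverse-S-shaped), the assumed RDU value is:

\[
\mathrm{RDU}(X) = \sum_{i=1}^{n} \left[ w\!\left(\sum_{j \ge i} p_j\right) - w\!\left(\sum_{j > i} p_j\right) \right] u(x_i)
\]

The bracketed decision weights overweight the best and worst ranked outcomes when w is inverse-S-shaped, which is the mechanism behind the downside protection and lottery-like upside described above.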
Procedia PDF Downloads 420
1861 Effect of Omeprazole on the Renal Cortex of Adult Male Albino Rats and the Possible Protective Role of Ginger: Histological and Immunohistochemical study
Authors: Nashwa A. Mohamed
Abstract:
Introduction: Omeprazole is a proton pump inhibitor commonly used in the treatment of acid-peptic disorders. Although omeprazole is generally well tolerated, serious adverse effects such as renal failure have been reported. Ginger is an antioxidant that could play a protective role in models of experimentally induced nephropathies. Aim of the work: The aim of this work was to study the possible histological changes induced by omeprazole in the renal cortex and to evaluate the possible protective effect of ginger on omeprazole-induced renal damage in adult male albino rats. Materials and methods: Twenty-four adult male albino rats divided into four groups (six rats each) were used in this study. Group I served as the control group. Rats of group II received only an aqueous extract of ginger daily for 3 months through a gastric tube. Rats of group III received omeprazole orally through a gastric tube for 3 months. Rats of group IV were given both ginger and omeprazole at the same doses and through the same routes as the previous two groups. At the end of the experiment, the rats were sacrificed. Renal tissue samples were processed for light, immunohistochemical and electron microscopic examination. The obtained results were analysed morphometrically and statistically. Results: Omeprazole caused several histological changes in the form of loss of the normal appearance of the renal cortex with degenerative changes in the renal corpuscles and tubules. Cellular infiltration was also observed. The filtration barrier was markedly affected. Ginger ameliorated the omeprazole-induced histological changes. Conclusion: Omeprazole induced injurious effects on the renal cortex. Coadministration of ginger can ameliorate the histological changes induced by omeprazole. Keywords: ginger, kidney, omeprazole, rat
Procedia PDF Downloads 252
1860 Border Trade Policy to Promote Thailand - Myanmar Mae Sai, Chiang Rai Province
Authors: Sakapas Saengchai, Pichamon Chansuchai
Abstract:
This research on the Thai-Myanmar border trade promotion policy in Mae Sai district, Chiang Rai province, had two objectives: to study the policy of promoting Thai-Myanmar border trade in Mae Sai district, and to identify suitable models for the development of border trade in Mae Sai. The research uses a qualitative methodology, collecting data from research papers, participatory observation, and in-depth interviews with key informants selected by purposive sampling, including the governor of Chiang Rai, the Chiang Rai Customs Service, the executive office of the Mae Sai Immigration Bureau, the Mae Sai Chamber of Commerce, and private entrepreneurs. Data analysis uses content analysis. The study indicated that the border trade promotion policy, as directed by the government, focuses on developing three areas. 1. Security: further reducing crime, smuggling and human trafficking, preparing to protect people from terrorism and natural disasters, and cooperating with Myanmar on border security. 2. Prosperity: promoting investment; developing transport links, the logistics value chain, and products and services across the Thai-Myanmar border; and improving regulations and laws to promote fair, convenient and fast trade. 3. Sustainable development: continuously increasing the income and quality of life of people on the Thai border by using natural resources in a balanced way and keeping production and consumption environmentally friendly. The policy features the participation of all public and private sectors in the region to drive the development of the Thai border in Chiang Rai province and make it more competitive. Keywords: border, trade, policy, promote
Procedia PDF Downloads 171
1859 A Next-Generation Blockchain-Based Data Platform: Leveraging Decentralized Storage and Layer 2 Scaling for Secure Data Management
Authors: Kenneth Harper
Abstract:
The rapid growth of data-driven decision-making across various industries necessitates advanced solutions to ensure data integrity, scalability, and security. This study introduces a decentralized data platform built on blockchain technology to improve data management processes in high-volume environments such as healthcare and financial services. The platform integrates blockchain networks using Cosmos SDK and Polkadot Substrate alongside decentralized storage solutions like IPFS and Filecoin, coupled with decentralized computing infrastructure built on top of Avalanche. By leveraging advanced consensus mechanisms, we create a scalable, tamper-proof architecture that supports both structured and unstructured data. Key features include secure data ingestion, cryptographic hashing for robust data lineage, and Zero-Knowledge Proof mechanisms that enhance privacy while ensuring compliance with regulatory standards. Additionally, we implement performance optimizations through Layer 2 scaling solutions, including ZK-Rollups, which provide low-latency data access and trustless data verification across a distributed ledger. The findings from this exercise demonstrate significant improvements in data accessibility, reduced operational costs, and enhanced data integrity when tested in real-world scenarios. This platform reference architecture offers a decentralized alternative to traditional centralized data storage models, providing scalability, security, and operational efficiency. Keywords: blockchain, cosmos SDK, decentralized data platform, IPFS, ZK-Rollups
Procedia PDF Downloads 27
1858 Time Series Simulation by Conditional Generative Adversarial Net
Authors: Rao Fu, Jie Chen, Shutian Zeng, Yiping Zhuang, Agus Sudjianto
Abstract:
Generative Adversarial Nets (GAN) have proved to be a powerful machine learning tool for image data analysis and generation. In this paper, we propose to use a Conditional Generative Adversarial Net (CGAN) to learn and simulate time series data. The conditions include both categorical and continuous variables with different auxiliary information. Our simulation studies show that CGAN has the capability to learn different types of normal and heavy-tailed distributions, as well as dependence structures of different time series. It also has the capability to generate conditional predictive distributions consistent with the training data distributions. We also provide an in-depth discussion of the rationale behind GAN and of neural networks as hierarchical splines, to establish a clear connection with existing statistical methods of distribution generation. In practice, CGAN has a wide range of applications in market risk and counterparty risk analysis: it can be applied to learn from historical data and generate scenarios for the calculation of Value-at-Risk (VaR) and Expected Shortfall (ES), and it can also predict the movement of market risk factors. We present a real data analysis, including backtesting, to demonstrate that CGAN can outperform Historical Simulation (HS), a popular method in market risk analysis for calculating VaR. CGAN can also be applied in economic time series modeling and forecasting. In this regard, we include an example of hypothetical shock analysis for economic models and the generation of potential CCAR scenarios by CGAN at the end of the paper. Keywords: conditional generative adversarial net, market and credit risk management, neural network, time series
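A minimal sketch of the downstream risk calculation: once a trained CGAN generator has produced conditional return scenarios, VaR and ES follow directly from the simulated loss distribution. The placeholder scenario sample and the confidence level are illustrative assumptions; in the paper's setting the scenarios would come from the trained generator conditioned on current market information.

```python
import numpy as np

def var_es_from_scenarios(simulated_returns, alpha=0.99):
    # Losses are negative returns; VaR is the alpha-quantile of losses,
    # ES is the mean loss beyond VaR.
    losses = -np.asarray(simulated_returns)
    var = np.quantile(losses, alpha)
    es = losses[losses >= var].mean()
    return var, es

# Hypothetical: 10,000 one-day return scenarios standing in for CGAN generator output
scenarios = np.random.standard_t(df=4, size=10_000) * 0.01
var99, es99 = var_es_from_scenarios(scenarios, alpha=0.99)
print(f"99% VaR: {var99:.4f}, 99% ES: {es99:.4f}")
```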
Procedia PDF Downloads 143
1857 Optimal Sequential Scheduling of Imperfect Maintenance Last Policy for a System Subject to Shocks
Authors: Yen-Luan Chen
Abstract:
Maintenance has a great impact on production capacity and on the quality of products, and therefore it deserves continuous improvement. A maintenance procedure performed before a failure is called preventive maintenance (PM). Sequential PM, which specifies that a system should be maintained at a sequence of intervals with unequal lengths, is one of the commonly used PM policies. This article proposes a generalized sequential PM policy for a system subject to shocks, with imperfect maintenance and random working time. The shocks arrive according to a non-homogeneous Poisson process (NHPP) with a varied intensity function in each maintenance interval. When a shock occurs, the system suffers one of two types of failures with number-dependent probabilities: a type-I (minor) failure, which is rectified by a minimal repair, or a type-II (catastrophic) failure, which is removed by corrective maintenance (CM). Imperfect maintenance is carried out to improve the system failure characteristic due to the altered shock process. The sequential preventive maintenance-last (PML) policy is defined such that the system is maintained before any CM occurs, either at a planned time Ti or at the completion of a working time in the i-th maintenance interval, whichever occurs last. At the N-th maintenance, the system is replaced rather than maintained. This article is the first to take up the sequential PML policy with random working time and imperfect maintenance in reliability engineering. The optimal preventive maintenance schedule that minimizes the mean cost rate of a replacement cycle is derived analytically and characterized in terms of its existence and uniqueness. The proposed models provide a general framework for analyzing maintenance policies in reliability theory. Keywords: optimization, preventive maintenance, random working time, minimal repair, replacement, reliability
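The objective minimized in the article is described only verbally; by the renewal-reward argument that typically underlies such models, the mean cost rate of a replacement cycle takes the assumed generic form

\[
C(T_1,\dots,T_N) \;=\; \frac{\mathbb{E}\big[\text{cost of minimal repairs, PMs and CM incurred in one replacement cycle}\big]}{\mathbb{E}\big[\text{length of one replacement cycle}\big]},
\]

and the optimal schedule (T_1*, …, T_N*) is the one that minimizes C. The exact cost and cycle-length expectations depend on the NHPP intensities and the imperfect-maintenance model, which the abstract does not specify.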
Procedia PDF Downloads 275
1856 Bluetooth Communication Protocol Study for Multi-Sensor Applications
Authors: Joao Garretto, R. J. Yarwood, Vamsi Borra, Frank Li
Abstract:
Bluetooth Low Energy (BLE) has emerged as one of the main wireless communication technologies used in low-power electronics, such as wearables, beacons, and Internet of Things (IoT) devices. BLE’s energy efficiency, interoperability with smart mobile devices, and Over the Air (OTA) capabilities are essential features for ultra-low-power devices, which are usually designed with size and cost constraints. Most current research regarding the power analysis of BLE devices focuses on the theoretical aspects of the advertising and scanning cycles, with most results being presented in the form of mathematical models and computer software simulations. Such computer modeling and simulations are important for the comprehension of the technology, but hardware measurement is essential for understanding how BLE devices behave in real operation. In addition, recent literature focuses mostly on the BLE technology itself, leaving possible applications and their analysis out of scope. In this paper, a coin cell battery-powered BLE data acquisition device, with a 4-in-1 sensor and one accelerometer, is proposed and evaluated with respect to its power consumption. The device is first evaluated in advertising mode with the sensors turned off completely, followed by power analysis when each of the sensors is individually turned on and data is being transmitted, and concluding with the power consumption evaluation when both sensors are on and broadcasting data to a mobile phone. The results presented in this paper are real-time measurements of the electrical current consumption of the BLE device, where the measured energy levels are matched to the BLE behavior and sensor activity. Keywords: bluetooth low energy, power analysis, BLE advertising cycle, wireless sensor node
Procedia PDF Downloads 91
1855 Characterization of the State of Pollution by Nitrates in the Groundwater in Arid Zones Case of Eloued District (South-East of Algeria)
Authors: Zair Nadje, Attoui Badra, Miloudi Abdelmonem
Abstract:
This study aims to assess sensitivity to nitrate pollution, monitor the temporal evolution of nitrate contents in groundwater using statistical models, and map their spatial distribution. The nitrate levels observed in the waters of the city of El-Oued differ from one aquifer to another. The waters of the Quaternary aquifer are the richest in nitrates, with average annual contents varying from 6 mg/l to 85 mg/l, for an average of 37 mg/l. The highest of these levels exceed the WHO standard (50 mg/l) for drinking water. In the waters of the Terminal Complex (CT) aquifer, the annual average nitrate levels vary from 14 mg/l to 37 mg/l, with an average of 18 mg/l. In the Terminal Complex, excessive nitrate levels are observed in the central localities of the study area. The spatial distribution of nitrates in the waters of the Quaternary aquifer shows that the majority of the catchment points of this aquifer are subject to nitrate pollution. This study shows that, in the waters of the Terminal Complex aquifer, nitrate pollution evolves along two major axes: the first runs South-North, following the direction of underground flow, and the second runs West-East, progressing towards the eastern zone. The temporal distribution of nitrate contents in the water of the Terminal Complex aquifer in the city of El-Oued showed that, over the decades, nitrate levels declined after an initial increase. This evolution of nitrate levels is linked to demographic growth and the rapid urbanization of the city of El-Oued. Keywords: anthropogenic activities, groundwater, nitrates, pollution, arid zones, city of El-Oued, Algeria
Procedia PDF Downloads 56
1854 Spatial Variation of Nitrogen, Phosphorus and Potassium Contents of Tomato (Solanum lycopersicum L.) Plants Grown in Greenhouses (Springs) in Elmali-Antalya Region
Authors: Namik Kemal Sonmez, Sahriye Sonmez, Hasan Rasit Turkkan, Hatice Tuba Selcuk
Abstract:
In this study, the spatial variation of the plant and soil nutrient contents of tomato plants grown in greenhouses was investigated in the Elmalı region of Antalya. For this purpose, a total of 19 sampling points were determined. The coordinates of each sampling point were recorded using a hand-held GPS device and transferred onto satellite data in GIS. Soil samples were collected from two different depths, 0-20 and 20-40 cm, and leaf samples were taken from different tomato greenhouses. The soil and plant samples were analyzed for N, P and K. Attribute tables were then created with the analysis results using GIS. Data were analyzed, and the semivariogram models and parameters (nugget, sill and range) of the variables were determined using GIS software. Kriged maps of the variables were created using the nugget, sill and range values with the geostatistical extension of the ArcGIS software. Kriged maps of the N, P and K contents of the plant and soil samples showed patchy or relatively smooth distributions in the study areas. As a result, the N content of plants was sufficient in approximately 66% of the tomato production areas. The P and K contents were sufficient in 70% and 80% of the areas, respectively. On the other hand, soil total K contents were generally adequate, and available N and P contents were found to be more than adequate at both depths (0-20 and 20-40 cm) in 90% of the areas. Keywords: Elmali, nutrients, springs greenhouses, spatial variation, tomato
Procedia PDF Downloads 243
1853 The Assessment of Particulate Matter Pollution in Kaunas Districts
Authors: Audrius Dedele, Aukse Miskinyte
Abstract:
Air pollution is a major problem, especially in large cities, causing a variety of environmental issues and risks to human health. In order to observe air quality and to reduce and control air pollution in the city, municipalities are responsible for the creation of air quality management plans, air quality monitoring and emission inventories. Atmospheric dispersion modelling systems, along with monitoring, are powerful tools which can be used not only for air quality management but also for the assessment of human exposure to air pollution. These models are widely used in epidemiological studies, which try to determine the associations between exposure to air pollution and adverse health effects. The purpose of this study was to determine the concentration of particulate matter smaller than 10 μm (PM10) in different districts of Kaunas city during the winter season. The ADMS-Urban dispersion model was used for the simulation of PM10 pollution. The characteristics of stationary, traffic and domestic sources, emission data, meteorology and background concentrations were entered into the model as inputs. To assess the modelled concentrations of PM10 in Kaunas districts, a geographic information system (GIS) was used. More detailed analysis was made using Spatial Analyst tools. The modelling results showed that the average concentration of PM10 during the winter season in Kaunas city was 24.8 µg/m3. The highest PM10 levels were determined in the Zaliakalnis and Aleksotas districts, which have the highest numbers of individual residential properties, at 32.0±5.2 and 28.7±8.2 µg/m3, respectively. The lowest PM10 pollution was modelled in the Petrasiunai district (18.4 µg/m3), which is characterized as a commercial and industrial neighbourhood. Keywords: air pollution, dispersion model, GIS, particulate matter
Procedia PDF Downloads 269
1852 Exploring Alignability Effects and the Role of Information Structure in Promoting Uptake of Energy Efficient Technologies
Authors: Rebecca Hafner, David Elmes, Daniel Read
Abstract:
The current research applies decision-making theory to the problem of increasing uptake of energy-efficient technologies in the marketplace, where uptake is currently slower than one might predict following rational choice models. We apply the alignable/non-alignable features effect and explore the impact of varying information structure on consumers’ preference for standard versus energy-efficient technologies. In two studies we present participants with a choice between similar (boiler vs. boiler) vs. dissimilar (boiler vs. heat pump) technologies, described by a list of alignable and non-alignable attributes. In Study One there is a preference for alignability when options are similar, an effect mediated by an increased tendency to infer that missing information is the same. No effects of alignability on preference are found when options differ. One explanation for this split-shift in attentional focus is a change in construal levels, potentially induced by the added consideration of environmental concern. Study Two was designed to explore the interplay between alignability and construal level in greater detail. We manipulated construal level via a thought prime task prior to the same heating systems choice task, and find that there is a general preference for non-alignability, regardless of option type. We draw theoretical and applied implications for the type of information structure best suited to the promotion of energy-efficient technologies. Keywords: alignability effects, decision making, energy-efficient technologies, sustainable behaviour change
Procedia PDF Downloads 313
1851 Soil Salinity from Wastewater Irrigation in Urban Greenery
Authors: H. Nouri, S. Chavoshi Borujeni, S. Anderson, S. Beecham, P. Sutton
Abstract:
The potential risk of salt leaching through wastewater irrigation is of concern for most local governments and city councils. Despite the necessity of salinity monitoring and management in urban greenery, most attention has been paid to agricultural fields. This study was designed to investigate the capability and feasibility of monitoring and predicting soil salinity using near sensing and remote sensing approaches, based on EM38 surveys and high-resolution multispectral WorldView3 imagery. Veale Gardens, within the Adelaide Parklands, was selected as the experimental site. The results of the near sensing investigation were validated by testing soil salinity samples in the laboratory. Over 30 band combinations forming salinity indices were tested using image processing techniques. The outcomes of the remote sensing and near sensing approaches were compared to examine whether remotely sensed salinity indicators could map and predict the spatial variation of soil salinity through a potential statistical model. Statistical analysis was undertaken using the Stata 13 statistical package on over 52,000 points. Several regression models were fitted to the data, and mixed effects modelling was selected as the most appropriate because it takes into account the systematic observation-specific unobserved heterogeneity. Results showed that SAVI (Soil Adjusted Vegetation Index) was the only salinity index that could be considered a predictor of soil salinity, although further investigation is needed. Near sensing, however, was found to be a rapid, practical and realistically accurate approach for salinity mapping of heterogeneous urban vegetation. Keywords: WorldView3, remote sensing, EM38, near sensing, urban green spaces, green smart cities
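For reference, the Soil Adjusted Vegetation Index that emerged as the only candidate predictor is conventionally computed from the near-infrared (NIR) and red band reflectances as

\[
\mathrm{SAVI} = \frac{\mathrm{NIR} - \mathrm{Red}}{\mathrm{NIR} + \mathrm{Red} + L}\,(1 + L),
\]

where L is a soil-brightness correction factor, commonly set to 0.5. The abstract does not state which value of L was used, so the 0.5 default is an assumption here.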
Procedia PDF Downloads 162
1850 Assessing the Actions of the Farm Managers to Execute Field Operations at Opportune Times
Authors: G. Edwards, N. Dybro, L. J. Munkholm, C. G. Sørensen
Abstract:
Planning agricultural operations requires an understanding of when fields are ready for operations. However, determining a field’s readiness is a difficult process that can involve large amounts of data and an experienced farm manager. A consequence of this is that operations are often executed when fields are unready, or partially unready, which can compromise results, incurring environmental impacts, decreased yield and increased operational costs. In order to assess the timeliness of operations’ execution, a new scheme is introduced to quantify the aptitude of farm managers to plan operations. Two criteria are presented by which the execution of operations can be evaluated with respect to their exploitation of a field’s readiness window. A dataset containing the execution dates of spring and autumn operations on 93 fields in Iowa, USA, over two years, was considered as an example and used to demonstrate how operations’ executions can be evaluated. The execution dates were compared with simulated data to gain a measure of how disparate the actual execution was from the ideal execution. The presented tool is able to evaluate the spring operations better than the autumn operations, as the data required to correctly parameterise the crop model for autumn were lacking. Further work is needed on the underlying models of the decision support tool in order for its situational knowledge to emulate reality more consistently. However, the assessment methods and evaluation criteria presented offer a standard by which operations’ execution proficiency can be quantified and could be used to identify farm managers who require decisional support when planning operations, or as a means of incentivising and promoting the use of sustainable farming practices. Keywords: operation management, field readiness, sustainable farming, workability
Procedia PDF Downloads 387
1849 Unsteady Flow Simulations for Microchannel Design and Its Fabrication for Nanoparticle Synthesis
Authors: Mrinalini Amritkar, Disha Patil, Swapna Kulkarni, Sukratu Barve, Suresh Gosavi
Abstract:
Micro-mixers play an important role in lab-on-a-chip applications and micro total analysis systems in achieving the correct level of mixing for any given process. The mixing process can be classified as active or passive according to the use of external energy. The microfluidics literature reports that most work has been done on models of steady laminar flow; however, the study of unsteady laminar flow is an active area of research at present. Among the wide range of applications, we consider nanoparticle synthesis in micro-mixers. In this work, we have developed a model of unsteady flow to study the mixing performance of a passive micro-mixer for the reactants used in such synthesis. The model is developed in the Finite Volume Method (FVM)-based software OpenFOAM and is tested by carrying out simulations at a Reynolds number of 0.5. The mixing performance of the micro-mixer is investigated using simulated concentration values of the mixed species across the width of the micro-mixer and calculating the variance across a line profile. Experimental validation is done by passing dyes through a Y-shaped micro-mixer fabricated from polydimethylsiloxane (PDMS) polymer and comparing the variances with the simulated ones. Gold nanoparticles are then synthesized in the micro-mixer and collected at two different times, leading to significantly different size distributions. These times match the time scales over which the reactant concentrations vary, as obtained from simulations. Our simulations could thus be used to create design aids for passive micro-mixers used in nanoparticle synthesis. Keywords: lab-on-chip, LOC, micro-mixer, OpenFOAM, PDMS
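A minimal sketch of the mixing-performance metric described above: the variance of the simulated concentration sampled across the outlet width, reported here as a normalized mixing index. The sampling profile, the fully-mixed concentration of 0.5, and the normalization are illustrative assumptions rather than the authors' exact post-processing.

```python
import numpy as np

def mixing_index(concentrations, c_mixed=0.5):
    # 1 means perfectly mixed (zero variance across the profile), 0 means fully segregated.
    c = np.asarray(concentrations)
    sigma = np.sqrt(np.mean((c - c_mixed) ** 2))
    sigma_max = np.sqrt(c_mixed * (1 - c_mixed))  # deviation of a fully segregated stream
    return 1.0 - sigma / sigma_max

# Hypothetical concentration profile extracted along a line across the mixer outlet
profile = np.array([0.12, 0.25, 0.38, 0.47, 0.52, 0.61, 0.72, 0.85])
print("Mixing index:", round(mixing_index(profile), 3))
```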
Procedia PDF Downloads 161
1848 Efficient DNN Training on Heterogeneous Clusters with Pipeline Parallelism
Abstract:
Pipeline parallelism has been widely used to accelerate distributed deep learning, alleviating GPU memory bottlenecks and ensuring that models can be trained and deployed smoothly under limited graphics memory conditions. However, in highly heterogeneous distributed clusters, traditional model partitioning methods are not able to achieve load balancing, and the overlap of communication and computation is also a big challenge. In this paper, HePipe is proposed, an efficient pipeline parallel training method for highly heterogeneous clusters. According to the characteristics of the neural network model pipeline training task, and oriented to a 2-level heterogeneous cluster computing topology, a training method based on a 2-level stage division of the neural network model and its partitioning is designed to improve parallelism. Additionally, a multi-forward 1F1B scheduling strategy is designed to accelerate the training time of each stage by executing computation units in advance, maximizing the overlap between forward propagation communication and backward propagation computation. Finally, a dynamic recomputation strategy based on task memory requirement prediction is proposed to improve the fit between tasks and memory, which improves the throughput of the cluster and solves the memory shortfall problem caused by memory differences in heterogeneous clusters. The empirical results show that HePipe improves the training speed by 1.6×−2.2× over existing asynchronous pipeline baselines. Keywords: pipeline parallelism, heterogeneous cluster, model training, 2-level stage partitioning
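A minimal sketch of the kind of stage partitioning such a method performs: layers are split into consecutive pipeline stages so that each stage's estimated compute cost matches its device's relative speed. The per-layer cost model and the greedy balancing below are illustrative simplifications, not HePipe's actual 2-level partitioning algorithm.

```python
def partition_layers(layer_costs, device_speeds):
    # Greedily assign consecutive layers to devices in proportion to device speed.
    # layer_costs: per-layer compute cost estimates; device_speeds: relative throughputs.
    total_cost = sum(layer_costs)
    total_speed = sum(device_speeds)
    targets = [total_cost * s / total_speed for s in device_speeds]

    stages, current, stage_idx, acc = [], [], 0, 0.0
    for cost in layer_costs:
        current.append(cost)
        acc += cost
        # close the stage once its target share is reached (except on the last device)
        if stage_idx < len(targets) - 1 and acc >= targets[stage_idx]:
            stages.append(current)
            current, acc, stage_idx = [], 0.0, stage_idx + 1
    stages.append(current)
    return stages

# Hypothetical 8-layer model on a heterogeneous pair of devices (one 3x faster)
print(partition_layers([4, 4, 4, 4, 4, 4, 4, 4], device_speeds=[3, 1]))
# -> [[4, 4, 4, 4, 4, 4], [4, 4]]
```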
Procedia PDF Downloads 18
1847 Portfolio Optimization with Reward-Risk Ratio Measure Based on the Mean Absolute Deviation
Authors: Wlodzimierz Ogryczak, Michal Przyluski, Tomasz Sliwinski
Abstract:
In problems of portfolio selection, the reward-risk ratio criterion is optimized to search for a risky portfolio offering the maximum increase of the mean return, in proportion to the increase of the risk measure, when compared to risk-free investments. In the classical model, following Markowitz, the risk is measured by the variance, thus representing Sharpe ratio optimization and leading to quadratic optimization problems. Several Linear Programming (LP) computable risk measures have been introduced and applied in portfolio optimization. In particular, the Mean Absolute Deviation (MAD) measure has been widely recognized. The reward-risk ratio optimization with the MAD measure can be transformed into an LP formulation with the number of constraints proportional to the number of scenarios and the number of variables proportional to the total of the number of scenarios and the number of instruments. This may lead to LP models with a huge number of variables and constraints in the case of real-life financial decisions based on several thousand scenarios, thus decreasing their computational efficiency and making them hardly solvable by general LP tools. We show that the computational efficiency can then be dramatically improved by an alternative model based on the inverse risk-reward ratio minimization and by taking advantage of LP duality. In the introduced LP model, the number of structural constraints is proportional to the number of instruments, so the simplex method efficiency is not seriously affected by the number of scenarios, thereby guaranteeing easy solvability. Moreover, we show that under a natural restriction on the target value, the MAD risk-reward ratio optimization is consistent with the second order stochastic dominance rules. Keywords: portfolio optimization, reward-risk ratio, mean absolute deviation, linear programming
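For clarity, the quantities involved can be written out in a generic form; with portfolio return R_x for weights x, risk-free rate r_0, and MAD as the risk measure, the maximized criterion is (under assumed notation, a sketch rather than the authors' full formulation):

\[
\max_{x}\; \frac{\mathbb{E}[R_x] - r_0}{\mathrm{MAD}(R_x)},
\qquad
\mathrm{MAD}(R_x) = \mathbb{E}\big[\,\lvert R_x - \mathbb{E}[R_x] \rvert\,\big].
\]

The alternative model discussed above instead minimizes the inverse ratio MAD(R_x) / (E[R_x] − r_0), which, combined with LP duality, yields the formulation whose structural constraint count grows with the number of instruments rather than the number of scenarios.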
Procedia PDF Downloads 407
1846 Payments for Forest Environmental Services: Advantages and Disadvantages in the Different Mechanisms in Vietnam North Central Area
Authors: Huong Nguyen Thi Thanh, Van Mai Thi Khanh
Abstract:
Around the world, payments for environmental services have been implemented since the late 1970s, first in Europe and North America, then spreading to Latin America, Asia, Africa, and finally Oceania in 2008. In Vietnam, payments for environmental services have recently become a topic of interest, with forests as the main focus, and are therefore known as the program on payment for forest environmental services (PFES). PFES was piloted in Lam Dong and Son La in 2008 and has been widely applied in many provinces since 2010. PFES is oriented towards the socialization of national forest protection in Vietnam and has made great strides in the last decade. By using primary and secondary data together, the paper clarifies two cases of implementing PFES in the Vietnam North Central area with different payment mechanisms. In the first case, at Phu Loc district (Thua Thien Hue province), PFES follows an indirect method, with a water supply company paying via the Forest Protection and Development Fund. In the second, at Phong Nha – Ke Bang National Park (Quang Binh province), tourism companies pay forest owners directly. The paper describes the PFES implementation process at each site, clarifies the payment mechanism, and models the relationship between stakeholders in PFES implementation. Based on the current status of the PFES sites, the paper compares and analyzes the advantages and disadvantages of the two payment methods. Finally, the paper proposes recommendations to improve the existing shortcomings in each payment mechanism. Keywords: advantages and disadvantages, forest environmental services, forest protection, payment mechanism
Procedia PDF Downloads 129