Search results for: equivalent circuit models
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8210

4430 Unlocking Health Insights: Studying Data for Better Care

Authors: Valentina Marutyan

Abstract:

Healthcare data mining is a rapidly developing field at the intersection of technology and medicine that has the potential to change our understanding of and approach to providing healthcare. It is the process of examining huge amounts of data to extract useful information that can be applied to improve patient care, treatment effectiveness, and overall healthcare delivery. The field looks for patterns, trends, and correlations in a variety of healthcare datasets, such as electronic health records (EHRs), medical imaging, patient demographics, and treatment histories, using advanced analytical approaches. Predictive analysis using historical patient data is a major area of interest in healthcare data mining: it enables doctors to intervene early to prevent problems or improve outcomes for patients, and it assists in early disease detection and customized treatment planning for each person. Doctors can tailor a patient's care by looking at their medical history, genetic profile, and current and previous therapies, so that treatments are more effective and have fewer negative consequences. Beyond helping patients, it improves the efficiency of hospitals, for example by helping them determine the number of beds or doctors they require given the number of patients they expect. This project uses models such as logistic regression, random forests, and neural networks for predicting diseases and analyzing medical images. Clustering algorithms such as k-means grouped patients, and association rule mining identified connections between treatments and patient responses. Time series techniques helped in resource management by predicting patient admissions. These methods improved healthcare decision-making and personalized treatment.
Healthcare data mining must also deal with difficulties such as poor data quality, privacy challenges, managing large and complicated datasets, ensuring the reliability of models, managing biases, limited data sharing, and regulatory compliance. Ultimately, data mining in healthcare helps medical professionals and hospitals make better decisions, treat patients more effectively, and operate more efficiently. It comes down to using data to improve treatment, make better choices, and simplify hospital operations for all patients.
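As a concrete illustration of one of the models the abstract names, the sketch below fits a logistic regression disease-risk classifier by plain gradient descent. The data, feature names, and parameters are invented for illustration; this is not the project's pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "patient records": two scaled risk features (hypothetical)
n = 200
X = rng.normal(size=(n, 2))
# Invented ground truth: risk grows with both features, plus noise
y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Fit logistic regression by gradient descent on the log-loss
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = sigmoid(X @ w + b)
    w -= 0.5 * (X.T @ (p - y) / n)
    b -= 0.5 * np.mean(p - y)

train_acc = np.mean((sigmoid(X @ w + b) > 0.5) == y)
print(round(train_acc, 2))
```

The same fit-predict pattern carries over to the random forests and neural networks mentioned above, only with different model families.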

Keywords: data mining, healthcare, big data, large amounts of data

Procedia PDF Downloads 76
4429 Kinetic Modelling of Drying Process of Jumbo Squid (Dosidicus Gigas) Slices Subjected to an Osmotic Pretreatment under High Pressure

Authors: Mario Perez-Won, Roberto Lemus-Mondaca, Constanza Olivares-Rivera, Fernanda Marin-Monardez

Abstract:

This research presents the simultaneous application of high hydrostatic pressure (HHP) and osmotic dehydration (DO) as a pretreatment to hot-air drying of jumbo squid (Dosidicus gigas) cubes. The drying time was reduced to 2 hours at 60°C and 5 hours at 40°C compared to untreated jumbo squid samples. This reduction was due to the osmotic pressure developed under high-pressure treatment, where increased salt saturation caused greater water loss; convective drying time was therefore shortened, and effective water diffusivity plays an important role in this research. Different working conditions such as pressure (350-550 MPa), pressure holding time (5-10 min), salt (NaCl) concentration (10 and 15%), and drying temperature (40-60°C) were optimized according to the kinetic parameters of each mathematical model. The models used for the experimental drying curves were the Weibull, Page, and Logarithmic models; the latter best fitted the experimental data. The values for effective water diffusivity varied from 4.82 to 6.59×10⁻⁹ m²/s for the 16 curves (DO+HHP), whereas the control samples yielded 1.76 and 5.16×10⁻⁹ m²/s for 40 and 60°C, respectively. On the other hand, quality characteristics such as color, texture, non-enzymatic browning, water holding capacity (WHC), and rehydration capacity (RC) were assessed. The L* (lightness) color parameter increased, while the b* (yellowish) and a* (reddish) parameters decreased for the DO+HHP treated samples, indicating that the treatment prevents sample browning. Texture parameters such as hardness and elasticity decreased, but chewiness increased with treatment, which resulted in a product with higher tenderness and less firmness than the untreated sample. Finally, the WHC and RC values of most treatments increased owing to less cellular tissue damage than in untreated samples.
Therefore, knowledge of the drying kinetics and quality characteristics of dried jumbo squid subjected to a pretreatment of osmotic dehydration under high hydrostatic pressure is extremely important at an industrial level, so that the drying process can succeed under different pretreatment conditions and process variables.
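The kind of model fitting described above can be sketched briefly. The code below fits the Page model MR = exp(-k·tⁿ) to a hypothetical drying curve via the linearization ln(-ln MR) = ln k + n·ln t, and estimates an effective diffusivity from the first term of Fick's slab solution. All data and the slab half-thickness are assumed values, not the paper's measurements.

```python
import numpy as np

t = np.array([0.5, 1.0, 2.0, 3.0, 4.0, 5.0])         # drying time, h (assumed)
MR = np.array([0.82, 0.65, 0.40, 0.24, 0.15, 0.09])  # moisture ratio (assumed)

# Page model, linearized: ln(-ln MR) = ln k + n ln t
n_page, ln_k = np.polyfit(np.log(t), np.log(-np.log(MR)), 1)
k_page = np.exp(ln_k)

# Effective diffusivity from the slope of ln MR vs t, first term of
# Fick's solution for a slab: MR = (8/pi^2) exp(-pi^2 De t / (4 L^2))
L = 0.005                                          # assumed half-thickness, m
slope, _ = np.polyfit(t * 3600.0, np.log(MR), 1)   # slope per second
De = -slope * 4 * L**2 / np.pi**2                  # m^2/s
print(k_page, n_page, De)
```

With the assumed numbers, De lands in the 10⁻⁹ m²/s range, the same order of magnitude the abstract reports for squid slices.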

Keywords: diffusion coefficient, drying process, high pressure, jumbo squid, modelling, quality aspects

Procedia PDF Downloads 246
4428 The Voiceless Dental-Alveolar Common Augment in Arabic and Other Semitic Languages: A Morphophonemic Comparison

Authors: Tarek Soliman Mostafa Soliman Al-Nana'i

Abstract:

There are non-steady voiced augments in the Semitic languages. In morphological and structural augmentation, two sounds serve as augments in all Semitic languages at the level of the spoken language, and two letters at the level of the written language: the hamza and the ta'. This research studies only the second of them; we therefore define it as “the voiceless dental-alveolar common augment” (VDACA) to distinguish it from the glottal sound (hamza), whether first, middle, or last, in a noun or in a verb, in Arabic and its equivalents in the Semitic languages. What is meant by “VDACA” is the ta' that is added to the root of the word at the morphological level: the word “voiceless” excludes the voiced sounds studied before; “dental-alveolar” excludes the laryngeal sound among them, the hamza; and “common” excludes the uncommon voiceless sounds, namely sīn, shīn, and hā'. The study is limited to the ta' alone among the Arabic sounds, and this title faced a problem in identifying the augment with the ta', because the designation of the ta' is not the same in most Semitic languages. Hebrew, for example, has “tav”, pronounced with the voiced (v) sound, which does not exist in Arabic; the letter is called by different names in other Semitic languages, such as “taw” or “tau” in Old Syriac, and so on. This goes hand in hand with the insistence on distance from the written level and the reliance on the phonetic aspect in this study, which is closely linked to the morphological level; the study is therefore “morphophonemic”. The Semitic languages considered in this study are Akkadian, Ugaritic, Hebrew, Syriac, Mandaean, Ge'ez, and Amharic. The problem of the study is the agreement or difference among these languages in the position of that augment (first, middle, or last), and the determination of the characteristics distinguishing each language from the others.
The study methodology is the comparative approach across the Semitic languages, which builds on a descriptive approach for each language. The study is divided into an introduction, four sections, and a conclusion. The introduction covers the subject of the study, its importance, motives, problem, methodology, and division. The first section treats VDACA as a non-common phoneme; the second, VDACA as a common phoneme; the third, VDACA as a functional morpheme; and the fourth offers commentary and a conclusion with the most important results. The positions of VDACA in Arabic and the other Semitic languages, in nouns and verbs, are limited to first, middle, and last. The research identified the individual addition that is shared with other augments, and it proved that this augmentation is constant in all Semitic languages, while characteristics remain that distinguish each language from the others.

Keywords: voiceless, dental-alveolar, augment, Arabic, Semitic languages

Procedia PDF Downloads 73
4427 Hydrogen Production Using an Anion-Exchange Membrane Water Electrolyzer: Mathematical and Bond Graph Modeling

Authors: Hugo Daneluzzo, Christelle Rabbat, Alan Jean-Marie

Abstract:

Water electrolysis is one of the most advanced technologies for producing hydrogen and can easily be combined with electricity from different sources. Under the influence of an electric current, water molecules are split into oxygen and hydrogen. The production of hydrogen by water electrolysis favors the integration of renewable energy sources into the energy mix by compensating for their intermittency: the energy produced is stored when production exceeds demand and released during off-peak production periods. Among the various electrolysis technologies, anion exchange membrane (AEM) electrolyzer cells are emerging as a reliable technology for water electrolysis. Modeling and simulation are effective tools for saving time, money, and effort during the optimization of operating conditions and the investigation of the design, and they become even more important when dealing with multiphysics dynamic systems. One such system is the AEM electrolysis cell, which involves complex physico-chemical reactions. Once developed, models may be used to understand the underlying mechanisms and to control the system and detect its flaws. Several modeling methods have been proposed; they can be separated into two main approaches, namely equation-based modeling and graph-based modeling. The former approach is less user-friendly and difficult to update, as it represents the system with ordinary or partial differential equations, whereas the latter is more user-friendly and allows a clear representation of physical phenomena. In the graph-based case, the system is depicted by connecting subsystems, so-called blocks, through ports based on their physical interactions, which makes it suitable for multiphysics systems. Among the graphical modeling methods, the bond graph is receiving increasing attention for being domain-independent and for relying on the energy exchange between the components of the system.
At present, few studies have investigated the modeling of AEM systems. A mathematical model and a bond graph model were used in previous studies to model electrolysis cell performance. In this study, experimental data from the literature were simulated in OpenModelica using both bond graph and mathematical approaches. The polarization curves at different operating conditions obtained by both approaches were compared with experimental ones. Both models predicted the polarization curves satisfactorily, with error margins lower than 2% for the equation-based model and lower than 5% for the bond graph model. The activation polarization of the hydrogen evolution reaction (HER) and the oxygen evolution reaction (OER) was behind the voltage loss in the AEM electrolyzer, whereas ion conduction through the membrane resulted in the ohmic loss. Therefore, highly active electro-catalysts are required for both HER and OER, while high-conductivity AEMs are needed to effectively lower the ohmic losses. The bond graph simulation of the polarization curve at various temperatures illustrated that voltage increases with temperature owing to the technology of the membrane. Simulating the polarization curve allows designs to be tested virtually, reducing the cost and time of experimental testing and improving design optimization. Further improvements can be made by implementing the bond graph model in a real power-to-gas-to-power scenario.
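The voltage contributions described above (reversible voltage, Tafel-type activation overpotentials for HER and OER, and the membrane's ohmic loss) can be sketched as a minimal equation-based model. All kinetic and resistance parameters below are assumed round numbers, not the paper's fitted values.

```python
import numpy as np

F = 96485.0   # Faraday constant, C/mol
R = 8.314     # gas constant, J/(mol K)

def cell_voltage(j, T=333.15, j0_oer=1e-7, j0_her=1e-4,
                 alpha_a=0.5, alpha_c=0.5, r_ohm=0.15):
    """Cell voltage (V) at current density j (A/cm^2); parameters assumed."""
    e_rev = 1.229 - 0.9e-3 * (T - 298.15)                 # approx. reversible voltage
    eta_oer = (R * T / (alpha_a * F)) * np.log(j / j0_oer)  # anode activation
    eta_her = (R * T / (alpha_c * F)) * np.log(j / j0_her)  # cathode activation
    return e_rev + eta_oer + eta_her + r_ohm * j            # r_ohm: area resistance, ohm cm^2

js = np.array([0.1, 0.5, 1.0])
vs = [cell_voltage(j) for j in js]
print([round(v, 2) for v in vs])
```

The logarithmic activation terms dominate at low current density and the linear ohmic term at high current density, reproducing the usual shape of a polarization curve.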

Keywords: hydrogen production, anion-exchange membrane, electrolyzer, mathematical modeling, multiphysics modeling

Procedia PDF Downloads 93
4426 Determining a Sustainability Business Model Using Materiality Matrices in an Electric Bus Factory

Authors: Ozcan Yavas, Berrak Erol Nalbur, Sermin Gunarslan

Abstract:

A materiality matrix is a tool that organizations use to prioritize their activities and adapt to the increasing sustainability requirements of recent years. For the materiality matrix to carry a business model to the sustainability business model stage, it must be applied with all partners across the raw material, supply, production, product, and end-of-life stages. Within the scope of this study, a materiality matrix was used to transform the business model of a factory producing electric buses into a sustainability business model and to create a sustainability roadmap. Such a matrix determines the roadmap needed for all stakeholders to participate in the process and to act together with the cradle-to-cradle approach of sustainability roadmaps, especially in sectors that produce sustainable products, such as the electric vehicle sector. Global Reporting Initiative analysis was used in the study, conducted with 1,150 stakeholders, and 43 questions were asked under the main headings of 'Legal Compliance Level,' 'Environmental Strategies,' 'Risk Management Activities,' 'Impact of Sustainability Activities on Products and Services,' 'Corporate Culture,' 'Responsible and Profitable Business Model Practices,' and 'Achievements in Leading the Sector,' grouped into Economic, Governance, Environmental, Social, and Other dimensions. Based on the results, five first-priority issues and four second-priority issues were targeted for inclusion in the organization's sustainability strategies in the short and medium term. When the short-term work is evaluated in terms of sustainability and environmental risk management, it is seen that activities are still limited to the level of legal compliance (60%) and to individual initiatives in line with the strategies (20%).
At the same time, stakeholders expect the company to integrate sustainability activities into its business model within five years (35%) and to carry out projects to become the first company that comes to mind as the sector leader (20%). Another result obtained within the study's scope is the identification of barriers to implementation: alongside climate change and environmental impacts, the most critical obstacles identified by stakeholders are financial shortfalls and the lack of infrastructure for disseminating sustainable products. These findings are critical for the electric vehicle sector's transition to sustainable business models in pursuit of the EU Green Deal and CBAM targets.
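The prioritization step at the heart of a materiality matrix can be sketched simply: each issue is scored on two axes (importance to stakeholders, impact on the business) and the top-right corner of the matrix is cut into priority tiers. The issues and scores below are invented for illustration, not the study's survey results.

```python
# issue: (stakeholder importance, business impact), both on a 1-5 scale
issues = {
    "legal compliance":         (4.8, 4.6),
    "carbon neutrality":        (4.5, 4.2),
    "supply-chain circularity": (4.1, 4.4),
    "battery end-of-life":      (4.3, 4.0),
    "water use":                (3.2, 2.9),
    "community engagement":     (2.8, 3.1),
}

def tier(scores, hi=4.0, mid=3.0):
    """Assign a priority tier from a pair of axis scores."""
    s, b = scores
    if s >= hi and b >= hi:
        return 1   # first priority: high on both axes
    if s >= mid and b >= mid:
        return 2   # second priority
    return 3       # monitor only

for name in sorted(issues, key=lambda k: tier(issues[k])):
    print(tier(issues[name]), name)
```

In practice, the axis scores would come from aggregating stakeholder survey responses such as the 43-question instrument described above.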

Keywords: sustainability business model, materiality matrix, electric bus, carbon neutrality, sustainability management

Procedia PDF Downloads 62
4425 Early Prediction of Disposable Addresses in Ethereum Blockchain

Authors: Ahmad Saleem

Abstract:

Ethereum is the second-largest cryptocurrency in the blockchain ecosystem. Along with standard transactions, it supports smart contracts and NFTs. Current research trends focus on analyzing the overall structure of the network, its growth, and its behavior. Ethereum addresses are anonymous and can be created on the fly; the nature of the Ethereum network and its addresses makes their behavior hard to predict, and the activity period of an Ethereum address has not been analyzed much. Using machine learning, we can make early predictions about the disposability of an address. In this paper, we analyzed the lifetime of addresses, identified and predicted disposable addresses using machine learning models, and compared the results.
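A sketch of the kind of early prediction the abstract describes: compute features from only the first hour of an address's activity, then flag addresses that look disposable. The record format, features, and rule are invented for illustration; the paper's actual features and models are not specified here.

```python
from dataclasses import dataclass

@dataclass
class Tx:
    timestamp: int   # unix seconds
    value_wei: int
    outgoing: bool

def early_features(txs, window=3600):
    """Features from the first `window` seconds of activity only,
    so the prediction can be made early in the address's life."""
    txs = sorted(txs, key=lambda t: t.timestamp)
    t0 = txs[0].timestamp
    early = [t for t in txs if t.timestamp - t0 <= window]
    n_out = sum(t.outgoing for t in early)
    return {
        "n_early": len(early),
        "out_ratio": n_out / len(early),
        # one deposit immediately swept back out
        "drained": early[-1].outgoing and n_out == 1 and len(early) == 2,
    }

def looks_disposable(f):
    # Hypothetical threshold rule; a trained classifier would replace this
    return f["drained"] or (f["n_early"] <= 2 and f["out_ratio"] >= 0.5)

txs = [Tx(1000, 10**18, False), Tx(1200, 10**18, True)]  # deposit, then sweep
print(looks_disposable(early_features(txs)))
```

In a learned setting, feature dictionaries like these would be the input rows to the machine learning models the paper compares.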

Keywords: blockchain, Ethereum, cryptocurrency, prediction

Procedia PDF Downloads 98
4424 A Unified Approach for Digital Forensics Analysis

Authors: Ali Alshumrani, Nathan Clarke, Bogdan Ghite, Stavros Shiaeles

Abstract:

Digital forensics has become an essential tool in the investigation of cyber and computer-assisted crime. Arguably, given the prevalence of technology and the subsequent digital footprints that exist, it could have a significant role across almost all crimes. However, the variety of technology platforms (such as computers, mobiles, Closed-Circuit Television (CCTV), Internet of Things (IoT), databases, drones, cloud computing services), the heterogeneity and volume of data, forensic tool capability, and the investigative cost make investigations both technically challenging and prohibitively expensive. Forensic tools also tend to be siloed into specific technologies, e.g., File System Forensic Analysis Tools (FS-FAT) and Network Forensic Analysis Tools (N-FAT), and many data sources have little to no specialist forensic tooling. It is also increasingly essential to compare and correlate evidence across data sources, and to do so efficiently and effectively, enabling an investigator to answer high-level questions of the data in a timely manner without having to trawl through it and perform the correlation manually. This paper proposes a Unified Forensic Analysis Tool (U-FAT), which aims to establish a common language for electronic information and permit multi-source forensic analysis. Core to this approach is the identification and development of forensic analyses that automate complex data correlations, enabling investigators to investigate cases more efficiently. The paper presents a systematic analysis of major crime categories and identifies what forensic analyses could be used.
For example, in a child abduction case, an investigation team might have evidence from a range of sources including computing devices (mobile phone, PC), CCTV (potentially a large number of cameras), ISP records, and mobile network cell tower data, in addition to third-party databases such as the National Sex Offender Registry and tax records, with the desire to auto-correlate across sources and visualize the results in a cognitively effective manner. U-FAT provides a holistic, flexible, and extensible approach to digital forensics in a technology-, application-, and data-agnostic manner, delivering powerful and automated forensic analysis.
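The core of such cross-source correlation can be sketched as normalizing events from heterogeneous sources into a common shape and merging them into one timeline keyed on an entity of interest. The record formats, field names, and identifiers below are all hypothetical; this is not U-FAT's actual data model.

```python
from datetime import datetime

def normalize(source, raw):
    """Map a source-specific record to a common (time, source, entity, note) tuple."""
    if source == "cctv":
        return (raw["seen_at"], source, raw["plate"], "vehicle sighted")
    if source == "cell":
        return (raw["ts"], source, raw["imsi"], f"attached to tower {raw['tower']}")
    if source == "isp":
        return (raw["time"], source, raw["account"], f"login from {raw['ip']}")
    raise ValueError(source)

events = [
    normalize("cctv", {"seen_at": datetime(2024, 1, 5, 14, 2), "plate": "AB12CDE"}),
    normalize("cell", {"ts": datetime(2024, 1, 5, 13, 55), "imsi": "XYZ", "tower": "T17"}),
    normalize("isp",  {"time": datetime(2024, 1, 5, 14, 10), "account": "XYZ", "ip": "203.0.113.9"}),
]

# Unified, time-ordered timeline for one entity of interest
timeline = sorted(e for e in events if e[2] == "XYZ")
for ts, source, entity, note in timeline:
    print(ts, source, note)
```

Entity resolution across sources (linking a plate, an IMSI, and an account to one suspect) is the hard part a real tool would automate; here it is reduced to a shared identifier for clarity.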

Keywords: digital forensics, evidence correlation, heterogeneous data, forensics tool

Procedia PDF Downloads 196
4423 Predicting Aggregation Propensity from Low-Temperature Conformational Fluctuations

Authors: Hamza Javar Magnier, Robin Curtis

Abstract:

There have been rapid advances in the upstream processing of protein therapeutics, which have shifted the bottleneck to downstream purification and formulation. Finding liquid formulations with shelf lives of up to two years is increasingly difficult for some of the newer therapeutics, which have been engineered for activity but whose formulations are often viscous, can phase separate, and have a high propensity for irreversible aggregation [1]. We explore means to develop improved predictive ability from a better understanding of how protein-protein interactions depend on formulation conditions (pH, ionic strength, buffer type, presence of excipients) and how these impact the initial steps in protein self-association and aggregation. In this work, we study the initial steps in the aggregation pathways using a minimal protein model based on square-well potentials and discontinuous molecular dynamics. The effect of model parameters, including the range of interaction, stiffness, chain length, and chain sequence, implies that protein models fold according to various pathways. By reducing the range of interactions, the folding and collapse transitions come together and follow a single-step folding pathway from the denatured to the native state [2]. After parameterizing the model interaction parameters, we developed an understanding of low-temperature conformational properties and fluctuations and their correlation to the folding transition of proteins in isolation. The model fluctuations increase with temperature, and we observe a low-temperature point below which large fluctuations are frozen out. This implies that fluctuations at low temperature can be correlated to the folding transition at the melting temperature. Because proteins "breathe" at low temperatures, defining a native state as a single structure with conserved contacts and a fixed three-dimensional structure is misleading.
Rather, we introduce a new definition of a native-state ensemble based on our understanding of core conservation, which takes into account the native fluctuations at low temperatures. This approach permits the study of the large range of length and time scales needed to link the molecular interactions to the macroscopically observed behaviour. In addition, the models studied are parameterized by fitting to experimentally observed protein-protein interactions characterized in terms of osmotic second virial coefficients.
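The square-well pair potential underlying the minimal model named above is simple enough to state directly: a hard core, an attractive well of finite range, and zero beyond. The parameter values here are illustrative defaults, not the authors' fitted ones.

```python
def square_well(r, sigma=1.0, lam=1.5, eps=1.0):
    """Square-well pair potential: hard core for r < sigma, well of depth
    eps for sigma <= r < lam*sigma, zero beyond. The well range lam is
    the 'range of interaction' parameter varied in such models."""
    if r < sigma:
        return float("inf")   # hard-core overlap
    if r < lam * sigma:
        return -eps           # attractive well
    return 0.0

for r in (0.5, 1.2, 2.0):
    print(r, square_well(r))
```

Because the potential is piecewise constant, particle dynamics reduce to a sequence of discrete collision events, which is exactly what makes discontinuous molecular dynamics efficient for this model class.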

Keywords: protein folding, native-ensemble, conformational fluctuation, aggregation

Procedia PDF Downloads 362
4422 Accreditation and Quality Assurance of Nigerian Universities: The Management Imperative

Authors: F. O. Anugom

Abstract:

The general functions of the university include, among other things, teaching, research, and community service. Universities are recognized as the apex of learning, accumulating and imparting knowledge and skills of all kinds to students to enable them to be productive, earn their living, and make optimum contributions to national development. This is equivalent to the production of human capital in the form of the high-level manpower needed to administer the educational system, be useful to society, and manage the economy. Quality has become a matter of major importance for university education in Nigeria. Accreditation is the systematic review of educational programmes to ensure that acceptable standards of education, scholarship, and infrastructure are being maintained; it ensures that institutions maintain quality. The process is designed to determine whether or not an institution has met or exceeded the published standards for accreditation, and whether it is achieving its mission and stated purposes. Ensuring quality assurance in the accreditation process falls into the hands of university management, which justified the need for this study, which examined accreditation and quality assurance as a management imperative. Three research questions and three hypotheses guided the study. The design was a correlation survey with a population of 2,893 university administrators, out of which 578 heads of department and deans of faculties were sampled. The instrument for data collection, titled the Programme Accreditation Exercise Scale, showed high reliability. The research questions were answered with Pearson's r statistics, and t-test statistics were used to test the hypotheses. It was found, among other things, that the quality of accredited programmes depends on the level of funding of universities in Nigeria, and that the quality of programme accreditation and the physical facilities of universities in Nigeria are highly related.
It was also revealed that programme accreditation is positively related to staffing in Nigerian universities. Based on the findings of the study, the researcher recommends that academic administrators be included in the teams that ensure quality programmes in the universities, that private-sector partnerships be encouraged to fund programmes and so ensure their quality, and that independent agencies be engaged to monitor the activities of accreditation teams to avoid bias.

Keywords: accreditation, quality assurance, national universities commission, physical facilities, staffing

Procedia PDF Downloads 194
4421 Heat Transfer and Trajectory Models for a Cloud of Spray over a Marine Vessel

Authors: S. R. Dehghani, G. F. Naterer, Y. S. Muzychka

Abstract:

Wave impact creates many sea spray droplets, which form a spray cloud traveling over marine objects such as vessels and offshore structures. In cold climates such as Arctic regions, sea spray icing, which is ice accretion on cold substrates, depends strongly on wave-impact sea spray. The rate of cooling of droplets affects the icing process, which can yield dry or wet ice accretion, while the trajectories of droplets determine the potential places for ice accretion. Combining trajectory and heat transfer models for droplets can therefore predict the risk of ice accretion reasonably well. The majority of droplet cooling is due to evaporation. In this study, a combined trajectory and heat transfer model evaluates the evolution of a spray cloud from generation to impingement. The model uses known geometry and initial information from previous case studies, and the 3D model is solved numerically using a standard numerical scheme. Droplets are generated in sizes ranging from 7 mm to 0.07 mm, a suggested range for sea spray icing. The initial temperature of the droplets is taken to be the seawater temperature, and wind velocities are assumed to match field observations. Evaluations are conducted for several important heading angles and wind velocities. A size-velocity dependence is used to establish a relation between the initial sizes and velocities of droplets, and time intervals are chosen to maintain a stable and fast numerical solution. A statistical process evaluates the probability of expected occurrences. The medium-sized droplets reach the greatest heights, whereas very small and very large droplets are limited to lower heights. Results show that higher initial velocities create the most expanded spray cloud, and wind velocities affect the extent of the cloud. The rate of droplet cooling at the start of spray formation is higher than during the rest of the process.
This is because of higher relative velocities and also higher temperature differences. The amount of water delivered and the overall temperature for some sample surfaces on a marine vessel are calculated. Comparison of the results with field observations shows that the model works accurately, and it is suggested as a primary model for ice accretion on marine vessels.
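A minimal 2D sketch of the trajectory side of such a model: a droplet subject to gravity and quadratic drag against the relative wind, integrated with forward Euler. Densities, drag coefficient, and launch conditions are assumed round values, not the study's inputs, and the heat transfer coupling is omitted.

```python
import numpy as np

def simulate(d_m, v0, wind, rho_w=1025.0, rho_a=1.3, cd=0.5, dt=1e-3, t_end=2.0):
    """Trajectory of one spherical droplet of diameter d_m (m) in an (x, y)
    plane; v0 and wind are (x, y) velocities in m/s. Assumed parameters."""
    m = rho_w * np.pi * d_m**3 / 6        # droplet mass, kg
    A = np.pi * d_m**2 / 4                # frontal area, m^2
    x = np.zeros(2)
    v = np.array(v0, float)
    w = np.array(wind, float)
    traj = [x.copy()]
    for _ in range(int(t_end / dt)):
        vr = v - w                        # velocity relative to the wind
        drag = -0.5 * rho_a * cd * A * np.linalg.norm(vr) * vr / m
        v = v + (drag + np.array([0.0, -9.81])) * dt
        x = x + v * dt
        traj.append(x.copy())
        if x[1] < 0:                      # droplet back at deck level
            break
    return np.array(traj)

# a 1 mm droplet launched upward into a 10 m/s following wind
path = simulate(1e-3, v0=(5.0, 10.0), wind=(10.0, 0.0))
print(path[-1])
```

Running this over a distribution of droplet sizes and initial velocities, as the abstract describes, yields the statistics of where spray lands on the vessel.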

Keywords: evaporation, sea spray, marine icing, numerical solution, trajectory

Procedia PDF Downloads 220
4420 Evaluation of Potential of Crop Residues for Energy Generation in Nepal

Authors: Narayan Prasad Adhikari

Abstract:

In Nepal, crop residues have often been considered one of the potential sources of energy for coping with the prevailing energy crisis. However, the lack of systematic studies of the production and the various competing uses of crop residues is the main obstacle to evaluating the net potential of the residues for energy production. Against this background, this study assesses the net annual availability of crop residues for energy production in three districts representing the country's three major regions of lowland, hill, and mountain. The five major cereal crops of paddy, wheat, maize, millet, and barley are considered in the analysis, which is based on two modes of household survey. In the first mode, a total of 240 households were surveyed to obtain key information about crop harvesting and livestock management throughout a year. In the second mode, the main crops and their residues were quantified on fixed plots of land in 45 households; the plot areas ranged from 50 to 100 m². Measurements were made on an air-dry basis, and the quantities devoted to competing uses of the respective crop residues were estimated from respondents' feedback. There are four major competing uses of crop residues at the household level: building material, burning, selling, and livestock fodder. The results reveal that the net annual available crop residues per household are 4,663 kg, 2,513 kg, and 1,731 kg in the lowland, hill, and mountain regions, respectively. Of the total production of crop residues, the shares of dedicated fodder crop residues (all except maize stalk and maize cob) are 94 %, 62 %, and 89 % in lowland, hill, and mountain respectively, of which the corresponding shares used as fodder are 87 %, 91 %, and 82 %.
The annual per capita energy equivalents of the net available crop residues in the lowland, hill, and mountain regions are 2.49 GJ, 3.42 GJ, and 0.44 GJ, representing 30 %, 33 %, and 3 % of total annual energy consumption respectively, whereas the corresponding current shares of crop residues are only 23 %, 8 %, and 1 %. Hence, even the utmost exploitation of available crop residues can hardly contribute one third of energy consumption at the household level in the lowland and hill regions, while the contribution in the mountain region is all but negligible. Moreover, further analysis has been done to evaluate the district-wise supply-demand balance of dedicated fodder crop residues on the basis of livestock numbers. A high deficit of fodder crop residues is observed in the hill and mountain regions, where generating energy from these residues would be impractical. On the contrary, the annual production of such residues for livestock fodder in the lowland meets annual demand with a modest surplus, even if all fodder were derived from the residues throughout the year; there thus seems to be further potential to utilize the surplus residues for energy generation.
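The energy-equivalent conversion behind figures like these is simple arithmetic: residue mass times an assumed heating value, divided by household size. The heating value and household size below are round assumptions for illustration, not the study's parameters, so the output is not meant to reproduce the reported GJ figures.

```python
def residue_energy_gj(residue_kg, lhv_mj_per_kg=14.0):
    """Energy content (GJ) of an air-dry residue mass, given an assumed
    lower heating value in MJ/kg."""
    return residue_kg * lhv_mj_per_kg / 1000.0

def per_capita_gj(residue_kg, household_size=5, lhv_mj_per_kg=14.0):
    """Per-capita annual energy equivalent (GJ/person) for one household."""
    return residue_energy_gj(residue_kg, lhv_mj_per_kg) / household_size

# e.g. 1,000 kg of net residue at 14 MJ/kg shared by a household of five
print(round(per_capita_gj(1000), 2))  # 2.8 GJ per person
```

Substituting the measured net residue masses and region-specific household sizes would give region-wise per-capita figures of the kind reported above.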

Keywords: crop residues, hill, lowland, mountain

Procedia PDF Downloads 473
4419 Employer Brand Image and Employee Engagement: An Exploratory Study in Britain

Authors: Melisa Mete, Gary Davies, Susan Whelan

Abstract:

Maintaining a good employer brand image is crucial for companies, since it has numerous advantages such as better recruitment, retention, and employee engagement and commitment. This study aims to understand the relationship between employer brand image and employee satisfaction and engagement in the British context. Panel survey data (N=228) are tested via regression models from the Hayes (2012) PROCESS macro in IBM SPSS 23.0. The results are statistically significant and show that the more positive the employer brand image, the greater employees' engagement and satisfaction; and the greater the employee satisfaction, the greater their engagement.
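The regression logic behind a PROCESS-style mediation model (image → satisfaction → engagement) can be sketched with two ordinary least-squares fits, with the indirect effect estimated as the product of the two paths. The data below are synthetic with invented path coefficients; the actual analysis ran in SPSS on the panel data.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 228  # matching the abstract's sample size, data itself synthetic
image = rng.normal(size=n)
satisfaction = 0.6 * image + rng.normal(scale=0.5, size=n)                      # path a
engagement = 0.5 * satisfaction + 0.2 * image + rng.normal(scale=0.5, size=n)   # paths b, c'

def ols(X, y):
    """Least-squares coefficients [intercept, slopes...] for predictors X."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta

a = ols(image, satisfaction)[1]                             # image -> satisfaction
beta = ols(np.column_stack([satisfaction, image]), engagement)
b, c_prime = beta[1], beta[2]                               # satisfaction -> engagement, direct
print(round(a * b, 2))  # estimated indirect effect of image on engagement
```

The PROCESS macro additionally bootstraps a confidence interval around the indirect effect a·b, which this sketch omits.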

Keywords: employer brand, employer brand image, employee engagement, employee satisfaction

Procedia PDF Downloads 337
4418 Association between Noise Levels, Particulate Matter Concentrations and Traffic Intensities in a Near-Highway Urban Area

Authors: Mohammad Javad Afroughi, Vahid Hosseini, Jason S. Olfert

Abstract:

Both traffic-generated particles and noise have been associated with the development of cardiovascular diseases, especially in near-highway environments. Although noise and particulate matter (PM) have different dispersion mechanisms, sharing the same emission source in urban areas (road traffic) can result in a similar degree of variability in their levels. This study investigated the temporal variation of, and correlation between, noise levels, PM concentrations, and traffic intensities near a major highway in Tehran, Iran. Tehran's particulate concentrations are highly influenced by road traffic, and its ultrafine particles (UFP, PM < 0.1 µm) are mostly emitted by the combustion processes of motor vehicles. This suggests a strong association between traffic-related noise and UFP in near-highway environments of this megacity. The hourly averages of the equivalent continuous sound pressure level (Leq), the total number concentration of UFPs, the mass concentrations of PM2.5 and PM10, and traffic count and speed were measured simultaneously over a period of three days in winter. Additionally, meteorological data including temperature, relative humidity, wind speed, and wind direction were collected at a weather station located 3 km from the monitoring site. Noise levels showed relatively low temporal variability in the near-highway environment compared to PM concentrations. The hourly average Leq ranged from 63.8 to 69.9 dB(A) (mean ~ 68 dB(A)), while hourly particle concentrations varied from 30,800 to 108,800 cm⁻³ for UFP (mean ~ 64,500 cm⁻³), 41 to 75 µg m⁻³ for PM2.5 (mean ~ 53 µg m⁻³), and 62 to 112 µg m⁻³ for PM10 (mean ~ 88 µg m⁻³). The Pearson correlation coefficient revealed a strong overall relationship between noise and UFP (r ~ 0.61). Under downwind conditions, UFP number concentration showed the strongest association with noise level (r ~ 0.63).
The coefficient decreased to a lesser degree under upwind conditions (r ~ 0.24) due to the significant role of wind and humidity in UFP dynamics. Furthermore, PM2.5 and PM10 correlated moderately with noise (r ~ 0.52 and 0.44 respectively). In general, traffic counts were more strongly associated with noise and PM compared to traffic speeds. It was concluded that noise level combined with meteorological data can be used as a proxy to estimate PM concentrations (specifically UFP number concentration) in near-highway environments of Tehran. However, it is important to measure joint variability of noise and particles to study their health effects in epidemiological studies.
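The study's headline statistic, the Pearson coefficient between hourly noise and particle levels, can be sketched in a few lines; the hourly values below are illustrative placeholders, not the measured Tehran data:

```python
def pearson_r(x, y):
    # Pearson correlation coefficient between two equal-length series
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

# Hypothetical hourly values: Leq in dB(A) and UFP counts in cm^-3
leq = [63.8, 65.2, 66.7, 68.1, 69.0, 69.9]
ufp = [30800.0, 41000.0, 55000.0, 70000.0, 88000.0, 108800.0]
r = pearson_r(leq, ufp)
```

In practice the same computation would be run separately for downwind and upwind hours to reproduce the stratified coefficients reported above.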

Keywords: noise, particulate matter, PM10, PM2.5, ultrafine particle

Procedia PDF Downloads 194
4417 Fast Track to the Physical Internet: A Cross-Industry Project from Upper Austria

Authors: Laura Simmer, Maria Kalt, Oliver Schauer

Abstract:

Freight transport is growing fast, but many vehicles are empty or only partially loaded. The vision and concepts of the Physical Internet (PI) propose to eliminate these inefficiencies. Aiming for a radical sustainability improvement, the PI – inspired by the Digital Internet – is a hyperconnected global logistics system enabling seamless asset sharing and flow consolidation. The implementation of the PI in its full expression will be a huge challenge: the industry needs innovation and implementation support, including change management approaches, awareness creation and good-practice diffusion, legislative action to remove antitrust and international commerce barriers, standardization, and public incentive policies. In order to take a step closer to this future, the project ‘Atropine - Fast Track to the Physical Internet’, funded under the Strategic Economic and Research Program ‘Innovative Upper Austria 2020’, was set up. The two-year research project unites several research partners in this field as well as industrial partners and logistics service providers. With Atropine, the consortium wants to actively shape the mobility landscape in Upper Austria and make an innovative contribution to energy-efficient, environmentally sound and sustainable development in the transport area. This paper clarifies, on the one hand, what the project Atropine is about and, on the other hand, how a proof of concept will be reached. Awareness building plays an important role in the project, as the PI requires a reorganization of the supply chain and the design of completely new forms of inter-company co-operation. New business models have to be developed and verified by simulation. After the simulation process, one of these business models will be chosen and tested in real life with the partner companies. The developed results - simulation model and demonstrator - are used to determine how the concept of the PI can be applied in Upper Austria.
Atropine shall pave the way for a full-scale development of the PI vision in the next few decades and provide the basis for pushing the industry toward a new level of co-operation with more shared resources and increased standardization.

Keywords: Atropine, inter-company co-operation, Physical Internet, shared resources, sustainable logistics

Procedia PDF Downloads 223
4416 Calcitriol Improves Plasma Lipoprotein Profile by Decreasing Plasma Total Cholesterol and Triglyceride in Hypercholesterolemic Golden Syrian Hamsters

Authors: Xiaobo Wang, Zhen-Yu Chen

Abstract:

Higher plasma total cholesterol (TC) and low-density lipoprotein cholesterol (LDL-C) are independent risk factors for cardiovascular disease, while high-density lipoprotein cholesterol (HDL-C) is protective. Vitamin D is well known for its regulatory role in calcium homeostasis. Its potentially important role in cardiovascular disease has recently attracted much attention. This study was conducted to investigate the effects of different dosages of calcitriol on the plasma lipoprotein profile and the underlying mechanism. Sixty male Syrian Golden hamsters were randomly divided into 6 groups: a no-cholesterol control (NCD), a high-cholesterol control (HCD), and groups with calcitriol supplementation at 10/20/40/80 ng/kg body weight (CA, CB, CC, CD), respectively. Calcitriol in medium-chain triacylglycerol (MCT) oil was delivered to the four experimental groups via oral gavage every other day, while NCD and HCD received an equivalent amount of MCT oil. NCD hamsters were fed a non-cholesterol diet, while the other five groups were maintained on a diet containing 0.2% cholesterol to induce a hypercholesterolemic condition. The treatment lasted 6 weeks, followed by sample collection after the hamsters were sacrificed. The four experimental groups experienced a reduction in average food intake of around 11% compared to HCD, with a slight decrease in body weight (not exceeding 10%). This reduction was reflected in the decreased relative weights of testis, epididymal and perirenal adipose tissue in a dose-dependent manner. Plasma calcitriol levels were measured and corresponded to the oral gavage dose. At the end of week 6, lipoprotein profiles were improved with calcitriol supplementation, with TC, non-HDL-C and plasma triglyceride (TG) decreased in a dose-dependent manner (TC: r=0.373, p=0.009; non-HDL-C: r=0.479, p=0.001; TG: r=0.405, p=0.004). Since the HDL-C of the four experimental groups showed no significant difference compared to HCD, the ratios of non-HDL-C to HDL-C and HDL-C to TC were restored in a dose-dependent manner.
Hamsters receiving the highest level of calcitriol (80 ng/kg) showed a reduction of TC by 11.5%, non-HDL-C by 24.1% and TG by 31.25%. Little difference was found among the six groups in the acetylcholine-induced endothelium-dependent relaxation or contraction of the thoracic aorta. To summarize, calcitriol supplementation in hamsters at a maximum of 80 ng/kg body weight for 6 weeks led to an overall improvement in the plasma lipoprotein profile, with decreased TC and TG levels. The molecular mechanism of these effects is under investigation.

Keywords: cholesterol, vitamin D, calcitriol, hamster

Procedia PDF Downloads 236
4415 Remaining Useful Life (RUL) Assessment Using Progressive Bearing Degradation Data and ANN Model

Authors: Amit R. Bhende, G. K. Awari

Abstract:

Remaining useful life (RUL) prediction is one of the key technologies for realizing prognostics and health management, which is widely applied in many industrial systems to ensure high system availability over their life cycles. The present work proposes a data-driven method of RUL prediction based on multiple health state assessment for rolling element bearings. Bearing degradation data at three different conditions, from run to failure, are used. A RUL prediction model is built separately for each condition. Feed-forward back-propagation neural network models are developed for prediction modeling.
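As a rough illustration of this approach, a minimal feed-forward network trained by back-propagation can map a degradation feature to a normalised RUL target. The synthetic data, network size and training schedule below are assumptions for the sketch, not the study's bearing dataset or architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic degradation feature (e.g. normalised vibration level) vs.
# normalised remaining useful life -- a stand-in for run-to-failure data.
x = np.linspace(0.0, 1.0, 50).reshape(-1, 1)
y = 1.0 - x  # RUL shrinks as degradation grows

# One hidden layer with tanh activation, trained by plain gradient descent
W1 = rng.normal(0, 0.5, (1, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.1
for _ in range(2000):
    h = np.tanh(x @ W1 + b1)            # forward pass
    pred = h @ W2 + b2
    err = pred - y                       # MSE gradient at the output
    gW2 = h.T @ err / len(x); gb2 = err.mean(0)
    dh = err @ W2.T * (1 - h ** 2)       # back-propagate through tanh
    gW1 = x.T @ dh / len(x); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = float(((np.tanh(x @ W1 + b1) @ W2 + b2 - y) ** 2).mean())
```

The real method would fit one such model per operating condition, with features extracted from the progressive degradation signals.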

Keywords: bearing degradation data, remaining useful life (RUL), back propagation, prognosis

Procedia PDF Downloads 437
4414 A Graph SEIR Cellular Automata Based Model to Study the Spreading of a Transmittable Disease

Authors: Natasha Sharma, Kulbhushan Agnihotri

Abstract:

Cellular automata are discrete dynamical systems that capture the local character and spatial disparateness of the spreading process. These factors are generally neglected by traditional models based on differential equations for epidemic spread. The aim of this work is to introduce an SEIR model based on cellular automata on graphs to imitate epidemic spreading. Distinctively, it is an SEIR-type model in which the population is divided into susceptible, exposed, infected and recovered individuals. The results obtained from simulations are in accordance with the spreading behavior of real epidemics.
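The update rule of such a graph SEIR cellular automaton can be sketched as follows; the ring graph and transition probabilities are illustrative assumptions, not the paper's parameters:

```python
import random

def seir_step(graph, state, beta=0.5, sigma=0.3, gamma=0.2, rng=random):
    # One synchronous update of a graph-based SEIR cellular automaton.
    # graph maps each node to its neighbours; state[v] is 'S', 'E', 'I' or 'R'.
    new = dict(state)
    for v, nbrs in graph.items():
        if state[v] == 'S':
            # each infectious neighbour independently exposes v with prob. beta
            if any(state[u] == 'I' and rng.random() < beta for u in nbrs):
                new[v] = 'E'
        elif state[v] == 'E' and rng.random() < sigma:
            new[v] = 'I'   # incubation ends: exposed becomes infectious
        elif state[v] == 'I' and rng.random() < gamma:
            new[v] = 'R'   # infectious individual recovers
    return new

# Small ring graph with a single initially infectious node
graph = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
state = {i: 'S' for i in range(6)}
state[0] = 'I'
rng = random.Random(42)
for _ in range(30):
    state = seir_step(graph, state, rng=rng)
```

Because transitions depend only on each node's neighbourhood, the spatial structure of the graph shapes the epidemic in a way differential-equation models average out.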

Keywords: cellular automata, epidemic spread, graph, susceptible

Procedia PDF Downloads 460
4413 Evaluation of Research in the Field of Energy Efficiency and MCA Methods Using Publications Databases

Authors: Juan Sepúlveda

Abstract:

Energy is a fundamental component of sustainability; access to and use of this resource are related to economic growth, social improvements, and environmental impacts. In this sense, energy efficiency has been studied as a factor that enhances the positive impacts of energy in communities; however, the implementation of efficiency requires strong policies and strategies that usually rely on individual measures focused on independent dimensions. In this paper, the problem of energy efficiency as a multi-objective problem is studied, using scientometric analysis to discover trends and patterns that allow the identification of the main variables and study approaches, with a view to further developing models that integrate energy efficiency and MCA into policy making for small communities.

Keywords: energy efficiency, MCA, scientometric, trends

Procedia PDF Downloads 372
4412 Changing Behaviour in the Digital Era: A Concrete Use Case from the Domain of Health

Authors: Francesca Spagnoli, Shenja van der Graaf, Pieter Ballon

Abstract:

Humans do not behave rationally. We are emotional and easily influenced by others, as well as by our context. The study of human behaviour has become a supreme endeavour within many academic disciplines, including economics, sociology, and clinical and social psychology. Understanding what motivates humans and triggers them to perform certain activities, and what it takes to change their behaviour, is central for researchers and companies, as well as for policy makers seeking to implement efficient public policies. While numerous theoretical approaches have been developed for diverse domains such as health, retail and the environment, the methodological models guiding the evaluation of such research have long since reached their limits. Within this context, digitisation, information and communication technologies (ICT), wearables, the Internet of Things (IoT) connecting networks of devices, and new possibilities to collect and analyse massive amounts of data have made it possible to study behaviour from a realistic perspective, as never before. Digital technologies make it possible to (1) capture data in real-life settings, (2) regain control over data by capturing the context of behaviour, and (3) analyse huge sets of information through continuous measurement. Within this complex context, this paper describes a new framework for initiating behavioural change, capitalising on digital developments in applied research projects and applicable to academia, enterprises and policy makers alike. By applying this model, behavioural research can be conducted to address the issues of different domains, such as mobility, environment, health or media. The Modular Behavioural Analysis Approach (MBAA) is described here and validated for the first time through a concrete use case within the domain of health.
The results gathered have proven that disclosing information about health in connection with the use of digital health apps can be a lever for changing behaviour, but it is only a first component requiring further follow-up actions. To this end, a clear definition of distinct 'behavioural profiles', each targeted by specific typologies of intervention, is essential to effectively enable behavioural change. In the refined version of the MBAA, a strong focus will lie on defining a methodology for shaping 'behavioural profiles' and related interventions, as well as on evaluating side-effects on the creation of new business models and sustainability plans.

Keywords: behavioural change, framework, health, nudging, sustainability

Procedia PDF Downloads 223
4411 An Exploratory Study on the Impact of Climate Change on Design Rainfalls in the State of Qatar

Authors: Abdullah Al Mamoon, Niels E. Joergensen, Ataur Rahman, Hassan Qasem

Abstract:

The Intergovernmental Panel on Climate Change (IPCC), in its Fourth Assessment Report (AR4), predicts a more extreme climate towards the end of the century, which is likely to impact the design of engineering infrastructure projects with a long design life. A recent study in 2013 developed new design rainfalls for Qatar, which provide an improved design basis for drainage infrastructure in the State of Qatar under the current climate. The current design standards in Qatar do not consider increased rainfall intensity caused by climate change. The focus of this paper is to update the recently developed design rainfalls in Qatar under changing climatic conditions based on the IPCC's AR4, allowing a later revision of the proposed design standards relevant for projects with a longer design life. The future climate has been investigated based on the climate models released in the IPCC's AR4 and the A2 storyline of emission scenarios (SRES), using a stationary approach. Annual maximum series (AMS) of predicted 24-hour rainfall data for both the wet (NCAR-CCSM) and dry (CSIRO-MK3.5) scenarios at the Qatari grid points in the climate models were extracted for three periods: the current climate (2010-2039), the medium-term climate (2040-2069) and the end-of-century climate (2070-2099). A homogeneous region of the Qatari grid points was formed, and an L-moments based regional frequency approach was adopted to derive design rainfalls. The results indicate no significant changes in the design rainfall in the medium term (2040-2069), but significant changes are expected towards the end of the century (2070-2099). New design rainfalls have been developed taking climate change into account for the 2070-2099 scenario, by averaging results from the two scenarios. The IPCC's AR4 predicts that the rainfall intensity for a 5-year return period rain with a duration of 1 to 2 hours will increase by 11% in 2070-2099 compared to the current climate.
Similarly, the rainfall intensity for more extreme rainfall, with a return period of 100 years and a duration of 1 to 2 hours, will increase by 71% in 2070-2099 compared to the current climate. Infrastructure with a design life exceeding 60 years should include safety factors that take the predicted effects of climate change into due consideration.
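The core of an L-moments frequency analysis can be sketched for a single site: fit a Gumbel (EV1) distribution to an annual maximum series via its first two sample L-moments and read off the design rainfall for a given return period. The AMS values below are illustrative, not the Qatari grid-point data, and the real study pools sites into a homogeneous region before fitting:

```python
import math

def gumbel_from_lmoments(ams):
    # Fit Gumbel (EV1) parameters to an annual maximum series using the
    # first two sample L-moments (single-site simplification of the
    # L-moments based regional approach described in the abstract).
    x = sorted(ams)
    n = len(x)
    b0 = sum(x) / n
    b1 = sum(i * xi for i, xi in enumerate(x)) / (n * (n - 1))
    lam1, lam2 = b0, 2.0 * b1 - b0      # first two L-moments
    alpha = lam2 / math.log(2.0)        # Gumbel scale
    xi = lam1 - 0.5772156649 * alpha    # Gumbel location (Euler's constant)
    return xi, alpha

def design_rainfall(xi, alpha, T):
    # Gumbel quantile for return period T (years)
    return xi - alpha * math.log(-math.log(1.0 - 1.0 / T))

ams = [42.0, 55.0, 38.0, 61.0, 47.0, 70.0, 33.0, 58.0]  # illustrative 24 h maxima, mm
loc, scale = gumbel_from_lmoments(ams)
```

Rerunning the fit on climate-model AMS for 2070-2099 versus 2010-2039 is what yields the percentage increases quoted above.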

Keywords: climate change, design rainfalls, IDF, Qatar

Procedia PDF Downloads 394
4410 Parasitic Capacitance Modeling in Pulse Transformer Using FEA

Authors: D. Habibinia, M. R. Feyzi

Abstract:

Nowadays, specialized software is widely used to verify the performance of an electric machine prototype by evaluating a model of the system. These models mainly consist of electrical parameters such as inductances and resistances. However, when the operating frequency of the device is above one kHz, the effect of parasitic capacitances grows significantly. In this paper, a software-based procedure is introduced to model these capacitances within the electromagnetic simulation of the device. The case study is a high-frequency, high-voltage pulse transformer. Finite Element Analysis (FEA) software with coupled field analysis is used in this method.
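A common way to lump an FEA field solution into an equivalent parasitic capacitance is through the stored electrostatic energy, W = ½CV². A minimal sketch, with illustrative numbers rather than values from the paper:

```python
def capacitance_from_energy(w_field, v):
    # Equivalent parasitic capacitance from the electrostatic field energy W
    # (as computed by an FEA field solution) and the applied voltage V,
    # using W = 0.5 * C * V^2.
    return 2.0 * w_field / (v * v)

# Illustrative: 0.5 mJ stored at 1 kV corresponds to 1 nF
c = capacitance_from_energy(5e-4, 1000.0)
```

Repeating this energy extraction for each winding pair yields the capacitance entries of the transformer's equivalent circuit.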

Keywords: finite element analysis, parasitic capacitance, pulse transformer, high frequency

Procedia PDF Downloads 515
4409 The Analgesic Impact of Adding Intrathecal Ketamine to Spinal Anaesthesia for Hip or Knee Arthroplasty: A Clinical Audit

Authors: Carl Ashworth, Matthys Campher

Abstract:

Spinal anaesthesia has been identified as the “gold standard” for primary elective total hip and knee arthroplasty and is most commonly performed using longer-acting local anaesthetics, such as hyperbaric bupivacaine, to prolong the duration of anaesthesia and analgesia to suit these procedures. Ketamine is known to have local anaesthetic effects with potent analgesic properties and has been evaluated as a sole anaesthetic agent via intrathecal administration; however, the use of intrathecal ketamine as an adjunct to intrathecal hyperbaric bupivacaine, morphine, and fentanyl has not been extensively studied. The objective of this study was to identify the potential analgesic effects of adding intrathecal ketamine to spinal anaesthesia and to compare the efficacy and safety of spinal anaesthesia for hip or knee arthroplasty with and without intrathecal ketamine. The medical records of patients who underwent elective hip or knee arthroplasty under spinal anaesthesia performed by an individual anaesthetist with either intrathecal hyperbaric bupivacaine, morphine and fentanyl, or intrathecal hyperbaric bupivacaine, morphine, fentanyl and ketamine, between June 4, 2020, and June 4, 2022, were retrospectively reviewed. These encounters were reviewed and analyzed from a perioperative pain perspective, with the primary outcome measure being oral morphine equivalent (OME) usage in the 48 hours post-spinal anaesthesia, and secondary outcome measures including time to breakthrough analgesia, self-reported pain scores at rest and during movement at 24 and 48 hours after surgery, adverse effects of analgesia, complications, and length of stay. There were 26 patients identified who underwent total knee replacement (TKR) between June 4, 2020, and June 4, 2022, and 25 patients who underwent total hip replacement (THR) under the same conditions.
It was identified that patients who underwent traditional spinal anaesthesia with the addition of ketamine for elective hip or knee arthroplasty had a lower mean total OME in the 48 hours immediately post-spinal anaesthesia, yet had a shorter time to breakthrough analgesia administration. The proposed mechanism of action for intrathecal ketamine as an additive to traditional spinal anaesthesia for elective hip or knee arthroplasty is that it may prolong and attenuate the analgesic effect of traditional spinal anaesthesia. No significant differences were identified when comparing the efficacy and safety of spinal anaesthesia for hip or knee arthroplasty with and without intrathecal ketamine.

Keywords: anaesthesia, spinal, intra-thecal, ketamine, spinal-morphine, bupivacaine

Procedia PDF Downloads 52
4408 Nanoporous Metals Reinforced with Fullerenes

Authors: Deni̇z Ezgi̇ Gülmez, Mesut Kirca

Abstract:

Nanoporous (np) metals have attracted considerable attention owing to their cellular morphological features at the atomistic scale, which yield an ultra-high specific surface area and grant great potential for diverse applications such as catalytic, electrocatalytic, sensing, mechanical and optical uses. As one of the carbon-based nanostructures, fullerenes are another type of outstanding nanomaterial that has been extensively investigated for their remarkable chemical, mechanical and optical properties. In this study, the idea of improving the mechanical behavior of nanoporous metals by the inclusion of fullerenes, which offers a new metal-carbon nanocomposite material, is examined and discussed. With this motivation, the tensile mechanical behavior of nanoporous metals reinforced with carbon fullerenes is investigated by classical molecular dynamics (MD) simulations. Atomistic models of the nanoporous metals with ultrathin ligaments are obtained through a stochastic process based on the intersection of spherical volumes, which has been used previously in the literature. According to this technique, the atoms within the ensemble of intersecting spherical volumes are removed from the pristine solid block of the selected metal, resulting in porous structures with spherical cells. Following this, fullerene units are added into the cellular voids to obtain the final atomistic configurations for the numerical tensile tests. Several numerical specimens are prepared with different numbers of fullerenes per cell and with varied fullerene sizes. The LAMMPS code is used to perform classical MD simulations of uniaxial tension experiments on np models filled with fullerenes. The interactions between the metal atoms are modeled using the embedded atom method (EAM), while the adaptive intermolecular reactive empirical bond order (AIREBO) potential is employed for the interactions of carbon atoms.
Furthermore, atomic interactions between the metal and carbon atoms are represented by a Lennard-Jones potential with appropriate parameters. In conclusion, the ultimate goal of the study is to present the effects of fullerenes embedded into the cellular structure of np metals on the tensile response of the porous metals. The results are believed to be informative and instructive for experimentalists seeking to synthesize hybrid nanoporous materials with improved properties and multifunctional characteristics.
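The metal-carbon cross interaction mentioned above, modelled by a 12-6 Lennard-Jones potential, can be sketched as follows (ε and σ here are in reduced units; the actual parameter values would come from whatever mixing rules the study adopts):

```python
def lj_potential(r, epsilon, sigma):
    # 12-6 Lennard-Jones pair potential U(r) = 4*eps*((s/r)^12 - (s/r)^6),
    # standing in for the metal-carbon cross interactions in the abstract.
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)

# The potential well sits at r = 2^(1/6) * sigma with depth -epsilon
r_min = 2.0 ** (1.0 / 6.0)
```

In a LAMMPS input this corresponds to a `lj/cut`-style pair interaction assigned to the metal-carbon atom-type pairs, alongside EAM for metal-metal and AIREBO for carbon-carbon.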

Keywords: fullerene, intersecting spheres, molecular dynamics, nanoporous metals

Procedia PDF Downloads 239
4407 Optimizing Machine Learning Algorithms for Defect Characterization and Elimination in Liquids Manufacturing

Authors: Tolulope Aremu

Abstract:

Key process steps in producing liquid detergent products, such as formulation, mixing, filling, and packaging, can introduce defects that compromise product quality, consumer safety, and operational efficiency. Real-time identification and characterization of such defects are of prime importance for maintaining high standards and reducing waste and costs. Usually, defect detection is performed by human inspection or rule-based systems, which are time-consuming, inconsistent, and error-prone. The present study overcomes these limitations by optimizing defect characterization in the liquid detergent manufacturing process using machine learning algorithms. Performance testing of various machine learning models was carried out: Support Vector Machines, Decision Trees, Random Forests, and Convolutional Neural Networks were applied to the detection and classification of defects such as wrong viscosity, color deviations, improper bottle filling, and packaging anomalies. These algorithms benefited significantly from a variety of optimization techniques, including hyperparameter tuning and ensemble learning, to greatly improve detection accuracy while minimizing false positives. Equipped with a rich dataset of defect types and production parameters consisting of more than 100,000 samples, our study further includes information from real-time sensor data, imaging technologies, and historic production records. The results show that optimized machine learning models significantly improve defect detection compared to traditional methods. For instance, the CNNs achieved 98% and 96% accuracy in detecting packaging anomalies and bottle-filling inconsistencies, respectively, after fine-tuning with real-time imaging data, through which false positives were reduced by about 30%. The optimized SVM model for detecting formulation defects achieved 94% accuracy in viscosity and color variation detection.
These performance metrics represent a major improvement in defect detection accuracy compared to the roughly 80% level achieved so far by rule-based systems. Moreover, the optimized models hasten defect characterization, bringing detection time below 15 seconds, from an average of 3 minutes with manual inspection, through real-time data processing. This time saving is combined with a 25% reduction in production downtime thanks to proactive defect identification, which can save millions annually in recall and rework costs. Integrating real-time machine-learning-driven monitoring drives predictive maintenance and corrective measures for a 20% improvement in overall production efficiency. Therefore, optimizing machine learning algorithms for defect characterization gives liquid detergent companies scalability, efficiency, and improved operational performance at higher levels of product quality. In general, this method could be applied across the fast-moving consumer goods sector, leading to improved quality control processes.
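The classification step can be sketched minimally: synthetic viscosity/colour-deviation features and a from-scratch logistic regression serve as a stand-in for the tuned SVM pipeline (the actual study used SVMs, CNNs and more than 100,000 production samples):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for formulation-defect data: two features
# (viscosity deviation, colour deviation), label 1 = defective batch.
n = 400
X = rng.normal(0, 1, (n, 2))
y = (X[:, 0] + 0.5 * X[:, 1] > 0.5).astype(float)

# Logistic regression trained by batch gradient descent
w = np.zeros(2)
b = 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted defect probability
    g = p - y                                # cross-entropy gradient
    w -= 0.1 * (X.T @ g) / n
    b -= 0.1 * g.mean()

pred = 1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5
acc = float((pred == (y > 0.5)).mean())
```

Hyperparameter tuning in the real pipeline would wrap such a model in a cross-validated grid search over kernel and regularisation settings.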

Keywords: liquid detergent manufacturing, defect detection, machine learning, support vector machines, convolutional neural networks, defect characterization, predictive maintenance, quality control, fast-moving consumer goods

Procedia PDF Downloads 20
4406 A Cohort and Empirical Based Multivariate Mortality Model

Authors: Jeffrey Tzu-Hao Tsai, Yi-Shan Wong

Abstract:

This article proposes a cohort-age-period (CAP) model to characterize multi-population mortality processes using cohort, age, and period variables. Distinct from the factor-based Lee-Carter-type decomposition mortality models, this approach is empirically based and includes the age, period, and cohort variables in the equation system. The model not only provides fruitful intuition for explaining multivariate mortality change rates but also performs better in forecasting future patterns. Using US and UK mortality data and performing ten-year out-of-sample tests, our approach shows smaller mean square errors in both countries compared to the models in the literature.
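The ten-year out-of-sample comparison rests on a simple loop: forecast the held-out years and score with mean square error. A hedged sketch using a random-walk-with-drift benchmark (illustrative only; the CAP model itself regresses mortality change rates on cohort, age, and period variables):

```python
def forecast_drift(series, horizon):
    # Random-walk-with-drift forecast of a (log) mortality rate series,
    # a common benchmark in out-of-sample mortality tests.
    drift = (series[-1] - series[0]) / (len(series) - 1)
    return [series[-1] + drift * h for h in range(1, horizon + 1)]

def mse(pred, actual):
    # Mean square forecast error over the held-out years
    return sum((p - a) ** 2 for p, a in zip(pred, actual)) / len(pred)
```

Comparing this score across models and both countries is what supports the claim of smaller mean square errors.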

Keywords: longevity risk, stochastic mortality model, multivariate mortality rate, risk management

Procedia PDF Downloads 55
4405 Structural Breaks, Asymmetric Effects and Long Memory in the Volatility of Turkey Stock Market

Authors: Serpil Türkyılmaz, Mesut Balıbey

Abstract:

In this study, long memory properties in the volatility of the Turkish stock market are examined through FIGARCH, FIEGARCH and FIAPARCH models under different distributional assumptions, namely normal and skewed Student-t distributions. Furthermore, structural changes in the volatility of the Turkish stock market are investigated. The results display the long memory property and the presence of asymmetric effects of shocks in the volatility of the Turkish stock market.
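Long memory can be probed even before fitting FIGARCH-type models: the classical rescaled-range (R/S) statistic grows like n^H, with Hurst exponent H > 0.5 for long-memory series, and serves as a quick diagnostic. A minimal sketch (an illustrative companion, not the paper's estimator):

```python
def rescaled_range(x):
    # Classical R/S statistic of a series: range of the mean-adjusted
    # cumulative sums divided by the standard deviation. For long-memory
    # series, R/S grows like n^H with Hurst exponent H > 0.5.
    n = len(x)
    mean = sum(x) / n
    dev = 0.0
    cum = []
    for v in x:
        dev += v - mean
        cum.append(dev)
    r = max(cum) - min(cum)
    s = (sum((v - mean) ** 2 for v in x) / n) ** 0.5
    return r / s
```

Applied to squared or absolute returns, a slowly growing R/S across sample sizes is consistent with the fractional integration the FIGARCH family captures.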

Keywords: FIAPARCH model, FIEGARCH model, FIGARCH model, structural break

Procedia PDF Downloads 291
4404 Combustion Analysis of Suspended Sodium Droplet

Authors: T. Watanabe

Abstract:

Combustion analysis of a suspended sodium droplet is performed by numerically solving the Navier-Stokes equations and the energy conservation equations. The combustion model consists of pre-ignition and post-ignition models. The reaction rate for the pre-ignition model is based on chemical kinetics, while that for the post-ignition model is based on the mass transfer rate of oxygen. The calculated droplet temperature is shown to be in good agreement with existing experimental data. The temperature field in and around the droplet is obtained, as well as the droplet shape variation, and the present numerical model is confirmed to be effective for combustion analysis.

Keywords: analysis, combustion, droplet, sodium

Procedia PDF Downloads 211
4403 A Constitutive Model for Time-Dependent Behavior of Clay

Authors: T. N. Mac, B. Shahbodaghkhan, N. Khalili

Abstract:

A new elastic-viscoplastic (EVP) constitutive model is proposed for the analysis of the time-dependent behavior of clay. The proposed model is based on bounding surface plasticity and the viscoplastic consistency framework to establish a continuous transition from plasticity to rate-dependent viscoplasticity. Unlike overstress-based models, this model meets the consistency condition in formulating the constitutive equation for the EVP model. The procedure for deriving the constitutive relationship is also presented. Simulation results and comparisons with experimental data are then presented to demonstrate the performance of the model.

Keywords: bounding surface, consistency theory, constitutive model, viscosity

Procedia PDF Downloads 492
4402 Jurisdictional Federalism and Formal Federalism: Levels of Political Centralization on American and Brazilian Models

Authors: Henrique Rangel, Alexandre Fadel, Igor De Lazari, Bianca Neri, Carlos Bolonha

Abstract:

This paper promotes a comparative analysis of the American and Brazilian models of federalism, taking their levels of political centralization as the main criterion. The central problem faced herein is the Brazilian approach to a unitarian regime. Despite the hegemony of the federative form after 1989, Brazil has had a historical frame of political centralization that remains under the 1988 constitutional regime. Meanwhile, the United States framed a federalism in which the states absorb significant authority. The hypothesis holds that the amount of alternative criteria for federalization – which can generate political centralization – and the way they are upheld on judicial review are crucial to understanding the levels of political centralization achieved in each model. To test this hypothesis, the research is conducted with a methodology temporally delimited to the 1994-2014 period. Three paradigmatic precedents of the U.S. Supreme Court were selected: United States vs. Morrison (2000), on gender-motivated violence; Gonzales vs. Raich (2005), on the medical use of marijuana; and United States vs. Lopez (1995), on firearm possession in school zones. These most relevant federalism cases in the Supreme Court's recent activity indicate a determinant parameter of deliberation: the commerce clause. After observing the criteria used to permit or prohibit political centralization in America, the Brazilian normative context is presented. In this sense, it is possible to identify the eventual legal treatment these controversies could receive in that country. The decision-making reveals some deliberative parameters, which characterize each federative model. At the end of the research, the precedents of the Rehnquist Court promote a broad revival of the federalism debate, establishing the commerce clause as a secure criterion to uphold or reject the necessity of centralization – even with decisions considered conservative.
By contrast, Brazilian federalism resolves such controversies in a formalist fashion, through numerous and comprehensive – sometimes casuistic – normative devices oriented toward intense centralization. The aim of this work is to indicate how the jurisdictional federalism found in the United States can preserve a consistent model with robustly autonomous states, while Brazil gives preference to normative mechanisms that start from centralization.

Keywords: constitutional design, federalism, U.S. Supreme Court, legislative authority

Procedia PDF Downloads 516
4401 Algorithms Utilizing Wavelet to Solve Various Partial Differential Equations

Authors: K. P. Mredula, D. C. Vakaskar

Abstract:

The article traces the development and evolution of various algorithms for solving partial differential equations using significant combinations of wavelets with already-explored solution procedures. The approach depicts a study over a decade, tracing and remarking on the modifications in implementing wavelet multi-resolution, the finite difference approach, the finite element method and finite volume methods in dealing with a variety of partial differential equations in areas such as plasma physics, astrophysics, shallow water models, modified Burgers equations used in optical fibers, biology, fluid dynamics, chemical kinetics, etc.
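The multi-resolution idea the survey builds on can be shown in a few lines: one Haar analysis step splits a signal into pairwise averages (the coarse approximation) and pairwise differences (the detail), and the step is exactly invertible. A minimal sketch:

```python
def haar_step(v):
    # One level of the (unnormalised) Haar wavelet transform:
    # pairwise averages give the coarse part, pairwise differences the detail.
    avg = [(v[i] + v[i + 1]) / 2 for i in range(0, len(v), 2)]
    det = [(v[i] - v[i + 1]) / 2 for i in range(0, len(v), 2)]
    return avg, det

def haar_inverse(avg, det):
    # Exact reconstruction of the signal from coarse and detail parts
    out = []
    for a, d in zip(avg, det):
        out += [a + d, a - d]
    return out
```

Recursing on the coarse part yields the multi-resolution hierarchy that wavelet-based PDE solvers use to adapt resolution to local solution features.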

Keywords: multi-resolution, Haar Wavelet, partial differential equation, numerical methods

Procedia PDF Downloads 299