Search results for: fracture modeling
1010 Human Immune Response to Surgery: The Surrogate Prediction of Postoperative Outcomes
Authors: Husham Bayazed
Abstract:
Immune responses following surgical trauma play a pivotal role in determining postoperative outcomes, from healing and recovery to postoperative complications. Postoperative complications, including infections and protracted recovery, occur in a significant share of the roughly 300 million surgeries performed annually worldwide. Complications cause personal suffering along with a significant economic burden on the healthcare system in any community. The accurate prediction of postoperative complications and patient-targeted interventions for their prevention remain major clinical challenges. Recent Findings: Recent studies are focusing on the immune dysregulation mechanisms that occur in response to surgical trauma as a key determinant of postoperative complications. Earlier studies mainly focused on the detection of inflammatory plasma markers, which provide important clues regarding pathogenesis. However, recent single-cell technologies, such as mass cytometry and single-cell RNA sequencing, have markedly enhanced our ability to understand the immunological basis of postoperative complications of surgical trauma and to identify their prognostic biological signatures. Summary: The advent of proteomic technologies has significantly advanced our ability to predict the risk of postoperative complications. Multiomic modeling of patients' immune states holds promise for the discovery of preoperative predictive biomarkers, providing patients and surgeons with information to improve surgical outcomes. However, more studies are required to accurately predict the risk of postoperative complications in individual patients.
Keywords: immune dysregulation, postoperative complications, surgical trauma, flow cytometry
Procedia PDF Downloads 86
1009 Modeling Sediment Transports under Extreme Storm Situation along Persian Gulf North Coast
Authors: Majid Samiee Zenoozian
Abstract:
The Persian Gulf is a marginal sea with an average depth of 35 m and a maximum depth of about 100 m near its narrow entrance. Its elongated bathymetric axis separates two main geological provinces — the stable Arabian Foreland and the unstable Iranian Fold Belt — which are reflected in the contrasting coastal and bathymetric morphologies of Arabia and Iran. Sediments were sampled from 72 offshore stations during an oceanographic cruise in the winter of 2018. During the observation period, several storms and river discharge events occurred, including the largest flood on record since 1982. Suspended-sediment concentration at all three sites varied in response to both wave resuspension and advection of river-derived sediments. We used hydrological models to estimate the wave height and inundation distance required to transport the rocks inland. Our results establish that no known or plausible storm event on the Makran coast is capable of dislodging and transporting the boulders. The fluid mud is consequently conveyed seaward by gravitational forcing. The measured sediment concentration and velocity profiles on the shelf provide strong evidence supporting this assumption. The sediment model is coupled with a 3D hydrodynamic module in the Environmental Fluid Dynamics Code (EFDC) model, which provides data on estuarine circulation and salinity transport under normal temperature conditions. 3D sediment transport simulations indicate dynamic sediment resuspension and transport near zones of highly productive oyster beds.
Keywords: sediment transport, storm, coast, fluid dynamics
Procedia PDF Downloads 115
1008 Cryptographic Resource Allocation Algorithm Based on Deep Reinforcement Learning
Authors: Xu Jie
Abstract:
As a key network security method, cryptographic services must cope with problems such as the wide variety of cryptographic algorithms, high concurrency requirements, random job crossovers, and instantaneous surges in workload. This complexity and dynamism also make it difficult for traditional static security policies to keep up with the ever-changing cyber threat environment, and traditional resource scheduling algorithms are inadequate for complex decision-making problems in dynamic environments. A network cryptographic resource allocation algorithm based on reinforcement learning is proposed, aiming to optimize task energy consumption, migration cost, and the fitness of differentiated services (including user, data, and task security). By modeling the multi-job collaborative cryptographic service scheduling problem as a multi-objective job-flow scheduling problem and applying a multi-agent reinforcement learning method, efficient scheduling and optimal configuration of cryptographic service resources are achieved. By introducing reinforcement learning, resource allocation strategies can be adjusted in real time in a dynamic environment, improving resource utilization and achieving load balancing. Experimental results show that the algorithm has significant advantages in path planning length, system delay, and network load balancing, and effectively solves the complex resource scheduling problem in cryptographic services.
Keywords: cloud computing, cryptography on-demand service, reinforcement learning, workflow scheduling
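As a toy illustration of the reinforcement learning idea (not the paper's multi-agent, multi-objective algorithm), the following Python sketch uses tabular Q-learning to assign a fixed sequence of cryptographic jobs to servers, rewarding low makespan; job sizes, server count, and hyperparameters are invented:

```python
import random

# Toy sketch: tabular Q-learning that assigns incoming cryptographic jobs to
# one of several servers, rewarding load balance. All numbers are illustrative.

def schedule_jobs(jobs, n_servers, episodes=500, alpha=0.1, gamma=0.9, eps=0.2):
    random.seed(0)
    q = [[0.0] * n_servers for _ in jobs]          # Q[job_index][server]
    for _ in range(episodes):
        load = [0.0] * n_servers
        for i, size in enumerate(jobs):
            if random.random() < eps:               # explore
                a = random.randrange(n_servers)
            else:                                   # exploit
                a = max(range(n_servers), key=lambda s: q[i][s])
            load[a] += size
            reward = -max(load)                     # penalize current makespan
            best_next = max(q[i + 1]) if i + 1 < len(jobs) else 0.0
            q[i][a] += alpha * (reward + gamma * best_next - q[i][a])
    # Final greedy rollout using the learned Q-table
    load = [0.0] * n_servers
    for i, size in enumerate(jobs):
        a = max(range(n_servers), key=lambda s: q[i][s])
        load[a] += size
    return load

loads = schedule_jobs([4, 3, 3, 2, 2, 1, 1], n_servers=3)
print(loads)
```

In a real deployment the state would also encode queue depths and service types, but the sketch shows the core loop of act, observe reward, and update.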
Procedia PDF Downloads 13
1007 Working Memory Growth from Kindergarten to First Grade: Considering Impulsivity, Parental Discipline Methods and Socioeconomic Status
Authors: Ayse Cobanoglu
Abstract:
Working memory can be defined as a workspace that holds and regulates active information in the mind. This study investigates individual changes in children's working memory from kindergarten to first grade. Its main purpose is to examine whether parental discipline methods and children's impulsive/overactive behaviors affect the initial status and growth rate of children's working memory, controlling for gender, minority status, and socioeconomic status (SES). A linear growth curve model was fitted to the first four waves of the Early Childhood Longitudinal Study-Kindergarten Cohort of 2011 (ECLS-K:2011) to analyze individual growth in children's working memory longitudinally (N=3915). Results revealed significant variation among students' initial status in the kindergarten fall semester as well as in the growth rate during the first two years of schooling. While minority status, SES, and children's overactive/impulsive behaviors influenced initial status, only SES and minority status were significantly associated with the growth rate of working memory. Parental discipline methods, such as giving a warning and ignoring the child's negative behavior, were also negatively associated with initial working memory scores. Examining the growth rate further, students with lower SES as well as minority students showed a faster growth pattern during the first two years of schooling. However, the findings on parental disciplinary methods and working memory growth rates were mixed. It can be concluded that schooling helps low-SES minority students develop their working memory.
Keywords: growth curve modeling, impulsive/overactive behaviors, parenting, working memory
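A full latent growth-curve model requires SEM or multilevel software, but the notions of "initial status" and "growth rate" can be illustrated by reducing one child's four waves of scores to an OLS intercept and slope (the scores below are invented):

```python
# Minimal sketch with illustrative data: each child's working-memory
# trajectory is reduced to an OLS intercept (initial status) and
# slope (growth rate per wave).

def ols_line(times, scores):
    n = len(times)
    mt = sum(times) / n
    ms = sum(scores) / n
    slope = sum((t - mt) * (s - ms) for t, s in zip(times, scores)) / \
            sum((t - mt) ** 2 for t in times)
    intercept = ms - slope * mt
    return intercept, slope

waves = [0, 1, 2, 3]            # K-fall, K-spring, G1-fall, G1-spring
child = [442, 448, 455, 463]    # hypothetical working-memory scale scores
b0, b1 = ols_line(waves, child)
print(b0, b1)                   # prints 441.5 7.0
```

In the actual model these per-child intercepts and slopes are latent variables whose variances and predictors (SES, discipline, impulsivity) are estimated jointly.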
Procedia PDF Downloads 135
1006 Application of Seasonal Autoregressive Integrated Moving Average Model for Forecasting Monthly Flows in Waterval River, South Africa
Authors: Kassahun Birhanu Tadesse, Megersa Olumana Dinka
Abstract:
Reliable information on future river flows is essential for the planning and management of any river system. For a data-scarce river system with only river flow records, like the Waterval River, univariate time series models are appropriate for flow forecasting. In this study, a univariate Seasonal Autoregressive Integrated Moving Average (SARIMA) model was applied to forecast Waterval River flow using GRETL statistical software. Mean monthly river flows from 1960 to 2016 were used for modeling. Different unit root tests and Mann-Kendall trend analysis were performed to test the stationarity of the observed flow time series. The time series was differenced to remove the seasonality. Using the correlogram of the seasonally differenced series, different SARIMA models were identified, their parameters were estimated, and model forecasts were diagnostically checked using white noise and heteroscedasticity tests. Finally, based on the minimum Akaike Information Criterion (AIC) and Hannan-Quinn Criterion (HQC), SARIMA (3, 0, 2) x (3, 1, 3)12 was selected as the best model for Waterval River flow forecasting. This model can therefore be used to generate future flow information for water resources development and management in the Waterval River system. SARIMA models can also be used to forecast other, similar univariate time series with seasonal characteristics.
Keywords: heteroscedasticity, stationarity test, trend analysis, validation, white noise
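As an illustration of the seasonal differencing step described above, the following sketch applies a lag-12 difference to a synthetic monthly series; the data are invented, and the actual Waterval record and GRETL workflow are not reproduced here:

```python
import math

# A lag-12 difference removes the annual cycle from monthly flows.
# Synthetic series: seasonal sine wave plus a mild linear trend.

def seasonal_difference(series, lag=12):
    return [series[t] - series[t - lag] for t in range(lag, len(series))]

flows = [10 + 5 * math.sin(2 * math.pi * t / 12) + 0.1 * t for t in range(48)]
diffed = seasonal_difference(flows)
# After differencing, only the trend increment (0.1 * 12 = 1.2) remains.
print(len(diffed), round(diffed[0], 2), round(diffed[-1], 2))  # 36 1.2 1.2
```

Identification of the (p, d, q)(P, D, Q) orders then proceeds from the ACF/PACF of this differenced series.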
Procedia PDF Downloads 205
1005 Time Series Simulation by Conditional Generative Adversarial Net
Authors: Rao Fu, Jie Chen, Shutian Zeng, Yiping Zhuang, Agus Sudjianto
Abstract:
Generative Adversarial Net (GAN) has proved to be a powerful machine learning tool in image data analysis and generation. In this paper, we propose to use Conditional Generative Adversarial Net (CGAN) to learn and simulate time series data. The conditions include both categorical and continuous variables with different auxiliary information. Our simulation studies show that CGAN has the capability to learn different types of normal and heavy-tailed distributions, as well as dependent structures of different time series. It also has the capability to generate conditional predictive distributions consistent with training data distributions. We also provide an in-depth discussion on the rationale behind GAN and the neural networks as hierarchical splines to establish a clear connection with existing statistical methods of distribution generation. In practice, CGAN has a wide range of applications in market risk and counterparty risk analysis: it can be applied to learn historical data and generate scenarios for the calculation of Value-at-Risk (VaR) and Expected Shortfall (ES), and it can also predict the movement of the market risk factors. We present a real data analysis including a backtesting to demonstrate that CGAN can outperform Historical Simulation (HS), a popular method in market risk analysis to calculate VaR. CGAN can also be applied in economic time series modeling and forecasting. In this regard, we have included an example of hypothetical shock analysis for economic models and the generation of potential CCAR scenarios by CGAN at the end of the paper.
Keywords: conditional generative adversarial net, market and credit risk management, neural network, time series
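The Historical Simulation benchmark mentioned above is simple enough to sketch directly; the snippet below computes empirical VaR and ES from a toy return series (the return data are invented for illustration):

```python
# Historical Simulation: VaR is an empirical quantile of historical losses,
# and ES is the mean loss beyond it.

def hs_var_es(returns, level=0.95):
    losses = sorted(-r for r in returns)   # losses, ascending
    idx = int(level * len(losses))
    var = losses[idx]
    tail = losses[idx:]
    es = sum(tail) / len(tail)
    return var, es

returns = [0.01, -0.02, 0.015, -0.05, 0.003, -0.01, 0.02, -0.03, 0.005, -0.004]
var, es = hs_var_es(returns, level=0.9)
print(var, es)
```

The CGAN approach in the abstract replaces this fixed historical window with conditionally generated scenarios, from which the same VaR/ES statistics are computed.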
Procedia PDF Downloads 143
1004 Process Optimization and Automation of Information Technology Services in a Heterogenic Digital Environment
Authors: Tasneem Halawani, Yamen Khateeb
Abstract:
With customers’ ever-increasing expectations for fast services provisioning for all their business needs, information technology (IT) organizations, as business partners, have to cope with this demanding environment and deliver their services in the most effective and efficient way. The purpose of this paper is to identify optimization and automation opportunities for the top requested IT services in a heterogenic digital environment and widely spread customer base. In collaboration with systems, processes, and subject matter experts (SMEs), the processes in scope were approached by analyzing four-year related historical data, identifying and surveying stakeholders, modeling the as-is processes, and studying systems integration/automation capabilities. This effort resulted in identifying several pain areas, including standardization, unnecessary customer and IT involvement, manual steps, systems integration, and performance measurement. These pain areas were addressed by standardizing the top five requested IT services, eliminating/automating 43 steps, and utilizing a single platform for end-to-end process execution. In conclusion, the optimization of IT service request processes in a heterogenic digital environment and widely spread customer base is challenging, yet achievable without compromising the service quality and customers’ added value. Further studies can focus on measuring the value of the eliminated/automated process steps to quantify the enhancement impact. Moreover, a similar approach can be utilized to optimize other IT service requests, with a focus on business criticality.
Keywords: automation, customer value, heterogenic, integration, IT services, optimization, processes
Procedia PDF Downloads 107
1003 Bluetooth Communication Protocol Study for Multi-Sensor Applications
Authors: Joao Garretto, R. J. Yarwood, Vamsi Borra, Frank Li
Abstract:
Bluetooth Low Energy (BLE) has emerged as one of the main wireless communication technologies used in low-power electronics, such as wearables, beacons, and Internet of Things (IoT) devices. BLE’s energy efficiency, smart mobile interoperability, and Over the Air (OTA) capabilities are essential features for ultralow-power devices, which are usually designed with size and cost constraints. Most current research on the power analysis of BLE devices focuses on the theoretical aspects of the advertising and scanning cycles, with most results presented in the form of mathematical models and computer software simulations. Such modeling and simulation are important for comprehending the technology, but hardware measurement is essential for understanding how BLE devices behave in real operation. In addition, recent literature focuses mostly on the BLE technology itself, leaving possible applications and their analysis out of scope. In this paper, a coin cell battery-powered BLE data acquisition device, with a 4-in-1 sensor and one accelerometer, is proposed and evaluated with respect to its power consumption. The device is first evaluated in advertising mode with the sensors turned off completely, followed by a power analysis with each sensor individually turned on and transmitting data, and concluding with a power consumption evaluation with both sensors on and broadcasting their data to a mobile phone. The results presented in this paper are real-time measurements of the electrical current consumption of the BLE device, where the measured energy levels are matched to the BLE behavior and sensor activity.
Keywords: bluetooth low energy, power analysis, BLE advertising cycle, wireless sensor node
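The duty-cycle arithmetic behind such measurements is worth making explicit; the sketch below estimates the average current and coin-cell life of a BLE node from assumed (not measured) active and sleep currents:

```python
# Back-of-envelope duty-cycle model for a coin-cell BLE node.
# All currents, intervals, and the battery capacity are assumed values,
# not measurements from the paper.

def average_current_ma(i_active_ma, t_active_ms, i_sleep_ma, interval_ms):
    t_sleep = interval_ms - t_active_ms
    return (i_active_ma * t_active_ms + i_sleep_ma * t_sleep) / interval_ms

def battery_life_days(capacity_mah, avg_current_ma):
    return capacity_mah / avg_current_ma / 24

# 3 ms advertising burst at 8 mA once per second, 2 uA sleep, CR2032 ~230 mAh
avg = average_current_ma(i_active_ma=8.0, t_active_ms=3.0,
                         i_sleep_ma=0.002, interval_ms=1000.0)
print(round(avg, 4), round(battery_life_days(230.0, avg), 1))
```

The same arithmetic explains why turning a sensor on (raising the active time or active current) shortens battery life roughly in proportion to the added charge per cycle.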
Procedia PDF Downloads 91
1002 The Advancement of Environmental Impact Assessment for 5th Transmission Natural Gas Pipeline Project in Thailand
Authors: Penrug Pengsombut, Worawut Hamarn, Teerawuth Suwannasri, Kittiphong Songrukkiat, Kanatip Ratanachoo
Abstract:
PTT Public Company Limited, or simply PTT, has played an important role in strengthening the national energy security of the Kingdom of Thailand by transporting natural gas to customers in the power, industrial, and commercial sectors since 1981. PTT has been constructing and operating a natural gas pipeline system of over 4,500 km in network length, both onshore and offshore, laid through different area classifications, i.e., marine, forest, agriculture, rural, urban, and city areas. During the project development phase, an Environmental Impact Assessment (EIA) is conducted and submitted to the Office of Natural Resources and Environmental Policy and Planning (ONEP) for approval before project construction commences. Knowledge and experience gained from EIAs in past projects have been used to further advance the EIA study process for the new 5th Transmission Natural Gas Pipeline Project (5TP), approximately 415 kilometers in length. The preferred pipeline route is selected and justified using SMARTi map, an advanced digital one-map platform consisting of multiple layers of geographic and environmental information. Sensitive area impact focus (SAIF) is a practicable impact assessment methodology appropriate for long-distance infrastructure projects such as 5TP. An environmental modeling simulation is adopted into the SAIF methodology to quantify impacts in all sensitive areas, whereas other areas along the pipeline right-of-way are assessed representatively. The resulting reduction in time and cost benefits the project through an earlier start.
Keywords: environmental impact assessment, EIA, natural gas pipeline, sensitive area impact focus, SAIF
Procedia PDF Downloads 408
1001 Talent Management, Employee Competency, and Organizational Performance
Authors: Sunyoung Park
Abstract:
Context: Talent management is a strategic approach that has received considerable attention in recent years to improve employee competency and organizational performance in many organizations. The implementation of talent management involves identifying objectives and positions within the organization, developing a pool of high-potential employees, and establishing appropriate HR functions to promote high employee and organizational performance. This study aims to investigate the relationship between talent management, HR functions, employee competency, and organizational performance in the South Korean context. Research Aim: The main objective of this study is to investigate the structural relationships among talent management, human resources (HR) functions, employee competency, and organizational performance. Methodology: To achieve the research aim, this study used a quantitative research method. Specifically, a total of 1,478 responses were analyzed using structural equation modeling based on data obtained from the Human Capital Corporate Panel (HCCP) survey in South Korea. Findings: The study revealed that talent management has a positive influence on HR functions and employee competency. Additionally, HR functions directly affect employee competency and organizational performance. Employee competency was found to be related to organizational performance. Moreover, talent management and HR functions indirectly affect organizational performance through employee competency. Theoretical Importance: This study provides empirical evidence of the relationship between talent management, HR functions, employee competency, and organizational performance in the South Korean context. The findings suggest that organizations should focus on developing appropriate talent management and HR functions to improve employee competency, which, in turn, will lead to better organizational performance. 
Moreover, the study contributes to the existing literature by emphasizing the importance of the relationship between talent management and HR functions in improving organizational performance.
Keywords: employee competency, HR functions, organizational performance, talent management
Procedia PDF Downloads 96
1000 A Comprehensive Study of Spread Models of Wildland Fires
Authors: Manavjit Singh Dhindsa, Ursula Das, Kshirasagar Naik, Marzia Zaman, Richard Purcell, Srinivas Sampalli, Abdul Mutakabbir, Chung-Horng Lung, Thambirajah Ravichandran
Abstract:
These days, wildland fires, also known as forest fires, are more prevalent than ever. Wildfires have major repercussions that affect ecosystems, communities, and the environment in several ways. Wildfires lead to habitat destruction and biodiversity loss, affecting ecosystems and causing soil erosion. They also contribute to poor air quality by releasing smoke and pollutants that pose health risks, especially for individuals with respiratory conditions. Wildfires can damage infrastructure, disrupt communities, and cause economic losses. The economic impact of firefighting efforts, combined with their direct effects on forestry and agriculture, causes significant financial difficulties for the areas impacted. This research explores different forest fire spread models and presents a comprehensive review of various techniques and methodologies used in the field. A forest fire spread model is a computational or mathematical representation that is used to simulate and predict the behavior of a forest fire. By applying scientific concepts and data from empirical studies, these models attempt to capture the intricate dynamics of how a fire spreads, taking into consideration a variety of factors like weather patterns, topography, fuel types, and environmental conditions. These models assist authorities in understanding and forecasting the potential trajectory and intensity of a wildfire. Emphasizing the need for a comprehensive understanding of wildfire dynamics, this research explores the approaches, assumptions, and findings derived from various models. By using a comparison approach, a critical analysis is provided by identifying patterns, strengths, and weaknesses among these models. The purpose of the survey is to further wildfire research and management techniques. Decision-makers, researchers, and practitioners can benefit from the useful insights that are provided by synthesizing established information. 
Fire spread models provide insights into potential fire behavior, facilitating authorities to make informed decisions about evacuation activities, allocating resources for fire-fighting efforts, and planning for preventive actions. Wildfire spread models are also useful in post-wildfire mitigation strategies as they help in assessing the fire's severity, determining high-risk regions for post-fire dangers, and forecasting soil erosion trends. The analysis highlights the importance of customized modeling approaches for various circumstances and promotes our understanding of the way forest fires spread. Some of the known models in this field are Rothermel’s wildland fuel model, FARSITE, WRF-SFIRE, FIRETEC, FlamMap, FSPro, cellular automata model, and others. The key characteristics that these models consider include weather (includes factors such as wind speed and direction), topography (includes factors like landscape elevation), and fuel availability (includes factors like types of vegetation) among other factors. The models discussed are physics-based, data-driven, or hybrid models, also utilizing ML techniques like attention-based neural networks to enhance the performance of the model. In order to lessen the destructive effects of forest fires, this initiative aims to promote the development of more precise prediction tools and effective management techniques. The survey expands its scope to address the practical needs of numerous stakeholders. Access to enhanced early warning systems enables decision-makers to take prompt action. Emergency responders benefit from improved resource allocation strategies, strengthening the efficacy of firefighting efforts.
Keywords: artificial intelligence, deep learning, forest fire management, fire risk assessment, fire simulation, machine learning, remote sensing, wildfire modeling
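Of the model families surveyed, the cellular automaton is the easiest to sketch: a burning cell ignites each flammable neighbor with some probability per step. The grid size and spread probability below are illustrative, and no weather, fuel, or slope factors are included:

```python
import random

# Minimal cellular-automaton sketch of fire spread: a burning cell becomes
# burnt and ignites each 4-connected tree neighbor with probability p_spread.

EMPTY, TREE, BURNING, BURNT = 0, 1, 2, 3

def step(grid, p_spread, rng):
    n = len(grid)
    new = [row[:] for row in grid]
    for i in range(n):
        for j in range(n):
            if grid[i][j] == BURNING:
                new[i][j] = BURNT
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < n and 0 <= nj < n and grid[ni][nj] == TREE \
                            and rng.random() < p_spread:
                        new[ni][nj] = BURNING
    return new

rng = random.Random(42)
n = 15
grid = [[TREE] * n for _ in range(n)]
grid[n // 2][n // 2] = BURNING          # single ignition point
for _ in range(30):
    grid = step(grid, p_spread=0.6, rng=rng)
burnt = sum(row.count(BURNT) for row in grid)
print(burnt)
```

Real CA-based fire models replace the constant p_spread with direction- and cell-dependent probabilities derived from wind, slope, and fuel type.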
Procedia PDF Downloads 81
999 Optimizing Groundwater Pumping for a Complex Groundwater/Surface Water System
Authors: Emery A. Coppola Jr., Suna Cinar, Ferenc Szidarovszky
Abstract:
Over-pumping of groundwater resources is a serious problem worldwide. In addition to depleting this valuable resource, hydraulically connected, sensitive ecological resources like wetlands and surface water bodies are often impacted and even destroyed by over-pumping. Effectively managing groundwater in a way that satisfies human demand while preserving natural resources is a daunting challenge that will only worsen with growing human populations and climate change. As presented in this paper, a numerical flow model developed for a hypothetical but realistic groundwater/surface water system was combined with formal optimization. Response coefficients were used in an optimization management model to maximize groundwater pumping in a complex, multi-layered aquifer system while protecting against groundwater overdraft, streamflow depletion, and wetland impacts. Pumping optimization was performed for different constraint sets that reflect different resource protection preferences, yielding significantly different optimal pumping solutions. A sensitivity analysis of the optimal solutions was performed on select response coefficients to identify differences between wet and dry periods. Stochastic optimization was also performed, accounting for the uncertainty associated with changing irrigation demand under changing weather conditions. One of the strengths of this optimization approach is that it can efficiently and accurately identify superior management strategies that minimize risk and the adverse environmental impacts associated with groundwater pumping under different hydrologic conditions.
Keywords: numerical groundwater flow modeling, water management optimization, groundwater overdraft, streamflow depletion
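The response-coefficient formulation can be sketched in a few lines: drawdown or depletion at each control point is assumed linear in pumping (d = R q), and a coarse grid search stands in for the formal management model; the coefficients, limits, and bounds below are hypothetical:

```python
from itertools import product

# Illustrative response-coefficient optimization: maximize total pumping from
# three wells subject to linear impact constraints at two control points.
# All numbers are hypothetical; a real model would use an LP solver.

R = [[0.04, 0.01, 0.02],      # impact response at a wetland control point
     [0.01, 0.05, 0.01]]      # streamflow-depletion response
limits = [1.0, 0.8]           # allowable impact at each control point
q_max = 30.0                  # upper bound on each well's pumping rate

best, best_q = 0.0, None
steps = [q_max * k / 10 for k in range(11)]
for q in product(steps, repeat=3):
    d = [sum(r_ij * q_j for r_ij, q_j in zip(row, q)) for row in R]
    if all(di <= li for di, li in zip(d, limits)) and sum(q) > best:
        best, best_q = sum(q), q
print(best, best_q)
```

Tightening a limit (e.g. stronger wetland protection) shrinks the feasible region and lowers the optimal total pumping, which is exactly the trade-off the constraint-set comparison in the abstract explores.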
Procedia PDF Downloads 233
998 Processing and Modeling of High-Resolution Geophysical Data for Archaeological Prospection, Nuri Area, Northern Sudan
Authors: M. Ibrahim Ali, M. El Dawi, M. A. Mohamed Ali
Abstract:
In this study, a magnetic gradient survey and geoelectrical ground methods were used together to explore archaeological features in the area of Nuri’s pyramids. The magnetic survey was carried out with a Geoscan FM36 fluxgate gradiometer. The study area was divided into a number of grids of exactly 20 x 20 meters, which were merged at the end of the study into a master grid for each region. Within each grid, readings were taken at a sampling interval of 0.25 x 0.50 meter, in order to resolve archaeological features in more detail, including some small bipolar anomalies caused by buildings built from fired bricks. This resolution is important for detecting features such as rooms. The master grid yields an integrated map that is easy to present and allows all required processing operations in Geoscan Geoplot software. The magnetic readings were collected along parallel traverses to obtain high-quality data. The study area is very rich in old buildings, varying from small to very large. Owing to the sand dunes and loose soil, most of these buildings are not visible from the surface. Because of the dry, sandy soil, there was no electrical contact between the ground surface and the electrodes. We tried to obtain electrical readings by adding salty water to the soil, but, unfortunately, we failed to confirm the magnetic readings with electrical readings as previously planned.
Keywords: archaeological features, independent grids, magnetic gradient, Nuri pyramid
Procedia PDF Downloads 482
997 The Mediating Role of Psychological Factors in the Relationships Between Youth Problematic Internet and Subjective Well-Being
Authors: Dorit Olenik-Shemesh, Tali Heiman
Abstract:
The rapid increase in the massive use of the internet in recent years has led to an increase in the prevalence of a phenomenon called 'Problematic Internet Use' (PIU), an emerging and growing health problem, especially during adolescence, that poses a challenge for mental health research and practitioners. Problematic Internet Use is defined as excessive overuse of the internet, including an inability to control time spent online, cognitive preoccupation with the internet, and continued use in spite of adverse consequences, which may lead to psychological, social, and academic difficulties in one's life and daily functioning. However, little is known about the nature of the nexus between PIU and subjective well-being among adolescents. The main purpose of the current study was to explore in depth the network of connections between PIU, sense of well-being, and four personal-emotional factors (resilience, self-control, depressive mood, and loneliness) that may mediate these relationships. A total sample of 433 adolescents, 214 (49.4%) girls and 219 (50.6%) boys between the ages of 12-17 (mean = 14.9, SD = 2.16), completed self-report questionnaires relating to the study variables. In line with the hypothesis, structural equation modeling (SEM) analysis revealed the following main results: high levels of PIU predicted low levels of well-being among adolescents. In addition, the relationships between PIU and well-being were mediated by low resilience combined with high depressive mood, by low self-control combined with high depressive mood, and by low resilience combined with high loneliness. In general, girls were found to be higher than boys in PIU and in resilience.
The study results have specific implications for developing intervention programs for adolescents in the context of PIU, aiming at a more balanced, adjusted use of the internet and at preventing the decrease in well-being.
Keywords: problematic internet use, well-being, adolescents, SEM model
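A minimal sketch of the product-of-coefficients logic behind one of the mediation paths (PIU to depressive mood to well-being): the data are simulated with assumed path strengths of 0.5 and -0.6, and the slopes are simple unadjusted OLS estimates, not the study's SEM solution:

```python
import random

# Toy product-of-coefficients mediation check on simulated data:
# path a (PIU -> depressive mood), path b (mood -> well-being);
# the indirect effect is a * b.

def ols_slope(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
           sum((xi - mx) ** 2 for xi in x)

rng = random.Random(1)
piu = [rng.gauss(0, 1) for _ in range(500)]
mood = [0.5 * p + rng.gauss(0, 1) for p in piu]        # true path a = 0.5
wellb = [-0.6 * m + rng.gauss(0, 1) for m in mood]     # true path b = -0.6

a = ols_slope(piu, mood)
b = ols_slope(mood, wellb)
print(round(a, 2), round(b, 2), round(a * b, 2))       # a, b, indirect effect
```

An SEM package would additionally estimate the direct PIU-to-well-being path and bootstrap a confidence interval for the indirect effect.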
Procedia PDF Downloads 168
996 Efficient DNN Training on Heterogeneous Clusters with Pipeline Parallelism
Abstract:
Pipeline parallelism has been widely used to accelerate distributed deep learning, alleviating GPU memory bottlenecks and ensuring that models can be trained and deployed smoothly under limited graphics memory. However, in highly heterogeneous distributed clusters, traditional model partitioning methods cannot achieve load balancing, and overlapping communication with computation is also a major challenge. In this paper, HePipe is proposed, an efficient pipeline parallel training method for highly heterogeneous clusters. Based on the characteristics of neural network pipeline training tasks, and oriented to a 2-level heterogeneous cluster computing topology, a training method based on 2-level stage division of neural network modeling and partitioning is designed to improve parallelism. Additionally, a multi-forward 1F1B scheduling strategy is designed to accelerate the training time of each stage by executing computation units in advance, maximizing the overlap between forward propagation communication and backward propagation computation. Finally, a dynamic recomputation strategy based on task memory requirement prediction is proposed to improve the fit between tasks and memory, which improves cluster throughput and solves the memory shortfall caused by memory differences in heterogeneous clusters. The empirical results show that HePipe improves training speed by 1.6x-2.2x over existing asynchronous pipeline baselines.
Keywords: pipeline parallelism, heterogeneous cluster, model training, 2-level stage partitioning
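The stage-division problem HePipe addresses can be illustrated with a small dynamic program that splits contiguous layers across devices of unequal speed so as to minimize the slowest stage; this is a generic sketch of the load-balancing objective, not HePipe's actual 2-level partitioning algorithm, and the layer costs and device speeds are invented:

```python
from functools import lru_cache

# Contiguous layer-to-stage partitioning for a heterogeneous pipeline:
# minimize the time of the slowest stage, where stage k runs on device k
# with a given speed factor.

layer_cost = [4, 2, 6, 3, 5, 1, 2, 7]     # relative compute cost per layer
speed = [1.0, 2.0, 0.5, 1.0]              # per-device speedups

def partition(costs, speeds):
    n, k = len(costs), len(speeds)

    @lru_cache(maxsize=None)
    def best(i, s):                        # layers i.. go to stages s..
        if s == k - 1:                     # last stage takes the remainder
            return sum(costs[i:]) / speeds[s]
        result = float('inf')
        for j in range(i + 1, n - (k - s - 1) + 1):
            stage_time = sum(costs[i:j]) / speeds[s]
            result = min(result, max(stage_time, best(j, s + 1)))
        return result

    return best(0, 0)

t_best = partition(layer_cost, speed)
print(t_best)                              # prints 7.0
```

Because the final layer (cost 7) must sit on the last device (speed 1.0), no partition can beat a bottleneck of 7.0 here, and the DP finds a split that attains it.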
Procedia PDF Downloads 18
995 Factors Affecting Internet Behavior and Life Satisfaction of Older Adult Learners with Use of Smartphone
Authors: Horng-Ji Lai
Abstract:
The intuitive design features and friendly interface of the smartphone attract older adults. In Taiwan, many senior education institutes offer smartphone training courses for older adult learners interested in this technology. Such courses are expected to help them enjoy the benefits of smartphone use and increase their life satisfaction, so it is important to investigate the factors that influence older adults' smartphone use. The purpose of this research was to develop and test a model of the factors (self-efficacy, social connection, the need to seek health information, and the need to seek financial information) affecting older adult learners' internet behavior and their life satisfaction with use of a smartphone, and to identify the relationships between the proposed variables. A survey was used to collect research data, and structural equation modeling was performed using Partial Least Squares (PLS) regression for data exploration and model estimation. The participants were 394 older adult learners from smartphone training courses in active aging learning centers located in central Taiwan. The results revealed that self-efficacy significantly affected older adult learners' social connection, need to seek health information, and need to seek financial information, and that social connection had a positive influence on respondents' life satisfaction. The implications of these results for practice and future research are also discussed.
Keywords: older adults, smartphone, internet behaviour, life satisfaction
Procedia PDF Downloads 190
994 Design and Radio Frequency Characterization of Radial Reentrant Narrow Gap Cavity for the Inductive Output Tube
Authors: Meenu Kaushik, Ayon K. Bandhoyadhayay, Lalit M. Joshi
Abstract:
Inductive output tubes (IOTs) are widely used as microwave power amplifiers for broadcast and scientific applications. An IOT is capable of amplifying radio frequency (RF) power with very good efficiency, and its compactness, reliability, high linearity, and low operating cost make it suitable for various applications. The device consists of an integrated electron gun and RF cavity, a collector, and a focusing structure; its working principle combines those of the triode and the klystron. The cathode in the electron gun produces a stream of electrons, and a control grid is placed in close proximity to the cathode. The input part of an IOT is the integrated gridded electron gun structure, which acts as an input cavity providing the interaction gap where the applied input RF signal interacts with the electron beam to support amplification. This paper presents the design, fabrication, and testing of a radial re-entrant cavity for the input structure of an IOT operating at 350 MHz. The model's suitability is discussed, and a generalized mathematical relation is introduced for obtaining the proper transverse magnetic (TM) resonant mode in radial narrow-gap RF cavities. The structural modeling was carried out in the CST and SUPERFISH codes. The cavity was fabricated from aluminum, and its RF characterization was performed with a vector network analyzer (VNA); results are presented for the resonant frequency peaks obtained.
Keywords: inductive output tubes, IOT, radial cavity, coaxial cavity, particle accelerators
Procedia PDF Downloads 124
993 Dispersion Effects in Waves Reflected by Lossy Conductors: The Optics vs. Electromagnetics Approach
Authors: Oibar Martinez, Clara Oliver, Jose Miguel Miranda
Abstract:
The study of dispersion phenomena in electromagnetic waves reflected by conductors at infrared and lower frequencies finds a number of applications. In this work we explain the most relevant of these applications and how the phenomenon is modeled from both the optics and the electromagnetics points of view. We also explain how the amplitude of an electromagnetic wave reflected by a lossy conductor depends on the frequency of the incident wave as well as on the electrical properties of the conductor, and we illustrate this phenomenon with a practical example. The mathematical analysis made by a specialist in electromagnetics or a microwave engineer is apparently very different from the one made by a specialist in optics. We show how both approaches lead to the same physical result, and which key concepts make clear that, despite the differences in the equations, the solution to the problem is the same. Our study starts with an analysis based on the complex refractive index and the reflectance. We show that this reflectance depends on the square root of the frequency when the reflecting material is a good conductor and the frequency of the wave is low enough. We then analyze the same problem with a less familiar approach, based on the reflection coefficient of the electric field, a parameter most commonly used in electromagnetics and microwave engineering. In summary, this paper presents a mathematical study, illustrated with a worked example, which unifies the modeling of dispersion effects by specialists in optics and by specialists in electromagnetics. The main finding of this work is that the dependence of the Fresnel reflectance on frequency can be reproduced from the intrinsic impedance of the reflecting media.
Keywords: dispersion, electromagnetic waves, microwaves, optics
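The square-root frequency dependence described above can be illustrated numerically with the standard Hagen-Rubens approximation for the normal-incidence reflectance of a good conductor — a generic sketch, not necessarily the exact worked example of the paper:

```python
import math

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m


def hagen_rubens_reflectance(freq_hz, sigma):
    """Normal-incidence reflectance of a good conductor
    (Hagen-Rubens approximation, valid when sigma >> eps0 * omega):
        R = 1 - 2 * sqrt(2 * eps0 * omega / sigma)
    """
    omega = 2.0 * math.pi * freq_hz
    return 1.0 - 2.0 * math.sqrt(2.0 * EPS0 * omega / sigma)
```

Because the absorptance 1 - R scales as the square root of omega, quadrupling the frequency doubles the absorbed fraction, which is the dispersion behavior both the optics and the electromagnetics derivations must reproduce.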
Procedia PDF Downloads 129
992 Technical and Economic Analysis of Smart Micro-Grid Renewable Energy Systems: An Applicable Case Study
Authors: M. A. Fouad, M. A. Badr, Z. S. Abd El-Rehim, Taher Halawa, Mahmoud Bayoumi, M. M. Ibrahim
Abstract:
Renewable-energy-based micro-grids are presently attracting significant consideration, and the smart grid is presently considered a reliable solution to the expected deficiency in power from future power systems. The purpose of this study is to determine the optimal component sizes of a micro-grid and to investigate its technical and economic performance along with its environmental impacts. The micro-grid supplies electricity to two small factories, and both on-grid and off-grid modes are considered. The micro-grid includes photovoltaic cells, a back-up diesel generator, wind turbines, and a battery bank; the estimated load pattern has a 76 kW peak. The system is modeled and simulated with MATLAB/Simulink to identify the technical issues of the renewable power generation units. To evaluate the system economy, two criteria are used: the net present cost and the cost of generated electricity (COE). The most feasible system components for the selected application are obtained, based on the required parameters, using the HOMER simulation package. The results showed that a wind/photovoltaic (W/PV) on-grid system is more economical than a wind/photovoltaic/diesel/battery (W/PV/D/B) off-grid system, with COEs of 0.266 $/kWh and 0.316 $/kWh, respectively. When the cost of carbon dioxide emissions is considered, the off-grid system becomes competitive with the on-grid system, the COE being 0.256 $/kWh and 0.266 $/kWh for the on-grid and off-grid systems, respectively.
Keywords: renewable energy sources, micro-grid system, modeling and simulation, on/off grid system, environmental impacts
Procedia PDF Downloads 270
991 The Improvement of Turbulent Heat Flux Parameterizations in Tropical GCMs Simulations Using Low Wind Speed Excess Resistance Parameter
Authors: M. O. Adeniyi, R. T. Akinnubi
Abstract:
The parameterization of turbulent heat fluxes is needed for modeling land-atmosphere interactions in Global Climate Models (GCMs). However, current GCMs still have difficulty producing reliable turbulent heat fluxes for humid tropical regions, which may be due to inadequate parameterization of the roughness lengths for momentum (z0m) and heat (z0h) transfer. These roughness lengths are usually expressed in terms of an excess resistance factor, κB⁻¹, which accounts for the different resistances to momentum and heat transfer. In this paper, a more appropriate excess resistance factor κB⁻¹, suitable for low wind speed conditions, was developed and incorporated into the aerodynamic resistance approach (ARA) in GCMs. The performance of various standard GCM κB⁻¹ schemes, developed for high wind speed conditions, was also assessed. Based on in-situ surface heat fluxes and profile measurements of wind speed and temperature from the Nigeria Micrometeorological Experimental site (NIMEX), the new κB⁻¹ was derived through application of Monin-Obukhov similarity theory and Brutsaert's theoretical model for heat transfer. Turbulent flux parameterizations with this new formula provide better estimates of heat fluxes than those obtained using existing GCM κB⁻¹ schemes. With the derived κB⁻¹, the MBE and RMSE of the parameterized QH ranged from -1.15 to -5.10 W m⁻² and 10.01 to 23.47 W m⁻², while those of QE ranged from -8.02 to 6.11 W m⁻² and 14.01 to 18.11 W m⁻², respectively; the derived κB⁻¹ gave better estimates of QH than of QE during daytime. The derived relation is κB⁻¹ = 6.66 Re*^0.02 - 5.47, where Re* is the roughness Reynolds number. The derived κB⁻¹ scheme corrects a well-documented large overestimation of turbulent heat fluxes and is therefore recommended for regional models within the tropics, where low wind speeds prevail.
Keywords: humid, tropic, excess resistance factor, overestimation, turbulent heat fluxes
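The derived scheme can be evaluated directly from the formula quoted in the abstract; a minimal sketch:

```python
def excess_resistance_factor(re_star):
    """Derived excess resistance factor from the abstract:
        kB^-1 = 6.66 * Re*^0.02 - 5.47
    where re_star is the roughness Reynolds number Re*.
    """
    return 6.66 * re_star ** 0.02 - 5.47
```

The very small exponent (0.02) makes κB⁻¹ only weakly sensitive to Re*, consistent with a scheme tuned for low-wind conditions rather than the strongly Re*-dependent high-wind formulations it is compared against.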
Procedia PDF Downloads 202
990 On the Added Value of Probabilistic Forecasts Applied to the Optimal Scheduling of a PV Power Plant with Batteries in French Guiana
Authors: Rafael Alvarenga, Hubert Herbaux, Laurent Linguet
Abstract:
The uncertainty concerning the power production of intermittent renewable energy is one of the main barriers to the integration of such assets into the power grid. Efforts have thus been made to develop methods to quantify this uncertainty, allowing producers to make more reliable and profitable engagements for their future power delivery. Even though a diversity of probabilistic approaches proposed in the literature give promising results, the added value of adopting such methods for scheduling intermittent power plants is still unclear. In this study, the profits obtained by a decision-making model used to optimally schedule an existing PV power plant connected to batteries are compared when the model is fed with deterministic and probabilistic forecasts generated with two of the most recent methods proposed in the literature. Moreover, deterministic forecasts with different accuracy levels were used in the experiments, testing the utility of probabilistic methods and their capability to model progressively increasing uncertainty. Even though probabilistic approaches are well developed in the recent literature, the results of a case study show that deterministic forecasts still provide the best performance if accurate, ensuring a gain of 14% on final profits compared to the average performance of probabilistic models conditioned on the same forecasts. As the accuracy of deterministic forecasts progressively decreases, probabilistic approaches become competitive options, until they completely outperform deterministic forecasts when these are very inaccurate, generating 73% more profits in the case considered compared to the deterministic approach.
Keywords: PV power forecasting, uncertainty quantification, optimal scheduling, power systems
Procedia PDF Downloads 87
989 A Modular and Reusable Bond Graph Model of Epithelial Transport in the Proximal Convoluted Tubule
Authors: Leyla Noroozbabaee, David Nickerson
Abstract:
We introduce a modular, consistent, reusable bond graph model of the renal nephron's proximal convoluted tubule (PCT) that can reproduce biological behaviour, focusing on ion and volume transport. Modelling complex systems requires breaking complex modelling problems down into manageable pieces, which can be enabled by developing models of subsystems that are subsequently coupled hierarchically; bond graphs support this naturally because they are based on a graph structure. In the current work, we define two modular subsystems: a resistive module representing the membrane and a capacitive module representing solution compartments. Each module is analyzed in terms of thermodynamic processes, and all the subsystems are reintegrated into circuit theory within network thermodynamics. The epithelial transport system introduced in the current study consists of five transport membranes and four solution compartments. Coupled dissipations occur in the membrane subsystems, and coupled free-energy-increasing or -decreasing processes appear in the solution compartment subsystems. These structural subsystems also consist of elementary thermodynamic processes: dissipations, free-energy changes, and power conversions. We provide free and open access to the Python implementation to ensure our model is accessible, enabling readers to explore the model by setting up their own simulations and reproducibility tests.
Keywords: bond graph, epithelial transport, water transport, mathematical modeling
Procedia PDF Downloads 87
988 Numerical Modelling of Immiscible Fluids Flow in Oil Reservoir Rocks during Enhanced Oil Recovery Processes
Authors: Zahreddine Hafsi, Manoranjan Mishra , Sami Elaoud
Abstract:
Ensuring the maximum recovery rate of oil from reservoir rocks is a challenging task that requires preliminary numerical analysis of the different techniques used to enhance the recovery process. After conventional recovery, water flooding is one of several techniques used for enhanced oil recovery (EOR) to retrieve the oil left behind after the primary recovery phase. In this research work, EOR via water flooding is numerically modeled, and the hydrodynamic instabilities resulting from immiscible oil-water flow in reservoir rocks are investigated. An oil reservoir is a porous medium consisting of many fractures of tiny dimensions. For modeling purposes, the reservoir is considered a collection of capillary tubes, which provides useful insights into how fluids behave in the reservoir pore spaces. The equations governing oil-water flow in reservoir rocks are developed and numerically solved following a finite element scheme, with numerical results obtained using the COMSOL Multiphysics software. The two-phase Darcy module of COMSOL Multiphysics allows modelling of the imbibition process through the injection of water (as the wetting phase) into an oil reservoir. The van Genuchten, Brooks-Corey, and Leverett models were considered as retention models; the resulting flow configurations are compared, and the governing parameters are discussed. For the considered retention models, it was found that the onset of instabilities, viz. the fingering phenomenon, is highly dependent on the capillary pressure as well as on the boundary conditions, i.e., the inlet pressure and the injection velocity.
Keywords: capillary pressure, EOR process, immiscible flow, numerical modelling
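Of the retention models named above, the Brooks-Corey relation is the simplest to state: capillary pressure rises from an entry pressure as the effective water saturation falls. A minimal sketch (parameter names here are illustrative, not taken from the paper's setup):

```python
def brooks_corey_pc(s_w, s_wr, s_or, p_entry, lam):
    """Brooks-Corey capillary pressure vs. water saturation.

    s_w:     water saturation
    s_wr:    residual (irreducible) water saturation
    s_or:    residual oil saturation
    p_entry: entry (displacement) pressure, Pa
    lam:     pore-size distribution index (lambda)
    """
    # Effective saturation, normalized between the two residuals.
    se = (s_w - s_wr) / (1.0 - s_wr - s_or)
    return p_entry * se ** (-1.0 / lam)
```

At full effective saturation the capillary pressure equals the entry pressure, and it grows without bound as the water saturation approaches its residual value — the steepness of that growth (via lambda) is one of the governing parameters the abstract refers to.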
Procedia PDF Downloads 131
987 Simulation of Dynamic Behavior of Seismic Isolators Using a Parallel Elasto-Plastic Model
Authors: Nicolò Vaiana, Giorgio Serino
Abstract:
In this paper, a one-dimensional (1d) Parallel Elasto-Plastic Model (PEPM), able to simulate the uniaxial dynamic behavior of seismic isolators having a continuously decreasing tangent stiffness with increasing displacement, is presented. The parallel modeling concept is applied to discretize the continuously decreasing tangent stiffness function, thus allowing the dynamic behavior of seismic isolation bearings to be simulated by putting linear elastic and nonlinear elastic-perfectly plastic elements in parallel. The mathematical model has been validated by comparing the experimental force-displacement hysteresis loops, obtained by testing a helical wire rope isolator and a recycled rubber-fiber reinforced bearing, with those predicted numerically. Good agreement between the simulated and experimental results shows that the proposed model can be an effective numerical tool for predicting the force-displacement relationship of seismic isolators within relatively large displacements. Compared to the widely used Bouc-Wen model, the proposed one avoids the numerical solution of a first-order nonlinear ordinary differential equation at each time step of a nonlinear time history analysis, thus reducing the computational effort, and requires the evaluation of only three model parameters from experimental tests, namely the initial tangent stiffness, the asymptotic tangent stiffness, and a parameter defining the transition from the initial to the asymptotic tangent stiffness.
Keywords: base isolation, earthquake engineering, parallel elasto-plastic model, seismic isolators, softening hysteresis loops
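The parallel concept described above — a linear spring carrying the asymptotic stiffness alongside elastic-perfectly-plastic elements that yield one after another, so the tangent stiffness decreases with displacement — can be sketched as follows (a minimal illustration with one element, not the authors' calibrated model):

```python
class ElastoPlasticElement:
    """Elastic-perfectly-plastic spring: stiffness k, yield force fy.
    Its force follows k * dx increments, clipped to +/- fy."""

    def __init__(self, k, fy):
        self.k, self.fy, self.f = k, fy, 0.0

    def update(self, dx):
        self.f = max(-self.fy, min(self.fy, self.f + self.k * dx))
        return self.f


class ParallelModel:
    """Linear spring (asymptotic stiffness) in parallel with EPP elements
    that discretize the softening part of the tangent stiffness."""

    def __init__(self, k_asymptotic, epp_elements):
        self.ka, self.elems = k_asymptotic, epp_elements
        self.x = 0.0

    def step(self, x_new):
        dx = x_new - self.x
        self.x = x_new
        return self.ka * x_new + sum(e.update(dx) for e in self.elems)
```

Because each element's state is updated algebraically per displacement increment, no ordinary differential equation has to be integrated at each time step — the computational advantage over Bouc-Wen that the abstract highlights.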
Procedia PDF Downloads 280
986 Aeroacoustics Investigations of Unsteady 3D Airfoil for Different Angle Using Computational Fluid Dynamics Software
Authors: Haydar Kepekçi, Baha Zafer, Hasan Rıza Güven
Abstract:
Noise disturbance is one of the major factors considered in the fast development of aircraft technology. This paper examines the flow field over 2D NACA0015 and 3D NACA0012 blade profiles, using the SST k-ω turbulence model to compute the unsteady flow field. The time-dependent flow field variables are inserted into the Ffowcs Williams and Hawkings (FW-H) equations as input, and Sound Pressure Level (SPL) values are computed for different angles of attack (AoA) at a microphone positioned in the computational domain, in order to investigate how the noise level of the unsteady 2D and 3D airfoil regions increases. The computed results are compared with experimental data available in the open literature. One of the calculated Cp values is slightly lower than the experimental value; this difference could be due to the higher Reynolds number of the experimental data. The ANSYS Fluent software was used in this study; Fluent includes well-validated physical modeling capabilities across a wide range of CFD and multiphysics applications. The 2D NACA0015 case has approximately 7 million elements and solves compressible fluid flow with heat transfer using the SST turbulence model, while the 3D NACA0012 case has approximately 3 million elements.
Keywords: 3D blade profile, noise disturbance, aeroacoustics, Ffowcs Williams and Hawkings (FW-H) equations, k-ω SST turbulence model
Procedia PDF Downloads 212
985 Avoiding Gas Hydrate Problems in Qatar Oil and Gas Industry: Environmentally Friendly Solvents for Gas Hydrate Inhibition
Authors: Nabila Mohamed, Santiago Aparicio, Bahman Tohidi, Mert Atilhan
Abstract:
One of the biggest problems Qatar faces in processing its natural gas is the recurring pipeline blockage caused by uncontrolled gas hydrate formation. Several millions of dollars are spent at process sites to clear such blockages safely using chemical inhibitors. We aim to establish a national database that addresses the physical conditions under which Qatari natural gas forms hydrates in pipelines. Moreover, we aim to design and test novel hydrate inhibitors suitable for Qatari natural gas and its processing facilities. From these perspectives, we aim to provide more effective and sustainable reservoir utilization and processing of Qatari natural gas. In this work, we present the initial findings of a QNRF-funded project on the hydrate formation characteristics of Qatari-type gas, using both experimental (PVTx) and computational (molecular simulation) methods. We present data from two fully automated apparatus: a gas hydrate autoclave and a rocking cell. Hydrate equilibrium curves, including growth/dissociation conditions, were obtained for multi-component gas mixtures representative of Qatari-type natural gas, with and without well-known kinetic and thermodynamic hydrate inhibitors. Ionic liquids were designed and tested for their inhibition performance, and their DFT and molecular modeling simulation results were also obtained and compared with the experimental results. The results showed significant performance of ionic liquids at concentrations up to 0.5% by volume, with 2 to 4 °C of inhibition at high pressures.
Keywords: gas hydrates, natural gas, ionic liquids, inhibition, thermodynamic inhibitors, kinetic inhibitors
Procedia PDF Downloads 1320
984 Third Party Logistics (3PL) Selection Criteria for an Indian Heavy Industry Using SEM
Authors: Nadama Kumar, P. Parthiban, T. Niranjan
Abstract:
In the present paper, we propose an integrated approach for 3PL supplier selection that suits the distinctive strategic needs of an outsourcing organization in the southern part of India. Four fundamental criteria have been used, namely Performance, IT, Service, and Intangible, which are further subdivided into fifteen sub-criteria. The proposed strategy combines Structural Equation Modeling (SEM) and non-additive fuzzy integral methods; the introduction of fuzziness handles the vagueness of human judgments. The SEM approach has been used to validate the selection criteria for the proposed model, while the non-additive fuzzy integral approach uses the SEM output to compute a supplier selection score. The case organization has a vertically integrated structure comprising several companies, each focusing on a narrow segment of the value chain. To maintain manufacturing and logistics proficiency, it relies heavily on 3PL suppliers to achieve supply chain excellence. However, 3PL supplier selection is an intricate decision-making procedure involving multiple selection criteria. The goal of this work is to identify the crucial 3PL selection criteria using the non-additive fuzzy integral approach. Unlike traditional multi-criteria decision-making (MCDM) methods, which frequently assume independence among criteria and additive importance weights, the non-additive fuzzy integral is an effective method for resolving dependency among criteria, vague information, and the inherent fuzziness of human judgment. In this work, we validate the approach with an empirical case that uses the non-additive fuzzy integral to assess the importance weights of the selection criteria and indicate the most suitable 3PL supplier.
Keywords: 3PL, non-additive fuzzy integral approach, SEM, fuzzy
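The non-additive fuzzy integral at the core of the scoring step is the discrete Choquet integral: criterion scores are aggregated against a fuzzy measure defined on sets of criteria, so interactions between criteria are captured rather than assumed away. A minimal sketch (criterion names and measure values are illustrative only, not the paper's data):

```python
def choquet_integral(scores, mu):
    """Discrete Choquet integral of criterion scores w.r.t. fuzzy measure mu.

    scores: dict mapping criterion name -> score in [0, 1].
    mu:     dict mapping frozensets of criteria -> measure in [0, 1];
            must be monotone with mu(all criteria) = 1.
    """
    crits = sorted(scores, key=scores.get)  # ascending by score
    total, prev = 0.0, 0.0
    remaining = set(crits)
    for c in crits:
        x = scores[c]
        # Weight each score increment by the measure of the set of
        # criteria whose score is at least x.
        total += (x - prev) * mu[frozenset(remaining)]
        prev = x
        remaining.discard(c)
    return total
```

When the measure happens to be additive, the Choquet integral collapses to an ordinary weighted sum, which is exactly the special case assumed by the traditional MCDM methods the paper argues against.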
Procedia PDF Downloads 280
983 Adding a Few Language-Level Constructs to Improve OOP Verifiability of Semantic Correctness
Authors: Lian Yang
Abstract:
Object-oriented programming (OOP) is the dominant programming paradigm in today's software industry, and it has enabled average software developers to build millions of commercial-strength software applications over the past three decades of the Internet revolution. On the other hand, the lack of a strict mathematical model and of domain-constraint features at the language level has long perplexed the computer science academia and the OOP engineering community. This situation has resulted in inconsistent system quality and hard-to-understand designs in some OOP projects, and the difficulty of fixing it is also well known. Although the power of OOP lies in its unbridled flexibility and enormously rich data modeling capability, we argue that the ambiguity and the implicit facade surrounding the conceptual model of a class and an object should be eliminated as much as possible. We list the five major usages of a class and propose to separate them with new language constructs. Using the well-established theories of sets and finite state machines (FSMs), we propose to apply simple, generic, and yet effective constraints at the OOP language level in an attempt to address the above-mentioned issues. The goal is to make OOP more theoretically sound, to help programmers uncover warning signs of irregularities and domain-specific issues early in the development stage, and to catch semantic mistakes at runtime, improving the correctness verifiability of software programs. That said, the aim of this paper is more practical than theoretical.
Keywords: new language constructs, set theory, FSM theory, user defined value type, function groups, membership qualification attribute (MQA), check-constraint (CC)
Procedia PDF Downloads 239
982 Prediction Modeling of Alzheimer’s Disease and Its Prodromal Stages from Multimodal Data with Missing Values
Authors: M. Aghili, S. Tabarestani, C. Freytes, M. Shojaie, M. Cabrerizo, A. Barreto, N. Rishe, R. E. Curiel, D. Loewenstein, R. Duara, M. Adjouadi
Abstract:
A major challenge in medical studies, especially longitudinal ones, is the problem of missing measurements, which hinders the effective application of many machine learning algorithms. Furthermore, recent Alzheimer's disease studies have focused on the delineation of Early Mild Cognitive Impairment (EMCI) and Late Mild Cognitive Impairment (LMCI) from cognitively normal controls (CN), which is essential for developing effective and early treatment methods. To address these challenges, this paper explores the potential of the eXtreme Gradient Boosting (XGBoost) algorithm for handling missing values in multiclass classification. We seek a generalized classification scheme where all prodromal stages of the disease are considered simultaneously in the classification and decision-making processes. Given the large number of subjects (1631) included in this study, and in the presence of almost 28% missing values, we investigated the performance of XGBoost on the classification of the four classes AD, CN, EMCI, and LMCI. Using 10-fold cross-validation, XGBoost is shown to outperform other state-of-the-art classification algorithms by 3% in terms of accuracy and F-score. Our model achieved an accuracy of 80.52%, a precision of 80.62%, and a recall of 80.51%, supporting the more natural and promising multiclass classification.
Keywords: eXtreme gradient boosting, missing data, Alzheimer's disease, early mild cognitive impairment, late mild cognitive impairment, multiclass classification, ADNI, support vector machine, random forest
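XGBoost handles missing values by learning a "default direction" at each tree split: samples with a missing feature are routed to whichever branch reduces the loss. A toy, standard-library-only sketch of that idea for a single decision stump on one feature (purely illustrative — not the study's pipeline, and real XGBoost optimizes gradients, not raw error counts):

```python
def best_stump_with_missing(xs, ys):
    """Pick threshold and default branch for None values on one feature.

    xs: feature values, possibly None (missing).
    ys: binary labels (0 = left class, 1 = right class).
    Returns (errors, threshold, default_left): missing values go left
    when default_left is True, otherwise right.
    """
    present = sorted({x for x in xs if x is not None})
    best = None
    for thr in present:
        for default_left in (True, False):
            errs = 0
            for x, y in zip(xs, ys):
                go_left = default_left if x is None else x <= thr
                pred = 0 if go_left else 1
                errs += pred != y
            if best is None or errs < best[0]:
                best = (errs, thr, default_left)
    return best
```

The key point is that missingness itself is treated as informative and routed optimally, rather than requiring imputation before training — which is what makes the method attractive for a dataset with almost 28% missing values.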
Procedia PDF Downloads 188
981 Effect of Surface Treatments on the Cohesive Response of Nylon 6/silica Interfaces
Authors: S. Arabnejad, D. W. C. Cheong, H. Chaobin, V. P. W. Shim
Abstract:
Debonding is one of the fundamental damage mechanisms in particle-filled composites. This phenomenon gains more importance in nanocomposites because of the extensive interfacial region present in these materials. Understanding the debonding mechanism accurately can help in understanding and predicting the response of nanocomposites as the interface deteriorates. The small length scale of the phenomenon makes experimental characterization complicated and its results far from the real physical behavior. In this study, the damage process at the nylon-6/silica interface is examined through Molecular Dynamics (MD) modeling and simulations. The silica has been modeled with three forms of surface: without any surface treatment, with a 3-aminopropyltriethoxysilane (APTES) surface treatment, and with a hexamethyldisilazane (HMDZ) surface treatment. The APTES surface modification, used to create functional groups on the silica surface, reacts with and forms covalent bonds to nylon-6 chains, while the HMDZ surface treatment interacts with both particle and polymer only through non-bonded interactions. The MD model in this study uses the PCFF force field. The atomic model is generated in a periodic box with a layer of vacuum on top of the polymer layer; this vacuum layer is large enough to ensure no interaction between the particle and the substrate after debonding. The results show that each of the three models exhibits a different, yet almost bilinear, traction-separation behavior. The study also reveals a strong correlation between the length of the APTES surface treatment and the cohesive strength of the interface.
Keywords: debonding, surface treatment, cohesive response, separation behaviour
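The "almost bilinear" traction-separation response observed in all three models can be idealized as the standard bilinear cohesive law: traction rises linearly to a peak and then softens linearly to zero at complete separation. A generic sketch (the parameters here are not fitted to the MD data):

```python
def bilinear_traction(delta, delta0, deltaf, t_max):
    """Bilinear cohesive law.

    delta:  current separation
    delta0: separation at peak traction (end of linear rise)
    deltaf: final separation (complete debonding)
    t_max:  peak (cohesive) traction
    """
    if delta <= 0.0:
        return 0.0
    if delta <= delta0:
        return t_max * delta / delta0          # linear loading branch
    if delta < deltaf:
        return t_max * (deltaf - delta) / (deltaf - delta0)  # softening
    return 0.0                                  # fully debonded
```

The area under this curve is the interfacial fracture energy, so fitting (delta0, deltaf, t_max) to the simulated hysteresis-free loading curve is one way to carry the MD result into a continuum cohesive-zone model.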
Procedia PDF Downloads 460