68 Investigating the Influence of Activation Functions on Image Classification Accuracy via Deep Convolutional Neural Network
Authors: Gulfam Haider, Sana Danish
Abstract:
Convolutional Neural Networks (CNNs) have emerged as powerful tools for image classification, and the choice of optimizers profoundly affects their performance. The study of optimizers and their adaptations remains a topic of significant importance in machine learning research. While numerous studies have explored and advocated for various optimizers, the efficacy of these optimization techniques is still subject to scrutiny. This work aims to address the challenges surrounding the effectiveness of optimizers by conducting a comprehensive analysis and evaluation. The primary focus of this investigation lies in examining the performance of different optimizers when employed in conjunction with the popular activation function, Rectified Linear Unit (ReLU). By incorporating ReLU, known for its favorable properties in prior research, the aim is to bolster the effectiveness of the optimizers under scrutiny. Specifically, we evaluate the adjustment of these optimizers with both the original Softmax activation function and the modified ReLU activation function, carefully assessing their impact on overall performance. To achieve this, a series of experiments are conducted using a well-established benchmark dataset for image classification tasks, namely the Canadian Institute for Advanced Research dataset (CIFAR-10). The selected optimizers for investigation encompass a range of prominent algorithms, including Adam, Root Mean Squared Propagation (RMSprop), Adaptive Learning Rate Method (Adadelta), Adaptive Gradient Algorithm (Adagrad), and Stochastic Gradient Descent (SGD). The performance analysis encompasses a comprehensive evaluation of the classification accuracy, convergence speed, and robustness of the CNN models trained with each optimizer. Through rigorous experimentation and meticulous assessment, we discern the strengths and weaknesses of the different optimization techniques, providing valuable insights into their suitability for image classification tasks. By conducting this in-depth study, we contribute to the existing body of knowledge surrounding optimizers in CNNs, shedding light on their performance characteristics for image classification. The findings gleaned from this research serve to guide researchers and practitioners in making informed decisions when selecting optimizers and activation functions, thus advancing the state-of-the-art in the field of image classification with convolutional neural networks.Keywords: deep neural network, optimizers, RMsprop, ReLU, stochastic gradient descent
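As a point of reference for the kind of experiment described above, the sketch below trains a small CNN on CIFAR-10 once per optimizer, with ReLU hidden activations and a Softmax output layer. It is a minimal Keras-based illustration only; the architecture, learning rates, epoch count and batch size are assumptions made for brevity, not the configuration used in the study.

```python
import tensorflow as tf

def build_cnn(hidden_activation="relu"):
    # Hypothetical small CNN; the study's exact architecture is not reproduced here.
    return tf.keras.Sequential([
        tf.keras.Input(shape=(32, 32, 3)),
        tf.keras.layers.Conv2D(32, 3, activation=hidden_activation),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation=hidden_activation),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation=hidden_activation),
        tf.keras.layers.Dense(10, activation="softmax"),  # class probabilities
    ])

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

optimizers = {
    "adam": tf.keras.optimizers.Adam,
    "rmsprop": tf.keras.optimizers.RMSprop,
    "adadelta": tf.keras.optimizers.Adadelta,
    "adagrad": tf.keras.optimizers.Adagrad,
    "sgd": tf.keras.optimizers.SGD,
}

for name, opt_cls in optimizers.items():
    model = build_cnn(hidden_activation="relu")
    model.compile(optimizer=opt_cls(), loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    history = model.fit(x_train, y_train, epochs=5, batch_size=64,
                        validation_data=(x_test, y_test), verbose=0)
    print(name, "validation accuracy:", history.history["val_accuracy"][-1])
```

Swapping the hidden activation (e.g. for a Softmax-based variant) and re-running the same loop reproduces, in outline, the optimizer-versus-activation comparison the abstract describes.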
Procedia PDF Downloads 127
67 Analyzing Water Waves in Underground Pumped Storage Reservoirs: A Combined 3D Numerical and Experimental Approach
Authors: Elena Pummer, Holger Schuettrumpf
Abstract:
To date, underground pumped storage plants do not exist, although they are an outstanding alternative to classical pumped storage plants and are needed to ensure the required balance between energy production and demand. As short- to medium-term storage, pumped storage plants have been operated economically over a long period of time, but the sites available for their expansion are limited. The reasons are, in particular, the required topography and the extensive human land use. Using underground reservoirs instead of surface lakes could increase the expansion options. While fulfilling the same functions, several hydrodynamic processes govern the specific design of the underground reservoirs and must be accounted for in the planning process of such systems. A combined 3D numerical and experimental approach yields previously unknown results about the occurring wave types and their behavior as a function of different design and operating criteria. For the 3D numerical simulations, OpenFOAM was used and combined with an experimental approach in the laboratory of the Institute of Hydraulic Engineering and Water Resources Management at RWTH Aachen University, Germany. Using the finite-volume method and an explicit time discretization, a RANS simulation (k-ε) was run. Convergence analyses for different time discretizations, different meshes, etc., and clear comparisons between both approaches lead to the result that the numerical and experimental models can be combined and used as a hybrid model. Undular bores, partly with secondary waves, and breaking bores occurred in the underground reservoir. Different water levels and discharges change the global effects, defined as the time-dependent average of the water level, as well as the local processes, defined as the single, local hydrodynamic processes (water waves). Design criteria, such as branches, directional changes, changes in cross-section or bottom slope, as well as changes in roughness, have a great effect on the local processes, whereas the global effects remain unaffected. Design calculations for underground pumped storage plants were developed on the basis of existing formulae and the results of the hybrid approach. Using the design calculations, reservoir heights as well as oscillation periods can be determined, providing knowledge about the construction and operation possibilities of the plants. Consequently, future plants can be hydraulically optimized by applying the design calculations to the local boundary conditions.
Keywords: energy storage, experimental approach, hybrid approach, undular and breaking bores, 3D numerical approach
Procedia PDF Downloads 213
66 An Adjoint-Based Method to Compute Derivatives with Respect to Bed Boundary Positions in Resistivity Measurements
Authors: Mostafa Shahriari, Theophile Chaumont-Frelet, David Pardo
Abstract:
Resistivity measurements are used to characterize the Earth’s subsurface. They are categorized into two different groups: (a) those acquired on the Earth’s surface, for instance, controlled source electromagnetic (CSEM) and Magnetotellurics (MT), and (b) those recorded with borehole logging instruments such as Logging-While-Drilling (LWD) devices. LWD instruments are mostly used for geo-steering purposes, i.e., to adjust dip and azimuthal angles of a well trajectory to drill along a particular geological target. Modern LWD tools measure all nine components of the magnetic field corresponding to three orthogonal transmitter and receiver orientations. In order to map the Earth’s subsurface and perform geo-steering, we invert measurements using a gradient-based method that utilizes the derivatives of the recorded measurements with respect to the inversion variables. For resistivity measurements, these inversion variables are usually the constant resistivity value of each layer and the bed boundary positions. It is well-known how to compute derivatives with respect to the constant resistivity value of each layer using semi-analytic or numerical methods. However, similar formulas for computing the derivatives with respect to bed boundary positions are unavailable. The main contribution of this work is to provide an adjoint-based formulation for computing derivatives with respect to the bed boundary positions. The key idea to obtain the aforementioned adjoint state formulations for the derivatives is to separate the tangential and normal components of the field and treat them differently. This formulation allows us to compute the derivatives faster and more accurately than with traditional finite differences approximations. In the presentation, we shall first derive a formula for computing the derivatives with respect to the bed boundary positions for the potential equation. Then, we shall extend our formulation to 3D Maxwell’s equations. Finally, by considering a 1D domain and reducing the dimensionality of the problem, which is a common practice in the inversion of resistivity measurements, we shall derive a formulation to compute the derivatives of the measurements with respect to the bed boundary positions using a 1.5D variational formulation. Then, we shall illustrate the accuracy and convergence properties of our formulations by comparing numerical results with the analytical derivatives for the potential equation. For the 1.5D Maxwell’s system, we shall compare our numerical results based on the proposed adjoint-based formulation vs those obtained with a traditional finite difference approach. Numerical results shall show that our proposed adjoint-based technique produces enhanced accuracy solutions while its cost is negligible, as opposed to the finite difference approach that requires the solution of one additional problem per derivative.Keywords: inverse problem, bed boundary positions, electromagnetism, potential equation
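A generic adjoint-state sketch of the quantity being computed may help fix ideas. The forward operator A, the measurement functional L and the interface position z_i below are illustrative notation rather than the authors' 1.5D formulation, and the final surface-integral expression is only the formal shape derivative that motivates the tangential/normal splitting mentioned in the abstract.

$$ A(z_i)\,u = b, \qquad m = L(u), \qquad A(z_i)^{*}\,\lambda = L'(u)^{*}, \qquad \frac{\partial m}{\partial z_i} \;=\; -\,\Big\langle \lambda,\; \frac{\partial A(z_i)}{\partial z_i}\, u \Big\rangle. $$

For the potential equation with a piecewise-constant conductivity jumping from $\sigma_k$ to $\sigma_{k+1}$ across the bed boundary at $z = z_i$, the operator derivative formally concentrates on that interface,

$$ \Big\langle \lambda,\; \frac{\partial A}{\partial z_i}\, u \Big\rangle \;=\; (\sigma_{k} - \sigma_{k+1}) \int_{\{z = z_i\}} \nabla u \cdot \nabla \lambda \;\mathrm{d}S, $$

but because the normal components of $\nabla u$ and $\nabla \lambda$ are discontinuous across the interface, the integrand has to be evaluated by separating the tangential and normal field components, which is exactly the treatment the abstract describes.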
Procedia PDF Downloads 178
65 Systematic Evaluation of Convolutional Neural Network on Land Cover Classification from Remotely Sensed Images
Authors: Eiman Kattan, Hong Wei
Abstract:
In using Convolutional Neural Networks (CNNs) for classification, there is a set of hyperparameters available for configuration purposes. This study aims to evaluate the impact of a range of parameters in a CNN architecture, i.e. AlexNet, on land cover classification based on four remotely sensed datasets. The evaluation tests the influence of a set of hyperparameters on classification performance. The parameters concerned are epoch values, batch size, and convolutional filter size against input image size. Thus, a set of experiments was conducted to specify the effectiveness of the selected parameters using two implementation approaches, namely pretrained and fine-tuned. We first explore the number of epochs under several selected batch size values (32, 64, 128 and 200). The impact of the kernel size of convolutional filters (1, 3, 5, 7, 10, 15, 20, 25 and 30) was evaluated against the image sizes under testing (64, 96, 128, 180 and 224), which gave us insight into the relationship between the size of convolutional filters and image size. To generalise the validation, four remote sensing datasets, AID, RSD, UCMerced and RSCCN, which have different land covers and are publicly available, were used in the experiments. These datasets have a wide diversity of input data, such as the number of classes, amount of labelled data, and texture patterns. A specifically designed interactive deep learning GPU training platform for image classification (NVIDIA DIGITS) was employed in the experiments. It has shown efficiency in both training and testing. The results have shown that increasing the number of epochs leads to a higher accuracy rate, as expected. However, the convergence state is highly related to the dataset. For the batch size evaluation, it was shown that a larger batch size slightly decreases the classification accuracy compared to a small batch size. For example, selecting a batch size of 32 on the RSCCN dataset achieves an accuracy rate of 90.34% at the 11th epoch, while decreasing the epoch value to one makes the accuracy rate drop to 74%. At the other extreme, increasing the batch size to 200 reduces the accuracy rate to 86.5% at the 11th epoch, and to 63% when using one epoch only. On the other hand, the choice of kernel size is only loosely related to the dataset. From a practical point of view, a filter size of 20 produces an accuracy of 70.4286%. The final experiment, on image size, shows that the accuracy improvement depends on the input size; however, this performance gain was computationally expensive. These conclusions open opportunities toward better classification performance in various applications such as planetary remote sensing.
Keywords: CNNs, hyperparameters, remote sensing, land cover, land use
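The parameter sweep described above can be outlined as a simple grid evaluation. The sketch below uses Keras as a stand-in for the DIGITS platform used in the study, the AlexNet-style model is heavily simplified, and load_dataset is a hypothetical helper for reading AID, RSD, UCMerced or RSCCN at a given image size.

```python
import itertools
import tensorflow as tf

def build_alexnet_like(kernel_size, image_size, num_classes):
    # Heavily simplified AlexNet-style stack; not the exact architecture of the study.
    return tf.keras.Sequential([
        tf.keras.Input(shape=(image_size, image_size, 3)),
        tf.keras.layers.Conv2D(64, kernel_size, activation="relu", padding="same"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(128, kernel_size, activation="relu", padding="same"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])

def load_dataset(name, image_size):
    # Hypothetical helper: return (x_train, y_train), (x_val, y_val) resized to image_size.
    raise NotImplementedError("load AID / RSD / UCMerced / RSCCN here")

results = {}
for batch_size, kernel_size in itertools.product([32, 64, 128, 200], [3, 5, 7, 20]):
    (x_tr, y_tr), (x_va, y_va) = load_dataset("RSCCN", image_size=224)
    model = build_alexnet_like(kernel_size, 224, num_classes=int(y_tr.max()) + 1)
    model.compile(optimizer="sgd", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    history = model.fit(x_tr, y_tr, epochs=11, batch_size=batch_size,
                        validation_data=(x_va, y_va), verbose=0)
    results[(batch_size, kernel_size)] = history.history["val_accuracy"][-1]
print(results)
```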
Procedia PDF Downloads 170
64 Deep Reinforcement Learning Approach for Trading Automation in The Stock Market
Authors: Taylan Kabbani, Ekrem Duman
Abstract:
The design of adaptive systems that take advantage of financial markets while reducing risk can bring more stagnant wealth into the global market. However, most efforts made to generate successful deals in trading financial assets rely on Supervised Learning (SL), which suffers from various limitations. Deep Reinforcement Learning (DRL) offers to solve these drawbacks of SL approaches by combining the financial asset price "prediction" step and the portfolio "allocation" step in one unified process, producing fully autonomous systems capable of interacting with their environment to make optimal decisions through trial and error. In this paper, a continuous action space approach is adopted to give the trading agent the ability to gradually adjust the portfolio's positions at each time step (dynamically re-allocate investments), resulting in better agent-environment interaction and faster convergence of the learning process. In addition, the approach supports managing a portfolio with several assets instead of a single one. This work presents a novel DRL model to generate profitable trades in the stock market, effectively overcoming the limitations of supervised learning approaches. We formulate the trading problem, i.e., the agent's environment, as a Partially Observed Markov Decision Process (POMDP) model, considering the constraints imposed by the stock market, such as liquidity and transaction costs. More specifically, we design an environment that simulates the real-world trading process by augmenting the state representation with ten different technical indicators and sentiment analysis of news articles for each stock. We then solve the formulated POMDP problem using the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm, which can learn policies in high-dimensional and continuous action spaces like those typically found in the stock market environment. From the point of view of stock market forecasting and the intelligent decision-making mechanism, this paper demonstrates the superiority of deep reinforcement learning in financial markets over other types of machine learning such as supervised learning, and proves its credibility and advantages for strategic decision-making.
Keywords: the stock market, deep reinforcement learning, MDP, twin delayed deep deterministic policy gradient, sentiment analysis, technical indicators, autonomous agent
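To show how such a formulation can be wired together, the sketch below trains the TD3 implementation from stable_baselines3 on a deliberately toy trading environment. TradingEnv, its reward (portfolio return minus a transaction-cost penalty) and the synthetic data are illustrative assumptions, not the authors' environment with its ten indicators and news-sentiment features.

```python
import numpy as np
import gymnasium as gym
from stable_baselines3 import TD3

class TradingEnv(gym.Env):
    """Hypothetical stand-in for a POMDP-style trading environment."""
    def __init__(self, prices, features, transaction_cost=1e-3):
        super().__init__()
        self.prices, self.features, self.cost = prices, features, transaction_cost
        n_assets, n_feat = features.shape[1], features.shape[2]
        # Continuous portfolio weights in [-1, 1] per asset (short/long positions).
        self.action_space = gym.spaces.Box(-1.0, 1.0, shape=(n_assets,), dtype=np.float32)
        self.observation_space = gym.spaces.Box(-np.inf, np.inf,
                                                shape=(n_assets * n_feat,), dtype=np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.t = 0
        self.weights = np.zeros(self.action_space.shape, dtype=np.float32)
        return self.features[self.t].ravel().astype(np.float32), {}

    def step(self, action):
        returns = self.prices[self.t + 1] / self.prices[self.t] - 1.0
        turnover = np.abs(action - self.weights).sum()
        reward = float(np.dot(action, returns) - self.cost * turnover)
        self.weights = action
        self.t += 1
        done = self.t >= len(self.prices) - 2
        return self.features[self.t].ravel().astype(np.float32), reward, done, False, {}

# Synthetic data just to show the wiring: 3 assets, 11 features each
# (e.g. 10 technical indicators plus one sentiment score per asset).
T, n_assets, n_feat = 500, 3, 11
prices = np.cumprod(1 + 0.001 * np.random.randn(T, n_assets), axis=0)
features = np.random.randn(T, n_assets, n_feat)

model = TD3("MlpPolicy", TradingEnv(prices, features), verbose=0)
model.learn(total_timesteps=10_000)
```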
Procedia PDF Downloads 178
63 Computational and Experimental Determination of Acoustic Impedance of Internal Combustion Engine Exhaust
Authors: A. O. Glazkov, A. S. Krylova, G. G. Nadareishvili, A. S. Terenchenko, S. I. Yudin
Abstract:
The topic of the presented materials concerns the design of the exhaust system for a certain internal combustion engine. The exhaust system can be divided into two parts. The first is the engine exhaust manifold, turbocharger, and catalytic converters, which are called “hot part.” The second part is the gas exhaust system, which contains elements exclusively for reducing exhaust noise (mufflers, resonators), the accepted designation of which is the "cold part." The design of the exhaust system from the point of view of acoustics, that is, reducing the exhaust noise to a predetermined level, consists of working on the second part. Modern computer technology and software make it possible to design "cold part" with high accuracy in a given frequency range but with the condition of accurately specifying the input parameters, namely, the amplitude spectrum of the input noise and the acoustic impedance of the noise source in the form of an engine with a "hot part". Getting this data is a difficult problem: high temperatures, high exhaust gas velocities (turbulent flows), and high sound pressure levels (non-linearity mode) do not allow the calculated results to be applied with sufficient accuracy. The aim of this work is to obtain the most reliable acoustic output parameters of an engine with a "hot part" based on a complex of computational and experimental studies. The presented methodology includes several parts. The first part is a finite element simulation of the "cold part" of the exhaust system (taking into account the acoustic impedance of radiation of outlet pipe into open space) with the result in the form of the input impedance of "cold part". The second part is a finite element simulation of the "hot part" of the exhaust system (taking into account acoustic characteristics of catalytic units and geometry of turbocharger) with the result in the form of the input impedance of the "hot part". The next third part of the technique consists of the mathematical processing of the results according to the proposed formula for the convergence of the mathematical series of summation of multiple reflections of the acoustic signal "cold part" - "hot part". This is followed by conducting a set of tests on an engine stand with two high-temperature pressure sensors measuring pulsations in the nozzle between "hot part" and "cold part" of the exhaust system and subsequent processing of test results according to a well-known technique in order to separate the "incident" and "reflected" waves. The final stage consists of the mathematical processing of all calculated and experimental data to obtain a result in the form of a spectrum of the amplitude of the engine noise and its acoustic impedance.Keywords: acoustic impedance, engine exhaust system, FEM model, test stand
Procedia PDF Downloads 59
62 Unlocking Synergy: Exploring the Impact of Integrating Knowledge Management and Competitive Intelligence for Synergistic Advantage for Efficient, Inclusive and Optimum Organizational Performance
Authors: Godian Asami Mabindah
Abstract:
The convergence of knowledge management (KM) and competitive intelligence (CI) has gained significant attention in recent years as organizations seek to enhance their competitive advantage in an increasingly complex and dynamic business environment. This research study aims to explore and understand the synergistic relationship between KM and CI and its impact on organizational performance. By investigating how the integration of KM and CI practices can contribute to decision-making, innovation, and competitive advantage, this study seeks to unlock the potential benefits and challenges associated with this integration. The research employs a mixed-methods approach to gather comprehensive data. A quantitative analysis is conducted using survey data collected from a diverse sample of organizations across different industries. The survey measures the extent of integration between KM and CI practices and examines the perceived benefits and challenges associated with this integration. Additionally, qualitative interviews are conducted with key organizational stakeholders to gain deeper insights into their experiences, perspectives, and best practices regarding the synergistic relationship. The findings of this study are expected to reveal several significant outcomes. Firstly, it is anticipated that organizations that effectively integrate KM and CI practices will outperform those that treat them as independent functions. The study aims to highlight the positive impact of this integration on decision-making, innovation, organizational learning, and competitive advantage. Furthermore, the research aims to identify critical success factors and enablers for achieving constructive interaction between KM and CI, such as leadership support, culture, technology infrastructure, and knowledge-sharing mechanisms. The implications of this research are far-reaching. Organizations can leverage the findings to develop strategies and practices that facilitate the integration of KM and CI, leading to enhanced competitive intelligence capabilities and improved knowledge management processes. Additionally, the research contributes to the academic literature by providing a comprehensive understanding of the synergistic relationship between KM and CI and proposing a conceptual framework that can guide future research in this area. By exploring the synergies between KM and CI, this study seeks to help organizations harness their collective power to gain a competitive edge in today's dynamic business landscape. The research provides practical insights and guidelines for organizations to effectively integrate KM and CI practices, leading to improved decision-making, innovation, and overall organizational performance.Keywords: Competitive Intelligence, Knowledge Management, Organizational Performance, Incusivity, Optimum Performance
Procedia PDF Downloads 93
61 Elasto-Plastic Analysis of Structures Using Adaptive Gaussian Springs Based Applied Element Method
Authors: Mai Abdul Latif, Yuntian Feng
Abstract:
Applied Element Method (AEM) is a method that was developed to aid in the analysis of the collapse of structures. Current available methods cannot deal with structural collapse accurately; however, AEM can simulate the behavior of a structure from an initial state of no loading until collapse of the structure. The elements in AEM are connected with sets of normal and shear springs along the edges of the elements, that represent the stresses and strains of the element in that region. The elements are rigid, and the material properties are introduced through the spring stiffness. Nonlinear dynamic analysis has been widely modelled using the finite element method for analysis of progressive collapse of structures; however, difficulties in the analysis were found at the presence of excessively deformed elements with cracking or crushing, as well as having a high computational cost, and difficulties on choosing the appropriate material models for analysis. The Applied Element method is developed and coded to significantly improve the accuracy and also reduce the computational costs of the method. The scheme works for both linear elastic, and nonlinear cases, including elasto-plastic materials. This paper will focus on elastic and elasto-plastic material behaviour, where the number of springs required for an accurate analysis is tested. A steel cantilever beam is used as the structural element for the analysis. The first modification of the method is based on the Gaussian Quadrature to distribute the springs. Usually, the springs are equally distributed along the face of the element, but it was found that using Gaussian springs, only up to 2 springs were required for perfectly elastic cases, while with equal springs at least 5 springs were required. The method runs on a Newton-Raphson iteration scheme, and quadratic convergence was obtained. The second modification is based on adapting the number of springs required depending on the elasticity of the material. After the first Newton Raphson iteration, Von Mises stress conditions were used to calculate the stresses in the springs, and the springs are classified as elastic or plastic. Then transition springs, springs located exactly between the elastic and plastic region, are interpolated between regions to strictly identify the elastic and plastic regions in the cross section. Since a rectangular cross-section was analyzed, there were two plastic regions (top and bottom), and one elastic region (middle). The results of the present study show that elasto-plastic cases require only 2 springs for the elastic region, and 2 springs for the plastic region. This showed to improve the computational cost, reducing the minimum number of springs in elasto-plastic cases to only 6 springs. All the work is done using MATLAB and the results will be compared to models of structural elements using the finite element method in ANSYS.Keywords: applied element method, elasto-plastic, Gaussian springs, nonlinear
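The Gaussian-spring idea can be sketched in a few lines: instead of spacing the edge springs equally, they are placed at Gauss-Legendre points and weighted by the corresponding quadrature weights. The stiffness relation used below (k = E·d·t/a for a tributary strip of width d) is a common AEM convention assumed here for illustration, not necessarily the exact expression used in the paper.

```python
import numpy as np

def gaussian_spring_layout(edge_length, n_springs, E, G, thickness, element_size):
    """Place normal/shear springs at Gauss-Legendre points along an element edge.

    Each spring represents a tributary strip of the edge whose width comes from the
    Gauss weight mapped to the physical edge, instead of edge_length / n_springs as
    with equally spaced springs.
    """
    xi, w = np.polynomial.legendre.leggauss(n_springs)   # points/weights on [-1, 1]
    positions = 0.5 * edge_length * (xi + 1.0)            # map to [0, edge_length]
    widths = 0.5 * edge_length * w                         # tributary widths
    # Assumed AEM-style spring stiffnesses for a strip of width d:
    k_normal = E * widths * thickness / element_size
    k_shear = G * widths * thickness / element_size
    return positions, k_normal, k_shear

# Example: two Gaussian springs were reported to suffice for the perfectly elastic case.
pos, kn, ks = gaussian_spring_layout(edge_length=0.1, n_springs=2,
                                     E=210e9, G=80e9, thickness=0.01, element_size=0.1)
print(pos, kn, ks)
```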
Procedia PDF Downloads 225
60 Multi-Criteria Evolutionary Algorithm to Develop Efficient Schedules for Complex Maintenance Problems
Authors: Sven Tackenberg, Sönke Duckwitz, Andreas Petz, Christopher M. Schlick
Abstract:
This paper introduces an extension to the well-established Resource-Constrained Project Scheduling Problem (RCPSP) to apply it to complex maintenance problems. The problem is to assign technicians to a team which has to process several tasks with multi-level skill requirements during a work shift. Here, several alternative activities for a task allow both, the temporal shift of activities or the reallocation of technicians and tools. As a result, switches from one valid work process variant to another can be considered and may be selected by the developed evolutionary algorithm based on the present skill level of technicians or the available tools. An additional complication of the observed scheduling problem is that the locations of the construction sites are only temporarily accessible during a day. Due to intensive rail traffic, the available time slots for maintenance and repair works are extremely short and are often distributed throughout the day. To identify efficient working periods, a first concept of a Bayesian network is introduced and is integrated into the extended RCPSP with pre-emptive and non-pre-emptive tasks. Thereby, the Bayesian network is used to calculate the probability of a maintenance task to be processed during a specific period of the shift. Focusing on the domain of maintenance of the railway infrastructure in metropolitan areas as the most unproductive implementation process at construction site, the paper illustrates how the extended RCPSP can be applied for maintenance planning support. A multi-criteria evolutionary algorithm with a problem representation is introduced which is capable of revising technician-task allocations, whereas the duration of the task may be stochastic. The approach uses a novel activity list representation to ensure easily describable and modifiable elements which can be converted into detailed shift schedules. Thereby, the main objective is to develop a shift plan which maximizes the utilization of each technician due to a minimization of the waiting times caused by rail traffic. The results of the already implemented core algorithm illustrate a fast convergence towards an optimal team composition for a shift, an efficient sequence of tasks and a high probability of the subsequent implementation due to the stochastic durations of the tasks. In the paper, the algorithm for the extended RCPSP is analyzed in experimental evaluation using real-world example problems with various size, resource complexity, tightness and so forth.Keywords: maintenance management, scheduling, resource constrained project scheduling problem, genetic algorithms
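The activity-list representation mentioned above is typically decoded with a serial schedule-generation scheme. The sketch below shows such a decoder for a single pool of technicians with precedence constraints; it omits the Bayesian-network time windows, stochastic durations and multi-skill matching of the full approach, so it is an illustration of the representation rather than the paper's algorithm.

```python
def decode_activity_list(activity_list, durations, predecessors, demand, capacity):
    """Serial schedule generation for a precedence-feasible activity list.

    activity_list : activities ordered by priority (each listed after its predecessors)
    durations     : dict activity -> duration in periods
    predecessors  : dict activity -> set of predecessor activities
    demand        : dict activity -> technicians required
    capacity      : technicians available per period
    Returns dict activity -> start period.
    """
    start, usage = {}, {}
    for act in activity_list:
        earliest = max((start[p] + durations[p] for p in predecessors[act]), default=0)
        t = earliest
        # Shift the start until enough technicians are free for the whole duration.
        while any(usage.get(t + k, 0) + demand[act] > capacity
                  for k in range(durations[act])):
            t += 1
        start[act] = t
        for k in range(durations[act]):
            usage[t + k] = usage.get(t + k, 0) + demand[act]
    return start

# Toy instance: four maintenance tasks, three technicians on shift.
schedule = decode_activity_list(
    activity_list=["a", "b", "c", "d"],
    durations={"a": 2, "b": 3, "c": 2, "d": 1},
    predecessors={"a": set(), "b": {"a"}, "c": {"a"}, "d": {"b", "c"}},
    demand={"a": 2, "b": 2, "c": 1, "d": 3},
    capacity=3,
)
print(schedule)  # {'a': 0, 'b': 2, 'c': 2, 'd': 5}
```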
Procedia PDF Downloads 232
59 Public Values in Service Innovation Management: Case Study in Elderly Care in Danish Municipality
Authors: Christian T. Lystbaek
Abstract:
Background: The importance of innovation management has traditionally been ascribed to private production companies; however, there is an increasing interest in public services innovation management. One of the major theoretical challenges arising from this situation is to understand the public values justifying public services innovation management. However, there is no single, stable definition of public value in the literature. The research question guiding this paper is: What is the supposed added value operating in the public sphere? Methodology: The study takes an action research strategy. This is a highly contextualized methodology, enacted within a particular set of social relations into which one expects to integrate the results. As such, this research strategy is particularly well suited for its potential to generate results that can be applied by managers. The aim of action research is to produce proposals with a creative dimension capable of compelling actors to act in a new and pertinent way in relation to the situations they encounter. The context of the study is a workshop on public services innovation within elderly care. The workshop brought together different actors, such as managers, personnel and two groups of users-citizens (elderly clients and their relatives). The process was designed as an extension of the co-construction methods inherent in action research. Scenario methods and focus groups were applied to generate dialogue. The main strength of these techniques is to gather and exploit as much data as possible by exposing the discourse of justification used by the actors to explain or justify their points of view when interacting with others on a given subject. The approach does not directly interrogate the actors on their values, but allows their values to emerge through debate and dialogue. Findings: The public values related to public services innovation management in elderly care were identified in two steps. In the first step, identification of values, the values were identified in the discussions. Through continuous analysis of the data, a network of interrelated values was developed. In the second step, tracking group consensus, we then ascertained the degree to which the meaning attributed to the value was common to the participants, classifying the degree of consensus as high, intermediate or low. High consensus corresponds to strong convergence in meaning, intermediate to generally shared meanings between participants, and low to divergences regarding the meaning between participants. Only values with a high or intermediate degree of consensus were retained in the analysis. Conclusion: The study shows that the fundamental criterion for justifying public services innovation management is the capacity for actors to enact public values in their work. In the workshop, we identified two categories of public values, intrinsic values and behavioural values, and a list of more specific values.
Keywords: public services innovation management, public value, co-creation, action research
Procedia PDF Downloads 281
58 An Analysis of the Strategic Pathway to Building a Successful Mobile Advertising Business in Nigeria: From Strategic Intent to Competitive Advantage
Authors: Pius A. Onobhayedo, Eugene A. Ohu
Abstract:
Nigeria has one of the fastest growing mobile telecommunications industry in the world. In the absence of fixed connection access to the Internet, access to the Internet is primarily via mobile devices. It, therefore, provides a test case for how to penetrate the mobile market in an emerging economy. We also hope to contribute to a sparse literature on strategies employed in building successful data-driven mobile businesses in emerging economies. We, therefore, sought to identify and analyse the strategic approach taken in a successful locally born mobile data-driven business in Nigeria. The analysis was carried out through the framework of strategic intent and competitive advantages developed from the conception of the company to date. This study is based on an exploratory investigation of an innovative digital company based in Nigeria specializing in the mobile advertising business. The projected growth and high adoption of mobile in this African country, coinciding with the smartphone revolution triggered by the launch of iPhone in 2007 opened a new entrepreneurial horizon for the founder of the company, who reached the conclusion that ‘the future is mobile’. This dream led to the establishment of three digital businesses, designed for convergence and complementarity of medium and content. The mobile Ad subsidiary soon grew to become a truly African network with operations and campaigns across West, East and South Africa, successfully delivering campaigns in several African countries including Nigeria, Kenya, South Africa, Ghana, Uganda, Zimbabwe, and Zambia amongst others. The company recently declared a 40% year-end profit which was nine times that of the previous financial year. This study drew from an in-depth interview with the company’s founder, analysis of primary and secondary data from and about the business, as well as case studies of digital marketing campaigns. We hinge our analysis on the strategic intent concept which has been proposed to be an engine that drives the quest for sustainable strategic advantage in the global marketplace. Our goal was specifically to identify the strategic intents of the founder and how these were transformed creatively into processes that may have led to some distinct competitive advantages. Along with the strategic intents, we sought to identify the respective absorptive capacities that constituted favourable antecedents to the creation of such competitive advantages. Our recommendations and findings will be pivotal information for anybody wishing to invest in the world’s fastest technology business space - Africa.Keywords: Africa, competitive advantage, competitive strategy, digital, mobile business, marketing, strategic intent
Procedia PDF Downloads 438
57 The Lopsided Burden of Non-Communicable Diseases in India: Evidences from the Decade 2004-2014
Authors: Kajori Banerjee, Laxmi Kant Dwivedi
Abstract:
India is a part of the ongoing globalization, contemporary convergence, industrialization and technical advancement that is taking place world-wide. Some of the manifestations of this evolution are rapid demographic, socio-economic, epidemiological and health transitions. There has been a considerable increase in non-communicable diseases due to changes in lifestyle. This study aims to assess the direction of the burden of disease and compare the pressure of infectious diseases against cardio-vascular, endocrine, metabolic and nutritional diseases. The change in prevalence over a ten-year period (2004-2014) is further decomposed to determine the net contribution of various socio-economic and demographic covariates. The present study uses the recent 71st (2014) and 60th (2004) rounds of the National Sample Survey. The pressure of infectious diseases against cardio-vascular (CVD), endocrine, metabolic and nutritional (EMN) diseases during 2004-2014 is calculated using Prevalence Rates (PR), Hospitalization Rates (HR) and Case Fatality Rates (CFR). The prevalence of non-communicable diseases is further used as a dependent variable in a logit regression to find the effect of various social, economic and demographic factors on the chances of suffering from the particular disease. A multivariate decomposition technique further assists in determining the net contribution of socio-economic and demographic covariates. This paper presents evidence of stagnation of the burden of communicable diseases (CD) and a rapid increase in the burden of non-communicable diseases (NCD), uniformly for all population sub-groups in India. The CFR for CVD increased drastically over 2004-2014. The logit regression indicates that the chances of suffering from CVD and EMN are significantly higher among urban residents, older age groups, females, and widowed/divorced/separated individuals. The decomposition provides ample evidence that improvements in quality-of-life markers such as education, urbanization and longevity have contributed positively to the increase in NCD prevalence. In India's current epidemiological phase, the compression theory of morbidity is in action, as a significant rise in the probability of contracting NCDs over the time period among older ages is observed. Age is found to be a vital contributor to the increase in the probability of having CVD and EMN over the study decade 2004-2014 in the nationally representative sample of the National Sample Survey.
Keywords: cardio-vascular disease, case-fatality rate, communicable diseases, hospitalization rate, multivariate decomposition, non-communicable diseases, prevalence rate
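The logit step described above can be reproduced in outline with statsmodels. The column names below are placeholders for the survey variables, not the actual NSS variable codes, and the input file is a hypothetical extract of the pooled rounds.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical extract of the pooled NSS rounds, one row per surveyed individual,
# with placeholder columns: cvd (0/1), urban (0/1), age, female (0/1),
# widowed_divorced_separated (0/1).
df = pd.read_csv("nss_individuals.csv")

X = sm.add_constant(df[["urban", "age", "female", "widowed_divorced_separated"]])
result = sm.Logit(df["cvd"], X).fit()
print(result.summary())
print("Odds ratios:")
print(np.exp(result.params))
```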
Procedia PDF Downloads 314
56 Role of Indigenous Peoples in Climate Change
Authors: Neelam Kadyan, Pratima Ranga, Yogender
Abstract:
Indigenous peoples are among those most affected by climate change, although they have contributed little to its causes. This is largely a result of their historic dependence on local biological diversity, ecosystem services and cultural landscapes as a source of their sustenance and well-being. Comprising only four percent of the world's population, they utilize 22 percent of the world's land surface. Despite their high exposure-sensitivity, indigenous peoples and local communities are actively responding to changing climatic conditions and have demonstrated their resourcefulness and resilience in the face of climate change. Traditional Indigenous territories encompass up to 22 percent of the world's land surface, and they coincide with areas that hold 80 percent of the planet's biodiversity. Also, the greatest diversity of indigenous groups coincides with the world's largest tropical forest wilderness areas in the Americas (including the Amazon), Africa, and Asia, and 11 percent of world forest lands are legally owned by Indigenous Peoples and communities. This convergence of biodiversity-significant areas and indigenous territories presents an enormous opportunity to expand efforts to conserve biodiversity beyond parks, which tend to benefit from most of the funding for biodiversity conservation. Tapping into ancestral knowledge: Indigenous Peoples are carriers of ancestral knowledge and wisdom about this biodiversity. Their effective participation in biodiversity conservation programs as experts in protecting and managing biodiversity and natural resources would result in more comprehensive and cost-effective conservation and management of biodiversity worldwide. Addressing the climate change agenda: Indigenous Peoples have played a key role in climate change mitigation and adaptation. The territories of indigenous groups who have been given the rights to their lands have been better conserved than the adjacent lands (e.g., in Brazil, Colombia, Nicaragua). Preserving large extents of forest would not only support the climate change objectives but would also respect the rights of Indigenous Peoples and conserve biodiversity. A climate change agenda fully involving Indigenous Peoples has many more benefits than one involving only government and/or the private sector. Indigenous peoples are among the groups most vulnerable to the negative effects of climate change. They are also a source of knowledge for the many solutions that will be needed to avoid or ameliorate those effects. For example, ancestral territories often provide excellent examples of landscape design that can resist the negative effects of climate change. Over the millennia, Indigenous Peoples have developed models of adaptation to climate change. They have also developed genetic varieties of medicinal and useful plants, and animal breeds with a wider natural range of resistance to climatic and ecological variability.
Keywords: ancestral knowledge, cost-effective conservation, management, indigenous peoples, climate change
Procedia PDF Downloads 678
55 Diurnal Circle of Rainfall and Convective Properties over West and Central Africa
Authors: Balogun R. Ayodeji, Adefisan E. Adesanya, Adeyewa Z. Debo, E. C. Okogbue
Abstract:
The need to investigate diurnal weather circles in West Africa is coined in the fact that complex interactions often results from diurnal weather patterns. This study investigates diurnal circles of wind, rainfall and convective properties using six (6) hour interval data from the ERA-Interim and the Tropical Rainfall Measurement Mission (TRMM). The seven distinct zones, used in this work and classified as rainforest (west-coast, dry, Nigeria-Cameroon), Savannah (Nigeria, and Central Africa and South Sudan (CASS)), Sudano-Sahel, and Sahel, were clearly indicated by the rainfall pattern in each zones. Results showed that the land‐ocean warming contrast was more strongly sensitive to seasonal cycle and has been very weak during March-May (MAM) but clearly spelt out during June-September (JJAS). Dipoles of wind convergence/divergence and wet/dry precipitation, between CASS and Nigeria Savannah zones, were identified in morning and evening hours of MAM, whereas distinct night and day anomaly, in the same location of CASS, were found to be consistent during the JJAS season. Diurnal variation of convective properties showed that stratiform precipitation, due to the extremely low occurrence of flashcount climatology, was dominant during morning hours for both MAM and JJAS than other periods of the day. On the other hand, diurnal variation of the system sizes showed that small system sizes were most dominant during the day time periods for both MAM and JJAS, whereas larger system sizes were frequent during the evening, night, and morning hours. The locations of flashcount and system sizes agreed with earlier results that morning and day-time hours were dominated by stratiform precipitation and small system sizes respectively. Most results clearly showed that the eastern locations of Sudano and Sahel were consistently dry because rainfall and precipitation features were predominantly few. System sizes greater than or equal to 800 km² were found in the western axis of the Sudano and Sahel zones, whereas the eastern axis, particularly in the Sahel zone, had minimal occurrences of small/large system sizes. From the results of locations of extreme systems, flashcount greater than 275 in one single system was never observed during the morning (6Z) diurnal, whereas, the evening (18Z) diurnal had the most frequent cases (at least 8) of flashcount exceeding 275 in one single system. Results presented had shown the importance of diurnal variation in understanding precipitation, flashcount, system sizes patterns at diurnal scales, and understanding land-ocean contrast, precipitation, and wind field anomaly at diurnal scales.Keywords: convective properties, diurnal circle, flashcount, system sizes
Procedia PDF Downloads 133
54 Problem Based Learning and Teaching by Example in Dimensioning of Mechanisms: Feedback
Authors: Nicolas Peyret, Sylvain Courtois, Gaël Chevallier
Abstract:
This article outlines the development of Project Based Learning (PBL) in the final year of a Bachelor's degree. The objective of this form of pedagogy is to involve the students more fully from the beginning of the module. The theoretical contributions are introduced during the project, while solving a technological problem. The module in question is the mechanical dimensioning module of Supméca, a French engineering school. This school issues a Master's degree. While the teaching methods used in primary and secondary education are frequently renewed in France at the instigation of teachers and inspectors, higher education remains relatively traditional in its practices. Recently, some colleagues have felt the need to put the application back at the heart of their theoretical teaching. This need is induced by the difficulty of covering all the knowledge deductively before its application. It is therefore tempting to make the students 'learn by doing', even if this does not cover some parts of the theoretical knowledge. The other argument that supports this type of learning is the students' lack of motivation for lecture-based courses. Role-play allows scenarios favoring interaction between students and teachers. However, this pedagogical form, known as 'pedagogy by project', is difficult to apply in the first years of university studies because of the low level of autonomy and individual responsibility that the students have. The question of what the students actually learn from the initial program, as well as the evaluation of the competences they acquire in this type of pedagogy, also remains an open problem. Thus, we propose to add to the project-based format a progressively decreasing degree of intervention by the teacher, based on pedagogy by example. This pedagogical scenario is based on cognitive load theory and Bruner's constructivist theory. It has been built on the six points of the scaffolding process defined by Bruner, with a concrete objective: to allow the students to go beyond the basic skills of dimensioning and to acquire the more global skills of engineering. The implementation of project-based teaching coupled with pedagogy by example makes it possible to compensate for the lack of experience and autonomy of first-year students, while at the same time involving them strongly in the first few minutes of the module. In this project, students have been confronted with real dimensioning problems and are able to understand the links and influences between parameter variations and dimensioning, an objective that we did not reach with classical teaching. It is this form of pedagogy that makes it possible to accelerate the mastery of basic skills and thus spend more time on engineering skills, namely the convergence of each dimensioning step in order to obtain a validated mechanism. A self-evaluation of the project skills acquired by the students will also be presented.
Keywords: Bruner's constructivist theory, mechanisms dimensioning, pedagogy by example, problem based learning
Procedia PDF Downloads 190
53 Most Recent Lifespan Estimate for the Itaipu Hydroelectric Power Plant Computed by Using Borland and Miller Method and Mass Balance in Brazil, Paraguay
Authors: Anderson Braga Mendes
Abstract:
Itaipu Hydroelectric Power Plant is located on the Paraná River, which forms a natural boundary between Brazil and Paraguay; thus, the facility is shared by both countries. Itaipu is the biggest hydroelectric generator in the world and provides clean and renewable electrical energy for 17% and 76% of the electricity supply of Brazil and Paraguay, respectively. The plant started generating in 1984. It comprises 20 Francis turbines and has an installed capacity of 14,000 MW. Its historic generation record occurred in 2016 (103,098,366 MWh), and from the beginning of its operation until the last day of 2016 the plant produced a cumulative total of 2,415,789,823 MWh. The distinct sedimentologic aspects of the drainage area of Itaipu Power Plant, from its stretch upstream (Porto Primavera and Rosana dams) to downstream (the Itaipu dam itself), were taken into account in order to best estimate the increase/decrease in sediment yield using data from 2001 to 2016. Such data are collected through a network of 14 automatic sedimentometric stations managed by the company itself and operating on an hourly basis, covering an area of around 136,000 km² (92% of the incremental drainage area of the undertaking). Since 1972, a series of lifespan studies for the Itaipu Power Plant has been carried out, the first by Hans Albert Einstein at the time of the feasibility studies for the enterprise. From that date onwards, eight further studies were made over the last 44 years, aiming to confer more precision upon the estimates based on more up-to-date data sets. From the analysis of each monitoring station, strong increasing tendencies in sediment yield over the last 14 years were clearly noticed, mainly in the Iguatemi, Ivaí, São Francisco Falso and Carapá Rivers, the latter situated in Paraguay, whereas the others lie entirely in Brazilian territory. Five lifespan scenarios considering different sediment yield tendencies were simulated with the aid of the software packages SEDIMENT and DPOSIT, both developed by the author of the present work. Both packages thoroughly follow the Borland and Miller methodology (the empirical area-reduction method). The soundest scenario out of the five under analysis indicated a lifespan forecast of 168 years, with the reservoir only 1.8% silted by the end of 2016, after 32 years of operation. Besides, the mass balance in the reservoir (sediment inflows minus outflows) between 1986 and 2016 shows that 2% of the whole Itaipu lake is silted nowadays. Owing to the convergence of both results, which were acquired using different methodologies and independent input data, it is worth concluding that the mathematical modeling is satisfactory and calibrated, thus assigning credibility to this most recent lifespan estimate.
Keywords: Borland and Miller method, hydroelectricity, Itaipu Power Plant, lifespan, mass balance
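The order of magnitude behind such lifespan figures can be illustrated with a simple constant-rate capacity-depletion calculation. The reservoir capacity, sediment inflow, trap efficiency and deposit density below are placeholder values chosen only for illustration, not the calibrated inputs of the study; the Borland and Miller area-reduction method refines this kind of estimate by redistributing the deposits over elevation at each step.

```python
def years_to_fill(capacity_m3, annual_sediment_t, trap_efficiency,
                  deposit_density_t_m3, usable_fraction=1.0):
    """Constant-rate estimate of reservoir life from a sediment mass balance."""
    annual_deposit_m3 = annual_sediment_t * trap_efficiency / deposit_density_t_m3
    return usable_fraction * capacity_m3 / annual_deposit_m3

# Placeholder values only (not the calibrated Itaipu data):
life = years_to_fill(capacity_m3=29e9, annual_sediment_t=2.0e8,
                     trap_efficiency=0.9, deposit_density_t_m3=1.1)
print(f"simple estimate: {life:.0f} years")
```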
Procedia PDF Downloads 275
52 Enhance Concurrent Design Approach through a Design Methodology Based on an Artificial Intelligence Framework: Guiding Group Decision Making to Balanced Preliminary Design Solution
Authors: Loris Franchi, Daniele Calvi, Sabrina Corpino
Abstract:
This paper presents a design methodology in which stakeholders are assisted in exploring a so-called negotiation space, aiming at the maximization of both group social welfare and each stakeholder's perceived utility. The outcome is fewer design iterations needed for design convergence while obtaining a higher solution effectiveness. During the early stage of a space project, not only the knowledge about the system but also the outcomes of decisions are often unknown. The scenario is exacerbated by the fact that decisions taken at this stage imply delayed costs associated with them. Hence, it is necessary to have a clear definition of the problem under analysis, especially in its initial definition. This can be obtained thanks to a robust generation and exploration of design alternatives. This process must consider that design usually involves various individuals, who take decisions affecting one another. An effective coordination among these decision-makers is critical. Finding a mutually agreed solution will reduce the iterations involved in the design process. To handle this scenario, the paper proposes a design methodology which aims to speed up the process of raising the mission concept's maturity level. This push is obtained thanks to a guided exploration of the negotiation space, which involves the autonomous exploration and optimization of trade opportunities among stakeholders via Artificial Intelligence algorithms. The negotiation space is generated via a multidisciplinary collaborative optimization method, infused with game theory and multi-attribute utility theory. In particular, game theory is able to model the negotiation process so as to reach the equilibria among stakeholder needs. Because of the huge dimension of the negotiation space, a collaborative optimization framework with an evolutionary algorithm has been integrated in order to guide the game process to search efficiently and rapidly for the Pareto equilibria among stakeholders. Finally, the concept of utility constitutes the mechanism to bridge the language barrier between experts with different backgrounds and differing needs, using the elicited and modeled needs to evaluate a multitude of alternatives. To highlight the benefits of the proposed methodology, the paper presents the design of a CubeSat mission for the observation of the lunar radiation environment. The derived solution is able to balance all stakeholders' needs and guarantees the effectiveness of the selected mission concept thanks to its robustness and valuable changeability. The benefits provided by the proposed design methodology are highlighted, and further developments are proposed.
Keywords: concurrent engineering, artificial intelligence, negotiation in engineering design, multidisciplinary optimization
Procedia PDF Downloads 137
51 Modeling, Topology Optimization and Experimental Validation of Glass-Transition-Based 4D-Printed Polymeric Structures
Authors: Sara A. Pakvis, Giulia Scalet, Stefania Marconi, Ferdinando Auricchio, Matthijs Langelaar
Abstract:
In recent developments in the field of multi-material additive manufacturing, differences in material properties are exploited to create printed shape-memory structures, which are referred to as 4D-printed structures. New printing techniques allow for the deliberate introduction of prestresses in the specimen during manufacturing, and, in combination with the right design, this enables new functionalities. This research focuses on bi-polymer 4D-printed structures, where the transformation process is based on a heat-induced glass transition in one material lowering its Young’s modulus, combined with an initial prestress in the other material. Upon the decrease in stiffness, the prestress is released, which results in the realization of an essentially pre-programmed deformation. As the design of such functional multi-material structures is crucial but far from trivial, a systematic methodology to find the design of 4D-printed structures is developed, where a finite element model is combined with a density-based topology optimization method to describe the material layout. This modeling approach is verified by a convergence analysis and validated by comparing its numerical results to analytical and published data. Specific aspects that are addressed include the interplay between the definition of the prestress and the material interpolation function used in the density-based topology description, the inclusion of a temperature-dependent stiffness relationship to simulate the glass transition effect, and the importance of the consideration of geometric nonlinearity in the finite element modeling. The efficacy of topology optimization to design 4D-printed structures is explored by applying the methodology to a variety of design problems, both in 2D and 3D settings. Bi-layer designs composed of thermoplastic polymers are printed by means of the fused deposition modeling (FDM) technology. Acrylonitrile butadiene styrene (ABS) polymer undergoes the glass transition transformation, while polyurethane (TPU) polymer is prestressed by means of the 3D-printing process itself. Tests inducing shape transformation in the printed samples through heating are performed to calibrate the prestress and validate the modeling approach by comparing the numerical results to the experimental findings. Using the experimentally obtained prestress values, more complex designs have been generated through topology optimization, and samples have been printed and tested to evaluate their performance. This study demonstrates that by combining topology optimization and 4D-printing concepts, stimuli-responsive structures with specific properties can be designed and realized.Keywords: 4D-printing, glass transition, shape memory polymer, topology optimization
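For readers who want the mechanics in formulas, the ingredients named above (two-material density interpolation, temperature-dependent stiffness across the glass transition, and a prestress tied to one phase) can be written schematically as follows. The exponents, the smoothed-step form of $E_{\mathrm{ABS}}(T)$, and the assignment of the prestress entirely to the TPU phase are illustrative assumptions, not the exact functions used in the paper.

$$ E_e(\rho_e, T) \;=\; \rho_e^{\,p}\, E_{\mathrm{TPU}} \;+\; (1-\rho_e)^{\,p}\, E_{\mathrm{ABS}}(T), \qquad E_{\mathrm{ABS}}(T) \;=\; E_r + \frac{E_g - E_r}{1 + e^{(T - T_g)/\Delta T}}, $$

$$ \boldsymbol{\sigma}^{0}_e(\rho_e) \;=\; \rho_e^{\,p}\, \boldsymbol{\sigma}^{0}_{\mathrm{TPU}}, $$

so that heating the printed part above $T_g$ collapses $E_{\mathrm{ABS}}$ from its glassy value $E_g$ toward the rubbery value $E_r$, releasing the prestress stored in the TPU phase and producing the programmed deformation; the optimizer then chooses the density field $\rho_e$ that turns this release into the target shape change.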
Procedia PDF Downloads 211
50 Coastal Resources Spatial Planning and Potential Oil Risk Analysis: Case Study of Misratah's Coastal Resources, Libya
Authors: Abduladim Maitieg, Kevin Lynch, Mark Johnson
Abstract:
The goal of the Libyan Environmental General Authority (EGA) and National Oil Corporation (Department of Health, Safety & Environment) during the last 5 years has been to adopt a common approach to coastal and marine spatial planning. Protection and planning of the coastal zone is a significant for Libya, due to the length of coast and, the high rate of oil export, and spills’ potential negative impacts on coastal and marine habitats. Coastal resource scenarios constitute an important tool for exploring the long-term and short-term consequences of oil spill impact and available response options that would provide an integrated perspective on mitigation. To investigate that, this paper reviews the Misratah coastal parameters to present the physical and human controls and attributes of coastal habitats as the first step in understanding how they may be damaged by an oil spill. This paper also investigates costal resources, providing a better understanding of the resources and factors that impact the integrity of the ecosystem. Therefore, the study described the potential spatial distribution of oil spill risk and the coastal resources value, and also created spatial maps of coastal resources and their vulnerability to oil spills along the coast. This study proposes an analysis of coastal resources condition at a local level in the Misratah region of the Mediterranean Sea, considering the implementation of coastal and marine spatial planning over time as an indication of the will to manage urban development. Oil spill contamination analysis and their impact on the coastal resources depend on (1) oil spill sequence, (2) oil spill location, (3) oil spill movement near the coastal area. The resulting maps show natural, socio-economic activity, environmental resources along of the coast, and oil spill location. Moreover, the study provides significant geodatabase information which is required for coastal sensitivity index mapping and coastal management studies. The outcome of study provides the information necessary to set an Environmental Sensitivity Index (ESI) for the Misratah shoreline, which can be used for management of coastal resources and setting boundaries for each coastal sensitivity sectors, as well as to help planners measure the impact of oil spills on coastal resources. Geographic Information System (GIS) tools were used in order to store and illustrate the spatial convergence of existing socio-economic activities such as fishing, tourism, and the salt industry, and ecosystem components such as sea turtle nesting area, Sabkha habitats, and migratory birds feeding sites. These geodatabases help planners investigate the vulnerability of coastal resources to an oil spill.Keywords: coastal and marine spatial planning advancement training, GIS mapping, human uses, ecosystem components, Misratah coast, Libyan, oil spill
Procedia PDF Downloads 362
49 Experimental and Computational Fluid Dynamic Modeling of a Progressing Cavity Pump Handling Newtonian Fluids
Authors: Deisy Becerra, Edwar Perez, Nicolas Rios, Miguel Asuaje
Abstract:
The Progressing Cavity Pump (PCP) is a type of positive displacement pump that is gaining importance as artificial lift equipment in the heavy oil field. The most commonly used PCP is the single-lobe pump, which consists of a single external helical rotor turning eccentrically inside a double internal helical stator. This type of pump was analyzed with experimental and Computational Fluid Dynamic (CFD) approaches for the DCAB031 model installed in a closed-loop arrangement. Experimental measurements were taken to determine the pressure rise and flow rate with a flow control valve installed at the outlet of the pump. The flow rate handled was measured by a FLOMEC-OM025 oval gear flowmeter. For each flow rate considered, the pump’s rotational speed and power input were controlled using an Invertek Optidrive E3 frequency driver. Once steady-state operation was attained, pressure rise measurements were taken with a Sper Scientific wide range digital pressure meter. In this study, water and three Newtonian oils of different viscosities were tested at different rotational speeds. The CFD model was implemented in Star-CCM+ using an overset mesh that includes the relative motion between rotor and stator, which is one of the main contributions of the present work. The simulations are capable of providing detailed information about the pressure and velocity fields inside the device in laminar and unsteady regimes. The simulations show good agreement with the experimental data, with a Mean Squared Error (MSE) under 21%, and the Grid Convergence Index (GCI) was calculated for the validation of the mesh, obtaining a value of 2.5%. Three different rotational speeds were evaluated (200, 300, and 400 rpm), and a directly proportional relationship between the rotational speed of the rotor and the calculated flow rate is observed. The maximum production rates at the different speeds were 3.8 GPM, 4.3 GPM, and 6.1 GPM for water, and 1.8 GPM, 2.5 GPM, and 3.8 GPM for the oil tested, respectively. Likewise, an inversely proportional relationship between the viscosity of the fluid and pump performance was observed, since the viscous oils showed the lowest pressure increase and the lowest volumetric flow pumped, with a degradation of around 30% in the pressure rise between performance curves. Finally, the Productivity Index (PI) remained approximately constant for the different speeds evaluated; however, between fluids there is a diminution due to viscosity.
Keywords: computational fluid dynamic, CFD, Newtonian fluids, overset mesh, PCP pressure rise
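For reference, a Grid Convergence Index such as the 2.5% reported above is typically computed with the standard three-grid (Celik/Roache) procedure. A minimal sketch follows, with an illustrative refinement ratio and pressure-rise values rather than the study's actual mesh results.

```python
import numpy as np

def grid_convergence_index(f_fine, f_med, f_coarse, r, Fs=1.25):
    """Fine-grid GCI from three solutions on systematically refined grids.

    f_fine, f_med, f_coarse: a representative quantity (e.g. pressure rise)
    r: constant grid refinement ratio; Fs: safety factor (1.25 for three grids).
    """
    # Observed order of accuracy from the three solutions.
    p = np.log(abs(f_coarse - f_med) / abs(f_med - f_fine)) / np.log(r)
    # Relative error between the two finest solutions.
    e21 = abs((f_med - f_fine) / f_fine)
    return Fs * e21 / (r**p - 1.0), p

# Illustrative pressure-rise values (psi) on coarse -> fine overset meshes.
gci, p = grid_convergence_index(f_fine=101.2, f_med=103.0, f_coarse=108.5, r=1.5)
print(f"observed order p = {p:.2f}, fine-grid GCI = {100 * gci:.2f}%")
```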
Procedia PDF Downloads 128
48 Robust Inference with a Skew T Distribution
Authors: M. Qamarul Islam, Ergun Dogan, Mehmet Yazici
Abstract:
There is a growing body of evidence that non-normal data are more prevalent in nature than normal data. Examples can be quoted from, but are not restricted to, the areas of Economics, Finance, and Actuarial Science. The non-normality considered here is expressed in terms of the fat-tailedness and asymmetry of the relevant distribution. In this study, a skew t distribution that can be used to model data exhibiting inherently non-normal behavior is considered. This distribution has tails fatter than a normal distribution and also exhibits skewness. Although maximum likelihood estimates can be obtained by iteratively solving the likelihood equations, which are non-linear in form, this can be problematic in terms of convergence and in many other respects as well. Therefore, it is preferred to use the method of modified maximum likelihood, in which the likelihood estimates are derived by expressing the intractable non-linear likelihood equations in terms of standardized ordered variates and replacing the intractable terms by their linear approximations obtained from the first two terms of a Taylor series expansion about the quantiles of the distribution. These estimates, called modified maximum likelihood estimates, are obtained in closed form. Hence, they are easy to compute and to manipulate analytically. In fact, the modified maximum likelihood estimates are asymptotically equivalent to maximum likelihood estimates. Even in small samples, the modified maximum likelihood estimates are found to be approximately the same as the maximum likelihood estimates obtained iteratively. It is shown in this study that the modified maximum likelihood estimates are not only unbiased but substantially more efficient than the commonly used moment estimates or the least square estimates that are known to be biased and inefficient in such cases. Furthermore, in conventional regression analysis, it is assumed that the error terms are distributed normally and, hence, the well-known least square method is considered to be a suitable and preferred method for making the relevant statistical inferences. However, a number of empirical studies have shown that non-normal errors are more prevalent. Even transforming and/or filtering techniques may not produce normally distributed residuals. Here, a study is done for multiple linear regression models with random errors following a non-normal pattern. Through an extensive simulation, it is shown that the modified maximum likelihood estimates of the regression parameters are plausibly robust to the distributional assumptions and to various data anomalies as compared to the widely used least square estimates. Relevant tests of hypothesis are developed and explored for desirable properties in terms of their size and power. The tests based upon modified maximum likelihood estimates are found to be substantially more powerful than the tests based upon least square estimates. Several examples are provided from the areas of Economics and Finance where such distributions are interpretable in terms of the efficient market hypothesis with respect to asset pricing, portfolio selection, risk measurement, capital allocation, etc.
Keywords: least square estimates, linear regression, maximum likelihood estimates, modified maximum likelihood method, non-normality, robustness
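The sketch below illustrates the kind of Monte Carlo setup used to study regression estimates under skew-t errors: it generates Azzalini-type skew-t disturbances and tracks the sampling variability of the ordinary least-squares slope. It does not implement the modified maximum likelihood estimator itself, and the shape parameter, degrees of freedom, and sample size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def skew_t_errors(n, alpha=3.0, nu=4.0):
    """Azzalini-type skew-t draws: a skew-normal numerator over sqrt(chi2_nu / nu)."""
    z0, z1 = rng.standard_normal(n), rng.standard_normal(n)
    delta = alpha / np.sqrt(1.0 + alpha**2)
    sn = delta * np.abs(z0) + np.sqrt(1.0 - delta**2) * z1   # skew-normal variate
    w = rng.chisquare(nu, n) / nu
    return sn / np.sqrt(w)

def ols_slope(x, y):
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

# Monte Carlo: sampling distribution of the OLS slope under skew-t errors.
n, beta, reps = 50, 1.0, 2000
slopes = []
for _ in range(reps):
    x = rng.uniform(0, 10, n)
    y = 2.0 + beta * x + skew_t_errors(n)
    slopes.append(ols_slope(x, y))
slopes = np.asarray(slopes)
print(f"mean slope = {slopes.mean():.3f}, sampling SD = {slopes.std(ddof=1):.3f}")
```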
Procedia PDF Downloads 397
47 Co-Seismic Deformation Using InSAR Sentinel-1A: Case Study of the 6.5 Mw Pidie Jaya, Aceh, Earthquake
Authors: Jefriza, Habibah Lateh, Saumi Syahreza
Abstract:
The 2016 Mw 6.5 Pidie Jaya earthquake is one of the biggest disasters that has occurred in Aceh within the last five years. This earthquake caused severe damage to many infrastructures such as schools, hospitals, mosques, and houses in the district of Pidie Jaya and surrounding areas. Earthquakes commonly occur in Aceh Province because the Aceh-Sumatra region is located on the convergent boundary where the Indo-Australian Plate subducts beneath the Sunda Plate. This convergence is responsible for the intensification of seismicity in this region. The plates converge at a rate of about 63 mm per year, and the right-lateral component is accommodated by strike-slip faulting within Sumatra, mainly along the great Sumatran fault. This paper presents preliminary findings of an InSAR study aimed at investigating the co-seismic surface deformation pattern in Pidie Jaya, Aceh, Indonesia. Co-seismic surface deformation is the rapid displacement that occurs at the time of an earthquake. Co-seismic displacement mapping is required to study the behavior of seismic faults. InSAR is a powerful tool for measuring Earth surface deformation to a precision of a few centimetres. In this study, two radar images of the same area acquired at two different times are required to detect changes in the Earth’s surface. The ascending and descending Sentinel-1A (S1A) synthetic aperture radar (SAR) data and the Sentinel Application Platform (SNAP) toolbox were used to generate the SAR interferogram image. In order to visualize the interferometric phase, the S1A master (26 Nov 2016) and slave (26 Dec 2016) data sets were utilized as the main data source for mapping the co-seismic surface deformation. The results show that fringes of phase difference appear in the border region as a result of the movement detected with the interferometric technique. On the other hand, a dominant fringe pattern also appears near the coastal area, which is consistent with the field investigations conducted two days after the earthquake. However, the study also has limitations related to resolution and atmospheric artefacts in the SAR interferograms. The atmospheric artefacts are caused by changes in the atmospheric refractive index of the medium and, as a result, limit the ability to produce a coherent image. Low coherence affects the formation of fringes (movement is detected through fringes). The spatial resolution of the Sentinel satellite was not sufficient for studying land surface deformation in this area. Further studies will therefore also be carried out using both ALOS and TerraSAR-X, which offer improved spatial resolution over the Sentinel SAR data.
Keywords: earthquake, InSAR, interferometric, Sentinel-1A
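As an illustration of the core interferometric step described above, the following numpy/scipy sketch forms the wrapped phase difference and a coherence estimate from two co-registered complex (SLC) patches. The data here are synthetic stand-ins for the actual Sentinel-1A scenes; in practice co-registration, topographic phase removal, and filtering are handled in SNAP.

```python
import numpy as np
from scipy.signal import convolve2d

def boxcar(a, win=5):
    """Simple moving-average filter used for the coherence estimate."""
    return convolve2d(a, np.ones((win, win)) / win**2, mode="same", boundary="symm")

def interferogram(master, slave, win=5):
    """Wrapped interferometric phase and coherence for co-registered complex SLC patches."""
    ifg = master * np.conj(slave)                          # complex interferogram
    phase = np.angle(ifg)                                  # wrapped phase in (-pi, pi]
    num = np.abs(boxcar(ifg.real, win) + 1j * boxcar(ifg.imag, win))
    den = np.sqrt(boxcar(np.abs(master)**2, win) * boxcar(np.abs(slave)**2, win))
    coherence = num / np.maximum(den, 1e-12)
    return phase, coherence

# Synthetic stand-in for the real master/slave SLC patches.
rng = np.random.default_rng(1)
ny, nx = 200, 200
y, x = np.mgrid[0:ny, 0:nx]
defo_phase = 2 * np.pi * (x + y) / 150.0                   # synthetic coseismic fringes
master = np.exp(1j * rng.uniform(0, 2 * np.pi, (ny, nx)))
slave = master * np.exp(-1j * defo_phase) * np.exp(1j * 0.3 * rng.standard_normal((ny, nx)))
phase, coh = interferogram(master, slave)
print(phase.shape, float(coh.mean()))
```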
Procedia PDF Downloads 197
46 Tectonic Setting of Hinterland and Foreland Basins According to Tectonic Vergence in Eastern Iran
Authors: Shahriyar Keshtgar, Mahmoud Reza Heyhat, Sasan Bagheri, Ebrahim Gholami, Seyed Naser Raiisosadat
Abstract:
Various tectonic interpretations have been presented by different researchers to explain the geological evolution of eastern Iran, but there are still many ambiguities and disagreements about the geodynamic nature of the Paleogene mountain range of eastern Iran. The purpose of this research is to clarify and discuss the tectonic position of the foreland and hinterland regions of eastern Iran from the perspective of sedimentary basin tectonics. In the tectonic model of oceanic crust subduction under the Afghan block, the hinterland is located to the east, on the Afghan block, and the foreland is located on the passive margin of the Sistan open ocean in the west. After the collision of the two microcontinents, the foreland basin must be located somewhere on the passive margin of the Lut block. This basin can accumulate thick Paleocene to Oligocene sediments on top of the Cretaceous and older sediments. Thrust faults here will move towards the west. If we accept the subduction model of the Sistan Ocean under the Lut block, the hinterland is located to the west, towards the Lut block, and the foreland basin is located towards the Sistan Ocean in the east. After the collision of the two microcontinents, the foreland basin with Paleogene sediments should expand over the Sefidaba basin. Thrust faults here will move towards the east. If we consider the two-sided subduction model of the oceanic crust under both the Lut and Afghan continental blocks, the tectonic positions of the foreland and hinterland basins will not change and will be similar to those of the one-sided subduction models. After the collision of the two microcontinents, the foreland basin should develop in the central part of the eastern Iranian orogen. In the oroclinal buckling model, the foreland basin will continue not only in the east and west but also, continuously, in the north. In this model, since there is practically no collision, the foreland basin is not developed, and the remnants of the Sistan Ocean ophiolites and their deep turbidite sediments appear in the axial part of the mountain range, where the Neh and Khash complexes are located. The structural data from this research along the northern border of the Sistan belt and the Lut block indicate convergence of the tectonic vergence directions towards the interior of the Sistan belt (towards the southwest in the Ahangaran area, towards the south-southeast north of Birjand, and towards the southeast in the Sechengi area). According to this research, not only does the general movement of thrust sheets not follow the linear orogeny models, but the expected active foreland basins have not formed in the places mentioned in eastern Iran. Therefore, these results do not support previous tectonic models for eastern Iran (i.e., rifting of the eastern Iran continental crust and subsequent linear collision of the Lut and Afghan blocks); rather, the observed pattern appears to have been caused by oroclinal buckling in the Late Eocene-Oligocene.
Keywords: foreland, hinterland, tectonic vergence, orocline buckling, eastern Iran
Procedia PDF Downloads 69
45 Sea Surface Trend over the Arabian Sea and Its Influence on the South West Monsoon Rainfall Variability over Sri Lanka
Authors: Sherly Shelton, Zhaohui Lin
Abstract:
In recent decades, the inter-annual variability of summer precipitation over India and Sri Lanka has intensified significantly, with an increased frequency of both abnormally dry and wet summers. Prediction of the inter-annual variability of summer precipitation is therefore crucial and urgent for water management and local agricultural scheduling. However, none of the hypotheses put forward so far fully explains the monsoon variability and the related factors that affect South West Monsoon (SWM) variability in Sri Lanka. This study focuses on identifying the spatial and temporal variability of SWM rainfall from June to September (JJAS) over Sri Lanka and the associated trends. Monthly rainfall records covering 1980-2013 from 19 stations across Sri Lanka are used to investigate long-term trends in SWM rainfall. The linear trends of atmospheric variables are calculated to understand the drivers behind the changes, based on observed precipitation, sea surface temperature, and atmospheric reanalysis products for 34 years (1980–2013). Empirical orthogonal function (EOF) analysis was applied to understand the spatial and temporal behaviour of seasonal SWM rainfall variability and also to investigate whether the trend pattern is the dominant mode that explains SWM rainfall variability. The spatial and station-based precipitation over the country showed statistically insignificant decreasing trends, except at a few stations. The first two EOFs of the seasonal (JJAS) mean rainfall explained 52% and 23% of the total variance, and the first PC showed positive loadings of SWM rainfall over the whole landmass, with the strongest positive loading in the western/southwestern part of Sri Lanka. There is a negative correlation (r ≤ -0.3) between the SMRI and SST in the Arabian Sea and the central Indian Ocean, which indicates that lower temperatures in the Arabian Sea and central Indian Ocean are associated with greater rainfall over the country. This study also shows consistent warming throughout the Indian Ocean. The results show that precipitable water over the country is decreasing with time, which contributes to the reduction of precipitation over the area by weakening the updraft. In addition, evaporation is weakening over the Arabian Sea, the Bay of Bengal, and the Sri Lankan landmass, which leads to a reduction in the moisture availability required for SWM rainfall over Sri Lanka. At the same time, the weakening of the SST gradient between the Arabian Sea and the Bay of Bengal can weaken the monsoon circulation, which ultimately diminishes the SWM over Sri Lanka. The decreasing trends of moisture, moisture transport, zonal wind, and moisture divergence, together with weakening evaporation over the Arabian Sea during the past decades, have an aggravating influence on the decreasing trend of monsoon rainfall over Sri Lanka.
Keywords: Arabian Sea, moisture flux convergence, South West Monsoon, Sri Lanka, sea surface temperature
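For illustration, the sketch below performs the EOF decomposition of a (seasons x stations) rainfall anomaly matrix via singular value decomposition, the same operation used to extract the leading modes of JJAS rainfall variability. The rainfall matrix is synthetic, standing in for the 19-station, 34-year record.

```python
import numpy as np

def eof_analysis(data):
    """EOF analysis of a (years x stations) matrix via SVD of the anomalies.

    Returns spatial patterns (EOFs), principal components (PCs),
    and the fraction of total variance explained by each mode.
    """
    anomalies = data - data.mean(axis=0)                 # remove the climatological mean
    u, s, vt = np.linalg.svd(anomalies, full_matrices=False)
    eofs = vt                                            # rows: spatial loading patterns
    pcs = u * s                                          # columns: time series of each mode
    var_frac = s**2 / np.sum(s**2)
    return eofs, pcs, var_frac

# Synthetic JJAS rainfall: 34 seasons x 19 stations (stand-in for the observed record).
rng = np.random.default_rng(0)
years, stations = 34, 19
common_signal = rng.standard_normal(years)[:, None] * rng.uniform(0.5, 1.5, stations)
rainfall = 800 + 120 * common_signal + 40 * rng.standard_normal((years, stations))

eofs, pcs, var_frac = eof_analysis(rainfall)
print("variance explained by first two EOFs:", np.round(var_frac[:2], 2))
```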
Procedia PDF Downloads 133
44 Impact of Transitioning to Renewable Energy Sources on Key Performance Indicators and Artificial Intelligence Modules of Data Center
Authors: Ahmed Hossam ElMolla, Mohamed Hatem Saleh, Hamza Mostafa, Lara Mamdouh, Yassin Wael
Abstract:
Artificial intelligence (AI) is reshaping industries, and its potential to revolutionize renewable energy and data center operations is immense. By harnessing AI's capabilities, we can optimize energy consumption, predict fluctuations in renewable energy generation, and improve the efficiency of data center infrastructure. This convergence of technologies promises a future where energy is managed more intelligently, sustainably, and cost-effectively. The integration of AI into renewable energy systems unlocks a wealth of opportunities. Machine learning algorithms can analyze vast amounts of data to forecast weather patterns, solar irradiance, and wind speeds, enabling more accurate energy production planning. AI-powered systems can optimize energy storage and grid management, ensuring a stable power supply even during intermittent renewable generation. Moreover, AI can identify maintenance needs for renewable energy infrastructure, preventing costly breakdowns and maximizing system lifespan. Data centers, which consume substantial amounts of energy, are prime candidates for AI-driven optimization. AI can analyze energy consumption patterns, identify inefficiencies, and recommend adjustments to cooling systems, server utilization, and power distribution. Predictive maintenance using AI can prevent equipment failures, reducing energy waste and downtime. Additionally, AI can optimize data placement and retrieval, minimizing energy consumption associated with data transfer. As AI transforms renewable energy and data center operations, modified Key Performance Indicators (KPIs) will emerge. Traditional metrics like energy efficiency and cost-per-megawatt-hour will continue to be relevant, but additional KPIs focused on AI's impact will be essential. These might include AI-driven cost savings, predictive accuracy of energy generation and consumption, and the reduction of carbon emissions attributed to AI-optimized operations. By tracking these KPIs, organizations can measure the success of their AI initiatives and identify areas for improvement. Ultimately, the synergy between AI, renewable energy, and data centers holds the potential to create a more sustainable and resilient future. By embracing these technologies, we can build smarter, greener, and more efficient systems that benefit both the environment and the economy.Keywords: data center, artificial intelligence, renewable energy, energy efficiency, sustainability, optimization, predictive analytics, energy consumption, energy storage, grid management, data center optimization, key performance indicators, carbon emissions, resiliency
Procedia PDF Downloads 36
43 Biocompatibility Tests for Chronic Application of Sieve-Type Neural Electrodes in Rats
Authors: Jeong-Hyun Hong, Wonsuk Choi, Hyungdal Park, Jinseok Kim, Junesun Kim
Abstract:
Identifying the chronic function of an implanted neural electrode is an important factor in acquiring neural signals through the electrode or restoring nerve function after peripheral nerve injury. The purpose of this study was to investigate the biocompatibility of a neural electrode chronically implanted into the sciatic nerve. To do this, a sieve-type neural electrode was implanted between the proximal and distal ends of a transected sciatic nerve in an experimental group (sieve group, n=6), while end-to-end epineural repair was performed on the cut sciatic nerve in a control group (reconstruction group, n=6). All surgeries were performed on the sciatic nerve of the right leg in Sprague Dawley rats. Behavioral tests were performed before surgery, at 1, 4, 7, 10, and 14 days after surgery, and then weekly until 5 months following surgery. Changes in sensory function were assessed by measuring paw withdrawal responses to mechanical and cold stimuli. Motor function was assessed by motion analysis using the Qualisys program, which provided the range of motion (ROM) of the joints. Neurofilament-heavy chain and fibronectin expression were examined 5 months after surgery. In both groups, the paw withdrawal response to mechanical stimuli was slightly decreased from 3 weeks after surgery and then significantly decreased at 6 weeks after surgery. The paw withdrawal response to cold stimuli was increased from 4 days following surgery in both groups and began to decrease from 6 weeks after surgery. The ROM of the ankle joint showed a similar pattern in both groups: it significantly increased from 1 day after surgery and then decreased from 4 days after surgery. Neurofilament-heavy chain expression was observed throughout the entire sciatic nerve tissue in both groups. In particular, the sieve group showed several neurofilaments that passed through the channels of the sieve-type neural electrode. In the reconstruction group, however, a suture line was seen through neurofilament-heavy chain expression up to 5 months following surgery. In the reconstruction group, fibronectin was detected throughout the sciatic nerve. In the sieve group, however, fibronectin was observed only in the nervous tissue surrounding the implanted neural electrode. The present results demonstrate that the implanted sieve-type neural electrode induced a focal inflammatory response. However, the chronically implanted sieve-type neural electrodes did not cause any further inflammatory response following peripheral nerve injury, suggesting the possibility of chronic application of sieve-type neural electrodes. This work was supported by the Basic Science Research Program funded by the Ministry of Science (2016R1D1A1B03933986) and by the convergence technology development program for bionic arm (2017M3C1B2085303).
Keywords: biocompatibility, motor functions, neural electrodes, peripheral nerve injury, sensory functions
Procedia PDF Downloads 151
42 Multiphase Equilibrium Characterization Model For Hydrate-Containing Systems Based On Trust-Region Method Non-Iterative Solving Approach
Authors: Zhuoran Li, Guan Qin
Abstract:
A robust and efficient compositional equilibrium characterization model for hydrate-containing systems is required, especially for time-critical simulations such as subsea pipeline flow assurance analysis, compositional simulation in hydrate reservoirs, etc. A multiphase flash calculation framework, which combines a Gibbs energy minimization function with the cubic plus association (CPA) EoS, is developed to describe the highly non-ideal phase behavior of hydrate-containing systems. A non-iterative eigenvalue-problem-solving approach for the trust-region sub-problem is selected to guarantee efficiency. The developed flash model is based on the state-of-the-art objective function proposed by Michelsen to minimize the Gibbs energy of the multiphase system. A hydrate-containing system always contains polar components (such as water and hydrate inhibitors), which introduce hydrogen bonds that influence phase behavior. Thus, the CPA EoS is utilized to compute the thermodynamic parameters. The solid solution theory proposed by van der Waals and Platteeuw is applied to represent the hydrate phase parameters. The trust-region method, combined with the non-iterative eigenvalue-problem-solving approach for the trust-region sub-problem, is utilized to ensure fast convergence. The accuracy of the developed multiphase flash model is validated against three available models (one published and two commercial models). Hundreds of published equilibrium experimental data points for hydrate-containing systems are collected to act as the benchmark group for the accuracy test. The accuracy comparison shows that our model outperforms two of the models and achieves calculation accuracy comparable to that of CSMGem. An efficiency test has also been carried out. Because the trust-region method determines the direction and size of the optimization step simultaneously, fast solution progress can be obtained. The comparison results show that fewer iterations are needed to optimize the objective function with trust-region methods than with line search methods. The non-iterative eigenvalue-problem approach also computes faster than the conventional iterative solving algorithm for the trust-region sub-problem, further improving the calculation efficiency. A new thermodynamic framework for the multiphase flash model of hydrate-containing systems has been constructed in this work. Sensitivity analysis and numerical experiments have been carried out to prove the accuracy and efficiency of this model. Furthermore, the model can be implemented straightforwardly on top of the thermodynamic models currently used in the oil and gas industry.
Keywords: equation of state, hydrates, multiphase equilibrium, trust-region method
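For context, the sketch below solves the trust-region sub-problem that underlies such a Gibbs-energy minimization step. It works in the eigenbasis of the Hessian and finds the boundary multiplier with a scalar root find for clarity; the paper's contribution is precisely to replace that iterative scalar solve with a single (non-iterative) eigenvalue problem. The Hessian and gradient here are random illustrative values, not a CPA-based Gibbs-energy model.

```python
import numpy as np
from scipy.optimize import brentq

def solve_trust_region_subproblem(B, g, delta):
    """Solve min 0.5 p^T B p + g^T p subject to ||p|| <= delta for symmetric B."""
    evals, Q = np.linalg.eigh(B)
    c = Q.T @ g

    def step_norm(lam):
        return np.linalg.norm(c / (evals + lam))

    # Interior solution if B is positive definite and the Newton step fits.
    if evals.min() > 0 and step_norm(0.0) <= delta:
        return -Q @ (c / evals)

    # Otherwise the solution lies on the boundary: find lam with ||p(lam)|| = delta.
    lo = max(0.0, -evals.min()) + 1e-12
    hi = lo + 1.0
    while step_norm(hi) > delta:        # bracket the root
        hi *= 2.0
    lam = brentq(lambda l: step_norm(l) - delta, lo, hi)
    return -Q @ (c / (evals + lam))

# Illustrative quadratic model: symmetric (possibly indefinite) Hessian B and gradient g.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = A + A.T
g = rng.standard_normal(4)
p = solve_trust_region_subproblem(B, g, delta=0.5)
print(np.linalg.norm(p))                # step length stays within the trust radius
```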
Procedia PDF Downloads 173
41 Coupling Strategy for Multi-Scale Simulations in Micro-Channels
Authors: Dahia Chibouti, Benoit Trouette, Eric Chenier
Abstract:
With the development of micro-electro-mechanical systems (MEMS), understanding fluid flow and heat transfer at the micrometer scale is crucial. When the characteristic length scale of the flow narrows to around ten times the mean free path of the gas molecules, the classical fluid mechanics and energy equations remain valid in the bulk flow, but particular attention must be paid to the gas/solid interface boundary conditions. Indeed, in the vicinity of the wall, over a thickness of about one mean free path, called the Knudsen layer, the gas molecules are no longer in local thermodynamic equilibrium. Therefore, macroscopic models based on velocity, temperature, and heat flux jump conditions must be applied at the fluid/solid interface to take this non-equilibrium into account. Although these macroscopic models are widely used, the assumptions on which they depend are not necessarily verified in realistic cases. In order to get rid of these assumptions, simulations at the molecular scale are carried out to study how molecular interactions with the walls can change the fluid flow and heat transfer in the vicinity of the walls. The developed approach is based on a kind of heterogeneous multi-scale method: micro-domains overlap the continuum domain, and coupling is carried out through exchanges of information between the molecular and continuum approaches. In practice, molecular dynamics describes the fluid flow and heat transfer in the micro-domains, while the Navier-Stokes and energy equations are used at larger scales. In this framework, two kinds of micro-simulation are performed: i) in the bulk, to obtain the thermo-physical properties (viscosity, conductivity, ...) as well as the equation of state of the fluid, and ii) close to the walls, to identify the relationships between the slip velocity and the shear stress or between the temperature jump and the normal temperature gradient. The coupling strategy relies on an implicit formulation of the quantities extracted from the micro-domains. Indeed, using the results of the molecular simulations, a Bayesian regression is performed in order to build continuous laws giving the behavior of the physical properties, the equation of state, and the slip relationships, as well as their uncertainties. The latter make it possible to set up a learning strategy to optimize the number of micro-simulations. In the present contribution, the first results regarding this coupling, associated with the learning strategy, are illustrated through parametric studies of convergence criteria, the choice of basis functions, and the noise of the input data. Anisothermal flows of a Lennard-Jones fluid in micro-channels are finally presented.
Keywords: multi-scale, microfluidics, micro-channel, hybrid approach, coupling
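As an illustration of the Bayesian-regression step that turns scattered micro-domain samples into a continuous slip law with uncertainty, the following scikit-learn sketch fits a slip-velocity/shear-stress relationship to synthetic "molecular dynamics" data. The linear slip model, units, and noise level are assumptions for demonstration only, and the actual study may use a different regression basis.

```python
import numpy as np
from sklearn.linear_model import BayesianRidge

# Synthetic wall micro-domain samples: slip velocity responding roughly
# linearly to wall shear stress, with MD-like noise (reduced LJ units, illustrative).
rng = np.random.default_rng(0)
shear_stress = rng.uniform(0.1, 2.0, 40)
true_slip_length = 0.8
slip_velocity = true_slip_length * shear_stress + 0.05 * rng.standard_normal(40)

model = BayesianRidge()
model.fit(shear_stress.reshape(-1, 1), slip_velocity)

# Continuous slip law (with uncertainty) to be fed back to the Navier-Stokes solver;
# the predictive standard deviation can drive the choice of new micro-simulations.
tau_query = np.linspace(0.1, 2.0, 5).reshape(-1, 1)
mean, std = model.predict(tau_query, return_std=True)
for t, m, s in zip(tau_query.ravel(), mean, std):
    print(f"tau = {t:.2f}: slip velocity = {m:.3f} +/- {s:.3f}")
```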
Procedia PDF Downloads 168
40 A Comparison of Methods for Estimating Dichotomous Treatment Effects: A Simulation Study
Authors: Jacqueline Y. Thompson, Sam Watson, Lee Middleton, Karla Hemming
Abstract:
Introduction: The odds ratio (estimated via logistic regression) is a well-established and common approach for estimating covariate-adjusted binary treatment effects when comparing a treatment and a control group with dichotomous outcomes. Its popularity is primarily because of its stability and robustness to model misspecification. However, the situation is different for the relative risk and the risk difference, which are arguably easier to interpret and better suited to specific designs such as non-inferiority studies. So far, there is no equivalent, widely accepted approach for estimating an adjusted relative risk or risk difference when conducting clinical trials. This is partly due to the lack of a comprehensive evaluation of the available candidate methods. Methods/Approach: A simulation study is designed to evaluate the performance of relevant candidate methods for estimating relative risks, representing both conditional and marginal estimation approaches. We consider the log-binomial generalised linear model (GLM) with iteratively weighted least-squares (IWLS) and model-based standard errors (SE); the log-binomial GLM with convex optimisation and model-based SEs; the log-binomial GLM with convex optimisation and permutation tests; the modified-Poisson GLM with IWLS and robust SEs; the log-binomial generalised estimating equations (GEE) approach with robust SEs; marginal standardisation with delta-method SEs; and marginal standardisation with permutation-test SEs. Independent and identically distributed datasets are simulated from a randomised controlled trial to evaluate these candidate methods. Simulations are replicated 10000 times for each scenario across all possible combinations of sample sizes (200, 1000, and 5000), outcomes (10%, 50%, and 80%), and covariates (ranging from -0.05 to 0.7) representing weak, moderate, or strong relationships. Treatment effects (0, -0.5, and 1 on the log scale) cover null (H0) and alternative (H1) hypotheses to evaluate coverage and power in realistic scenarios. Performance measures (bias, mean square error (MSE), relative efficiency, and convergence rates) are evaluated across scenarios covering a range of sample sizes, event rates, covariate prognostic strength, and model misspecifications. Potential Results, Relevance & Impact: There are several methods for estimating unadjusted and adjusted relative risks. However, it is unclear which method(s) is the most efficient, preserves the type-I error rate, is robust to model misspecification, or is the most powerful when adjusting for non-prognostic and prognostic covariates. GEE estimation may be biased when the outcome distributions are not from marginal binary data. Also, it seems that marginal standardisation and convex optimisation may perform better than the GLM IWLS log-binomial approach.
Keywords: binary outcomes, statistical methods, clinical trials, simulation study
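To make one of the candidate estimators concrete, the sketch below applies the modified-Poisson approach (a Poisson GLM with a log link and robust sandwich standard errors, in the spirit of Zou 2004) to a simulated two-arm trial. The data-generating values are illustrative and are not the scenarios of the simulation study itself.

```python
import numpy as np
import statsmodels.api as sm

# Simulated two-arm trial with a binary outcome and one prognostic covariate.
rng = np.random.default_rng(0)
n = 1000
treat = rng.integers(0, 2, n)
covariate = rng.normal(0, 1, n)
log_rr_true = np.log(0.7)                       # assumed true relative risk of 0.7
p = np.clip(np.exp(np.log(0.3) + log_rr_true * treat + 0.3 * covariate), 0, 0.99)
y = rng.binomial(1, p)

# Modified-Poisson estimator: Poisson GLM for the binary outcome,
# paired with robust (sandwich) standard errors.
X = sm.add_constant(np.column_stack([treat, covariate]))
fit = sm.GLM(y, X, family=sm.families.Poisson()).fit(cov_type="HC1")
print("adjusted relative risk:", np.exp(fit.params[1]))
print("95% CI:", np.exp(fit.conf_int()[1]))
```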
Procedia PDF Downloads 115
39 Transcription Skills and Written Composition in Chinese
Authors: Pui-sze Yeung, Connie Suk-han Ho, David Wai-ock Chan, Kevin Kien-hoa Chung
Abstract:
Background: Recent findings have shown that transcription skills play a unique and significant role in Chinese word reading, spelling (i.e., word dictation), and written composition development. The interrelationships among the component skills of transcription, word reading, word spelling, and written composition in Chinese have rarely been examined in the literature. Is the contribution of the component skills of transcription to Chinese written composition mediated by word-level skills (i.e., word reading and spelling)? Methods: The participants in the study were 249 Chinese children in Grade 1, Grade 3, and Grade 5 in Hong Kong. They were administered measures of general reasoning ability, orthographic knowledge, stroke sequence knowledge, word spelling, handwriting fluency, word reading, and Chinese narrative writing. Orthographic knowledge: orthographic knowledge was assessed by a task modeled after the lexical decision subtest of the Hong Kong Test of Specific Learning Difficulties in Reading and Writing (HKT-SpLD). Stroke sequence knowledge: the participants’ performance in producing legitimate stroke sequences was measured by a stroke sequence knowledge task. Handwriting fluency: handwriting fluency was assessed by a task modeled after the Chinese Handwriting Speed Test. Word spelling: the stimuli of the word spelling task consist of fourteen two-character Chinese words. Word reading: the stimuli of the word reading task consist of 120 two-character Chinese words. Written composition: a narrative writing task was used to assess the participants’ text writing skills. Results: Analysis of covariance results showed that there were significant between-grade differences in the performance of word reading, word spelling, handwriting fluency, and written composition. Preliminary hierarchical multiple regression analysis results showed that orthographic knowledge, word spelling, and handwriting fluency were unique predictors of Chinese written composition even after controlling for age, IQ, and word reading. The interaction effects between grade and each of these three skills (orthographic knowledge, word spelling, and handwriting fluency) were not significant. Path analysis results showed that orthographic knowledge contributed to written composition both directly and indirectly through word spelling, while handwriting fluency contributed to written composition directly and indirectly through both word reading and spelling. Stroke sequence knowledge contributed to written composition only indirectly, through word spelling. Conclusions: The preliminary hierarchical regression results were consistent with previous findings about the significant role of transcription skills in Chinese word reading, spelling, and written composition development. The fact that orthographic knowledge contributed both directly and indirectly to written composition through word reading and spelling may reflect the impact of the script-sound-meaning convergence of Chinese characters on the composing process. The significant contribution of word spelling and handwriting fluency to Chinese written composition across the elementary grades highlights the difficulty in attaining automaticity of transcription skills in Chinese, which limits the working memory resources available for other composing processes.
Keywords: orthographic knowledge, transcription skills, word reading, writing
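As a sketch of the hierarchical-regression logic described above (the R-squared change after entering the transcription-related skills beyond the control variables), the following Python example uses synthetic stand-ins for the measured skills; the coefficients, noise level, and variable construction are illustrative assumptions, not the study's data.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic stand-ins for the measured skills (n = 249 children in the study).
rng = np.random.default_rng(0)
n = 249
age, iq = rng.normal(0, 1, n), rng.normal(0, 1, n)
word_reading = rng.normal(0, 1, n)
ortho, spelling, handwriting = (rng.normal(0, 1, n) for _ in range(3))
composition = (0.2 * iq + 0.3 * word_reading + 0.25 * ortho
               + 0.3 * spelling + 0.2 * handwriting + rng.normal(0, 1, n))

def r_squared(predictors):
    """R-squared of an OLS regression of written composition on the given block."""
    X = sm.add_constant(np.column_stack(predictors))
    return sm.OLS(composition, X).fit().rsquared

# Step 1: control variables; Step 2: add the transcription-related skills.
r2_controls = r_squared([age, iq, word_reading])
r2_full = r_squared([age, iq, word_reading, ortho, spelling, handwriting])
print(f"R2 change attributable to transcription skills: {r2_full - r2_controls:.3f}")
```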
Procedia PDF Downloads 425