Search results for: hybrid forecasting models
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8303

7313 An Integrated Approach to the Carbonate Reservoir Modeling: Case Study of the Eastern Siberia Field

Authors: Yana Snegireva

Abstract:

Carbonate reservoirs are known for their heterogeneity, resulting from geological processes such as diagenesis and fracturing. These complexities pose great challenges in understanding fluid flow behavior and predicting the production performance of naturally fractured reservoirs. The investigation of carbonate reservoirs is crucial, as many petroleum reservoirs, which account for an important share of global petroleum reserves, are naturally fractured, and the complexity of their fracture networks leads to significant geological uncertainty. The key challenges in carbonate reservoir modeling include the accurate representation of fractures and their connectivity, as well as capturing the impact of fractures on fluid flow and production. Traditional reservoir modeling techniques often oversimplify fracture networks, leading to inaccurate predictions. Therefore, there is a need for a modern approach that can capture the complexities of carbonate reservoirs and provide reliable predictions for effective reservoir management and production optimization. The modern approach to carbonate reservoir modeling involves a hybrid fracture modeling approach, combining the discrete fracture network (DFN) method and an implicit fracture network, which offers enhanced accuracy and reliability in characterizing complex fracture systems within these reservoirs. This study focuses on the application of the hybrid method in the Nepsko-Botuobinskaya anticline of the Eastern Siberia field, aiming to demonstrate the suitability of this method for these geological conditions. The DFN method is adopted to model the fracture network within the carbonate reservoir. This method treats fractures as discrete entities, capturing their geometry, orientation, and connectivity. However, the method has a significant disadvantage: the number of fractures in the field can be very high, and owing to limitations in the amount of main memory, it is very difficult to represent all of them explicitly. By integrating data from image logs (formation micro-imager), core data, and fracture density logs, a DFN model can be constructed to represent the characteristics of the hydraulically relevant fractures. The results obtained from the DFN modeling approach provide valuable insights into the behavior of the East Siberia field's carbonate reservoir. The DFN model accurately captures the fracture system, allowing for a better understanding of fluid flow pathways, connectivity, and potential production zones. The analysis of simulation results enables the identification of zones of increased fracturing and optimization opportunities for reservoir development, with the potential application of enhanced oil recovery techniques, which were considered in further simulations on dual-porosity and dual-permeability models. This approach considers fractures as separate, interconnected flow paths within the reservoir matrix, allowing for the characterization of dual-porosity media. The case study of the East Siberia field demonstrates the effectiveness of the hybrid modeling method in accurately representing fracture systems and predicting reservoir behavior. The findings from this study contribute to improved reservoir management and production optimization in carbonate reservoirs with the use of enhanced and improved oil recovery methods.
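
The DFN idea described above, treating each fracture as a discrete object with its own geometry, orientation, and connectivity, can be illustrated with a minimal 2D toy sketch. The snippet below generates random fracture traces and checks their intersections to count connected clusters; it is only an illustration under assumed parameters, not the workflow or data used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_dfn(n_frac=200, domain=100.0, mean_len=12.0):
    """Toy 2D DFN: each fracture is a line segment with random
    center, orientation and (exponentially distributed) trace length."""
    centers = rng.uniform(0, domain, size=(n_frac, 2))
    angles = rng.uniform(0, np.pi, size=n_frac)            # orientation
    lengths = rng.exponential(mean_len, size=n_frac)       # trace length
    d = 0.5 * lengths[:, None] * np.c_[np.cos(angles), np.sin(angles)]
    return centers - d, centers + d                        # segment endpoints

def segments_intersect(p1, p2, q1, q2):
    """Standard orientation test for 2D segment intersection."""
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    d1, d2 = cross(q1, q2, p1), cross(q1, q2, p2)
    d3, d4 = cross(p1, p2, q1), cross(p1, p2, q2)
    return (d1 * d2 < 0) and (d3 * d4 < 0)

a, b = generate_dfn()
n = len(a)
parent = list(range(n))                                    # union-find for connectivity
def find(i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i

for i in range(n):
    for j in range(i + 1, n):
        if segments_intersect(a[i], b[i], a[j], b[j]):
            parent[find(i)] = find(j)                      # connect intersecting fractures

clusters = len({find(i) for i in range(n)})
print(f"{n} fractures form {clusters} connected clusters")
```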

Keywords: carbonate reservoir, discrete fracture network, fracture modeling, dual porosity, enhanced oil recovery, implicit fracture model, hybrid fracture model

Procedia PDF Downloads 61
7312 Survey Research Assessment for Renewable Energy Integration into the Mining Industry

Authors: Kateryna Zharan, Jan C. Bongaerts

Abstract:

Mining operations are energy intensive, and the share of energy costs in total costs is often quoted in the range of 40%. Saving on energy costs is, therefore, a key concern for any mine operator. With the improving reliability and security of renewable energy (RE) sources, and with requirements to reduce carbon dioxide emissions, perspectives for using RE in mining operations emerge. These aspects are stimulating mining companies to search for ways to substitute fossil energy with RE. The main purpose of this study is therefore to present a survey-based assessment of the key issues related to the integration of RE into mining activities and of the feasibility of RE in mining operations, based on the opinions of mining and renewable energy experts. The survey research was developed as follows: first, the mining and renewable energy experts were chosen based on specific criteria. Second, they were offered a questionnaire to gather their knowledge and opinions on incentives for mining operators to turn to RE, barriers and challenges to be expected, environmental effects, appropriate business models, and the overall impact of RE on mining operations. The outcomes of the survey allow for the identification of factors which favor and disfavor decision-making on the use of RE in mining operations. The paper concludes with a set of recommendations for further study. One of them relates to a deeper analysis of the benefits for mining operators when using RE, and another suggests that appropriate business models considering economic and environmental issues need to be studied and developed. The results of the paper will be used for developing a hybrid optimized model which might be adopted at mines according to their operation processes as well as economic and environmental perspectives.

Keywords: carbon dioxide emissions, mining industry, photovoltaic, renewable energy, survey research, wind generation

Procedia PDF Downloads 341
7311 Probing Language Models for Multiple Linguistic Information

Authors: Bowen Ding, Yihao Kuang

Abstract:

In recent years, large-scale pre-trained language models have achieved state-of-the-art performance on a variety of natural language processing tasks. The word vectors produced by these language models can be viewed as dense encoded representations of natural language in text form. However, it is unclear how much linguistic information is encoded in them, and how. In this paper, we construct several probing tasks for multiple types of linguistic information to clarify the encoding capabilities of different language models, and we present the results visually. We first obtain word representations in vector form from different language models, including BERT, ELMo, RoBERTa and GPT. Classifiers with a small number of parameters, as well as unsupervised tasks, are then applied to these word vectors to assess their capacity to encode the corresponding linguistic information. The constructed probing tasks cover both semantic and syntactic aspects. The semantic aspect includes the ability of the model to understand semantic entities such as numbers, time, and characters, and the syntactic aspect includes the ability of the language model to understand grammatical structures such as dependency relations and reference (coreference) relations. We also compare the encoding capabilities of different layers in the same language model to infer how linguistic information is encoded within the model.
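
As a concrete illustration of the probing setup described above, the sketch below extracts per-layer sentence vectors from a pre-trained BERT model (via the Hugging Face transformers library) and trains a small logistic-regression probe on them. The toy "contains a number" task, the choice of BERT, and the mean pooling are assumptions for illustration only, not the exact protocol of the paper.

```python
import numpy as np
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def sentence_vectors(sentences, layer):
    """Mean-pooled hidden states of a chosen layer, one vector per sentence."""
    feats = []
    with torch.no_grad():
        for s in sentences:
            enc = tokenizer(s, return_tensors="pt")
            out = model(**enc, output_hidden_states=True)
            h = out.hidden_states[layer]            # (1, seq_len, hidden)
            feats.append(h.mean(dim=1).squeeze(0).numpy())
    return np.vstack(feats)

# Toy semantic probing task (assumption): does the sentence contain a number?
sentences = ["She bought 3 apples", "The meeting is at noon",
             "He ran 10 kilometres", "They walked to the park"] * 10
labels = np.array([1, 0, 1, 0] * 10)

for layer in (1, 6, 12):                            # compare shallow vs. deep layers
    X = sentence_vectors(sentences, layer)
    probe = LogisticRegression(max_iter=1000)       # small, linear probe
    acc = cross_val_score(probe, X, labels, cv=5).mean()
    print(f"layer {layer:2d}: probe accuracy = {acc:.2f}")
```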

Keywords: language models, probing task, text representation, linguistic information

Procedia PDF Downloads 86
7310 Application Difference between Cox and Logistic Regression Models

Authors: Idrissa Kayijuka

Abstract:

Logistic regression and Cox regression (the proportional hazards model) are currently employed in the analysis of prospective epidemiologic research into risk factors for chronic diseases, and the theoretical relationship between the two models has been studied. By definition, the Cox regression model, also called the Cox proportional hazards model, is a procedure used to model data on the time leading up to an event when censored cases exist, whereas the logistic regression model is mostly applicable in cases where the independent variables consist of numerical as well as nominal values and the outcome variable is binary (dichotomous). Arguments and findings of many researchers have focused on the overview of the Cox and logistic regression models and their applications in different areas. In this work, the analysis is done on secondary data, taken from an SPSS exercise dataset on breast cancer with a sample size of 1121 women, where the main objective is to show the difference in application between the Cox regression model and the logistic regression model based on factors that cause women to die of breast cancer. Some of the analysis (e.g., on lymph node status) was done manually, and SPSS software was used to analyze the rest of the data. This study found that there is a difference in application between the Cox and logistic regression models: the Cox regression model is used if one wishes to analyze data that also include the follow-up time, whereas the logistic regression model analyzes data without follow-up time. They also have different measures of association: the hazard ratio for the Cox model and the odds ratio for the logistic regression model. A similarity between the two models is that both are applicable to the prediction of the outcome of a categorical variable, i.e., a variable that can take only a restricted number of categories. In conclusion, the Cox regression model differs from logistic regression by assessing a rate instead of a proportion. Both models can be applied in many other studies since they are suitable methods for analyzing data, but the Cox regression model is the more recommended of the two.
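
The application difference described above can be made concrete with a short sketch: the Cox model is fitted on (time, event) data with censoring, while the logistic model uses only the binary event indicator. The snippet below uses the lifelines and scikit-learn libraries on synthetic data as an assumption-laden illustration, not the study's SPSS breast-cancer dataset.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 500
age = rng.normal(55, 10, n)
nodes = rng.poisson(3, n)                       # hypothetical lymph-node count
# Synthetic survival times influenced by both covariates (assumption).
hazard = np.exp(0.03 * (age - 55) + 0.15 * nodes)
time = rng.exponential(60 / hazard)             # follow-up time in months
event = (time < 60).astype(int)                 # death observed within 60 months
time = np.minimum(time, 60)                     # administrative censoring at 60 months

df = pd.DataFrame({"age": age, "nodes": nodes, "time": time, "event": event})

# Cox regression: uses follow-up time AND censoring; reports hazard ratios.
cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
print(np.exp(cph.params_))                      # hazard ratios per covariate

# Logistic regression: ignores follow-up time; reports odds ratios.
logit = LogisticRegression(max_iter=1000).fit(df[["age", "nodes"]], df["event"])
print(np.exp(logit.coef_))                      # odds ratios per covariate
```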

Keywords: logistic regression model, Cox regression model, survival analysis, hazard ratio

Procedia PDF Downloads 435
7309 Comparison of Wake Oscillator Models to Predict Vortex-Induced Vibration of Tall Chimneys

Authors: Saba Rahman, Arvind K. Jain, S. D. Bharti, T. K. Datta

Abstract:

The present study compares semi-empirical wake-oscillator models used to predict the vortex-induced vibration of structures, including the models proposed by Facchinetti, by Farshidian and Dolatabadi, and by Skop and Griffin. These models combine a wake oscillator, resembling the Van der Pol oscillator, with a single-degree-of-freedom structural oscillator. In order to use these models for estimating the top displacement of chimneys, only the first vibration mode of the chimney is considered; the modal equation of the chimney constitutes the single-degree-of-freedom (SDOF) model. The equations of the wake oscillator and the SDOF model are solved simultaneously using an iterative procedure. The empirical parameters used in the wake-oscillator models are estimated using a newly developed approach, and the response is compared with experimental data, with which it was found to be comparable. The iterative solution is carried out using the ODE solver of MATLAB. For the comparative study, a tall concrete chimney of height 210 m has been chosen, with a base diameter of 28 m, a top diameter of 20 m, and a thickness of 0.3 m. The responses of the chimney are also determined using the linear model proposed by E. Simiu and the deterministic model given in the Eurocode. It is observed from the comparative study that the responses predicted by the Facchinetti model and by the Skop and Griffin model are nearly the same, while the Farshidian and Dolatabadi model predicts a higher response. The linear model, which does not consider the aero-elastic phenomenon, gives a lower response than the non-linear models. Further, for large damping, the response predicted by the Eurocode compares relatively well with those of the non-linear models.
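
A minimal sketch of the coupled system described above is given below: a Facchinetti-style Van der Pol wake oscillator is driven by, and in turn forces, a modal SDOF equation, integrated here with SciPy instead of MATLAB's ODE solver. The numerical coefficients are generic illustrative values, not the calibrated parameters of the study.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (assumptions, not the calibrated values of the study)
zeta  = 0.01          # structural damping ratio
omg_s = 1.0           # structural natural frequency (rad/s)
omg_f = 1.0           # vortex-shedding frequency at lock-in (rad/s)
eps   = 0.3           # Van der Pol parameter of the wake oscillator
A, B  = 12.0, 0.002   # acceleration-coupling and force-scaling coefficients

def rhs(t, z):
    """z = [y, y', q, q'] : modal SDOF structure + Van der Pol wake variable."""
    y, yd, q, qd = z
    ydd = -2 * zeta * omg_s * yd - omg_s**2 * y + B * q            # structure forced by wake
    qdd = -eps * omg_f * (q**2 - 1) * qd - omg_f**2 * q + A * ydd  # wake forced by structure
    return [yd, ydd, qd, qdd]

sol = solve_ivp(rhs, (0, 500), [0.0, 0.0, 0.1, 0.0], max_step=0.05)
y = sol.y[0]
print("steady-state displacement amplitude ≈", np.abs(y[len(y) // 2:]).max())
```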

Keywords: chimney, deterministic model, van der pol, vortex-induced vibration

Procedia PDF Downloads 205
7308 Hybrid Quasi-Steady Thermal Lattice Boltzmann Model for Studying the Behavior of Oil in Water Emulsions Used in Machining Tool Cooling and Lubrication

Authors: W. Hasan, H. Farhat, A. Alhilo, L. Tamimi

Abstract:

Oil-in-water (O/W) emulsions are utilized extensively for cooling and lubricating cutting tools during parts machining. A robust lattice Boltzmann (LBM) thermal-surfactant model, which provides a useful platform for exploring the characteristics of complex emulsions under a variety of flow conditions, is used here to study the fluid behavior during conventional tool cooling. The transient thermal capabilities of the model are employed for simulating the effects of the flow conditions of O/W emulsions on the cooling of cutting tools. The model results show that the temperature outcome is only slightly affected by reversing the direction of the upper plate (workpiece). On the other hand, an important increase in effective viscosity is seen, which supports better lubrication during the work.

Keywords: hybrid lattice Boltzmann method, Gunstensen model, thermal, surfactant-covered droplet, Marangoni stress

Procedia PDF Downloads 288
7307 A False Introduction: Teaching in a Pandemic

Authors: Robert Michael, Kayla Tobin, William Foster, Rachel Fairchild

Abstract:

The COVID-19 pandemic has caused significant disruptions in education, particularly in the teaching of health and physical education (HPE). This study examined a cohort of teachers who experienced being preservice and then first-year teachers during various stages of the pandemic. Qualitative data were collected through a series of structured interviews with six teachers from different schools in the Eastern U.S. Thematic analysis was employed to analyze the data. The pandemic significantly impacted the way HPE was taught as schools shifted to virtual and hybrid models. Findings revealed five major themes: (a) You want me to teach HOW?, (b) PE without equipment and six feet apart, (c) Behind the Scenes, (d) They’re back…I became a behavior management guru, and (e) The Pandemic Crater. Overall, this study highlights the significant challenges faced by preservice and first-year teachers in teaching physical education during the pandemic and underscores the need for ongoing support and resources to help them adapt and succeed in these challenging circumstances.

Keywords: teacher education, preservice teachers, first year teachers, health and physical education

Procedia PDF Downloads 160
7306 Automatic Classification of Lung Diseases from CT Images

Authors: Abobaker Mohammed Qasem Farhan, Shangming Yang, Mohammed Al-Nehari

Abstract:

Pneumonia is a lung disease that creates congestion in the chest, and severe congestion can lead to loss of life. Pneumonic lung disease is caused by viral pneumonia, bacterial pneumonia, or COVID-19-induced pneumonia. The early prediction and classification of such lung diseases help to reduce the mortality rate. We propose an automatic Computer-Aided Diagnosis (CAD) system in this paper using a deep learning approach. The proposed CAD system takes raw computerized tomography (CT) scans of the patient's chest as input and automatically predicts the disease class. We designed a Hybrid Deep Learning Algorithm (HDLA) to improve accuracy and reduce processing requirements. The raw CT scans are first pre-processed to enhance their quality for further analysis. We then apply a hybrid model that consists of automatic feature extraction and classification. We propose a robust 2D Convolutional Neural Network (CNN) model to extract features automatically from the pre-processed CT images. This CNN model provides effective feature learning, yielding a 1D feature vector for each input CT image. The output of the 2D CNN model is then normalized using the Min-Max technique. The second step of the proposed hybrid model concerns training and classification using different classifiers. Simulation outcomes on a publicly available dataset demonstrate the robustness and efficiency of the proposed model compared to state-of-the-art algorithms.
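
The hybrid pipeline described above, CNN feature extraction followed by Min-Max normalization and a separate classifier, can be sketched as follows. The network architecture, image size, random placeholder data, and the choice of an SVM back-end are illustrative assumptions, not the exact HDLA configuration.

```python
import numpy as np
import tensorflow as tf
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Placeholder arrays standing in for pre-processed CT slices and labels (assumption).
X = np.random.rand(200, 64, 64, 1).astype("float32")
y = np.random.randint(0, 3, 200)            # e.g. viral / bacterial / COVID-19 classes

# 2D CNN used as a feature extractor: its output is a 1D vector per image.
# (In practice the CNN would first be trained or fine-tuned on labelled scans.)
cnn = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 1)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.GlobalAveragePooling2D(),   # -> 1D feature vector of length 32
])
features = cnn.predict(X, verbose=0)

# Min-Max normalization of the CNN features, then a separate classifier.
features = MinMaxScaler().fit_transform(features)
Xtr, Xte, ytr, yte = train_test_split(features, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf").fit(Xtr, ytr)
print("hold-out accuracy:", clf.score(Xte, yte))
```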

Keywords: CT scan, Covid-19, deep learning, image processing, lung disease classification

Procedia PDF Downloads 132
7305 Analysis of Moving Loads on Bridges Using Surrogate Models

Authors: Susmita Panda, Arnab Banerjee, Ajinkya Baxy, Bappaditya Manna

Abstract:

The design of short- to medium-span high-speed bridges in critical locations is an essential aspect of vehicle-bridge interaction. Due to the dynamic interaction between the moving load and the bridge, mathematical models or finite element computations become time-consuming. Thus, to reduce the computational effort, a universal approximator based on an artificial neural network (ANN) has been used to evaluate the dynamic response of the bridge. The data set generation and the training of the surrogate models were conducted on the results obtained from mathematical modeling. Further, the robustness of the surrogate model was investigated; it showed an error of less than 10% relative to conventional methods. Additionally, the dependency of the dynamic response of the bridge on various load and bridge parameters has been highlighted through a parametric study.
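
A minimal version of such a surrogate is sketched below: an MLP is trained to map load and bridge parameters to a response quantity pre-computed by a reference model (here a synthetic stand-in, an assumption), and its error against that reference is reported.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000

# Input parameters: vehicle speed, axle mass, span length, flexural rigidity (assumed ranges).
X = np.column_stack([
    rng.uniform(50, 350, n),      # speed [km/h]
    rng.uniform(10, 40, n),       # axle mass [t]
    rng.uniform(10, 40, n),       # span [m]
    rng.uniform(1e9, 5e9, n),     # EI [N·m²]
])

# Stand-in for the expensive vehicle-bridge interaction model (assumption).
y = (X[:, 0] * X[:, 1]) / (X[:, 2] * np.sqrt(X[:, 3])) * 1e3

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0)
surrogate = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0),
)
surrogate.fit(Xtr, ytr)

pred = surrogate.predict(Xte)
rel_err = np.abs(pred - yte) / np.abs(yte)
print(f"median relative error vs. reference model: {np.median(rel_err):.2%}")
```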

Keywords: artificial neural network, mode superposition method, moving load analysis, surrogate models

Procedia PDF Downloads 85
7304 A Hybrid Distributed Algorithm for Solving Job Shop Scheduling Problem

Authors: Aydin Teymourifar, Gurkan Ozturk

Abstract:

In this paper, a distributed hybrid algorithm is proposed for solving the job shop scheduling problem. The suggested method executes different artificial neural networks, heuristics and meta-heuristics simultaneously on more than one machine. The neural networks are used to handle the constraints of the problem, the meta-heuristics search the global space, and the heuristics are used to prevent premature convergence. To obtain an efficient distributed intelligent method for solving large, distributed job shop scheduling problems, the Apache Spark and Hadoop frameworks are used. New approaches are applied in the design and implementation of the algorithm. Comparison between the proposed algorithm and other efficient algorithms from the literature shows its efficiency: it is able to solve large problem instances in a short time.

Keywords: distributed algorithms, Apache Spark, Hadoop, job shop scheduling, neural network

Procedia PDF Downloads 371
7303 Fire Resistance of High Alumina Cement and Slag Based Ultra High Performance Fibre-Reinforced Cementitious Composites

Authors: A. Q. Sobia, M. S. Hamidah, I. Azmi, S. F. A. Rafeeqi

Abstract:

Fibre-reinforced polymer (FRP) strengthened reinforced concrete (RC) structures are susceptible to intense deterioration when exposed to elevated temperatures, particularly in the event of fire. FRP tends to lose bond with the substrate due to the low glass transition temperature of epoxy, the key component of the FRP matrix. In the past few decades, various types of high performance cementitious composites (HPCC) were explored for the protection of RC structural members against elevated temperature. However, there is inadequate information on the influence of elevated temperature on ultra high performance fibre-reinforced cementitious composites (UHPFRCC) containing ground granulated blast furnace slag (GGBS) as a replacement for high alumina cement (HAC) in conjunction with hybrid fibres (basalt and polypropylene fibres), which could be a prospective fire-resisting material for structural components. The influence of elevated temperatures on the compressive and flexural strength of UHPFRCC made of HAC-GGBS and hybrid fibres was examined in this study. Besides the control sample (without fibres), three other samples, containing 0.5%, 1% and 1.5% of basalt fibres by total weight of mix together with 1 kg/m3 of polypropylene fibres, were prepared and tested. Another mix was also prepared with only 1 kg/m3 of polypropylene fibres. Specimens of each mix were kept at ambient temperature as well as exposed to 400, 700 and 1000 °C, followed by testing after 28 and 56 days of conventional curing. The results showed that the use of hybrid fibres significantly helped to improve the ambient-temperature compressive and flexural strength of UHPFRCC, which reached 80 and 14.3 MPa, respectively. However, the optimum residual compressive strength, equal after both curing periods (28 and 56 days), was achieved by UHPFRCC-CP (with polypropylene fibres only), at 41%. In addition, the highest residual flexural strength after 28 and 56 days of curing was achieved by UHPFRCC-CP and UHPFRCC-CB2 (1 kg/m3 of PP fibres + 1% of basalt fibres), at 39% and 48.5%, respectively.

Keywords: fibre reinforced polymer materials (FRP), ground granulated blast furnace slag (GGBS), high-alumina cement, hybrid fibres

Procedia PDF Downloads 273
7302 The Role of Information Technology in Supply Chain Management

Authors: V. Jagadeesh, K. Venkata Subbaiah, P. Govinda Rao

Abstract:

This paper explains the significance of information technology tools and software packages in supply chain management (SCM) for managing the entire supply chain. The aim is to manage material flows, financial flows and information flows effectively and efficiently with the aid of information technology tools and packages, in order to deliver the right quantity and quality of goods at the right time, using the right methods and technology. Information technology plays a vital role in streamlining sales forecasting, demand planning, inventory control and transportation in supply networks, and finally in production planning and scheduling. It achieves these objectives by streamlining business processes and by integration within the enterprise and its extended enterprise. SCM starts with the customer and involves a sequence of activities from customer, retailer, distributor, manufacturer and supplier within the supply chain framework. It is the process of integrating demand planning, supply network planning, and production planning and control. Forecasting indicates the direction for planning raw materials in order to meet the production planning requirements. Inventory control and transportation planning allocate the optimal or economic order quantity and utilize the shortest possible routes to deliver the goods to the customer. Production planning and control utilize the optimal resource mix in order to meet capacity requirements planning. The above operations can be achieved by using appropriate information technology tools and software packages for supply chain management.
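
The economic order quantity (EOQ) mentioned above has a standard closed form, EOQ = sqrt(2DS/H), where D is annual demand, S the ordering cost per order and H the annual holding cost per unit. The sketch below evaluates it with purely illustrative figures.

```python
from math import sqrt

def economic_order_quantity(annual_demand, order_cost, holding_cost_per_unit):
    """Classic EOQ formula: order size that minimises ordering + holding costs."""
    return sqrt(2 * annual_demand * order_cost / holding_cost_per_unit)

# Illustrative figures (assumptions): 12,000 units/year, $75 per order, $3 holding cost/unit/year
eoq = economic_order_quantity(12_000, 75.0, 3.0)
orders_per_year = 12_000 / eoq
print(f"EOQ ≈ {eoq:.0f} units, placed ≈ {orders_per_year:.1f} times per year")
```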

Keywords: supply chain management, information technology, business process, extended enterprise

Procedia PDF Downloads 361
7301 Applying Multiplicative Weight Update to Skin Cancer Classifiers

Authors: Animish Jain

Abstract:

This study deals with using the Multiplicative Weight Update method within artificial intelligence and machine learning to create models that can diagnose skin cancer from microscopic images of cancer samples. The multiplicative weight update method is used to combine the predictions of multiple models in an attempt to obtain more accurate results. Logistic Regression, Convolutional Neural Network (CNN), and Support Vector Machine Classifier (SVMC) models are employed within the Multiplicative Weight Update system. These models are trained on pictures of skin cancer from the ISIC-Archive to learn patterns that allow unseen scans to be labelled as either benign or malignant. The models are then combined by a multiplicative weight update algorithm, which weights each model according to the accuracy of its successive predictions; the weighted predictions are then analyzed together to obtain the final prediction. The research hypothesis for this study stated that there would be a significant difference in accuracy between the three models and the Multiplicative Weight Update system. The SVMC model had an accuracy of 77.88%, the CNN model 85.30%, and the Logistic Regression model 79.09%, while the Multiplicative Weight Update algorithm reached an accuracy of 72.27%. The conclusion drawn was that there is indeed a significant difference in accuracy between the three models and the Multiplicative Weight Update system, and that a CNN model would be a better option for this problem than a Multiplicative Weight Update system. This may be because Multiplicative Weight Update is less effective in a binary setting with only two possible classifications. In a categorical setting with multiple classes and groupings, a Multiplicative Weight Update system might become more proficient, as it takes into account the strengths of multiple different models to classify images into multiple categories rather than only two. This experimentation and computer science project can help to create better algorithms and models for the future of artificial intelligence in the medical imaging field.
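
A minimal sketch of the multiplicative-weights ensemble described above is shown below: each expert (classifier) keeps a weight that is multiplied down whenever it errs, and the ensemble predicts by weighted vote. The learning rate and the simulated expert predictions are assumptions for illustration.

```python
import numpy as np

def mwu_ensemble(expert_preds, labels, eta=0.3):
    """Multiplicative Weight Update over a set of experts (classifiers).

    expert_preds: array (n_experts, n_samples) of 0/1 predictions
    labels:       array (n_samples,) of true 0/1 labels
    Returns per-sample ensemble predictions and the final normalized weights.
    """
    n_experts, n_samples = expert_preds.shape
    w = np.ones(n_experts)
    ensemble = np.zeros(n_samples, dtype=int)
    for t in range(n_samples):
        votes = expert_preds[:, t]
        ensemble[t] = int(np.dot(w, votes) >= w.sum() / 2)   # weighted majority vote
        wrong = votes != labels[t]
        w[wrong] *= (1 - eta)                                # penalise mistaken experts
    return ensemble, w / w.sum()

# Toy run with three simulated experts of different accuracies (assumption).
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 300)
experts = np.vstack([
    np.where(rng.random(300) < acc, y, 1 - y)   # expert agrees with truth w.p. acc
    for acc in (0.78, 0.85, 0.79)
])
pred, weights = mwu_ensemble(experts, y)
print("ensemble accuracy:", (pred == y).mean(), "final weights:", weights.round(2))
```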

Keywords: artificial intelligence, machine learning, multiplicative weight update, skin cancer

Procedia PDF Downloads 58
7300 Preparation of Wireless Networks and Security; Challenges in Efficient Accession of Encrypted Data in Healthcare

Authors: M. Zayoud, S. Oueida, S. Ionescu, P. AbiChar

Abstract:

Background: Wireless sensor networks encompass diverse information technology tools and are widely applied in a range of domains, including military surveillance, weather forecasting, and earthquake forecasting. Although the foundations of wireless sensor networks are continually being strengthened, security issues usually emerge during professional application. Thus, essential technological tools need to be assessed for the secure aggregation of data. Moreover, such practices have to be incorporated into healthcare practice, serving the best mutual interest. Objective: Aggregation of encrypted data has been assessed through a homomorphic stream cipher in order to assure its effectiveness and to provide optimal solutions to the field of healthcare. Methods: An experimental design has been used, which utilized a newly developed cipher on CPU-constrained devices. Modular additions have also been employed to evaluate the nature of the aggregated data. The processes of the homomorphic stream cipher have been examined through different sensors and modular additions. Results: The homomorphic stream cipher has been recognized as a simple and secure process, which allows efficient aggregation of encrypted data. In addition, the application has led to improvements in healthcare practice. Statistical values can be easily computed through aggregation on the basis of the selected cipher; statistics of the sensed data such as the variance, mean, and standard deviation have also been computed with the selected tool. Conclusion: It can be concluded that the homomorphic stream cipher can be an ideal tool for appropriate aggregation of data and can also provide good solutions to the healthcare sector.
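
The kind of additively homomorphic stream cipher with modular additions described above can be sketched as follows: each sensor masks its reading with a keystream value modulo M, the masked values are summed in the network, and the sink removes the summed keystream to recover the aggregate, from which the mean follows (and the variance, if the encrypted squares are aggregated as well). This is a generic textbook-style construction for illustration, not necessarily the exact cipher of the paper.

```python
import hashlib

M = 2**32                      # modulus large enough to hold the aggregate

def keystream(key: bytes, epoch: int) -> int:
    """Per-epoch pseudo-random pad derived from the sensor's key (illustrative PRF)."""
    return int.from_bytes(hashlib.sha256(key + epoch.to_bytes(8, "big")).digest()[:4], "big")

def encrypt(reading: int, key: bytes, epoch: int) -> int:
    return (reading + keystream(key, epoch)) % M      # additive masking

def aggregate(ciphertexts):
    return sum(ciphertexts) % M                       # homomorphic: sum of ciphertexts

def decrypt_sum(agg: int, keys, epoch: int) -> int:
    pads = sum(keystream(k, epoch) for k in keys) % M
    return (agg - pads) % M                           # sink removes the summed pads

# Toy network of sensors reporting, e.g., heart-rate readings (assumption).
keys = [bytes([i]) * 16 for i in range(5)]
readings = [72, 68, 75, 80, 71]
epoch = 1
cts = [encrypt(r, k, epoch) for r, k in zip(readings, keys)]

total = decrypt_sum(aggregate(cts), keys, epoch)
mean = total / len(readings)
print("recovered sum:", total, "mean:", mean)         # matches sum(readings)
```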

Keywords: aggregation, cipher, homomorphic stream, encryption

Procedia PDF Downloads 242
7299 Chemometric Estimation of Inhibitory Activity of Benzimidazole Derivatives by Linear Least Squares and Artificial Neural Networks Modelling

Authors: Sanja O. Podunavac-Kuzmanović, Strahinja Z. Kovačević, Lidija R. Jevrić, Stela Jokić

Abstract:

The subject of this paper is to correlate the antibacterial behavior of benzimidazole derivatives with their molecular characteristics using a chemometric QSAR (Quantitative Structure-Activity Relationships) approach. QSAR analysis has been carried out on the inhibitory activity of benzimidazole derivatives against Staphylococcus aureus. The data were processed by linear least squares (LLS) and artificial neural network (ANN) procedures. The LLS mathematical models have been developed as calibration models for the prediction of the inhibitory activity. The quality of the models was validated by the leave-one-out (LOO) technique and by using an external data set. High agreement between experimental and predicted inhibitory activities indicated the good quality of the derived models. These results are part of the CMST COST Action No. CM1306 "Understanding Movement and Mechanism in Molecular Machines".
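
A compact sketch of the LLS calibration-and-validation scheme described above is given below, with synthetic descriptor data standing in for the benzimidazole molecular descriptors (an assumption); it fits a linear least-squares model and validates it by leave-one-out cross-validation.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

# Synthetic molecular descriptors (e.g. logP, molar refractivity, ...) and activities.
n_compounds, n_descriptors = 30, 3
X = rng.normal(size=(n_compounds, n_descriptors))
true_coef = np.array([0.8, -0.5, 0.3])
y = X @ true_coef + rng.normal(scale=0.1, size=n_compounds)   # activity-like response

# Calibration model: ordinary linear least squares.
lls = LinearRegression().fit(X, y)
print("calibration R^2:", round(lls.score(X, y), 3))

# Leave-one-out validation: each compound is predicted by a model trained without it.
y_loo = cross_val_predict(LinearRegression(), X, y, cv=LeaveOneOut())
print("LOO Q^2:", round(r2_score(y, y_loo), 3))
```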

Keywords: antibacterial, benzimidazoles, chemometric, QSAR

Procedia PDF Downloads 301
7298 Surface Modified Core–Shell Type Lipid–Polymer Hybrid Nanoparticles of Trans-Resveratrol, an Anticancer Agent, for Long Circulation and Improved Efficacy against MCF-7 Cells

Authors: M. R. Vijayakumar, K. Priyanka, Ramoji Kosuru, Lakshmi, Sanjay Singh

Abstract:

Trans-resveratrol (RES) is a non-flavonoid polyphenolic compound with proven therapeutic and preventive effects against various types of cancer. However, the practical application of RES in cancer treatment is limited because of its high dose requirement (up to 7.5 g/day in humans), low biological half-life, rapid metabolism and fast elimination in mammals. PEGylated core-shell type lipid-polymer hybrid nanoparticles are novel drug delivery systems designed for long circulation and an improved anticancer effect of their therapeutic payloads. Therefore, the main objective of this study is to extend the biological half-life (long circulation) and improve the therapeutic efficacy of RES through core-shell type nanoparticles. D-α-tocopheryl polyethylene glycol 1000 succinate (vitamin E TPGS), a novel surfactant, is applied for the preparation of the PEGylated lipid-polymer hybrid nanoparticles. The prepared nanoparticles were evaluated by various state-of-the-art techniques: dynamic light scattering (DLS) for particle size and zeta potential, TEM for shape, differential scanning calorimetry (DSC) for interaction analysis, and XRD for crystalline changes of the drug. Entrapment efficiency and in vitro drug release were determined by the ultracentrifugation method and the dialysis bag method, respectively. Cancer cell viability studies were performed by MTT assay. Pharmacokinetic studies after i.v. administration were performed in Sprague Dawley rats. The prepared NPs were found to be spherical in shape with smooth surfaces. The particle size and zeta potential of the prepared NPs were found to be in the range of 179.2±7.45 to 266.8±9.61 nm and -0.63 to -48.35 mV, respectively. DSC revealed the absence of potential interactions. XRD revealed the presence of the amorphous form in the nanoparticles. The entrapment efficiency was found to be 83.7%, and drug release occurred in a controlled manner. The MTT assay showed a low MEC, and pharmacokinetic studies showed a higher AUC for the nanoformulation than for the pristine drug. All these studies revealed that RES-loaded, PEG-modified core-shell type lipid-polymer hybrid nanoparticles can be an alternative tool for the chemopreventive and therapeutic application of RES in cancer.

Keywords: trans resveratrol, cancer nanotechnology, long circulating nanoparticles, bioavailability enhancement, core shell nanoparticles, lipid polymer hybrid nanoparticles

Procedia PDF Downloads 455
7297 Seismic Analysis of Vertical Expansion Hybrid Structure by Response Spectrum Method Concern with Disaster Management and Solving the Problems of Urbanization

Authors: Gautam, Gurcharan Singh, Mandeep Kaur, Yogesh Aggarwal, Sanjeev Naval

Abstract:

The present state of human suffering reflects the consequences of wrong decisions and irresponsibility in shaping civilization throughout history. A strong, positive will to take the right responsibilities builds the right structure of civilization, which affects both itself and the whole world. Present suffering reflects the failure of past decisions in shaping society: the unplanned structure of Indian settlements, together with rapid population growth, makes it difficult to face all kinds of problems. India still suffers from disasters such as earthquakes, floods, droughts and tsunamis, and countless lives have been lost to them. In this research paper, our focus is on a disaster-resistant structure that addresses densely populated urban areas through a high vertical-expansion hybrid structure. A reinforced concrete hybrid structure was analyzed for different seismic zones; the concrete frames were analyzed using the response spectrum method to calculate and compare seismic displacements and drifts. Seismic analysis by this method is generally based on the dynamic analysis of the building. The analysis results show that the reinforced concrete building in seismic Zone V has the maximum peak storey shear, base shear, drift and node displacement compared to the analytical results for the reinforced concrete building in seismic Zones III and IV. These results indicate that structural drawings must be followed strictly at the construction site in order to realize such a hybrid structure. The case study deals with a 10-storey vertical-expansion hybrid frame structure, with a total height of 30 m, located in Zones III, IV and V, with columns of 0.45 m x 0.36 m and beams of 0.6 m x 0.36 m; to make the structure more stable, bracing techniques such as mega bracing and V-shaped bracing shall be applied. If such efforts and structural drawings had been followed by builders and contractors, lives could have been saved during the earthquake disaster at Bhuj (Gujarat State, India) on 26 January 2001, which resulted in more than 19,000 deaths. This kind of disaster-resistant structure has the capability to solve the problems of densely populated city areas through the utilization of area in a vertical-expansion hybrid structure. We request the Government of India to make new plans and implement them to save lives from future disasters, instead of pursuing unnecessary development plans such as bullet trains.

Keywords: history, irresponsibilities, unplanned social structure, humanity, hybrid structure, response spectrum analysis, drift, node displacement

Procedia PDF Downloads 187
7296 Evaluation and Selection of Elite Jatropha Genotypes for Biofuel

Authors: Bambang Heliyanto, Rully Dyah Purwati, Hasnam, Fadjry Djufry

Abstract:

Jatropha curcas L., a drought-tolerant and monoecious perennial shrub, has received attention worldwide during the past decade. Recognizing this, the Indonesian government has decided to opt for Jatropha and palm oil for in-country biofuel production. To support the program, the development of high-yielding jatropha varieties is necessary. This paper reviews the Jatropha improvement program in Indonesia using mass selection and hybrid development. To start with, at the end of 2005, an in-country germplasm collection effort was mobilized in Lampung and Nusa Tenggara Barat (NTB) provinces and successfully collected 15 provenances/sub-provenances, which serve as a base population for selection. A significant improvement was achieved through simple recurrent breeding selection from 2006 to 2007. Seed yield productivity more than doubled, from 0.36 to 0.97 ton of dry seed per hectare, during the first selection cycle (IP-1), and then increased to 2.2 ton per hectare during the second cycle (IP-2) in the Lampung provenance. A similar result was observed in the NTB provenance: seed yield productivity increased from 0.43 to 1 ton of dry seed per hectare in the first cycle (IP-1), and then to 1.9 ton in the second cycle (IP-2). In 2008, the IP-3 populations resulting from the third cycle of selection were identified, capable of producing 2.2 to 2.4 ton of seed per hectare. To improve the seed yield per hectare further, jatropha hybrid varieties were developed involving superior provenances. As a result, the Jatropha Energy Terbarukan (JET) variety 2 was released in 2017, with a seed yield potential of 2.6 ton per hectare. The use of these high-yielding genotypes for biofuel is discussed.

Keywords: Jatropha curcas, provenance, biofuel, improve population, hybrid

Procedia PDF Downloads 153
7295 Climate Related Financial Risk on Automobile Industry and the Impact to the Financial Institutions

Authors: Mahalakshmi Vivekanandan S.

Abstract:

In line with recent changes in global policies, climate-related changes and the impact they cause across every sector are viewed as green swan events: in essence, climate-related changes can happen often and lead to risk and a great deal of uncertainty, but they need to be mitigated rather than treated as black swan events. This raises the question of how this risk can be computed so that financial institutions can plan to mitigate it. Climate-related changes impact all risk types: credit risk, market risk, operational risk, liquidity risk, reputational risk and other risk types. The models required to compute this have to consider the different industrial needs of the counterparty as well as the contributing factors, be they the different risk drivers, the different transmission channels, or the different approaches and the granularity of available data. This suggests that climate-related changes, though they affect Pillar I risks, should be treated as a Pillar II risk. They have to be modeled specifically based on the financial institution's actual exposure to different industries instead of generalizing the risk charge, and considered as additional capital to be held by the financial institution on top of its Pillar I risks and its existing Pillar II risks. In this paper, the author presents a risk assessment framework to model and assess climate change risks, for both credit and market risk. The framework helps in assessing different scenarios and how the different transition risks affect the risk associated with the different parties. The paper delves into the increase in the concentration of greenhouse gases that in turn causes global warming. It then considers various scenarios in which different risk drivers impact the credit and market risk of an institution, by understanding the transmission channels and also considering transition risk. The paper then focuses on an industry that is rapidly being disrupted: the automobile industry. The framework is used to show how climate change and changes to the relevant policies impact a financial institution. Appropriate statistical models for forecasting, anomaly detection and scenario modeling are built to demonstrate how the framework can be used by the relevant agencies to understand their financial risks. The paper also focuses on the climate risk contribution to the Pillar II capital calculation and on why it makes sense for a bank to maintain this in addition to its regular Pillar I and Pillar II capital.

Keywords: capital calculation, climate risk, credit risk, pillar ii risk, scenario modeling

Procedia PDF Downloads 122
7294 Fusion of MOLA-based DEMs and HiRISE Images for Large-Scale Mars Mapping

Authors: Ahmed F. Elaksher, Islam Omar

Abstract:

In this project, we used MOLA-based DEMs to orthorectify HiRISE optical images. The MOLA data was interpolated using the kriging interpolation technique. Corresponding tie points were then digitized from both datasets. These points were employed in co-registering both datasets using GIS analysis tools. Different transformation models, including the affine and projective transformation models, were used with different sets and distributions of tie points. Additionally, we evaluated the use of the MOLA elevations in co-registering the MOLA and HiRISE datasets. The planimetric RMSEs achieved for each model are reported. Results suggested the use of 3D-2D transformation models.
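
As an illustration of the co-registration step, the sketch below estimates a 2D affine transformation from a handful of tie points by linear least squares and reports the planimetric RMSE; the tie-point coordinates are synthetic placeholders, not values from the MOLA or HiRISE datasets. A projective (8-parameter) model can be fitted analogously, and MOLA elevations can be appended as a third input column to move towards a 3D-2D model.

```python
import numpy as np

# Synthetic tie points: (x, y) in the HiRISE image and (X, Y) in the MOLA-based frame.
src = np.array([[10.0, 12.0], [200.5, 15.2], [105.3, 180.7], [20.1, 190.4], [150.0, 90.0]])
dst = np.array([[1001.2, 2000.5], [1190.8, 2012.3], [1098.4, 2178.9],
                [1012.7, 2181.1], [1142.6, 2090.0]])

# Affine model: [X, Y] = [x, y, 1] @ P; solve the six parameters by least squares.
ones = np.ones((len(src), 1))
G = np.hstack([src, ones])                        # design matrix (n x 3)
params, *_ = np.linalg.lstsq(G, dst, rcond=None)  # (3 x 2) affine coefficients

pred = G @ params
residuals = pred - dst
rmse = np.sqrt((residuals**2).sum(axis=1).mean())
print("planimetric RMSE:", round(float(rmse), 3))
```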

Keywords: photogrammetry, Mars, MOLA, HiRISE

Procedia PDF Downloads 59
7293 Evaluation of QSRR Models by Sum of Ranking Differences Approach: A Case Study of Prediction of Chromatographic Behavior of Pesticides

Authors: Lidija R. Jevrić, Sanja O. Podunavac-Kuzmanović, Strahinja Z. Kovačević

Abstract:

The present study deals with the selection of the most suitable quantitative structure-retention relationship (QSRR) models to be used in predicting the retention behavior of basic, neutral, acidic and phenolic pesticides belonging to different classes: fungicides, herbicides, metabolites, insecticides and plant growth regulators. The sum of ranking differences (SRD) approach can give a different point of view on the selection of the most consistent QSRR model. The SRD approach can be applied not only for ranking the QSRR models, but also for detecting similarity or dissimilarity among them; applying SRD analysis, the most similar models can be found easily. In this study, the selection of the best model was carried out on the basis of a reference ranking ("golden standard") defined as the row-average values of the logarithm of retention time (log tr) determined by high performance liquid chromatography (HPLC). In addition, SRD analysis based on experimental log tr values as the reference ranking revealed a grouping of the established QSRR models similar to that already obtained by hierarchical cluster analysis (HCA).
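
The core SRD calculation described above is simple to sketch: rank the objects by each model's predictions and by the reference (row-average) values, then sum the absolute rank differences per model; a smaller SRD indicates a more consistent model. The toy prediction matrix below is an assumption for illustration; in the full method the SRD values are additionally normalized and validated against a random-permutation distribution.

```python
import numpy as np
from scipy.stats import rankdata

rng = np.random.default_rng(0)

# Toy data: log(tr) predicted by 4 QSRR models for 10 pesticides (columns = models).
predictions = rng.normal(loc=1.0, scale=0.3, size=(10, 4))
reference = predictions.mean(axis=1)        # "golden standard": row-average log(tr)

ref_rank = rankdata(reference)
srd = []
for j in range(predictions.shape[1]):
    model_rank = rankdata(predictions[:, j])
    srd.append(int(np.abs(model_rank - ref_rank).sum()))   # sum of ranking differences

order = np.argsort(srd)
print("SRD per model:", srd)
print("most consistent model (smallest SRD): model", int(order[0]) + 1)
```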

Keywords: chemometrics, chromatography, pesticides, sum of ranking differences

Procedia PDF Downloads 362
7292 FACTS Based Stabilization for Smart Grid Applications

Authors: Adel. M. Sharaf, Foad H. Gandoman

Abstract:

Nowadays, photovoltaic (PV) farms/parks and large PV-smart grid interface schemes are emerging and commonly utilized in renewable energy distributed generation. However, PV hybrid DC-AC schemes using interfacing power electronic converters usually have a negative impact on power quality and on the stabilization of modern electrical networks under load excursions and network fault conditions in the smart grid. Consequently, robust FACTS-based interface schemes are required to ensure efficient energy utilization and stabilization of bus voltages, as well as to limit switching and fault inrush current conditions. FACTS devices are also used in smart grid battery interface and storage schemes with PV-battery storage hybrid systems, as an elegant alternative for renewable energy utilization with backup battery storage, supporting electric utility energy and demand-side management to provide the needed energy and power capacity under heavy load conditions. The paper presents a robust PV-Li-ion battery storage interface scheme for low-voltage distribution/utilization, using FACTS stabilization enhancement and dynamic maximum PV power tracking controllers. Digital simulation and validation of the proposed scheme are done in the MATLAB/Simulink software environment for a low-voltage distribution/utilization system feeding hybrid linear, motorized-inrush and nonlinear loads from a DC-AC interface VSC 6-pulse inverter fed from the PV park/farm with a backup Li-ion storage battery.

Keywords: AC FACTS, smart grid, stabilization, PV-battery storage, Switched Filter-Compensation (SFC)

Procedia PDF Downloads 402
7291 Study on Seismic Response Feature of Multi-Span Bridges Crossing Fault

Authors: Yingxin Hui

Abstract:

Understanding the seismic response features of bridges crossing faults is the basis of their seismic fortification. Taking a multi-span bridge crossing an active fault, currently under construction, as an example, the seismic ground motions at the bridge site were generated following a hybrid simulation methodology. Multi-support excitation displacement input models and nonlinear time-history analysis were used to calculate the seismic response of the structure, and the results were compared with those of a bridge in the near-fault region. The results showed that the seismic response features of bridges crossing faults are different from those of bridges in the near-fault region. Designing such a bridge as if it were merely located in the near-fault region, ignoring the fault-crossing effect, would lead to unsafe and unreasonable results. The design of seismic fortification should be based on the actual seismic response features, which could reduce the adverse effects caused by structural damage.

Keywords: bridge engineering, seismic response feature, across faults, rupture directivity effect, fling step

Procedia PDF Downloads 412
7290 Dual Language Immersion Models in Theory and Practice

Authors: S. Gordon

Abstract:

Dual language immersion is growing fast in language teaching today. This study provides an overview and evaluation of the different models of dual language immersion programs in US K-12 schools. First, the paper provides a brief review of the current literature on the theory of Dual Language Immersion (DLI) in Second Language Acquisition (SLA) studies. Second, examples of several types of DLI language teaching models in US K-12 public schools are presented (including 50/50 models, 90/10 models, etc.). Third, we focus on the unique example of DLI education in the state of Utah, a successful, growing program in K-12 schools that includes French, Chinese, Spanish, and Portuguese. The project investigates the theory and practice, particularly the case of public elementary and secondary school children who study half their school day in the L1 and the other half in the chosen L2, from kindergarten (age 5-6) through high school (age 17-18). Finally, the project takes observations of Utah French DLI elementary through secondary programs as a case study. To conclude, we look at the principal challenges, pedagogical objectives and outcomes, and important implications for other US states and other countries (such as France currently) that are in the process of developing similar language learning programs.

Keywords: dual language immersion, second language acquisition, language teaching, pedagogy, teaching, French

Procedia PDF Downloads 155
7289 Fixed-Bed Column Studies of Malachite Green Removal by Use of Alginate-Encapsulated Aluminium Pillared Clay

Authors: Lazhar mouloud, Chemat Zoubida, Ouhoumna Faiza

Abstract:

The main objective of this study concerns the modeling of breakthrough curves obtained during the fixed-bed column adsorption of malachite green onto alginate-encapsulated aluminium pillared clay, according to various operating parameters such as the initial concentration, the feed rate and the bed height, applying mathematical models, namely those of Bohart and Adams, Wolborska, Bed Depth Service Time, Clark, and Yoon and Nelson. These models allow us to express the different parameters controlling the performance of the dynamic adsorption system. The results have shown that all the models were found suitable for describing the whole, or a definite part, of the dynamic behavior of the column with respect to the flow rate, the inlet dye concentration and the bed height.
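
As an example of how one of these breakthrough models is applied, the sketch below fits the Yoon-Nelson equation, C/C0 = 1 / (1 + exp(k_YN (τ - t))), to synthetic breakthrough data with SciPy; the data points are placeholders, not the experimental measurements of the study. The Thomas, Clark or Bohart-Adams models can be fitted the same way by swapping in their respective equations.

```python
import numpy as np
from scipy.optimize import curve_fit

def yoon_nelson(t, k_yn, tau):
    """Yoon-Nelson breakthrough model: C/C0 as a function of time t.
    k_yn: rate constant (1/min), tau: time required for 50% breakthrough (min)."""
    return 1.0 / (1.0 + np.exp(k_yn * (tau - t)))

# Synthetic breakthrough data (placeholders for measured C/C0 vs. time).
t_data = np.arange(0, 300, 15, dtype=float)            # min
c_data = yoon_nelson(t_data, 0.04, 150.0)
c_data += np.random.default_rng(0).normal(0, 0.02, t_data.size)

(k_fit, tau_fit), _ = curve_fit(yoon_nelson, t_data, c_data, p0=(0.01, 100.0))
print(f"k_YN = {k_fit:.3f} 1/min, tau = {tau_fit:.1f} min")
```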

Keywords: adsorption column, malachite green, pillared clays, alginate, modeling, mathematical models, encapsulation

Procedia PDF Downloads 492
7288 The Road Ahead: Merging Human Cyber Security Expertise with Generative AI

Authors: Brennan Lodge

Abstract:

Amidst a complex regulatory landscape, Retrieval Augmented Generation (RAG) emerges as a transformative tool for Governance, Risk and Compliance (GRC) officers. This paper details the application of RAG in synthesizing Large Language Models (LLMs) with external knowledge bases, offering GRC professionals an advanced means to adapt to rapid changes in compliance requirements. While the development of standalone LLMs is exciting, such models do have their downsides: LLMs cannot easily expand or revise their memory, they cannot straightforwardly provide insight into their predictions, and they may produce “hallucinations.” Leveraging a pre-trained seq2seq transformer and a dense vector index of domain-specific data, this approach integrates real-time data retrieval into the generative process, enabling gap analysis and the dynamic generation of compliance and risk management content. We delve into the mechanics of RAG, focusing on its dual structure that pairs the parametric knowledge contained within the transformer model with non-parametric data extracted from an updatable corpus. This hybrid model enhances decision-making through context-rich insights, drawing from the most current and relevant information, thereby enabling GRC officers to maintain a proactive compliance stance. Our methodology aligns with the latest advances in neural network fine-tuning, providing a granular, token-level application of retrieved information to inform and generate compliance narratives. By employing RAG, we exhibit a scalable solution that can adapt to novel regulatory challenges and cybersecurity threats, offering GRC officers a robust, predictive tool that augments their expertise. The granular application of RAG’s dual structure not only improves compliance and risk management protocols but also informs the development of compliance narratives with pinpoint accuracy. It underscores AI’s emerging role in strategic risk mitigation and proactive policy formation, positioning GRC officers to anticipate and navigate the complexities of regulatory evolution confidently.
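
The dual parametric/non-parametric structure described above reduces, at its core, to retrieving the most relevant passages from a vector index and prepending them to the generator's prompt. The sketch below illustrates only that retrieval step; the corpus content is invented, and a TF-IDF vectorizer stands in for the dense neural encoder of a real RAG stack, so this is an assumption-heavy illustration rather than the paper's implementation.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

# Updatable non-parametric corpus of compliance passages (illustrative content).
corpus = [
    "Customer data must be encrypted at rest and in transit.",
    "Access reviews are performed quarterly for privileged accounts.",
    "Security incident reports must be filed within 72 hours of detection.",
]

# TF-IDF stands in for the dense encoder; rows are l2-normalized, so dot products
# behave like cosine similarities over the "vector index".
vectorizer = TfidfVectorizer().fit(corpus)
index = vectorizer.transform(corpus)

def retrieve(query: str, k: int = 2):
    q = vectorizer.transform([query])
    scores = (index @ q.T).toarray().ravel()
    top = np.argsort(scores)[::-1][:k]
    return [corpus[i] for i in top]

query = "How quickly do we have to report a security incident?"
context = retrieve(query)
prompt = "Context:\n" + "\n".join(context) + f"\n\nQuestion: {query}\nAnswer:"
print(prompt)   # this augmented prompt is what the seq2seq generator would receive
```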

Keywords: cybersecurity, gen AI, retrieval augmented generation, cybersecurity defense strategies

Procedia PDF Downloads 76
7287 An Improvement of a Dynamic Model of the Secondary Sedimentation Tank and Field Validation

Authors: Zahir Bakiri, Saci Nacefa

Abstract:

In this paper a comparison is made between two models, with and without a dispersion term, focusing on the characterization of the movement of the sludge blanket in the secondary sedimentation tank using the solid flux theory and the settling velocity. This allowed us to develop one-dimensional models, with and without dispersion, based on a thorough experimental study carried out in situ and on the application of online data, namely the mass load flow, the transfer concentration, and the influent characteristics. In the proposed model, the new settling velocity law used (a double-exponential function) is based on the Vesilind function.
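
The settling laws mentioned above are simple algebraic expressions. The sketch below compares the classical Vesilind law, v_s = v_0 exp(-n X), with a double-exponential law of the Takács type built on it, and evaluates the resulting gravity solid flux F = v_s X. The parameter values are typical literature-style figures used as assumptions, not those calibrated in the paper.

```python
import numpy as np

def vesilind(X, v0=7.0, n=0.4):
    """Vesilind settling velocity [m/h] as a function of sludge concentration X [kg/m3]."""
    return v0 * np.exp(-n * X)

def double_exponential(X, v0=7.0, rh=0.4, rp=3.0, Xmin=0.1):
    """Takacs-style double-exponential settling law built on the Vesilind function."""
    vs = v0 * (np.exp(-rh * (X - Xmin)) - np.exp(-rp * (X - Xmin)))
    return np.clip(vs, 0.0, v0)          # settling velocity bounded between 0 and v0

X = np.linspace(0.1, 10, 50)             # sludge concentration range [kg/m3]
for name, law in (("Vesilind", vesilind), ("double-exponential", double_exponential)):
    vs = law(X)
    flux = vs * X                        # gravity solid flux [kg/m2/h]
    print(f"{name:18s} max flux = {flux.max():.2f} kg/m2/h at X = {X[flux.argmax()]:.1f} kg/m3")
```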

Keywords: wastewater, activated sludge, sedimentation, settling velocity, settling models

Procedia PDF Downloads 371
7286 Near-Miss Deep Learning Approach for Neuro-Fuzzy Risk Assessment in Pipelines

Authors: Alexander Guzman Urbina, Atsushi Aoyama

Abstract:

The sustainability of the traditional technologies employed in energy and chemical infrastructure poses a big challenge for our society. When making decisions related to the safety of industrial infrastructure, the values of accidental risk become relevant points for discussion. However, the challenge lies in the reliability of the models employed to obtain the risk data; such models usually involve a large number of variables and large amounts of uncertainty. The most efficient techniques to overcome those problems are built using Artificial Intelligence (AI), and more specifically hybrid systems such as neuro-fuzzy algorithms. Therefore, this paper aims to introduce a hybrid algorithm for risk assessment trained using near-miss accident data. As mentioned above, the sustainability of traditional technologies related to energy and chemical infrastructure constitutes one of the major challenges that today’s societies and firms are facing. Besides that, the adaptation of those technologies to the effects of climate change in sensitive environments represents a critical concern for safety and risk management. Regarding this issue, we argue that the social consequences of catastrophic risks are increasing rapidly, due mainly to the concentration of people and energy infrastructure in hazard-prone areas, aggravated by the lack of knowledge about the risks. In addition to the social consequences described above, and considering the industrial sector as critical infrastructure due to its large impact on the economy in case of failure, the relevance of industrial safety has become a critical issue for today’s society. Regarding the safety concern, pipeline operators and regulators have been performing risk assessments in attempts to accurately evaluate the probabilities of failure of the infrastructure and the consequences associated with those failures. However, estimating accidental risks in critical infrastructure involves substantial effort and cost due to the number of variables involved, the complexity and the lack of information. Therefore, this paper aims to introduce a well-trained algorithm for risk assessment using deep learning, capable of dealing efficiently with this complexity and uncertainty. The advantage of deep learning using near-miss accident data is that it can be employed in risk assessment as an efficient engineering tool to treat the uncertainty of risk values in complex environments. The basic idea of the near-miss deep learning approach for neuro-fuzzy risk assessment in pipelines is to improve the validity of the risk values by learning from near-miss accidents and by imitating the way human experts score risks and set tolerance levels. In summary, the method of deep learning for neuro-fuzzy risk assessment involves a regression analysis called the group method of data handling (GMDH), which consists in determining the optimal configuration of the risk assessment model and its parameters employing polynomial theory.

Keywords: deep learning, risk assessment, neuro fuzzy, pipelines

Procedia PDF Downloads 281
7285 Mapping Poverty in the Philippines: Insights from Satellite Data and Spatial Econometrics

Authors: Htet Khaing Lin

Abstract:

This study explores the relationship between a diverse set of variables, encompassing both environmental and socio-economic factors, and poverty levels in the Philippines for the years 2012, 2015, and 2018. Employing Ordinary Least Squares (OLS), Spatial Lag Models (SLM), and Spatial Error Models (SEM), this study delves into the dynamics of key indicators, including daytime and nighttime land surface temperature, cropland surface, urban land surface, rainfall, population size, normalized difference water, vegetation, and drought indices. The findings reveal consistent patterns and unexpected correlations, highlighting the need for nuanced policies that address the multifaceted challenges arising from the interplay of environmental and socio-economic factors.
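
As a minimal illustration of the OLS-to-spatial-model workflow described above, the sketch below fits an OLS regression with NumPy and then computes Moran's I on the residuals using a row-standardized weight matrix; significant residual autocorrelation is what motivates moving to the spatial lag (SLM) or spatial error (SEM) specifications. The covariates, response and weight matrix are all synthetic assumptions, not the satellite-derived data of the study.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100                                        # number of spatial units (e.g. provinces)

# Synthetic covariates standing in for nighttime lights, rainfall, NDVI (placeholders).
X = np.column_stack([np.ones(n), rng.normal(size=(n, 3))])
beta_true = np.array([2.0, -0.8, 0.3, -0.5])
y = X @ beta_true + rng.normal(scale=0.5, size=n)   # poverty indicator (synthetic)

# OLS by least squares.
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta_hat

# Row-standardized spatial weights: a simple "neighbours on a line" toy structure.
W = np.zeros((n, n))
for i in range(n):
    for j in (i - 1, i + 1):
        if 0 <= j < n:
            W[i, j] = 1.0
W /= W.sum(axis=1, keepdims=True)

# Moran's I of the residuals: values far from the expectation -1/(n-1) suggest
# spatial dependence, motivating an SLM or SEM model instead of plain OLS.
z = resid - resid.mean()
moran_I = (n / W.sum()) * (z @ W @ z) / (z @ z)
print("OLS coefficients:", beta_hat.round(2), " Moran's I of residuals:", round(moran_I, 3))
```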

Keywords: poverty analysis, OLS, spatial lag models, spatial error models, Philippines, google earth engine, satellite data, environmental dynamics, socio-economic factors

Procedia PDF Downloads 78
7284 Hybrid Risk Assessment Model for Construction Based on Multicriteria Decision Making Methods

Authors: J. Tamosaitiene

Abstract:

The article focuses on the identification and classification of key risk management criteria that represent the most important sustainability aspects of the construction industry. The construction sector is one of the most important sectors in Lithuania. Nowadays, the assessment of the risk level of a construction project is especially important for the quality of construction projects, the growth of enterprises and the sector. To establish the most important criteria for successful growth of the sector, a questionnaire for experts was developed. The analytic hierarchy process (AHP), the expert judgement method and other multicriteria decision making (MCDM) methods were used to develop the hybrid model. The results were used to develop an integrated knowledge system for the measurement of a risk level particular to construction projects. The article presents a practical case that details the developed system, sustainable aspects, and risk assessment.
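
The AHP step mentioned above boils down to extracting a priority vector from a pairwise comparison matrix and checking its consistency. The sketch below does this with the principal eigenvector and the standard consistency ratio, for a hypothetical 4x4 comparison matrix of illustrative risk criteria; the matrix entries and criteria names are assumptions, not the questionnaire results of the article.

```python
import numpy as np

# Hypothetical pairwise comparison matrix for four risk criteria
# (e.g. financial, technical, environmental, organisational) on Saaty's 1-9 scale.
A = np.array([
    [1.0, 3.0, 5.0, 2.0],
    [1/3, 1.0, 3.0, 1/2],
    [1/5, 1/3, 1.0, 1/4],
    [1/2, 2.0, 4.0, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                       # priority vector (criteria weights)

# Consistency check: CR = CI / RI, conventionally acceptable when CR < 0.10.
n = A.shape[0]
lambda_max = eigvals.real[k]
CI = (lambda_max - n) / (n - 1)
RI = {3: 0.58, 4: 0.90, 5: 1.12}[n]            # Saaty's random index for n = 4
CR = CI / RI
print("criteria weights:", weights.round(3), " consistency ratio:", round(CR, 3))
```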

Keywords: risk, system, model, construction

Procedia PDF Downloads 152