Search results for: facility location selection problem


6592 Non-Linear Assessment of Chromatographic Lipophilicity of Selected Steroid Derivatives

Authors: Milica Karadžić, Lidija Jevrić, Sanja Podunavac-Kuzmanović, Strahinja Kovačević, Anamarija Mandić, Aleksandar Oklješa, Andrea Nikolić, Marija Sakač, Katarina Penov Gaši

Abstract:

Using a chemometric approach, the relationships between chromatographic lipophilicity and in silico molecular descriptors were studied for twenty-nine selected steroid derivatives. The chromatographic lipophilicity was predicted using the artificial neural network (ANN) method. The most important in silico molecular descriptors were selected by applying stepwise selection (SS) paired with the partial least squares (PLS) method. Molecular descriptors with satisfactory variable importance in projection (VIP) values were selected for ANN modeling. The usefulness of the generated models was confirmed by detailed statistical validation. High agreement between experimental and predicted values indicated that the obtained models are of good quality and have high predictive ability. Global sensitivity analysis (GSA) confirmed the importance of each molecular descriptor used as an input variable. The high-quality networks indicate a strong non-linear relationship between chromatographic lipophilicity and the in silico molecular descriptors used. By applying the selected molecular descriptors and the generated ANNs, a good prediction of the chromatographic lipophilicity of the studied steroid derivatives can be obtained. This article is based upon work from COST Actions CM1306 and CA15222, supported by COST (European Cooperation in Science and Technology).
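
The workflow described here (PLS-based VIP screening followed by ANN regression) can be illustrated with a minimal Python sketch. The descriptor matrix, the VIP > 1 cutoff, and the network size below are illustrative assumptions, not the authors' data or settings.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

def vip_scores(pls, X):
    """Variable importance in projection (VIP) for a fitted PLS model."""
    t = pls.x_scores_                       # latent scores (n_samples, n_comp)
    w = pls.x_weights_                      # predictor weights (n_features, n_comp)
    q = pls.y_loadings_                     # response loadings (1, n_comp)
    p, _ = w.shape
    ssy = np.diag(t.T @ t) * (q.ravel() ** 2)         # Y-variance explained per component
    weights = (w / np.linalg.norm(w, axis=0)) ** 2
    return np.sqrt(p * (weights @ ssy) / ssy.sum())

# placeholder descriptor matrix (29 compounds x 40 descriptors) and response
rng = np.random.default_rng(0)
X = rng.normal(size=(29, 40))
y = X[:, 0] - 0.5 * X[:, 3] ** 2 + rng.normal(scale=0.1, size=29)

pls = PLSRegression(n_components=3).fit(X, y)
keep = vip_scores(pls, X) > 1.0                       # common "VIP > 1" rule of thumb

X_tr, X_te, y_tr, y_te = train_test_split(X[:, keep], y, test_size=0.3, random_state=0)
ann = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0).fit(X_tr, y_tr)
print("R^2 on held-out compounds:", round(ann.score(X_te, y_te), 3))
```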

Keywords: artificial neural networks, chemometrics, global sensitivity analysis, liquid chromatography, steroids

Procedia PDF Downloads 350
6591 The Sensitivity of Electrical Geophysical Methods for Mapping Salt Stores within the Soil Profile

Authors: Fathi Ali Swaid

Abstract:

Soil salinization is one of the most hazardous phenomena accelerating land degradation processes. It occurs either naturally or is human-induced. High levels of soil salinity negatively affect crop growth and productivity, ultimately leading to land degradation. It is therefore important to monitor and map soil salinity at an early stage in order to enact an effective soil reclamation program that helps lessen or prevent future increases in soil salinity. Geophysical methods have outperformed traditional methods for assessing soil salinity, offering more informative and rapid assessment techniques for monitoring and mapping. Soil sampling, EM38 readings, and 2D conductivity imaging were evaluated for their ability to delineate and map the level of salinity variations at Second Ponds Creek. The three methods showed that the subsoil in the study area is saline. Salt variations were successfully observed with each method. However, the EM38 readings and 2D inversion data show a clearer spatial structure than the EC1:5 of the soil samples, even though the soil samples, EM38 readings, and 2D imaging were collected at the same locations. Because EM38 readings and 2D imaging data are a weighted average of electrical soil conductance, they are more representative of soil properties than the soil sampling method. The mapping of the subsurface soil at the study area has been successful, and resistivity imaging has proven to be an advantage. The soil salinity analysis (EC1:5) corresponds well to the true resistivity, together giving a sound picture of soil salinity. The soil salinity indicated by the previous EM38 investigation has been confirmed by the interpretation of the true resistivity at the study area.

Keywords: 2D conductivity imaging, EM38 readings, soil salinization, true resistivity, urban salinity

Procedia PDF Downloads 381
6590 Technology of Gyro Orientation Measurement Unit (Gyro OMU) for Underground Utility Mapping Practice

Authors: Mohd Ruzlin Mohd Mokhtar

Abstract:

At present, most operators working on projects for utilities such as power, water, oil, gas, telecommunications and sewerage use technologies such as total stations, the Global Positioning System (GPS), Electromagnetic Locators (EML) and Ground Penetrating Radar (GPR) to perform underground utility mapping. With the increasing popularity of the Horizontal Directional Drilling (HDD) method among local authorities and asset owners, most newly installed underground utilities are placed using HDD. The HDD method is seen as simple and causes little disturbance to the public and traffic; it has therefore become the preferred utility installation method in most areas, especially urban areas. HDD utilities are installed much deeper than existing utilities (some reports indicate an average HDD depth of 5 meters). However, this affects the accuracy or ability of existing underground utility mapping technologies. In most Malaysian soil conditions, these technologies are limited to a maximum depth of about 3 meters, so utilities installed deeper than 3 meters cannot be detected with existing detection tools. The accuracy and reliability of existing underground utility mapping technologies and work procedures are therefore in doubt, and a mitigation action plan is required. While installing a new utility using the HDD method, more accurate underground utility mapping can be achieved by using a Gyro OMU than with the existing practice using, e.g., EML and GPR. Gyro OMU is a method to accurately identify the location of the HDD bore; this mapping can then be referred to in order to avoid the cost of breakdowns caused by future HDD works relying on inaccurate underground utility mapping.

Keywords: Gyro Orientation Measurement Unit (Gyro OMU), Horizontal Directional Drilling (HDD), Ground Penetrating Radar (GPR), Electromagnetic Locator (EML)

Procedia PDF Downloads 146
6589 Management of Fitness-For-Duty for Human Error Prevention in Nuclear Power Plants

Authors: Hyeon-Kyo Lim, Tong-Il Jang, Yong-Hee Lee

Abstract:

For the past several decades, many researchers have warned that even a trivial human error may result in unexpected accidents, especially in Nuclear Power Plants. To prevent accidents in Nuclear Power Plants, it is indispensable to bring under effective control any factors that may raise the possibility of human error. This study aimed to develop a risk management program, especially in the sense of guaranteeing the Fitness-for-Duty (FFD) of people working in Nuclear Power Plants. Through a literature survey, it was found that work stress and fatigue are the major psychophysical factors requiring sophisticated management. A set of major management factors related to work stress and fatigue was identified through repeated literature surveys and classified into several categories. To maintain the fitness of human workers, a four-level approach (individual worker, team, staff within plants, and external professionals) was adopted for the FFD management program. Moreover, the program was arranged to cover the whole employment cycle, from the selection and screening of workers to job allocation and job rotation. A managerial care program was also introduced for employee assistance, based on the concept of an Employee Assistance Program (EAP). The developed program was repeatedly reviewed by former operators of nuclear power plants and assessed in the affirmative. As a whole, the responses implied additional treatment to guarantee high performance of human workers not only in normal operations but also in emergency situations. Consequently, the program is under administrative modification for practical application.

Keywords: fitness-for-duty (FFD), human error, work stress, fatigue, Employee-Assistance-Program (EAP)

Procedia PDF Downloads 305
6588 A Preliminary Analysis of Sustainable Development in the Belgrade Metropolitan Area

Authors: Slavka Zeković, Miodrag Vujošević, Tamara Maričić

Abstract:

The paper provides a comprehensive analysis of sustainable development in the Belgrade Metropolitan Area (BMA, level NUTS 2), preliminarily evaluating three chosen components: 1) economic growth and developmental changes; 2) competitiveness; and 3) territorial concentration and industrial specialization. First, we identified the main results of developmental changes and economic growth by applying shift-share analysis at the metropolitan level. Second, the empirical evaluation of competitiveness in the BMA is based on an analysis of the absolute and relative values of eight indicators using the Spider method. The paper shows that consideration of the national share, industrial mix and metropolitan/regional share in the total shift-share of the BMA, as well as the economic/functional specialization of the BMA, indicates a very strong process of deindustrialization. The allocative component of BMA economic growth has a positive value, reflecting above-average sector productivity compared to the national average. Third, the important positive role of the metropolitan/regional component in the decomposition of BMA economic growth is highlighted as one of the key results. Finally, a comparative analysis of industrial territorial concentration in the BMA relative to Serbia is based on the location quotient (LQ), or Balassa index, as a valid measure. The results indicate absolute and relative decreases in the territorial concentration of industry, as well as inefficient use of territorial capital in the BMA. The results are important for increasing regional competitiveness and improving territorial distribution in this area, as well as for improving sustainable metropolitan and sectoral policies, planning and governance at this level.
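
The two quantitative tools named above, the location quotient and the classic shift-share decomposition, can be illustrated with a short Python sketch; the sector employment figures are invented placeholders, not BMA statistics.

```python
# Location quotient and classic shift-share decomposition (illustrative data only).

def location_quotient(region_emp, nation_emp):
    """LQ_i = (regional share of sector i) / (national share of sector i)."""
    reg_total, nat_total = sum(region_emp.values()), sum(nation_emp.values())
    return {s: (region_emp[s] / reg_total) / (nation_emp[s] / nat_total)
            for s in region_emp}

def shift_share(region_t0, region_t1, nation_t0, nation_t1):
    """Decompose regional growth into national share, industrial mix, regional shift."""
    g_nat = sum(nation_t1.values()) / sum(nation_t0.values()) - 1
    result = {}
    for s in region_t0:
        g_sec_nat = nation_t1[s] / nation_t0[s] - 1
        g_sec_reg = region_t1[s] / region_t0[s] - 1
        ns = region_t0[s] * g_nat                      # national share component
        im = region_t0[s] * (g_sec_nat - g_nat)        # industrial mix component
        rs = region_t0[s] * (g_sec_reg - g_sec_nat)    # regional (allocative) shift
        result[s] = (ns, im, rs)
    return result

region_t0 = {"industry": 120, "services": 300}
region_t1 = {"industry": 100, "services": 360}
nation_t0 = {"industry": 900, "services": 1500}
nation_t1 = {"industry": 850, "services": 1700}

print(location_quotient(region_t1, nation_t1))
print(shift_share(region_t0, region_t1, nation_t0, nation_t1))
```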

Keywords: Belgrade Metropolitan Area (BMA), comprehensive analysis / evaluation, economic growth, competitiveness, sustainable development

Procedia PDF Downloads 446
6587 Climate Change Based Frontier Research in Landscape Architecture

Authors: Xiaoyan Wang, Zhongde Wang

Abstract:

The issue of climate change, which emerged in the middle of the twentieth century, has become a focus of attention for international political, academic and non-governmental organizations and the public. In order to address the problems caused by climate change, the Chinese government has proposed a dual-carbon target and taken national measures such as ecological priority and green low-carbon development. These goals and measures are highly aligned with the values of the landscape architecture industry. This is an opportunity for the architectural discipline and the landscape architecture industry, so it is necessary to summarize and analyze the hotspots related to climate change in the field of building science in China; this can assist the landscape architecture industry and related organizations in formulating more rational professional goals and taking actions that contribute to better societal and environmental development. The study finds the following: first, after 20 years of rapid development, research on climate change in the major architectural disciplines has shown a trend towards diversified research perspectives, interdisciplinary crossover and broader content; second, the research content of landscape architecture focuses on strategies to adapt to climate change, such as the selection of urban tree species, urban green infrastructure spatial layout, and the resilient city. Finally, future climate-change-based landscape architecture research will further diversify its content system, but it is still necessary to deepen research on quantitative methodology and to construct systematic, scale-based planning and design methods.

Keywords: climate change, landscape architecture, knowledge mapping, CiteSpace

Procedia PDF Downloads 60
6586 Social Media Influencers and Tourist’s Hotel Booking Decisions: A Case Study of Facebook

Authors: Fahsai Pawapootanont, Sasithon Yuwakosol

Abstract:

The objectives of this research are as follows: 1) to study the information-seeking behavior of followers of influencers on Facebook when making hotel booking decisions, and 2) to study the characteristics of travel influencers that affect their followers' hotel booking decisions. Data were collected by interviewing 35 key informants, consisting of 25 Thai tourists who followed travel influencers and 10 travel influencers, as well as through online questionnaires answered by a sample of 400 Thai tourists; the statistical analysis used percentage, mean, standard deviation, t-test, and one-way analysis of variance (ANOVA). The results on the influence of travel influencers on Facebook on hotel booking decisions in Thailand were as follows. People in different age groups have different information-seeking behaviors, depending on their experience and aptitude with technology. The sample did not seek information from only one source; they also searched for information from various places in order to obtain comparative and more reliable information for making decisions. In addition, travel influencers should be people who present honest, clear, and complete content and who describe services honestly. Beyond the characteristics of travel influencers, presentation formats and platforms also affect hotel booking decisions, but they must be designed and presented to suit the behavior of the target group. As for the influence of travel influencers, it can be concluded that they can shape their followers' interests and hotel booking decisions. However, it was found that followers of travel influencers on Facebook also weigh other factors in their decision to book a hotel, such as whether the hotel's comfort meets their needs; location, price, and promotions also play an important role.

Keywords: influencer, travel, Facebook, hotel booking decisions, Thailand

Procedia PDF Downloads 55
6585 Maximizing the Role of Companion Teachers for the Achievement of Professional Competencies and Pedagogics Workshop Activities of Teacher Professional Participants in the Faculty of Teaching and Education of Mulawarman University

Authors: Makrina Tindangen

Abstract:

The problems faced by participants of the teacher profession program in the Faculty of Teaching and Education of Mulawarman University concern professional and pedagogic competence. Professional competence relates to the mastery of teaching materials, while pedagogic competence relates to the ability to plan and implement learning. Based on these problems, the purpose of the research is to maximize the role of companion teachers in the achievement of professional and pedagogic competencies in the workshops of participants of teacher professional education in the Faculty of Teaching and Education of Mulawarman University. A qualitative research method with interview guides and documents was used to obtain in-depth data on how to maximize the role of companion teachers in the achievement of professional and pedagogic competencies by workshop participants. The research was located at the Faculty of Teaching and Education of Mulawarman University, Samarinda City, East Kalimantan Province. The respondents were 12 workshop facilitator teachers. Descriptive data analysis was carried out through interpretation of the interview data. The research concludes that the role of companion teachers in workshop activities is maximized through facilitation activities, conducted by the companion teachers, that address the real problems faced by students in school, so that workshop participants acquire professional and pedagogic competence as an initial competence before carrying out practical field experience activities in school.

Keywords: companion teacher, professional and pedagogical competence, activities, workshop participants

Procedia PDF Downloads 192
6584 Polymorphisms of Calpastatin Gene and Its Association with Growth Traits in Indonesian Thin Tail Sheep

Authors: Muhammad Ihsan Andi Dagong, Cece Sumantri, Ronny Rachman Noor, Rachmat Herman, Mohamad Yamin

Abstract:

Calpastatin is involved in various physiological processes in the body, such as protein turnover and the growth, fusion and migration of myoblasts. Diversity in the calpastatin gene (CAST) is therefore thought to be associated with growth, making CAST a potential candidate gene for growth traits. This study aims to identify the association between the genetic diversity of the CAST gene and growth properties such as body dimensions (morphometrics), body weight and daily weight gain in sheep. A total of 157 Thin Tail Sheep (TTS) were reared intensively for fattening purposes under uniform environmental conditions. All sheep were male and were maintained for 3 months. The growth parameters measured included, among others, average daily body weight gain (ADG, g/head/day), body weight (kg), body length (cm), chest circumference (cm) and height (cm). All the sheep were genotyped using the PCR-SSCP (single-strand conformation polymorphism) method. A fragment of the CAST gene spanning intron 5 to exon 6 was amplified, with a predicted PCR product length of about 254 bp. The sheep were then stratified by their CAST genotypes. The results showed no association between CAST gene variation and morphometric traits or body weight, but there was a significant association with average daily gain (ADG) in the sheep observed. The CAST-23 and CAST-33 genotypes had higher average daily gain than the other genotypes. The CAST-23 and CAST-33 genotypes, which carry the CAST-2 and CAST-3 alleles, are therefore potentially useful in selection for growth traits in TTS sheep.

Keywords: body weight, calpastatin, genotype, growth trait, thin tail sheep

Procedia PDF Downloads 326
6583 An Accurate Method for Phylogeny Tree Reconstruction Based on a Modified Wild Dog Algorithm

Authors: Essam Al Daoud

Abstract:

This study solves a phylogeny problem by using modified wild dog pack optimization. The least-squares error is considered as a cost function that needs to be minimized. Therefore, in each iteration, new distance matrices based on the constructed trees are calculated and used to select the alpha dog. To test the suggested algorithm, ten homologous genes were selected and collected from the National Center for Biotechnology Information (NCBI) databanks (i.e., 16S, 18S, 28S, Cox 1, ITS1, ITS2, ETS, ATPB, Hsp90, and STN). The data are divided into three categories: 50 taxa, 100 taxa and 500 taxa. The empirical results show that the proposed algorithm is more reliable and accurate than the other implemented methods.
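
A minimal sketch of the least-squares cost that the optimizer is described as minimizing is given below: it compares an observed distance matrix with the path-length distances implied by a candidate tree. The toy tree and the perturbed "observed" matrix are invented, and the wild dog pack optimizer itself is not reproduced.

```python
import collections
import numpy as np

def tree_distances(edges, lengths, leaves):
    """Pairwise path-length distances between leaves of a tree."""
    adj = collections.defaultdict(list)
    for (u, v), w in zip(edges, lengths):
        adj[u].append((v, w))
        adj[v].append((u, w))
    D = np.zeros((len(leaves), len(leaves)))
    for i, src in enumerate(leaves):
        dist, queue = {src: 0.0}, collections.deque([src])
        while queue:                                   # breadth-first walk over the tree
            x = queue.popleft()
            for y, w in adj[x]:
                if y not in dist:
                    dist[y] = dist[x] + w
                    queue.append(y)
        D[i] = [dist[leaf] for leaf in leaves]
    return D

def least_squares_error(D_obs, D_tree):
    """Unweighted least-squares fit, each unordered pair counted once."""
    return float(np.sum((D_obs - D_tree) ** 2) / 2.0)

# toy unrooted tree: 4 leaves A-D hanging off internal nodes X and Y
edges = [("X", "A"), ("X", "B"), ("X", "Y"), ("Y", "C"), ("Y", "D")]
lengths = [1.0, 1.0, 0.5, 2.0, 1.0]
leaves = ["A", "B", "C", "D"]
D_tree = tree_distances(edges, lengths, leaves)
D_obs = D_tree + 0.1 * (1 - np.eye(len(leaves)))       # slightly noisy "observed" distances
print(least_squares_error(D_obs, D_tree))              # cost the optimizer would minimize
```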

Keywords: least square, neighbor joining, phylogenetic tree, wild dog pack

Procedia PDF Downloads 322
6582 A Comparative Study of Sampling-Based Uncertainty Propagation with First Order Error Analysis and Percentile-Based Optimization

Authors: M. Gulam Kibria, Shourav Ahmed, Kais Zaman

Abstract:

In system analysis, uncertainty in the input variables causes uncertainty in the system responses. Different probabilistic approaches for uncertainty representation and propagation in such cases exist in the literature, and different representation approaches result in different outputs; some approaches may estimate the system response better than others. The NASA Langley Multidisciplinary Uncertainty Quantification Challenge (MUQC) has posed challenges about uncertainty quantification. Subproblem A, the uncertainty characterization subproblem, of the posed challenge is addressed in this study. In this subproblem, the challenge is to gather knowledge about unknown model inputs, which have inherent aleatory and epistemic uncertainties, from the responses (outputs) of the given computational model. We use two different methodologies to approach the problem. In the first methodology, we use sampling-based uncertainty propagation with first-order error analysis. In the other approach, we place emphasis on the use of Percentile-Based Optimization (PBO). The NASA Langley MUQC subproblem A is developed in such a way that both aleatory and epistemic uncertainties need to be managed. The challenge problem classifies each uncertain parameter as belonging to one of the following three types: (i) an aleatory uncertainty modeled as a random variable with a fixed functional form and known coefficients; this uncertainty cannot be reduced; (ii) an epistemic uncertainty modeled as a fixed but poorly known physical quantity that lies within a given interval; this uncertainty is reducible; (iii) a parameter that is aleatory but for which sufficient data are not available to adequately model it as a single random variable. For example, the parameters of a normal variable, e.g., the mean and standard deviation, might not be precisely known but could be assumed to lie within some intervals. This results in a distributional p-box: the physical parameter carries an aleatory uncertainty, but the parameters prescribing its mathematical model are subject to epistemic uncertainties, each being an unknown element of a known interval; this uncertainty is reducible. From the study, it is observed that, due to practical limitations or computational expense, the sampling in the sampling-based methodology is not exhaustive, which is why it has a high probability of underestimating the output bounds. Therefore, an optimization-based strategy that converts uncertainty described by interval data into a probabilistic framework is necessary. This is achieved in this study by using PBO.
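
A minimal double-loop (nested) sampling sketch of how a distributional p-box of type (iii) can be propagated is shown below; the toy response model and the intervals are invented placeholders, not the NASA Langley challenge functions.

```python
# Outer loop samples epistemic interval parameters, inner loop propagates aleatory randomness.
import numpy as np

rng = np.random.default_rng(1)

def g(x1, x2):
    return x1 ** 2 + 0.5 * x2              # toy response model (not the challenge model)

mu_interval = (0.4, 0.8)                   # epistemic: mean of aleatory x1 lies in an interval
sigma = 0.1                                # known standard deviation of x1
x2_interval = (1.0, 2.0)                   # epistemic: fixed but poorly known quantity

p95 = []
for _ in range(200):                                   # outer (epistemic) loop
    mu = rng.uniform(*mu_interval)
    x2 = rng.uniform(*x2_interval)
    y = g(rng.normal(mu, sigma, size=2000), x2)        # inner (aleatory) loop
    p95.append(np.percentile(y, 95))

# bounds on the 95th percentile of the response across epistemic realizations
print("bounds on the 95th percentile:", min(p95), max(p95))
```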

Keywords: aleatory uncertainty, epistemic uncertainty, first order error analysis, uncertainty quantification, percentile-based optimization

Procedia PDF Downloads 246
6581 A Study of Blockchain Oracles

Authors: Abdeljalil Beniiche

Abstract:

A limitation of smart contracts is that they cannot access external data that might be required to control the execution of business logic. Oracles can be used to provide external data to smart contracts. An oracle is an interface that delivers data from sources outside the blockchain for a smart contract to consume, and it can deliver different types of data depending on the industry and requirements. In this paper, we study and describe the widely used blockchain oracles. We then elaborate on their potential roles, technical architectures, and design patterns. Finally, we discuss the human oracle and its key role in solving the truth problem by reaching a consensus about a certain inquiry or task.
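
As a toy illustration of the request-response oracle design pattern, the off-chain side can be sketched in Python; the data feed and the consumer-contract stub below are invented stand-ins, not a specific oracle network or blockchain client.

```python
class Contract:
    """Stand-in for an on-chain consumer contract with a callback entry point."""
    def __init__(self):
        self.last_value = None

    def fulfill(self, request_id, value):
        self.last_value = value
        print(f"request {request_id} fulfilled with {value}")

class Oracle:
    """Off-chain node: receives a request, queries an external source, calls back."""
    def __init__(self, fetch):
        self.fetch = fetch                          # callable returning external data

    def handle_request(self, contract, request_id, key):
        data = self.fetch()                         # e.g. an HTTP API call in practice
        contract.fulfill(request_id, data[key])

contract = Contract()
oracle = Oracle(lambda: {"ETH/USD": 3150.42})       # placeholder feed instead of a real API
oracle.handle_request(contract, request_id=1, key="ETH/USD")
```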

Keywords: blockchain, oracles, oracles design, human oracles

Procedia PDF Downloads 142
6580 Broadcasting Stabilization for Dynamical Multi-Agent Systems

Authors: Myung-Gon Yoon, Jung-Ho Moon, Tae Kwon Ha

Abstract:

This paper deals with a stabilization problem for multi-agent systems in which all agents receive the same broadcast control signal and the controller can measure not each agent's output but only the sum of all agent outputs. It is analytically shown that when the sum of all agent outputs is bounded under a certain broadcasting controller for a given reference, each agent output is separately bounded: stabilization of the sum of agent outputs always results in the stability of every agent output. A numerical example is presented to illustrate the theoretical findings of this paper.
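
A small simulation sketch of the broadcast setting is shown below: several stable first-order agents receive one shared integral-control signal computed only from the sum of their outputs. The dynamics, gains and reference value are invented for illustration and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 5
a = rng.uniform(0.3, 0.8, size=N)      # agent poles (each agent individually stable)
b = rng.uniform(0.5, 1.0, size=N)      # input gains
x = np.zeros(N)                        # agent states (= outputs here)
reference, k = 10.0, 0.02              # desired sum of outputs, small integral gain
z = 0.0                                # broadcast controller (integrator) state

for _ in range(300):
    y_sum = x.sum()                    # only the aggregate output is measurable
    z += k * (reference - y_sum)       # broadcast integral controller
    x = a * x + b * z                  # every agent receives the same signal z

print("sum of outputs:", round(x.sum(), 3))
print("individual outputs stay bounded:", np.round(x, 3))
```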

Keywords: broadcasting control, multi-agent system, transfer function, stabilization

Procedia PDF Downloads 385
6579 Microwave Heating and Catalytic Activity of Iron/Carbon Materials for H₂ Production from the Decomposition of Plastic Wastes

Authors: Peng Zhang, Cai Liang

Abstract:

Non-biodegradable plastic wastes have caused severe environmental and ecological contamination. Numerous technologies, such as pyrolysis, incineration, and landfilling, have already been employed for the treatment of plastic waste. Compared with conventional methods, microwave heating displays unique advantages for the rapid production of hydrogen from plastic wastes. Understanding the interaction between microwave radiation and materials would promote the optimization of several parameters of the microwave reaction system. In this work, various carbon materials have been investigated to reveal their microwave heating performance and the ensuing catalytic activity. Results showed that the diversity in heating characteristics was mainly due to the dielectric properties and the individual microstructures. Furthermore, gaps and steps on the surface of the carbon materials lead to distortion of the electromagnetic field, which correspondingly induces plasma discharging. The intensity and location of the local plasma were also studied. For high-yield H₂ production, iron nanoparticles were selected as the active sites, and a series of iron/carbon bifunctional catalysts were synthesized. Apart from their high catalytic activity, the nano-sized iron particles, with sizes close to the microwave skin depth, convert microwave irradiation into heat, intensifying the decomposition of the plastics. Under microwave radiation, iron supported on activated carbon at 10 wt.% loading exhibited the best catalytic activity for H₂ production. Specifically, the plastics were rapidly heated up and subsequently converted into H₂ with a hydrogen efficiency of 85%. This work demonstrates a deeper understanding of microwave reaction systems and provides guidance for optimizing plastic treatment.

Keywords: plastic waste, recycling, hydrogen, microwave

Procedia PDF Downloads 77
6578 Microgravity, Hydrological and Metrological Monitoring of Shallow Ground Water Aquifer in Al-Ain, UAE

Authors: Serin Darwish, Hakim Saibi, Amir Gabr

Abstract:

The United Arab Emirates (UAE) is situated within an arid zone where groundwater recharge is very low. Groundwater is the primary source of water in the country. However, rapid expansion, population growth, agriculture, and industrial activities have negatively affected these limited water resources. The shortage of water resources has become a serious concern due to the over-pumping of groundwater to meet demand. In addition to the groundwater deficit, the UAE has one of the highest per capita water consumption rates in the world. In this study, a combination of time-lapse measurements of microgravity and depth to groundwater level in selected wells in Al Ain city was used to estimate the variations in groundwater storage. Al-Ain is the second largest city in the Emirate of Abu Dhabi and the third largest city in the UAE, and the groundwater in this region has been overexploited. Relative gravity measurements were acquired using the Scintrex CG-6 Autograv. This latest-generation gravimeter from Scintrex Ltd provides fast, precise gravity measurements with automated corrections for temperature, tide and instrument tilt, and rejection of noisy data. The CG-6 gravimeter has a resolution of 0.1 μGal. The purpose of this study is to measure groundwater storage changes in the shallow aquifers based on the application of the microgravity method. The gravity method is a nondestructive technique that allows collection of data at almost any location over the aquifer. Preliminary results indicate a possible relationship between microgravity and water levels, but more work needs to be done to confirm this. The results will help to establish the relationship between monthly microgravity changes and the hydrological and hydrogeological changes of the shallow phreatic aquifer. The study will be useful for water management considerations and additional future investigations.
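
As a rough illustration of how a water-table change maps to a gravity change, the standard infinite Bouguer slab approximation can be sketched in a few lines; the porosity value is an assumption, and the study itself reports no such conversion factor.

```python
# Infinite Bouguer slab: a water layer of thickness dh changes gravity by
# 2*pi*G*rho*dh (about 41.9 microGal per metre of free water).
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
rho_water = 1000.0     # kg/m^3
porosity = 0.25        # assumed effective porosity of the shallow aquifer

def gravity_change_microgal(water_table_rise_m):
    dg = 2 * math.pi * G * rho_water * porosity * water_table_rise_m   # m/s^2
    return dg * 1e8                                                    # 1 m/s^2 = 1e8 microGal

print(gravity_change_microgal(1.0))   # ~10.5 microGal for a 1 m rise at 25% porosity
```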

Keywords: Al-Ain, arid region, groundwater, microgravity

Procedia PDF Downloads 159
6577 Behavior of the RC Slab Subjected to Impact Loading According to the DIF

Authors: Yong Jae Yu, Jae-Yeol Cho

Abstract:

In the design of structural concrete for impact loading, design or model codes often employ a dynamic increase factor (DIF) to impose a dynamic effect on the static response. Dynamic increase factors, which are obtained from laboratory material test results and are commonly given as a function of strain rate only, differ considerably from each other depending on the design concepts of codes such as ACI 349M-06, fib Model Code 2010 and ACI 370R-14. Because the dynamic increase factors currently adopted in the codes are too simple and limited to account for a variety of material strengths, their application in practical design is questionable. In this study, the dynamic increase factors used in the three codes were validated through finite element analysis of reinforced concrete slab elements that were tested and reported by other researchers. The test was intended to simulate a wall element of the containment building in a nuclear power plant assumed to be subject to an impact scenario like the one the Pentagon experienced on September 11, 2001. The finite element analysis was performed using ABAQUS 6.10, and plasticity models were employed for the concrete and reinforcement. The dynamic increase factors given in the three codes were applied to the stress-strain curves of the materials. To estimate the dynamic increase factors, strain rate was adopted as a parameter. Comparison of the test and the analysis was made with regard to perforation depth, maximum deflection, and surface crack area of the slab. Consequently, it was found that the DIF has so great an effect on the behavior of reinforced concrete structures that it must be selected very carefully. The result implies that DIFs should be provided in design codes in a more refined format that considers various influencing factors.

Keywords: impact, strain rate, DIF, slab elements

Procedia PDF Downloads 299
6576 Trade and Economic Relations between Georgia and Germany – the Impediments Caused by the Pandemic and Future Prospects

Authors: Tamar Lazariashvili

Abstract:

A number of factors determine the growth and development of a country's economy; however, trade and economic relations with other countries are among the most important of them. The paper analyzes the trade and economic relations between Georgia and Germany, identifies the impediments caused by the Covid pandemic, and substantiates the need for further economic cooperation between the countries. Research objectives: the objective of the research is to develop recommendations and reveal the prospects for further cooperation between Georgia and Germany based on identifying the problems in the field of trade and economy in the post-crisis situation. The research object is Georgian-German economic relations. Germany is Georgia's largest trading partner in the European Union, and Georgia and Germany also actively cooperate within the framework of international organizations. The paper analyzes the multilateral and intensive economic relations between Germany and Georgia and evaluates the investments of German companies in Georgia and the activities of Georgian companies in Germany. Research methods: the paper uses general and specific research methods, in particular analysis, synthesis, induction, deduction, comparison, statistical methods (selection, grouping, observation, trend), and other research methods. SWOT analysis is used to determine development opportunities between the countries. As a result of the research, the economic rankings of Georgia and Germany are determined according to the above criteria, the causes of the pandemic-related impediments are studied, and the main problems in the field of trade and economy are identified. The paper provides conclusions on the problems in the trade relations between Georgia and Germany and suggests recommendations regarding the prospects for improving these relations.

Keywords: georgia-germany, trade and economic relations, economic ranking, perspective directions

Procedia PDF Downloads 163
6575 Most Recent Lifespan Estimate for the Itaipu Hydroelectric Power Plant Computed by Using Borland and Miller Method and Mass Balance in Brazil, Paraguay

Authors: Anderson Braga Mendes

Abstract:

The Itaipu Hydroelectric Power Plant is located on the Paraná River, which forms a natural boundary between Brazil and Paraguay; thus, the facility is shared by both countries. Itaipu is the biggest hydroelectric generator in the world and provides clean, renewable electrical energy covering 17% of Brazil's and 76% of Paraguay's supply. The plant started generating in 1984. It has 20 Francis turbines and an installed capacity of 14,000 MW. Its historic generation record occurred in 2016 (103,098,366 MWh), and from the beginning of its operation until the last day of 2016 the plant had produced a cumulative total of 2,415,789,823 MWh. The distinct sedimentologic aspects of the drainage area of the Itaipu Power Plant, from its upstream stretch (Porto Primavera and Rosana dams) to downstream (the Itaipu dam itself), were taken into account in order to best estimate the increase or decrease in sediment yield using data from 2001 to 2016. These data are collected through a network of 14 automatic sedimentometric stations managed by the company itself and operating on an hourly basis, covering an area of around 136,000 km² (92% of the incremental drainage area of the undertaking). Since 1972, a series of lifespan studies for the Itaipu Power Plant have been made, the first by Hans Albert Einstein at the time of the feasibility studies for the enterprise. From that date onwards, eight further studies were made over the following 44 years, aiming to confer more precision upon the estimates based on more up-to-date data sets. The analysis of each monitoring station clearly showed strongly increasing tendencies in sediment yield over the last 14 years, mainly in the Iguatemi, Ivaí, São Francisco Falso and Carapá Rivers, the latter situated in Paraguay, whereas the others are entirely in Brazilian territory. Five lifespan scenarios considering different sediment yield tendencies were simulated with the aid of the software packages SEDIMENT and DPOSIT, both developed by the author of the present work. These packages closely follow the Borland & Miller methodology (the empirical area-reduction method). The soundest of the five scenarios under analysis indicated a forecast lifespan of 168 years, with the reservoir only 1.8% silted by the end of 2016, after 32 years of operation. In addition, the mass balance in the reservoir (inflows minus outflows) between 1986 and 2016 shows that about 2% of the whole Itaipu lake is silted today. Owing to the convergence of both results, which were obtained using different methodologies and independent input data, it can be concluded that the mathematical modeling is satisfactory and calibrated, thus lending credibility to this most recent lifespan estimate.
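
As a rough illustration of the mass-balance idea only (not the Borland & Miller area-reduction procedure used in the study), a simplified sedimentation budget can be sketched as follows; all figures are invented placeholders.

```python
# Annual sediment inflow minus outflow is converted to deposited volume and
# accumulated until a chosen fraction of the storage capacity is lost.
capacity_hm3 = 29_000               # assumed total reservoir volume, hm^3 (placeholder)
bulk_density = 1.1                  # t/m^3, assumed density of consolidated deposits
inflow_mt, outflow_mt = 20.0, 4.0   # assumed sediment load in / out, Mt per year

deposited_hm3, years = 0.0, 0
while deposited_hm3 < 0.5 * capacity_hm3:                      # e.g. half-life of storage
    trapped_mt = inflow_mt - outflow_mt                        # annual mass balance
    deposited_hm3 += trapped_mt * 1e6 / bulk_density / 1e6     # Mt -> m^3 -> hm^3
    years += 1

print(f"{years} years until 50% of storage is silted "
      f"({deposited_hm3 / capacity_hm3:.0%} filled)")
```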

Keywords: Borland and Miller method, hydroelectricity, Itaipu Power Plant, lifespan, mass balance

Procedia PDF Downloads 279
6574 Evaluation of Initial Graft Tension during ACL Reconstruction Using a Three-Dimensional Computational Finite Element Simulation: Effect of the Combination of a Band of Gracilis with the Former Graft

Authors: S. Alireza Mirghasemi, Javad Parvizi, Narges R. Gabaran, Shervin Rashidinia, Mahdi M. Bijanabadi, Dariush G. Savadkoohi

Abstract:

Background: The anterior cruciate ligament (ACL) is one of the most frequently disrupted ligaments. Surgical reconstruction of the ACL is a common practice to treat the disability or chronic instability of the knee. Several factors are associated with the success or failure of ACL reconstruction, including preoperative laxity of the knee, selection of the graft material, surgical technique, graft tension, and postoperative rehabilitation. We aimed to examine the biomechanical properties of each graft type and of initial graft tensioning during ACL reconstruction using a 3-dimensional computational finite element simulation. Methods: A 3-dimensional model of the knee was constructed to investigate the effect of graft tensioning on knee joint biomechanics. Four different grafts were compared: 1) bone-patellar tendon-bone graft (BPTB); 2) hamstring tendon; 3) BPTB with a band of gracilis; 4) hamstring with a band of gracilis. The initial graft tension was set to 0, 20, 40, or 60 N, and the anterior loading was set to 134 N. Findings: The resulting stress pattern and deflection in each of these models were compared to those of the intact knee. The obtained results showed that the combination of a band of gracilis with the former graft (BPTB or hamstring) increases the structural stiffness of the knee. Conclusion: The pretension required during surgery decreases significantly when a band of gracilis is added to the primary graft.

Keywords: ACL reconstruction, deflection, finite element simulation, stress pattern

Procedia PDF Downloads 301
6573 Internet of Things Networks: Denial of Service Detection in Constrained Application Protocol Using Machine Learning Algorithm

Authors: Adamu Abdullahi, On Francisca, Saidu Isah Rambo, G. N. Obunadike, D. T. Chinyio

Abstract:

The paper discusses the potential threat of Denial of Service (DoS) attacks on the Constrained Application Protocol (CoAP) in Internet of Things (IoT) networks. As billions of IoT devices are expected to be connected to the internet in the coming years, these devices are vulnerable to attacks that disrupt their functioning. This research aims to tackle this issue by applying mixed qualitative and quantitative methods for feature selection and extraction, together with clustering algorithms, to detect DoS attacks on CoAP using machine learning algorithms (MLA). The main objective of the research is to enhance the security scheme for CoAP in the IoT environment by analyzing the nature of DoS attacks and identifying a new set of features for detecting them in the IoT network environment. The aim is to demonstrate the effectiveness of the MLA in detecting DoS attacks and to compare it with conventional intrusion detection systems for securing CoAP in the IoT environment. Findings: The research identifies the appropriate node for detecting DoS attacks in the IoT network environment and demonstrates how to detect the attacks through the MLA. The detection accuracy in both the classification and network simulation environments shows that the k-means algorithm scored the highest percentage in the training and testing of the evaluation, and the network simulation platform achieved the highest overall accuracy of 99.93%. This work also reviews conventional intrusion detection systems for securing CoAP in the IoT environment and discusses the DoS security issues associated with CoAP.
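
A minimal sketch of k-means-based DoS flagging on CoAP-like flow features is given below; the synthetic features and two-cluster setup are illustrative assumptions, not the paper's dataset or feature set.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
normal = np.column_stack([rng.normal(20, 5, 300),    # requests per second
                          rng.normal(60, 10, 300)])  # mean payload bytes
attack = np.column_stack([rng.normal(400, 50, 30),   # flood-like request rate
                          rng.normal(15, 5, 30)])    # small payloads
X = StandardScaler().fit_transform(np.vstack([normal, attack]))

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# the smaller cluster is treated as the anomalous (DoS) traffic
labels, counts = np.unique(km.labels_, return_counts=True)
dos_cluster = labels[np.argmin(counts)]
print("flows flagged as DoS:", int((km.labels_ == dos_cluster).sum()))
```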

Keywords: algorithm, CoAP, DoS, IoT, machine learning

Procedia PDF Downloads 85
6572 The Effect of Global Value Chain Participation on Environment

Authors: Piyaphan Changwatchai

Abstract:

Global value chains are important for the current world economy through foreign direct investment. Multinational enterprises' search for efficient locations for each stage of production leads to global production networks and greater global value chain participation by many countries. Global value chain participation affects participating countries in several respects, including the environment, and this effect is ambiguous. As a result, this research aims to study the effect of global value chain participation on countries' CO₂ and methane emissions using quantitative analysis of secondary panel data for sixty countries. The analysis is divided into two types of global value chain participation: forward and backward participation. The results show that, for forward global value chain participation, GDP per capita affects both pollutants in a downward-opening bell curve (inverted-U) shape, and forward participation negatively affects both CO₂ and methane emissions. For backward global value chain participation, GDP per capita likewise affects both pollutants in an inverted-U shape, but backward participation negatively affects methane emissions only. However, when considering Asian countries, forward global value chain participation positively affects CO₂ emissions. The research recommends that countries participating in global value chains promote production with effective environmental management at each stage of the value chain. Examples of such policies are providing incentives to private actors, including domestic producers and MNEs, for green production technology and efficient environmental management, and engaging in international agreements on green production. Furthermore, governments should regulate each stage of production in the value chain toward green production, especially in Asian countries.
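
The kind of panel regression implied here (emissions on GVC participation, GDP per capita and its square for the inverted-U, with country fixed effects) can be sketched as follows; the data frame is synthetic, not the study's sixty-country panel.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n_countries, n_years = 60, 15
df = pd.DataFrame({
    "country": np.repeat(np.arange(n_countries), n_years),
    "gdp_pc": rng.uniform(1, 60, n_countries * n_years),      # thousand USD, placeholder
    "fwd_gvc": rng.uniform(0, 1, n_countries * n_years),      # forward participation share
})
df["gdp_pc_sq"] = df.gdp_pc ** 2
df["co2"] = (0.8 * df.gdp_pc - 0.01 * df.gdp_pc_sq
             - 1.5 * df.fwd_gvc + rng.normal(0, 1, len(df)))  # built-in inverted U

# country fixed effects via dummies; a negative gdp_pc_sq coefficient gives the inverted U
model = smf.ols("co2 ~ gdp_pc + gdp_pc_sq + fwd_gvc + C(country)", data=df).fit()
print(model.params[["gdp_pc", "gdp_pc_sq", "fwd_gvc"]])
```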

Keywords: CO₂ emission, environment, global value chain participation, methane emission

Procedia PDF Downloads 195
6571 An Amended Method for Assessment of Hypertrophic Scars Viscoelastic Parameters

Authors: Iveta Bryjova

Abstract:

Recording of viscoelastic strain-vs-time curves with the aid of the suction method, followed by analysis resulting in the evaluation of standard viscoelastic parameters, is a significant technique for non-invasive contact diagnostics of the mechanical properties of skin and assessment of its condition, particularly in acute burns, hypertrophic scarring (the most common complication of burn trauma) and reconstructive surgery. To eliminate the contribution of skin thickness, the usable viscoelastic parameters deduced from the strain-vs-time curves are restricted to the relative ones (i.e., those expressed as a ratio of two dimensional parameters), such as gross elasticity, net elasticity, biological elasticity and Qu's area parameters, conventionally referred to in the literature and in practice as R2, R5, R6, R7, Q1, Q2, and Q3. With the exception of the parameters R2 and Q1, the remaining ones depend substantially on the position of the inflection point separating the elastic linear and viscoelastic segments of the strain-vs-time curve. The standard algorithm implemented in commercially available devices relies heavily on the experimental observation that the inflection occurs about 0.1 s after the suction switch-on/off, which reduces the credibility of the parameters thus obtained. Although Qu's US 7,556,605 patent suggests a method for improving the precision of the inflection determination, there is still room for non-negligible improvement. In this contribution, a novel method of inflection point determination utilizing the advantageous properties of Savitzky-Golay filtering is presented. The method allows computation of the derivatives of the smoothed strain-vs-time curve, more exact location of the inflection point and, consequently, more reliable values of the aforementioned viscoelastic parameters. The improved applicability of the five inflection-dependent relative viscoelastic parameters is demonstrated by recasting a former study with the new method and comparing its results with those provided by the methods used so far.
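
A sketch of this kind of inflection detection, using Savitzky-Golay smoothing and its analytic derivatives, is given below; the synthetic suction curve, window length and polynomial order are illustrative choices, not the parameters used in the study or in any commercial device.

```python
import numpy as np
from scipy.signal import savgol_filter

fs = 100.0                                    # sampling rate, Hz
t = np.arange(0, 2.0, 1 / fs)                 # 2 s of the suction phase
rng = np.random.default_rng(0)
# synthetic curve: fast elastic rise around t = 0.1 s followed by viscoelastic creep
strain = 0.5 / (1 + np.exp(-(t - 0.1) / 0.03)) + 0.1 * t
strain += rng.normal(0, 0.004, t.size)        # measurement noise

window, order = 11, 3
d1 = savgol_filter(strain, window, order, deriv=1, delta=1 / fs)   # first derivative
d2 = savgol_filter(strain, window, order, deriv=2, delta=1 / fs)   # second derivative

# inflection point: slope maximum, i.e. where the second derivative changes sign
i = int(np.argmax(d1))
print(f"estimated inflection time: {t[i]:.3f} s (curvature there: {d2[i]:.2f})")
```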

Keywords: Savitzky–Golay filter, scarring, skin, viscoelasticity

Procedia PDF Downloads 305
6570 Forecasting Equity Premium Out-of-Sample with Sophisticated Regression Training Techniques

Authors: Jonathan Iworiso

Abstract:

Forecasting the equity premium out-of-sample is a major concern for researchers in finance and emerging markets. The quest for a superior model that can forecast the equity premium with significant economic gains has resulted in several controversies among scholars over the choice of variables and suitable techniques. This research focuses on the application of Regression Training (RT) techniques to forecast the monthly equity premium out-of-sample recursively with an expanding-window method. A broad set of sophisticated regression models of varying complexity was employed. The RT models, including Ridge, Forward-Backward (FOBA) Ridge, the Least Absolute Shrinkage and Selection Operator (LASSO), Relaxed LASSO, Elastic Net, and Least Angle Regression, were trained and used to forecast the equity premium out-of-sample. The empirical investigation of the RT models demonstrates significant evidence of equity premium predictability, both statistically and economically, relative to the benchmark historical average, delivering significant utility gains. The forecasts provide meaningful economic information on mean-variance portfolio investment for investors who are timing the market to earn future gains at minimal risk. Thus, the forecasting models appear to benefit an investor who optimally reallocates a monthly portfolio between equities and risk-free Treasury bills using the equity premium forecasts, at minimal risk.
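
An expanding-window (recursive) out-of-sample forecast with two of the regression-training models named above can be sketched as follows; the predictors and returns are simulated stand-ins for the usual equity premium data set.

```python
import numpy as np
from sklearn.linear_model import Ridge, Lasso

rng = np.random.default_rng(3)
T, k = 240, 12                                    # 20 years of monthly data, 12 predictors
X = rng.normal(size=(T, k))
premium = 0.2 * X[:, 0] - 0.1 * X[:, 1] + rng.normal(scale=1.0, size=T)

start = 120                                       # first out-of-sample month
models = {"ridge": Ridge(alpha=1.0), "lasso": Lasso(alpha=0.05)}
for name, model in models.items():
    forecasts = []
    for t in range(start, T):                     # expanding estimation window
        model.fit(X[:t], premium[:t])
        forecasts.append(model.predict(X[t:t + 1])[0])
    forecasts = np.array(forecasts)
    bench = np.array([premium[:t].mean() for t in range(start, T)])   # historical average
    actual = premium[start:]
    r2_oos = 1 - np.sum((actual - forecasts) ** 2) / np.sum((actual - bench) ** 2)
    print(f"{name}: out-of-sample R^2 vs historical mean = {r2_oos:.3f}")
```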

Keywords: regression training, out-of-sample forecasts, expanding window, statistical predictability, economic significance, utility gains

Procedia PDF Downloads 110
6569 Mathematical and Fuzzy Logic in the Interpretation of the Quran

Authors: Morteza Khorrami

Abstract:

Logic, as an intellectual infrastructure, plays an essential role in the Islamic sciences. Hence, there are a few verses of the Holy Quran whose interpretation is not possible without proper logic. In many verses of the Quran, arguments and responses are requested from the audience, which shows that rules of logic are present in the Quran. The paper, which uses a descriptive and analytic method, tries to show the role of logic in understanding the Quran's reasoning methods, to display some Quranic statements with mathematical symbols, and to point out that these symbols can help in interpretation and in answering some questions and doubts. The paper also notes that the Quran does not use two-valued (Aristotelian) logic in all cases; fuzzy logic can also be found in the Quran.

Keywords: Aristotelian logic, fuzzy logic, interpretation, Holy Quran

Procedia PDF Downloads 686
6568 A Series Solution of Fuzzy Integro-Differential Equation

Authors: Maryam Mosleh, Mahmood Otadi

Abstract:

Hybrid differential equations have a wide range of applications in science and engineering. In this paper, the homotopy analysis method (HAM) is applied to obtain a series solution of the hybrid differential equations. Using the homotopy analysis method, it is possible to find the exact solution or an approximate solution of the problem. Comparisons are made between the improved predictor-corrector method, the homotopy analysis method, and the exact solution. Finally, we illustrate our approach with some numerical examples.

Keywords: Fuzzy number, parametric form of a fuzzy number, fuzzy integrodifferential equation, homotopy analysis method

Procedia PDF Downloads 564
6567 Impact of Pedagogical Techniques on the Teaching of Sports Sciences

Authors: Muhammad Saleem

Abstract:

Background: The teaching of sports sciences encompasses a broad spectrum of disciplines, including biomechanics, physiology, psychology, and coaching. Effective pedagogical techniques are crucial in imparting both theoretical knowledge and practical skills necessary for students to excel in the field. The impact of these techniques on students’ learning outcomes, engagement, and professional preparedness remains a vital area of study. Objective: This study aims to evaluate the effectiveness of various pedagogical techniques used in the teaching of sports sciences. It seeks to identify which methods most significantly enhance student learning, retention, engagement, and practical application of knowledge. Methods: A mixed-methods approach was employed, including both quantitative and qualitative analyses. The study involved a comparative analysis of traditional lecture-based teaching, experiential learning, problem-based learning (PBL), and technology-enhanced learning (TEL). Data were collected through surveys, interviews, and academic performance assessments from students enrolled in sports sciences programs at multiple universities. Statistical analysis was used to evaluate academic performance, while thematic analysis was applied to qualitative data to capture student experiences and perceptions. Results: The findings indicate that experiential learning and PBL significantly improve students' understanding and retention of complex sports science concepts compared to traditional lectures. TEL was found to enhance engagement and provide students with flexible learning opportunities, but its impact on deep learning varied depending on the quality of the digital resources. Overall, a combination of experiential learning, PBL, and TEL was identified as the most effective pedagogical approach, leading to higher student satisfaction and better preparedness for real-world applications. Conclusion: The study underscores the importance of adopting diverse and student-centered pedagogical techniques in the teaching of sports sciences. While traditional lectures remain useful for foundational knowledge, integrating experiential learning, PBL, and TEL can substantially improve student outcomes. These findings suggest that educators should consider a blended approach to pedagogy to maximize the effectiveness of sports science education.

Keywords: sport sciences, pedagogical techniques, health and physical education, problem-based learning, student engagement

Procedia PDF Downloads 34
6566 Reverse Logistics Information Management Using Ontological Approach

Authors: F. Lhafiane, A. Elbyed, M. Bouchoum

Abstract:

The Reverse Logistics (RL) process is considered a complex and dynamic network that involves many stakeholders, such as suppliers, manufacturers, warehouses, retailers, and customers; this complexity is inherent in such a process due to the lack of perfect knowledge and to conflicting information. Ontologies, on the other hand, can be considered an approach to overcoming the problems of knowledge sharing and communication among the various reverse logistics partners. In this paper, we propose a semantic representation based on a hybrid architecture for building the ontologies in a bottom-up way; this method facilitates semantic reconciliation between the heterogeneous information systems (ICT) that support reverse logistics processes and product data.
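
A tiny rdflib sketch shows how heterogeneous reverse-logistics actors and a returned product could be linked in one shared vocabulary; the namespace, class and property names are invented for illustration only.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

RL = Namespace("http://example.org/reverse-logistics#")   # hypothetical namespace
g = Graph()
g.bind("rl", RL)

# minimal class hierarchy for the RL network
for cls in ("Actor", "Supplier", "Retailer", "Customer", "ReturnedProduct"):
    g.add((RL[cls], RDF.type, RDFS.Class))
for sub in ("Supplier", "Retailer", "Customer"):
    g.add((RL[sub], RDFS.subClassOf, RL.Actor))

# a returned product, who returned it, and why
g.add((RL.product42, RDF.type, RL.ReturnedProduct))
g.add((RL.customer7, RDF.type, RL.Customer))
g.add((RL.product42, RL.returnedBy, RL.customer7))
g.add((RL.product42, RL.returnReason, Literal("defective")))

print(g.serialize(format="turtle"))
```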

Keywords: Reverse Logistics, information management, heterogeneity, ontologies, semantic web

Procedia PDF Downloads 494
6565 Generalized Central Paths for Convex Programming

Authors: Li-Zhi Liao

Abstract:

The central path has played a key role in interior point methods. However, the central path may fail to converge even in some convex programming problems with linear constraints. In this paper, generalized central paths are introduced for convex programming. One advantage of the generalized central paths is that they always converge to optimal solutions of the convex programming problem from any initial interior point. Some additional theoretical properties of the generalized central paths are also reported.
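
For context, the standard log-barrier central path for a linearly constrained convex program is recalled below; the paper's generalized central paths are not reproduced here.

```latex
% Convex program with linear constraints and its log-barrier central path
\[
  \min_{x} \; f(x) \quad \text{s.t.} \quad Ax = b,\; x \ge 0,
\]
\[
  x(\mu) \;=\; \arg\min_{x > 0,\; Ax = b}
  \Bigl\{ f(x) \;-\; \mu \sum_{i=1}^{n} \ln x_i \Bigr\}, \qquad \mu > 0,
\]
% the central path is the trajectory \(\{x(\mu) : \mu > 0\}\), followed as \(\mu \downarrow 0\).
```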

Keywords: central path, convex programming, generalized central path, interior point method

Procedia PDF Downloads 330
6564 Investigation of the Role of Lipoprotein a rs10455872 Gene Polymorphism in Childhood Obesity

Authors: Mustafa M. Donma, Ayşen Haksayar, Bahadır Batar, Buse Tepe, Birol Topçu, Orkide Donma

Abstract:

Childhood obesity is an ever-increasing health problem. The association of obesity with severe chronic diseases such as diabetes and cardiovascular disease makes the problem life-threatening. Aside from psychological, societal and metabolic factors, genetic polymorphisms have gained importance in the etiology of obesity in recent years. The aim of this study was to evaluate the relationship between the rs10455872 polymorphism at the lipoprotein (a) locus and the development of childhood obesity. This was a prospective study carried out according to the Declaration of Helsinki. The study protocol was approved by the Institutional Ethics Committee, and the study was supported by Tekirdag Namik Kemal University Rectorate, Scientific Research Projects Coordination Unit (Project No: NKUBAP.02.TU.20.278). A total of 180 children (103 obese (OB) and 77 healthy), aged 6-18 years, without any acute or chronic disease, participated in the study. Two groups were created, OB and healthy controls, and each group was further divided into two subgroups depending on the nature of the polymorphism. Anthropometric measurements were taken during a detailed physical examination, and laboratory tests and TANITA measurements were performed. For the statistical evaluations, SPSS version 28.0 was used, and a P-value smaller than 0.05 was accepted as the level of statistical significance. The distribution of the lipoprotein (a) rs10455872 polymorphism did not differ between the OB and healthy children. Children with the AG genotype in both the OB and control groups had lower body mass index (BMI), diagnostic obesity notation model assessment index (DONMA II), body fat ratio (BFR), C-reactive protein (CRP), and metabolic syndrome index (MetS index) values compared to children with the normal AA genotype. In the OB group, serum iron, vitamin B12, hemoglobin, MCV, and MCH values were higher in the AG genotype group than in children with the normal AA genotype. A significant correlation was found between the MetS index and BFR among OB children with the normal homozygous genotype: the MetS index increased as BFR increased in this group. Such a correlation was not observed in the OB group with the heterozygous AG genotype. To the best of our knowledge, the association of the lipoprotein (a) rs10455872 polymorphism with the etiology of childhood obesity has not been studied before; this study is therefore the first report addressing the AG genotype of this polymorphism as a factor relevant to obesity risk.

Keywords: child, gene polymorphism, lipoprotein (a), obesity, rs10455872

Procedia PDF Downloads 85
6563 Development of Energy Benchmarks Using Mandatory Energy and Emissions Reporting Data: Ontario Post-Secondary Residences

Authors: C. Xavier Mendieta, J. J McArthur

Abstract:

Governments are playing an increasingly active role in reducing carbon emissions, and a key strategy has been the introduction of mandatory energy disclosure policies. These policies have resulted in a significant amount of publicly available data, providing researchers with a unique opportunity to develop location-specific energy and carbon emission benchmarks from this data set, which can then be used to develop building archetypes and to inform urban energy models. This study presents the development of such a benchmark using the public reporting data. Data from Ontario's Ministry of Energy for post-secondary educational institutions are used to develop a series of building-archetype dynamic loads and energy benchmarks to fill a gap in the currently available building database. This paper presents the development of a benchmark for college and university residences within ASHRAE climate zone 6 areas in Ontario using the mandatory disclosure energy and greenhouse gas emissions data. The methodology presented includes data cleaning, statistical analysis, and benchmark development, and lessons learned from this investigation are presented and discussed to inform the development of future energy benchmarks from this larger data set. The key findings from this initial benchmarking study are: (1) careful data screening and outlier identification are essential to develop a valid dataset; (2) the key features used to model the data are building age, size, and occupancy schedules, and these can be used to estimate energy consumption; and (3) policy changes affecting primary energy generation significantly affected greenhouse gas emissions, and consideration of these factors was critical to evaluating the validity of the reported data.
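
The screening-plus-benchmark step can be sketched in a few lines of pandas: compute energy use intensity (EUI), drop IQR outliers, and take a median per climate zone. The column names and figures below are invented, not the Ontario disclosure schema.

```python
import pandas as pd

df = pd.DataFrame({
    "building": ["res_A", "res_B", "res_C", "res_D", "res_E"],
    "floor_area_m2": [12000, 8500, 15000, 9000, 11000],
    "energy_kwh": [2.4e6, 1.75e6, 3.0e6, 9.9e6, 2.14e6],   # res_D looks like an outlier
    "climate_zone": ["6", "6", "6", "6", "6"],
})
df["eui_kwh_m2"] = df.energy_kwh / df.floor_area_m2

# screen outliers with the usual 1.5 * IQR fence before benchmarking
q1, q3 = df.eui_kwh_m2.quantile([0.25, 0.75])
iqr = q3 - q1
clean = df[df.eui_kwh_m2.between(q1 - 1.5 * iqr, q3 + 1.5 * iqr)]

benchmark = clean.groupby("climate_zone").eui_kwh_m2.median()
print(benchmark)   # median EUI per climate zone as a simple residence benchmark
```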

Keywords: building archetypes, data analysis, energy benchmarks, GHG emissions

Procedia PDF Downloads 309