Search results for: trimming threshold selection
2607 Solving LWE by Progressive Pumps and Its Optimization
Authors: Leizhang Wang, Baocang Wang
Abstract:
General Sieve Kernel (G6K) is currently considered the fastest algorithm for the shortest vector problem (SVP) and is the record holder of the open SVP Challenge. We study the lattice basis quality improvement effects of the Workout proposed in G6K, which is composed of a series of Pumps, to solve SVP. Firstly, we use low-dimensional Pump output bases to build a predictor for the quality of high-dimensional Pump output bases. Both theoretical analysis and experimental tests illustrate that it is more computationally expensive to solve LWE problems with the G6K default SVP solving strategy (Workout) than with lattice reduction algorithms (e.g., BKZ 2.0, Progressive BKZ, Pump, and Jump BKZ) that use sieving as their SVP oracle. Secondly, the default Workout in G6K is optimized to achieve a stronger reduction at a lower computational cost. Thirdly, we combine the optimized Workout and the Pump output basis quality predictor to further reduce the computational cost by optimizing the LWE instance selection strategy. In fact, we can solve the TU Darmstadt LWE challenge (n = 65, q = 4225, α = 0.005) 13.6 times faster than the G6K default Workout. Fourthly, we consider a combined two-stage LWE solving strategy (preprocessing by BKZ-β followed by one big Pump). Both stages use the dimension-for-free technique to give new theoretical security estimations for several LWE-based cryptographic schemes. The security estimations show that the security of these schemes under the conservative NewHope core-SVP model is somewhat overestimated. In addition, in the case of the LAC scheme, the LWE instance selection strategy can be optimized to further improve LWE-solving efficiency by 15% and 57%.
Finally, some experiments are implemented to examine the effects of our strategies on Normal Form LWE problems, and the results demonstrate that the combined strategy is four times faster than that of NewHope.
Keywords: LWE, G6K, pump estimator, LWE instances selection strategy, dimension for free
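As a concrete illustration of the conservative core-SVP model and the dimension-for-free idea this abstract refers to (a sketch using the standard literature exponents, not the paper's own estimator), the cost of one sieve call and the approximate blocksize saving can be computed as follows:

```python
import math

def core_svp_bits(b: int, quantum: bool = False) -> float:
    """Core-SVP security in bits for a sieve in blocksize b.

    Uses the standard classical exponent 0.292 and quantum exponent 0.265.
    """
    return (0.265 if quantum else 0.292) * b

def dims_for_free(b: int) -> int:
    """Approximate dimensions saved by the dimension-for-free technique,
    d4f(b) ~ b * ln(4/3) / ln(b / (2*pi*e)), a common asymptotic estimate."""
    return int(b * math.log(4 / 3) / math.log(b / (2 * math.pi * math.e)))

b = 400
print(core_svp_bits(b))                      # cost without d4f
print(core_svp_bits(b - dims_for_free(b)))   # cheaper effective cost with d4f
```

The gap between the two printed values is one way to see why security estimates that ignore dimensions for free are conservative.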
Procedia PDF Downloads 60
2606 Preventing the Drought of Lakes by Using Deep Reinforcement Learning in France
Authors: Farzaneh Sarbandi Farahani
Abstract:
Drought and the decline of lake levels in recent years, due to global warming and excessive use of the water resources that feed lakes, are of great importance, and this research provides a structure for investigating the issue. First, the information required for simulating lake drought is provided, with strong references and the necessary assumptions. An Entity-Component-System (ECS) structure has been used for the simulation, which can incorporate assumptions flexibly. Three major user groups (industry, agriculture, and domestic users) consume water from groundwater and surface water (i.e., streams, rivers, and lakes). Lake Mead has been considered for the simulation, and the information necessary to investigate its drought has also been provided. The results are presented in the form of a scenario-based design and optimal strategy selection. For optimal strategy selection, a deep reinforcement learning algorithm is developed to select the best set of strategies among all possible projects. These results can provide a better view of how to plan to prevent lake drought.
Keywords: drought simulation, Lake Mead, entity component system programming, deep reinforcement learning
Procedia PDF Downloads 90
2605 Using Hierarchical Methodology to Assist the Selection of New Business in Brazilian Companies Incubators
Authors: Izabel Cristina Zattar, Gilberto Passos Lima, Guilherme Schünemann de Oliveira
Abstract:
In Brazil, there are several institutions committed to the development of new businesses based on product innovation. Among them are business incubators, universities, and science institutes. Business incubators can be defined as nurseries for new companies, which may be in the technology segment discussed in this article. Business incubators provide services related to infrastructure, such as physical space and meeting rooms. Besides these services, incubators also offer assistance in the form of information and communication, access to finance, relationship networks, and business monitoring and mentoring processes. Business incubators do not support all technology companies: one of a business incubator's tasks is to assess the nature and feasibility of new business proposals. To assist in this goal, this paper proposes a methodology for evaluating new businesses using the Analytic Hierarchy Process (AHP). This paper presents the concepts used in the assessment methodology for new businesses, concepts that have been tested with positive results in practice. This study consists of three main steps: first, a hierarchy was built, based on the new business manuals used by the business incubators. These books and manuals list business selection requirements, such as innovation status and other technological aspects. Then, a questionnaire was generated in order to guide incubator experts in the pairwise comparisons at all hierarchy levels. The weights of each requirement are calculated from the questionnaire responses. Finally, the proposed method was applied to evaluate five new business proposals that were applying to join a company incubator. The main result is the classification of these new businesses, which helped the incubator experts decide which companies were most eligible to work with.
This classification may also be helpful to the decision-making process of business incubators in future selection processes.
Keywords: Analytic Hierarchy Process (AHP), Brazilian companies incubators, technology companies, incubator
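The AHP weighting step described above can be sketched minimally as follows. This is a generic illustration of deriving priority weights from a reciprocal pairwise comparison matrix via the row geometric-mean approximation of the principal eigenvector; the 3x3 matrix and requirement names are invented, not data from the incubator study:

```python
import math

def ahp_weights(matrix):
    """Approximate AHP priority weights via row geometric means."""
    gms = [math.prod(row) ** (1 / len(row)) for row in matrix]
    total = sum(gms)
    return [g / total for g in gms]

# Hypothetical expert judgments: innovation status is 3x as important as
# market readiness and 5x as important as team experience (reciprocal matrix).
pairwise = [
    [1,     3,     5],
    [1 / 3, 1,     3],
    [1 / 5, 1 / 3, 1],
]
weights = ahp_weights(pairwise)
print([round(w, 3) for w in weights])
```

Each new business proposal would then be scored against the weighted requirements, producing the classification mentioned in the abstract.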
Procedia PDF Downloads 399
2604 Measuring the Embodied Energy of Construction Materials and Their Associated Cost Through Building Information Modelling
Authors: Ahmad Odeh, Ahmad Jrade
Abstract:
Energy assessment is an evidently significant factor when evaluating the sustainability of structures, especially at the early design stage. Today's design practices revolve around the selection of materials that reduce operational energy yet meet disciplinary needs. Operational energy represents a substantial part of a building's lifecycle energy usage, but embodied energy remains an important aspect unaccounted for in the carbon footprint. At the moment, little or no consideration is given to embodied energy, mainly due to the complexity of calculation and the various factors involved. The equipment used, the fuel needed, and the electricity required for each material vary with location, and thus the embodied energy will differ for each project. Moreover, the method and technique used in manufacturing, transporting, and placing a material will have a significant influence on its embodied energy. This anomaly has made it difficult to calculate or even benchmark the usage of such energies. This paper presents a model aimed at helping designers select construction materials based on their embodied energy. Moreover, this paper presents a systematic approach that uses an efficient method of calculation and ultimately provides new insight into construction material selection. The model is developed in a BIM environment, targeting the quantification of embodied energy for construction materials through the three main stages of their life: manufacturing, transportation, and placement. The model contains three major databases, each of which covers a set of the most commonly used construction materials. The first dataset holds information about the energy required to manufacture each material, the second includes information about the energy required to transport it, while the third stores information about the energy required by the tools and cranes needed to place an item in its intended location.
The model provides designers with sets of all available construction materials and their associated embodied energies to use during the design process. Through geospatial data and dimensional material analysis, the model will also be able to automatically calculate the distance between the factories and the construction site. To remain within the sustainability criteria set by LEED, a final database is created and used to calculate the overall construction cost based on RSMeans cost data, and then to automatically recalculate the costs for any modifications. Design criteria that include both operational and embodied energies will cause designers to re-evaluate the current material selection for cost, energy, and, most importantly, sustainability.
Keywords: building information modelling, energy, life cycle analysis, sustainability
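The three-stage lookup the model performs (manufacturing + transport + placement) can be sketched as below. The database structure and all numeric values are invented for illustration; they are not the paper's datasets:

```python
def embodied_energy(material, mass_kg, distance_km, db):
    """Total embodied energy (MJ) = manufacturing + transport + placement."""
    entry = db[material]
    manufacturing = entry["manufacture_mj_per_kg"] * mass_kg
    transport = entry["transport_mj_per_kg_km"] * mass_kg * distance_km
    placement = entry["placement_mj_per_kg"] * mass_kg
    return manufacturing + transport + placement

# Hypothetical database entry; real coefficients vary by location and process.
db = {"concrete": {"manufacture_mj_per_kg": 1.1,
                   "transport_mj_per_kg_km": 0.002,
                   "placement_mj_per_kg": 0.05}}

# 1000 kg of concrete hauled 50 km from factory to site.
print(embodied_energy("concrete", 1000, 50, db))  # → 1250.0
```

In the actual model, the distance argument would come from the geospatial analysis of factory and site locations described above.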
Procedia PDF Downloads 269
2603 Thread Lift: Classification, Technique, and How to Approach the Patient
Authors: Panprapa Yongtrakul, Punyaphat Sirithanabadeekul, Pakjira Siriphan
Abstract:
Background: The thread lift technique has become popular because it is less invasive, requires a shorter operation, has less downtime, and results in fewer postoperative complications. The advantage of the technique is that the thread can be inserted under the skin without the need for long incisions. Currently, there are many thread lift techniques with respect to the specific types of thread used on specific areas, such as the mid-face, lower face, or neck area. Objective: To review thread lift techniques for specific areas according to the type of thread and patient selection, and how to match the most appropriate technique to the patient. Materials and Methods: A literature review was conducted by searching PubMed and MEDLINE, then compiled and summarized. Result: We have divided our protocols into two sections: protocols for short suture techniques, and protocols for long suture techniques. We also created 3D pictures for each technique to enhance understanding and application in a clinical setting. Conclusion: There are advantages and disadvantages to both short suture and long suture techniques. The best outcome for each patient depends on appropriate patient selection and determining the most suitable technique for the defect and the area of patient concern.
Keywords: thread lift, thread lift method, thread lift technique, thread lift procedure, threading
Procedia PDF Downloads 263
2602 Achieving Environmentally Sustainable Supply Chain in Textile and Apparel Industries
Authors: Faisal Bin Alam
Abstract:
Most manufacturing entities leave a negative footprint on nature that demands due attention. Textile industries have one of the longest supply chains and bear liability for a significant environmental impact on our planet. Issues of environmental safety, scarcity of energy and resources, and demand for eco-friendly products have driven research to search for safe and suitable alternatives in apparel processing. Consumer awareness, increased pressure from fashion brands, and actions from local legislative authorities have somewhat improved these practices. The objective of this paper is to reveal the best selection of raw materials and methods of production, taking environmental sustainability into account. The methodology used in this study is exploratory in nature, based on personal experience, field visits to factories in Bangladesh, and secondary sources. Findings are limited to exploring better alternatives to the conventional operations of ready-made garment manufacturing, from fibre selection to final product delivery, thereby showing some ways of achieving a greener environment in the supply chain of a clothing industry.
Keywords: textile and apparel, environmental sustainability, supply chain, production, clothing
Procedia PDF Downloads 137
2601 A Strategic Partner Evaluation Model for the Project Based Enterprises
Authors: Woosik Jang, Seung H. Han
Abstract:
Optimal partner selection is one of the most important factors in pursuing a project's success. However, in practice, there are gaps in the perception of success depending on the role an enterprise plays in the project. This frequently makes the relation between partner evaluation results and final project performance insufficient. To meet these challenges, this study proposes a strategic partner evaluation model that considers the perception gaps between enterprises. Surveys were performed three times: for factor selection, perception gap analysis, and case application. A total of eight factors were then extracted using independent-sample t-tests and the Borich model to set up the evaluation model. Finally, through the case applications, only 16 of the 22 enterprises graded “Good” by the existing model were re-evaluated as “Good”. Conversely, 12 of the 19 enterprises graded “Bad” by the existing model were re-evaluated as “Good”. Consequently, the perception-gap-based evaluation model is expected to improve decision-making quality and enhance the probability of project success.
Keywords: partner evaluation model, project based enterprise, decision making, perception gap, project performance
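The Borich model mentioned above ranks factors by a mean weighted discrepancy score. A minimal sketch of that score, with fabricated survey responses standing in for the study's data:

```python
def borich_score(importance, performance):
    """Borich mean weighted discrepancy score for one evaluation factor:
    average of (importance - performance) weighted by mean importance."""
    n = len(importance)
    mean_importance = sum(importance) / n
    return sum((i - p) * mean_importance
               for i, p in zip(importance, performance)) / n

# Hypothetical 1-5 Likert ratings from five respondents for one factor.
imp = [5, 4, 5, 4, 5]    # perceived importance
perf = [3, 3, 4, 2, 3]   # perceived current performance
print(round(borich_score(imp, perf), 2))
```

Factors with higher scores show the largest gap between importance and performance and would be retained in the evaluation model.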
Procedia PDF Downloads 156
2600 Spatiotemporal Evaluation of Climate Bulk Materials Production in Atmospheric Aerosol Loading
Authors: Mehri Sadat Alavinasab Ashgezari, Gholam Reza Nabi Bidhendi, Fatemeh Sadat Alavinasab Ashkezari
Abstract:
Atmospheric aerosol loading (AAL) from anthropogenic sources is evident in industrial development. The accelerated trends in material consumption at the global scale in recent years demonstrate consumption paradigms sensitive to the planetary boundaries (PB). This paper takes a statistical approach to recognizing the path from the climate-relevant bulk materials production (CBMP) of steel, cement, and plastics to AAL via an updated and validated spatiotemporal distribution. The statistical analysis used the most up-to-date regional and global databases and instrumental technologies. This corresponded to a selection of processes and areas suitable for tracking AAL within the last decade, analyzing the most validated data while exploring behavior functions and models. The results also represent a correlation, within the socio-economic metabolism idea, between the materials specified as macronutrients of society and AAL as a PB with an unknown threshold. The selected country contributors of China, India, and the US, together with the sample country of Iran, show comparable cumulative AAL values versus the bulk materials' domestic extraction and production rates over the study period of 2012 to 2022. Generally, there is a tendency towards a gradual descent in worldwide and regional aerosol concentrations after 2015. According to our evaluation, a considerable share of the human role, equivalent to 20% from CBMP, accounts for the main anthropogenic species of aerosols, including sulfate, black carbon, and organic particulate matter. This study, in an innovative approach, also explores the potential role of AAL control mechanisms in the economic sectors, where ordered and smoothed loading trends are accredited through the disordered phenomena of CBMP and aerosol precursor emissions.
The equilibrium states envisioned support the well-established theory of spin glasses, applicable to physical systems like the Earth and, here, to AAL.
Keywords: atmospheric aerosol loading, material flows, climate bulk materials, industrial ecology
Procedia PDF Downloads 80
2599 Enhanced Cluster Based Connectivity Maintenance in Vehicular Ad Hoc Network
Authors: Manverpreet Kaur, Amarpreet Singh
Abstract:
The demand for vehicular ad hoc networks (VANETs) is increasing day by day due to the various applications and marvelous benefits they offer to VANET users. Clustering is most important for overcoming the connectivity problems of VANETs. In this paper, we propose a new clustering technique, Enhanced Cluster-Based Connectivity Maintenance in Vehicular Ad Hoc Networks. Our objective is to form long-living clusters. The proposed approach groups vehicles into clusters on the basis of the longest list of neighbors. Cluster formation and cluster head selection are performed by the roadside unit (RSU), which reduces the chances of overhead on the network. In the cluster head selection procedure, the RSU elects as cluster head the vehicle whose speed is closest to the average speed; if two vehicles have the same speed closest to the average, they are compared by a new parameter, the distance to their respective destinations, and the vehicle with the largest distance to its destination is chosen as cluster head. Our simulation outcomes show that our technique performs better than the existing technique.
Keywords: VANETs, clustering, connectivity, cluster head, intelligent transportation system (ITS)
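The cluster head rule described above (speed closest to the cluster average, ties broken by the larger distance to destination) can be sketched as follows. The vehicle records are invented for illustration; the paper's actual RSU protocol and simulation are not reproduced here:

```python
def elect_cluster_head(vehicles):
    """Pick the cluster head from a list of dicts with
    'id', 'speed', and 'dist_to_dest' keys.

    Primary criterion: smallest gap to the average speed.
    Tie-break: larger distance to destination (negated for min())."""
    avg_speed = sum(v["speed"] for v in vehicles) / len(vehicles)
    head = min(vehicles,
               key=lambda v: (abs(v["speed"] - avg_speed), -v["dist_to_dest"]))
    return head["id"]

fleet = [
    {"id": "A", "speed": 60, "dist_to_dest": 12},
    {"id": "B", "speed": 72, "dist_to_dest": 30},
    {"id": "C", "speed": 66, "dist_to_dest": 5},
]
print(elect_cluster_head(fleet))  # → C (speed exactly at the average of 66)
```

Dropping vehicle C from the fleet produces a tie between A and B (both 6 km/h from the average), which B wins on its larger distance to destination.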
Procedia PDF Downloads 247
2598 Operating System Based Virtualization Models in Cloud Computing
Authors: Dev Ras Pandey, Bharat Mishra, S. K. Tripathi
Abstract:
Cloud computing is ready to transform the structure of businesses and learning by supplying real-time applications and providing immediate help for small to medium-sized businesses. The ability to run a hypervisor inside a virtual machine is an important feature of virtualization called nested virtualization. In today's growing field of information technology, many virtualization models are available that provide a convenient approach to implementation, but deciding on a single model is difficult. This paper explains the applications of operating system based virtualization in cloud computing and identifies an appropriate model for different specifications and users' requirements. In the present paper, the most popular models were selected, with the selection based on container-based and hypervisor-based virtualization. The selected models were compared against a wide range of user requirements, such as number of CPUs, memory size, nested virtualization support, live migration, and commercial support, and we identified the most suitable virtualization model.
Keywords: virtualization, OS based virtualization, container based virtualization, hypervisor based virtualization
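The requirements-matching comparison described above can be sketched as a simple feature filter. The feature table below is a simplified invention for illustration, not the paper's actual comparison data:

```python
# Hypothetical feature table for two virtualization families.
models = {
    "container-based":  {"nested": False, "live_migration": True},
    "hypervisor-based": {"nested": True,  "live_migration": True},
}

def suitable_models(requirements, table):
    """Return the models that satisfy every stated requirement."""
    return [name for name, feats in table.items()
            if all(feats.get(k) == v for k, v in requirements.items())]

# A user who needs nested virtualization and live migration:
print(suitable_models({"nested": True, "live_migration": True}, models))
```

A real selection would weigh many more attributes (CPU count, memory size, commercial support), but the shortlisting logic is the same.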
Procedia PDF Downloads 328
2597 Explanatory Variables for Crash Injury Risk Analysis
Authors: Guilhermina Torrao
Abstract:
An extensive number of studies have been conducted to determine the factors which influence crash injury risk (CIR); however, uncertainties inherent in the selected variables have been neglected. A review of existing literature is required not only to obtain an overview of the variables and measures but also to ascertain the implications of comparing studies without a systematic view of variable taxonomy. Therefore, the aim of this literature review is to examine and report on peer-reviewed studies in the field of crash analysis and to understand the implications of broad variations in variable selection in CIR analysis. The objective of this study is to demonstrate the variance in variable selection and classification when modeling injury risk involving occupants of light vehicles by presenting an analytical review of the literature. Based on data collected from 64 journal publications reported over the past 21 years, the analytical review discusses the variables selected by each study across an organized list of predictors for CIR analysis and provides a better understanding of the contribution of accident and vehicle factors to injuries acquired by occupants of light vehicles. A cross-comparison analysis demonstrates that almost half the studies (48%) did not consider vehicle design specifications (e.g., vehicle weight), whereas, for those that did, the vehicle age/model year was the most selected explanatory variable, used by 41% of the literature studies. For those studies that included the speed risk factor in their analyses, the majority (64%) used legal speed limit data as a ‘proxy’ for vehicle speed at the moment of a crash, imposing limitations on CIR analysis and modeling. Despite the proven efficiency of airbags in minimizing injury impact following a crash, only 22% of studies included airbag deployment data.
A major contribution of this study is to highlight the uncertainty linked to explanatory variable selection and to identify opportunities for improvement when performing future studies in the field of road injuries.
Keywords: crash, explanatory, injury, risk, variables, vehicle
Procedia PDF Downloads 135
2596 Planning for Location and Distribution of Regional Facilities Using Central Place Theory and Location-Allocation Model
Authors: Danjuma Bawa
Abstract:
This paper aims to explore the capabilities of the location-allocation model in complementing the strides of existing physical planning models in the location and distribution of facilities for regional consumption. The paper is designed to provide a blueprint for the Nigerian government and other donor agencies, especially the Fertilizer Distribution Initiative (FDI) of the federal government, for the revitalization of terrorism-ravaged regions. Theoretical underpinnings of central place theory related to spatial distribution, interrelationships, and threshold prerequisites were reviewed. The study showcases how the Location-Allocation Model (L-AM) alongside Central Place Theory (CPT) was applied in a Geographic Information System (GIS) environment to map and analyze the spatial distribution of settlements, exploit their physical and economic interrelationships, and explore their hierarchical and opportunistic influences. The study was purely spatial qualitative research which largely used secondary data such as the spatial location and distribution of settlements, population figures of settlements, the network of roads linking them, and other landform features. These were sourced from government ministries and an open source consortium. GIS was used as a tool for processing and analyzing such spatial features within the dictum of CPT and L-AM to produce a comprehensive spatial digital plan for the equitable and judicious location and distribution of fertilizer depots in the study area in an optimal way. A population threshold was used as the yardstick for selecting suitable settlements that could stand as service centers for other hinterlands; this was accomplished using the query syntax in ArcMap™. ArcGIS™ Network Analyst was used to conduct the location-allocation analysis for apportioning groups of settlements around such service centers within a given threshold distance.
Most of the techniques and models ever used by utility planners have been centered on straight-line (Euclidean) distances to settlements. Such models neglect impedance cutoffs and the routing capabilities of networks. CPT and L-AM take into consideration both the influential characteristics of settlements and their routing connectivity. The study was undertaken in two terrorism-ravaged Local Government Areas of Adamawa State. Four existing depots in the study area were identified, and 20 more depots in 20 villages were proposed using suitability analysis. Of the 300 settlements mapped in the study area, about 280 were optimally grouped and allocated to the selected service centers within a 2 km impedance cutoff. This study complements the giant strides of the federal government of Nigeria by providing a blueprint for ensuring proper distribution of these public goods, in the spirit of bringing succor to the terrorism-ravaged populace. It will at the same time help boost agricultural activities, thereby lowering food shortages and raising per capita income, as espoused by the government.
Keywords: central place theory, GIS, location-allocation, network analysis, urban and regional planning, welfare economics
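The allocation step above — assigning each settlement to its nearest service center by network (not Euclidean) distance, subject to an impedance cutoff — can be sketched with a toy road graph and Dijkstra's algorithm. The graph, node names, and distances are invented; this only mirrors the logic of the Network Analyst workflow, not its implementation:

```python
import heapq

def shortest_dists(graph, source):
    """Dijkstra over an adjacency dict {node: [(neighbor, km), ...]}."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

def allocate(graph, centers, settlements, cutoff_km=2.0):
    """Assign each settlement to its nearest center within the cutoff;
    settlements beyond the cutoff from every center stay unassigned."""
    tables = {c: shortest_dists(graph, c) for c in centers}
    result = {}
    for s in settlements:
        best = min(centers, key=lambda c: tables[c].get(s, float("inf")))
        if tables[best].get(s, float("inf")) <= cutoff_km:
            result[s] = best
    return result

roads = {"depot": [("v1", 1.0), ("v2", 1.5)],
         "v1": [("v3", 0.8)],
         "v2": [("v4", 1.2)]}
print(allocate(roads, ["depot"], ["v1", "v2", "v3", "v4"]))
```

Here v4 sits 2.7 km from the depot along the road network and is left unassigned by the 2 km cutoff, which is the behavior that distinguishes network allocation from straight-line distance models.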
Procedia PDF Downloads 147
2595 Household Earthquake Absorptive Capacity Impact on Food Security: A Case Study in Rural Costa Rica
Authors: Laura Rodríguez Amaya
Abstract:
The impact of natural disasters on food security can be devastating, especially in rural settings where livelihoods are closely tied to productive assets. In hazard studies, absorptive capacity is seen as a threshold that affects the degree of people’s recovery after a natural disaster. Increasing our understanding of households’ capacity to absorb natural disaster shocks can provide the international community with viable measurements for assessing at-risk communities’ resilience to food insecurity. The purpose of this study is to identify the most important factors in determining a household’s capacity to absorb the impact of a natural disaster. This is an empirical study conducted in six communities in Costa Rica affected by earthquakes. An Earthquake Impact Index was developed for the selection of the communities in this study. The households coded as total loss in the selected communities constituted the sampling frame from which the sample population was drawn. Because the study area is geographically dispersed over a large surface, a hybrid stratified cluster sampling technique was selected. Of the 302 households identified as total loss in the six communities, a total of 126 households were surveyed, constituting 42 percent of the sampling frame. A list of indicators, compiled on theoretical and exploratory grounds for the absorptive capacity construct, served to guide the survey development. These indicators were grouped into the following variables: (1) use of informal safety nets, (2) coping strategy, (3) physical connectivity, and (4) infrastructure damage. A multivariate data analysis was conducted using the Statistical Package for the Social Sciences (SPSS). The results show that informal safety nets, such as assistance from family and friends, exerted the greatest influence on the ability of households to absorb the impact of earthquakes.
In conclusion, communities that experienced the highest environmental impact and human loss became disconnected from the social networks needed to absorb the shock’s impact. This resulted in higher levels of household food insecurity.
Keywords: absorptive capacity, earthquake, food security, rural
Procedia PDF Downloads 253
2594 Prey Selection of the Corallivorous Gastropod Drupella cornus in Jeddah Coast, Saudi Arabia
Authors: Gaafar Omer BaOmer, Abdulmohsin A. Al-Sofyani, Hassan A. Ramadan
Abstract:
Drupella is found on coral reefs throughout the tropical and subtropical shallow waters of the Indo-Pacific region. Drupella is a muricid gastropod and an obligate corallivore, and its population outbreaks can cause significant coral mortality. Belt transect surveys were conducted at two sites (Bohairat and Bayadah) on the Jeddah coast, Saudi Arabia, to assess prey preferences of D. cornus with respect to prey availability through resource selection ratios. Results revealed different levels of prey preference at different age stages and at the different sites. Acropora species with caespitose, corymbose, and digitate growth forms were the preferred prey for recruits and juveniles of Drupella cornus, whereas Acropora variolosa was avoided by D. cornus because of its arborescent colony growth form. Pocillopora, Stylophora, and Millepora were occupied by Drupella cornus less than expected, whereas massive corals of the genus Porites were avoided. High densities of D. cornus were observed on two fragments of Pocillopora damicornis, which may be because of the absence of coral guard crabs of the genus Trapezia. Mean densities of D. cornus per colony for each species showed significant differentiation between the two study sites. The low availability of Acropora colonies on the Bayadah patch reef caused a higher mean density of D. cornus per colony compared to that in Bohairat, whereas the higher mean density of D. cornus per colony on Pocillopora in Bohairat than in Bayadah may be because most of the Pocillopora colonies occupied by D. cornus there were physically broken by anchoring, compared to those in Bayadah. The results indicate that prey preferences seem to depend on both coral genus and colony shape, while mean densities of D. cornus depend on the availability and status of coral colonies.
Keywords: prey availability, resource selection, Drupella cornus, Jeddah, Saudi Arabia
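The resource selection ratios used above can be illustrated with the standard Manly ratio, w = (proportion of use) / (proportion of availability), where w > 1 suggests preference and w < 1 avoidance. The counts below are fabricated for illustration, not survey data from the study:

```python
def selection_ratios(used, available):
    """Manly selection ratios per category.

    used: counts of snails found on each coral genus.
    available: counts of colonies of each genus on the transects.
    """
    total_used = sum(used.values())
    total_avail = sum(available.values())
    return {genus: (used.get(genus, 0) / total_used)
                   / (available[genus] / total_avail)
            for genus in available}

snails_on = {"Acropora": 60, "Pocillopora": 25, "Porites": 15}
colonies  = {"Acropora": 30, "Pocillopora": 30, "Porites": 40}
ratios = selection_ratios(snails_on, colonies)
print({g: round(w, 2) for g, w in ratios.items()})
```

With these made-up counts, Acropora scores w = 2.0 (preferred) and Porites w ≈ 0.38 (avoided), the same qualitative pattern the abstract reports.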
Procedia PDF Downloads 148
2593 A Comparative Analysis of Classification Models with Wrapper-Based Feature Selection for Predicting Student Academic Performance
Authors: Abdullah Al Farwan, Ya Zhang
Abstract:
In today’s educational arena, it is critical to understand educational data and be able to evaluate important aspects, particularly data on student achievement. Educational Data Mining (EDM) is a research area that focuses on uncovering patterns and information in data from educational institutions. Teachers, if they are able to predict their students' class performance, can use this information to improve their teaching. It has evolved into valuable knowledge that can be used for a wide range of objectives; for example, a strategic plan can be used to generate high-quality education. Based on previous data, this paper recommends employing data mining techniques to forecast students' final grades. In this study, five data mining methods, Decision Tree, JRip, Naive Bayes, Multi-layer Perceptron, and Random Forest, with wrapper feature selection, were used on two datasets relating to Portuguese language and mathematics classes. The results showed the effectiveness of using data mining methodologies in predicting student academic success. The classification accuracy achieved with the selected algorithms lies in the range of 70-94%. Among all the selected classification algorithms, the lowest accuracy is achieved by the Multi-layer Perceptron algorithm, close to 70.45%, and the highest accuracy is achieved by the Random Forest algorithm, close to 94.10%. This proposed work can assist educational administrators in identifying poor-performing students at an early stage and perhaps implementing motivational interventions to improve their academic success and prevent dropout.
Keywords: classification algorithms, decision tree, feature selection, multi-layer perceptron, Naïve Bayes, random forest, students’ academic performance
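The wrapper idea above — selecting features by how much they improve a classifier's accuracy rather than by a standalone statistic — can be sketched in miniature. The sketch uses greedy forward selection around a nearest-centroid rule standing in for the paper's five algorithms; the tiny dataset is synthetic (feature 0 is informative, feature 1 is noise):

```python
def accuracy(X, y, feats):
    """Training-set accuracy of a nearest-centroid classifier
    restricted to the chosen feature indices."""
    cents = {}
    for label in sorted(set(y)):
        rows = [x for x, t in zip(X, y) if t == label]
        cents[label] = [sum(r[f] for r in rows) / len(rows) for f in feats]
    correct = 0
    for x, t in zip(X, y):
        pred = min(cents, key=lambda c: sum((x[f] - m) ** 2
                                            for f, m in zip(feats, cents[c])))
        correct += pred == t
    return correct / len(y)

def forward_select(X, y, n_feats):
    """Greedy wrapper: repeatedly add the feature that most
    improves the wrapped classifier's accuracy."""
    selected, remaining = [], list(range(len(X[0])))
    while len(selected) < n_feats and remaining:
        best = max(remaining, key=lambda f: accuracy(X, y, selected + [f]))
        selected.append(best)
        remaining.remove(best)
    return selected

X = [[0.1, 5], [0.2, 1], [0.9, 5], [1.0, 1], [0.0, 3], [1.1, 3]]
y = [0, 0, 1, 1, 0, 1]
print(forward_select(X, y, 1))  # → [0]
```

A production version would evaluate with cross-validation and a real learner (e.g., a random forest), but the wrapper loop itself is the same.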
Procedia PDF Downloads 166
2592 Material Properties Evolution Affecting Demisability for Space Debris Mitigation
Authors: Chetan Mahawar, Sarath Chandran, Sridhar Panigrahi, V. P. Shaji
Abstract:
The ever-growing advancement of space exploration has led to an alarming concern over space debris removal, as debris restricts further launch operations and adventurous space missions; hence, numerous studies have produced technologies for re-entry prediction and material selection processes for mitigating space debris. The selection of material and operating conditions is determined with the objectives of a lightweight structure and the ability to demise faster, subject to spacecraft survivability during the mission. Since the demisability of a spacecraft depends on evolving thermal material properties such as emissivity, specific heat capacity, thermal conductivity, and radiation intensity, this paper presents an analysis of the evolving thermal material properties of spacecraft that affect the demise process, and estimates demise time using a demisability model that incorporates evolving thermal properties for sensible heating followed by the complete or partial break-up of the spacecraft. The demisability analysis thus concludes that the most suitable spacecraft material is the one with the least estimated demise time that fulfills the criteria of both design-for-survivability and design-for-demisability.
Keywords: demisability, emissivity, lightweight, re-entry, survivability
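The sensible-heating stage of a demisability model can be illustrated with a lumped-capacitance energy balance: the time for a fragment at uniform temperature to reach its melting point under a constant net heat flux. This is a deliberately simplified sketch with illustrative property values, not the paper's model, which lets the thermal properties evolve with temperature:

```python
def sensible_heating_time(mass_kg, cp_j_per_kgk, t0_k, t_melt_k,
                          heat_flux_w_m2, area_m2):
    """Lumped-capacitance estimate: t = m * cp * (T_melt - T0) / (q * A),
    assuming constant properties and uniform fragment temperature."""
    return mass_kg * cp_j_per_kgk * (t_melt_k - t0_k) / (heat_flux_w_m2 * area_m2)

# Aluminium-like fragment (assumed round numbers): 2 kg, cp ~ 900 J/(kg K),
# melting point ~ 933 K, net re-entry flux ~ 5e5 W/m^2 over 0.05 m^2.
t = sensible_heating_time(2.0, 900.0, 300.0, 933.0, 5e5, 0.05)
print(round(t, 1))  # seconds to reach the melting temperature
```

Comparing this estimate across candidate materials (differing in cp, melting point, and mass for the same part) is the kind of ranking the abstract's "least estimated demise time" criterion implies, before melting and break-up are accounted for.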
Procedia PDF Downloads 115
2591 Analyzing Safety Incidents using the Fatigue Risk Index Calculator as an Indicator of Fatigue within a UK Rail Franchise
Authors: Michael Scott Evans, Andrew Smith
Abstract:
The feeling of fatigue at work can have devastating consequences. The aim of this study was to investigate whether the well-established objective indicator of fatigue used by the rail industry, the Fatigue Risk Index (FRI) calculator, is an effective indicator of the number of safety incidents in which fatigue could have been a contributing factor. The study received ethics approval from Cardiff University’s Ethics Committee (EC.16.06.14.4547). A total of 901 safety incidents recorded by a single British rail franchise between 1 June 2010 and 31 December 2016 were extracted from the Safety Management Information System (SMIS). The safety incident types identified in which fatigue could have been a contributing factor were: Signal Passed at Danger (SPAD), Train Protection & Warning System (TPWS) activation, Automatic Warning System (AWS) slow to cancel, failed to call, and station overrun. For these 901 incidents, the scheduling system CrewPlan was used to extract the Fatigue Index (FI) score and Risk Index (RI) score of all train drivers on the day of the safety incident. Only the working rosters of 64.2% (N = 578) of drivers (550 male and 28 female), ranging in age from 24 to 65 years (M = 47.13, SD = 7.30), were accessible for analysis. Analysis of all 578 train drivers involved in safety incidents revealed that 99.8% (N = 577) of FI scores fell within or below the identified guideline threshold of 45, and 97.9% (N = 566) of RI scores fell below the 1.6 threshold. These scores represent good practice within the rail industry. The findings indicate that the FRI calculator, as used by this British rail franchise, was not an effective predictor of fatigue-related safety incidents: only 0.2% of FI scores and 2.1% of RI scores exceeded the guideline thresholds.
Further research is needed to determine why such a large proportion of train drivers involved in safety incidents, in which fatigue could have been a contributing factor, nevertheless have such low FI and RI scores, and whether other contributing factors could provide a better indication. Keywords: fatigue risk index calculator, objective indicator of fatigue, rail industry, safety incident
2590 Study for an Optimal Cable Connection within an Inner Grid of an Offshore Wind Farm
Authors: Je-Seok Shin, Wook-Won Kim, Jin-O Kim
Abstract:
An offshore wind farm needs to be designed carefully, considering both economics and reliability. Among the many decision-making problems in designing an entire offshore wind farm, this paper focuses on the inner grid layout, i.e., the connections between wind turbines and between wind turbines and an offshore substation. The methodology proposed in this paper determines the connections and the cable type for each connection section using K-clustering, minimum spanning tree, and cable selection algorithms. A cost evaluation is then performed in terms of investment, power loss, and reliability. Through this cost evaluation, the optimal inner grid layout is determined as the one with the lowest total cost. To demonstrate the validity of the methodology, a case study is conducted on a 240 MW offshore wind farm, and the results show that the methodology is helpful for designing an offshore wind farm optimally. Keywords: offshore wind farm, optimal layout, k-clustering algorithm, minimum spanning tree algorithm, cable type selection, power loss cost, reliability cost
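The minimum spanning tree and cable selection steps can be illustrated with a minimal sketch: Prim's algorithm over turbine positions plus a rating-based cable pick. The coordinates and the cable catalogue below are made-up placeholders; the actual method also includes K-clustering and the investment/loss/reliability cost terms, which are not shown:

```python
import math

def prim_mst(coords):
    """Minimum spanning tree (Prim) over turbine coordinates; index 0 is the
    offshore substation. Returns the list of connections (u, v, length)."""
    n = len(coords)
    dist = lambda a, b: math.dist(coords[a], coords[b])
    in_tree = {0}
    edges = []
    while len(in_tree) < n:
        u, v = min(((i, j) for i in in_tree for j in range(n) if j not in in_tree),
                   key=lambda e: dist(*e))
        edges.append((u, v, dist(u, v)))
        in_tree.add(v)
    return edges

def pick_cable(power_mw, catalogue):
    """Smallest cable type whose rating covers the power routed on the edge.
    catalogue: list of (name, rating_MW, cost_per_km) tuples."""
    for name, rating, cost_per_km in sorted(catalogue, key=lambda c: c[1]):
        if rating >= power_mw:
            return name
    raise ValueError("no cable rated for this load")

layout = [(0, 0), (1, 0), (2, 0), (1, 1)]   # substation + 3 turbines, km
mst = prim_mst(layout)
total_len = sum(length for _, _, length in mst)
```

For the toy layout the tree uses three connections of 1 km each; in practice each edge would then be priced with its selected cable type.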
2589 Determinants of Sustainable Supplier Selection: An Exploratory Study of Tunisian Manufacturing SMEs
Authors: Ahlem Dhahri, Audrey Becuwe
Abstract:
This study examines the adoption of sustainable purchasing practices among Tunisian SMEs, with a focus on assessing how environmental and social sustainability maturity affects the implementation of sustainable supplier selection (SSS) criteria. Using institutional theory to classify coercive, normative, and mimetic pressures, as well as emerging drivers and barriers, this study explores the institutional factors influencing sustainable purchasing practices and the specific barriers faced by Tunisian SMEs in this area. An exploratory, abductive qualitative research design was adopted for this multiple case study, which involved 19 semi-structured interviews with owners and managers of 17 Tunisian manufacturing SMEs. The Gioia method was used to analyze the data, thus enabling the identification of key themes and relationships directly from the raw data. This approach facilitated a structured interpretation of the institutional factors influencing sustainable purchasing practices, with insights drawn from the participants' perspectives. The study reveals that Tunisian SMEs are at different levels of sustainability maturity, with a significant impact on their procurement practices. SMEs with advanced sustainability maturity integrate both environmental and social criteria into their supplier selection processes, while those with lower maturity levels rely mostly on traditional criteria such as cost, quality, and delivery. Key institutional drivers identified include regulatory pressure, market expectations, and stakeholder influence. Additional emerging drivers, such as certifications and standards, economic incentives, environmental commitment as a core value, and group-wide strategic alignment, also play a critical role in driving sustainable procurement. In contrast, the study reveals significant barriers, including economic constraints, limited awareness, and resource limitations.
It also identifies three main categories of emerging barriers: (1) logistical and supply chain constraints, including retailer/intermediary dependency, tariff regulations, and a perceived lack of direct responsibility in B2B supply chains; (2) economic and financial constraints; and (3) operational barriers, such as unilateral environmental responsibility, a product-centric focus, and the influence of personal relationships. Providing valuable insights into the role of sustainability maturity in supplier selection, this study is the first to explore sustainable procurement practices in the Tunisian SME context. The integration of an analysis of institutional drivers, including emerging incentives and barriers, yields practical implications for SMEs seeking to improve sustainability in procurement. The results highlight the need for stronger regulatory frameworks and support mechanisms to facilitate the adoption of sustainable practices among SMEs in Tunisia. Keywords: Tunisian SME, sustainable supplier selection, institutional theory, determinant, qualitative study
2588 Antioxidant Potential of Pomegranate Rind Extract Attenuates Pain, Inflammation and Bone Damage in Experimental Rats
Authors: Ritu Karwasra, Surender Singh
Abstract:
Inflammation is an important physiological response of the body’s self-defense system that helps eliminate harmful stimuli, protect the organism, and repair tissue. It is a highly regulated protective response that helps eliminate the initial cause of cell injury and initiates the process of repair. The present study was designed to evaluate the ameliorative effect of pomegranate rind extract on pain and inflammation. A hydroalcoholic standardized rind extract of pomegranate at doses of 50, 100 and 200 mg/kg, and indomethacin (3 mg/kg), were tested in Eddy's hot plate-induced thermal algesia, carrageenan-induced (acute inflammation) and Complete Freund’s Adjuvant (CFA)-induced (chronic inflammation) models in Wistar rats. Parameters analyzed were inhibition of paw edema, joint diameter, levels of GSH, TBARS, SOD and TNF-α, radiographic imaging, tissue histology, and synovial expression of the pro-inflammatory cytokine receptor TNF-R1. Radiological and light microscopical analyses were carried out to assess bone damage in the CFA-induced chronic inflammation model. The findings of the present study revealed that pomegranate rind extract at a dose of 200 mg/kg caused a significant (p<0.05) reduction in paw swelling in both inflammatory models. The nociceptive threshold was also significantly (p<0.05) improved. Immunohistochemical analysis showed an elevated level of TNF-R1 in the CFA-induced group, whereas a reduction in TNF-R1 was observed with pomegranate (200 mg/kg). Pomegranate thus produced a dose-dependent reduction in inflammation and pain, along with a reduction in oxidative stress markers and improvement in tissue histology, and the effect was comparable to that of indomethacin.
Thus, it can be concluded that pomegranate is a potential therapeutic agent against inflammation and pain, and punicalagin, the major constituent of the rind extract, might be responsible for the activity. Keywords: carrageenan, inflammation, nociceptive-threshold, pomegranate, histopathology
2587 Rainfall Estimation over Northern Tunisia by Combining Meteosat Second Generation Cloud Top Temperature and Tropical Rainfall Measuring Mission Microwave Imager Rain Rates
Authors: Saoussen Dhib, Chris M. Mannaerts, Zoubeida Bargaoui, Ben H. P. Maathuis, Petra Budde
Abstract:
In this study, a new method to delineate rain areas in northern Tunisia is presented. The proposed approach is based on blending the geostationary Meteosat Second Generation (MSG) infrared (IR) channel with the low-earth-orbiting passive Tropical Rainfall Measuring Mission (TRMM) Microwave Imager (TMI). Blending these two products involves two main steps. Firstly, the rainy pixels are identified. This is achieved with a classification using the MSG IR 10.8 channel and the water vapor channel WV 6.2, applying a threshold of less than 11 Kelvin on the temperature difference, which approximates the clouds with a high likelihood of precipitation. The second step consists of fitting the relation between IR cloud-top temperature and the TMI rain rates. The correlation between these two variables is negative, meaning that rainfall intensity increases with decreasing temperature. The fitted equation is applied to the whole day of 15-minute-interval MSG images, which are then summed. To validate this combined product, daily extreme rainfall events that occurred during the period 2007-2009 were selected, using a threshold criterion of large rainfall depth (> 50 mm/day) at one rainfall station or more. The inverse distance interpolation method was applied to generate rainfall maps for the drier summer season (May to October) and the wet winter season (November to April). The evaluation results of the rainfall estimates combining MSG and TMI were very encouraging: all the events were detected as rainy, and the correlation coefficients were much better than those of previously evaluated products over the study area, such as the MSGMPE and PERSIANN products. The combined product performed better during the wet season. We also note an overestimation of the maximal estimated rain for many events. Keywords: combination, extreme, rainfall, TMI-MSG, Tunisia
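The two steps above, a channel-difference rain mask followed by a temperature-to-rain-rate regression, can be sketched as follows. The synthetic brightness temperatures, the linear functional form, and the collocated rain rates are illustrative assumptions, not the MSG/TMI data or the fit actually used in the study:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic collocated pixels (stand-ins for real MSG/TMI data):
t_ir = rng.uniform(190, 290, 1000)        # IR 10.8 brightness temperature, K
t_wv = t_ir + rng.uniform(0, 30, 1000)    # WV 6.2 brightness temperature, K

# Step 1: delineate potentially raining pixels where the WV-IR temperature
# difference is below the 11 K threshold used in the study.
rainy = (t_wv - t_ir) < 11.0

# Step 2: fit rain rate as a (here linear) function of cloud-top temperature
# using collocated TMI rain rates; colder tops -> higher rain rate.
tmi_rain = (np.clip(0.2 * (250.0 - t_ir[rainy]), 0, None)
            + rng.normal(0, 0.5, rainy.sum()))
slope, intercept = np.polyfit(t_ir[rainy], tmi_rain, 1)

def estimate_rain(ir_temp, wv_temp):
    """Apply mask + regression to a new MSG pixel (mm/h, floored at 0)."""
    if wv_temp - ir_temp >= 11.0:
        return 0.0
    return max(0.0, slope * ir_temp + intercept)
```

In the full method this estimator is applied to every 15-minute MSG image of the day and the results are summed into a daily rainfall field.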
2586 Selection of Optimal Reduced Feature Sets of Brain Signal Analysis Using Heuristically Optimized Deep Autoencoder
Authors: Souvik Phadikar, Nidul Sinha, Rajdeep Ghosh
Abstract:
In brainwave research using electroencephalogram (EEG) signals, finding the most relevant and effective feature set for identifying activities in the human brain remains a big challenge because of the random nature of the signals. The feature extraction method is a key issue in solving this problem. Finding features that give distinctive pictures for different activities and similar pictures for the same activity is very difficult, especially as the number of activities grows. Classifier accuracy depends on the quality of this feature set. Further, more features result in high computational complexity, while fewer features can compromise performance. In this paper, a novel idea for the selection of an optimal feature set using a heuristically optimized deep autoencoder is presented. Using various feature extraction methods, a vast number of features are extracted from the EEG signals and fed to the autoencoder deep neural network. The autoencoder encodes the input features into a small set of codes. To avoid the vanishing gradient problem and the need for dataset normalization, a meta-heuristic search algorithm is used to minimize the mean square error (MSE) between encoder input and decoder output. To reduce the feature set into a smaller one, 4 hidden layers are considered in the autoencoder network; hence it is called the Heuristically Optimized Deep Autoencoder (HO-DAE). In this method, no features are rejected; all the features are combined into the responses of the hidden layers. The results reveal that higher accuracy can be achieved using the optimal reduced features. The proposed HO-DAE is also compared with the regular autoencoder to test the performance of both.
The performance of the proposed method is validated and compared with two other methods recently reported in the literature, which reveals that the proposed method is far better in terms of classification accuracy. Keywords: autoencoder, brainwave signal analysis, electroencephalogram, feature extraction, feature selection, optimization
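The core idea, training an autoencoder by minimizing reconstruction MSE with a meta-heuristic search instead of backpropagation, can be sketched in numpy. Simple hill climbing stands in here for whichever meta-heuristic HO-DAE actually uses, and the layer sizes, data, and step size are made-up placeholders:

```python
import numpy as np

rng = np.random.default_rng(2)

def autoencoder_mse(weights, X, sizes):
    """Forward pass through a (bias-free) tanh autoencoder; returns the
    reconstruction MSE between decoder output and encoder input."""
    h, off = X, 0
    for n_in, n_out in zip(sizes[:-1], sizes[1:]):
        W = weights[off:off + n_in * n_out].reshape(n_in, n_out)
        off += n_in * n_out
        h = np.tanh(h @ W)
    return float(((h - X) ** 2).mean())

def n_weights(sizes):
    return sum(a * b for a, b in zip(sizes[:-1], sizes[1:]))

def hill_climb(X, sizes, w0, iters=300, step=0.1):
    """Toy meta-heuristic (hill climbing) standing in for the search used in
    HO-DAE: perturb all weights, keep the candidate if the MSE drops."""
    w, best = w0.copy(), autoencoder_mse(w0, X, sizes)
    for _ in range(iters):
        cand = w + rng.normal(0.0, step, w.size)
        mse = autoencoder_mse(cand, X, sizes)
        if mse < best:
            w, best = cand, mse
    return w, best

X = rng.normal(0.0, 0.5, (64, 8))       # stand-in for extracted EEG features
sizes = [8, 6, 4, 4, 6, 8]              # 4 hidden layers, as in HO-DAE
w0 = rng.normal(0.0, 0.3, n_weights(sizes))
mse0 = autoencoder_mse(w0, X, sizes)
w_opt, mse_opt = hill_climb(X, sizes, w0)
```

After optimization, the bottleneck activations (the 4-unit layer) would serve as the reduced feature set fed to the classifier.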
2585 Investment Projects Selection Problem under Hesitant Fuzzy Environment
Authors: Irina Khutsishvili
Abstract:
In the present research, a decision support methodology for the multi-attribute group decision-making (MAGDM) problem is developed, namely for the selection of investment projects. The objective of the investment project selection problem is to choose the best project among the set of projects seeking investment, or to rank all projects in descending order. The project selection is made considering a set of weighted attributes. To evaluate the attributes in our approach, expert assessments are used. In the proposed methodology, lingual expressions (linguistic terms) given by all experts are used as initial attribute evaluations, since they are the most natural and convenient representation of experts' evaluations. The lingual evaluations are then converted into trapezoidal fuzzy numbers, and the aggregate trapezoidal hesitant fuzzy decision matrix is built. The case is considered where information on the attribute weights is completely unknown. The attribute weights are identified based on the De Luca and Termini information entropy concept, determined in the context of hesitant fuzzy sets. The decisions are made using the extended Technique for Order Performance by Similarity to Ideal Solution (TOPSIS) method under a hesitant fuzzy environment. Hence, the methodology is based on a trapezoidal-valued hesitant fuzzy TOPSIS decision-making model with entropy weights. The ranking of alternatives is performed by the proximity of their distances to both the fuzzy positive-ideal solution (FPIS) and the fuzzy negative-ideal solution (FNIS). For this purpose, the weighted hesitant Hamming distance is used. An example of investment decision-making is shown that clearly explains the procedure of the proposed methodology.
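The entropy-weighted TOPSIS ranking can be sketched in its crisp (non-fuzzy) form; the trapezoidal hesitant fuzzy numbers and the weighted hesitant Hamming distance of the actual methodology are replaced here by ordinary scores and Euclidean distances, and the project data are invented:

```python
import numpy as np

def entropy_weights(X):
    """Entropy-based attribute weights (De Luca-Termini style, crisp version):
    attributes with more divergent scores get larger weights."""
    P = X / X.sum(axis=0)
    E = -(P * np.log(P + 1e-12)).sum(axis=0) / np.log(len(X))
    d = 1.0 - E                        # degree of divergence per attribute
    return d / d.sum()

def topsis(X, weights, benefit):
    """Rank alternatives by relative closeness to the ideal solution."""
    R = X / np.linalg.norm(X, axis=0)              # vector normalization
    V = R * weights
    pis = np.where(benefit, V.max(axis=0), V.min(axis=0))  # positive ideal
    nis = np.where(benefit, V.min(axis=0), V.max(axis=0))  # negative ideal
    d_pos = np.linalg.norm(V - pis, axis=1)
    d_neg = np.linalg.norm(V - nis, axis=1)
    closeness = d_neg / (d_pos + d_neg)
    return np.argsort(-closeness), closeness

# 4 candidate projects x 3 benefit-type attributes, illustrative expert scores
X = np.array([[7., 9., 8.], [8., 7., 8.], [9., 6., 8.], [6., 7., 7.]])
w = entropy_weights(X)
order, closeness = topsis(X, w, benefit=np.array([True, True, True]))
```

`order[0]` is the recommended project; the full hesitant fuzzy variant would perform the same steps on trapezoidal membership values instead of point scores.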
2584 Development of a Multi-Locus DNA Metabarcoding Method for Endangered Animal Species Identification
Authors: Meimei Shi
Abstract:
Objectives: The identification of endangered species, especially the simultaneous detection of multiple species in complex samples, plays a critical role in alleged wildlife crime incidents and in preventing illegal trade. This study aimed to develop a multi-locus DNA metabarcoding method for endangered animal species identification. Methods: Several pairs of universal primers were designed according to conserved mitochondrial gene regions. Experimental mixtures were artificially prepared by mixing well-defined species, including endangered species, e.g., forest musk deer, bear, tiger, pangolin, and sika deer. The artificial samples were prepared with 1-16 well-characterized species at DNA concentrations of 1% to 100%. After multiplex PCR amplification and parameter modification, the amplified products were analyzed by capillary electrophoresis and used for NGS library preparation. DNA metabarcoding was carried out based on Illumina MiSeq amplicon sequencing. The data were processed with quality trimming, read filtering, and OTU clustering; representative sequences were blasted using BLASTn. Results: According to the parameter modification and multiplex PCR amplification results, five primer sets targeting COI, Cytb, 12S, and 16S were selected as the NGS library amplification primer panel. High-throughput sequencing data analysis showed that the established multi-locus DNA metabarcoding method was sensitive and could accurately identify all species in the artificial mixtures, including the endangered animal species Moschus berezovskii, Ursus thibetanus, Panthera tigris, Manis pentadactyla, and Cervus nippon at 1% DNA concentration. In conclusion, the established species identification method provides technical support for customs and forensic scientists to prevent the illegal trade of endangered animals and their products. Keywords: DNA metabarcoding, endangered animal species, mitochondrial nucleic acid, multi-locus
2583 Minimization of Seepage in Sandy Soil Using Different Grouting Types
Authors: Eng. M. Ahmed, A. Ibrahim, M. Ashour
Abstract:
One of the major concerns facing dams is the repair of their structures to prevent seepage under them. In previous years, many existing dams have been treated by grouting, but with varying degrees of success. One major reason for this erratic performance is the unsuitable selection of grouting materials for reducing seepage. Grouting is an effective way to improve the engineering properties of the soil and to reduce its permeability, thereby reducing seepage. The purpose of this paper is to assess the efficiency of currently available grouting materials and techniques from construction, environmental, and economic points of view. Seepage reduction is usually accomplished by either chemical grouting or cementitious grouting using ultrafine cement. In addition, the study compares grouting materials according to their degree of permeability reduction and cost. Seepage reduction is applied as permeation grouting through the installation of a grout curtain. The computer program SEEP/W is employed to model a dam resting on sandy soil, using a grout curtain to reduce seepage quantity and hydraulic gradient with different grouting materials. This study presents a relationship that takes into account the permeability of the soil, the grout curtain spacing, and a new performance parameter that can be used to predict the best selection of grouting materials for seepage reduction. Keywords: seepage, sandy soil, grouting, permeability
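The effect of a grout curtain on under-dam seepage can be illustrated with a deliberately simplified one-dimensional Darcy estimate, treating the curtain as a low-permeability layer in series with the soil along the flow path. This is only a back-of-the-envelope sketch with invented permeabilities, not the two-dimensional SEEP/W analysis of the study:

```python
def equivalent_k(k_soil, k_grout, l_soil, l_curtain):
    """Harmonic (series) mean of permeabilities along a flow path crossing
    the grout curtain: the curtain dominates when k_grout << k_soil."""
    length = l_soil + l_curtain
    return length / (l_soil / k_soil + l_curtain / k_grout)

def darcy_seepage(k, head, path_length, area=1.0):
    """Darcy's law per metre run of dam: q = k * i * A, with i = head / path."""
    return k * (head / path_length) * area

k_soil, k_grout = 1e-4, 1e-7          # m/s, illustrative sandy soil vs grout
q_before = darcy_seepage(k_soil, head=10.0, path_length=21.0)
k_eq = equivalent_k(k_soil, k_grout, l_soil=20.0, l_curtain=1.0)
q_after = darcy_seepage(k_eq, head=10.0, path_length=21.0)
reduction = 1.0 - q_after / q_before
```

Even a 1 m curtain three orders of magnitude less permeable than the soil cuts the estimated seepage by roughly 98% in this simplified view, which is why grout permeability dominates the material-selection comparison.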
2582 Early Indications of the Success of Rehabilitating Degraded Lands through the Green Legacy Project Implemented in Ethiopia
Authors: Tamirat Solomon, Aberash Yohannis, Efrem Gulfo
Abstract:
The plantation of trees that harmonize with the agroecology of the environment has been implemented in Ethiopia out of great concern for a noticeably degraded environment. This study was designed to evaluate the effectiveness of the Green Legacy project: species selection, survival rate, and management status in the study areas. A systematic sampling method was employed to collect the required data from 144 quadrats of 15 m radius, spaced 40 m apart. Additionally, 244 sample households were selected for the socioeconomic study, in addition to secondary data collected from office records. The data were analyzed using multivariate analysis, considering exposure and outcome variables. The findings of this study indicated that four exotic tree species, namely A. saligna, C. fistula, A. indica, and G. robusta, were the commonly selected tree species for degraded land restoration in the study areas. Of the seedlings planted at the four study sites, a total of 79.9% survived; A. saligna was the dominant and best-performing species, while A. indica had the lowest survival across the study area. The age of the seedlings before planting significantly (p = 0.05) affected the survival potential of most species, and the majority (82%) of local communities expressed positive attitudes and willingness to manage the restoration works in the study areas. It is recommended to consider the inclusion of native species in the restoration effort and to evaluate the co-existence of native flora with exotics and their competition for nutrients, water, and light, in addition to their invasive potential in the ecosystem.
In general, before embarking on degraded land restoration, species selection, adequate preparation of seedlings, and a species diversity composition that fits the socioeconomic and ecological demands of the areas must be given attention for the restoration to succeed. Keywords: plantation forest, degraded land, forest restoration, plantation survival, species selection
2581 Optimum Turbomachine Preliminary Selection for Power Regeneration in Vapor Compression Cool Production Plants
Authors: Sayyed Benyamin Alavi, Giovanni Cerri, Leila Chennaoui, Ambra Giovannelli, Stefano Mazzoni
Abstract:
Sustainability concerns about primary energy consumption and pollutant emissions (including CO2) call for methodologies that lower the power absorbed per unit of a given product. Cool production plants based on vapour compression are widely used for many applications: air conditioning, food conservation, domestic refrigerators and freezers, special industrial processes, etc. In the field of cool production, the Yearly Consumed Primary Energy is enormous; thus, saving some percentage of it leads to a big worldwide impact on energy consumption and the related energy sustainability. Among the various techniques to reduce the power required by a Vapour Compression Cool Production Plant (VCCPP), the technique based on Power Regeneration by means of an Internal Direct Cycle (IDC) is considered in this paper. Power produced by the IDC reduces the power needed per unit of Cool Power produced by the VCCPP. The paper contains the basic concepts that lead to developing IDCs and the proposed options for using the IDC power. Among various turbomachine selections, Best Economically Available Technologies (BEATs) have been explored; based on vehicle engine turbochargers, they have been taken into consideration for this application. According to the BEAT database and similarity rules, the best turbomachine selection leads to the minimum nominal power required by the VCCPP main compressor. Results obtained by installing the prototype in an ad hoc designed test bench will be discussed and compared with the expected performance. Forecasts for upgrading VCCPPs in various applications will be given and discussed: 4-6% saving is expected for air conditioning cooling plants and 15-22% for cryogenic plants. Keywords: refrigeration plant, vapour pressure amplifier, compressor, expander, turbine, turbomachinery selection, power saving
2580 The Impact of the Modeling Method of Moisture Emission from the Swimming Pool on the Accuracy of Numerical Calculations of Air Parameters in a Ventilated Natatorium
Authors: Piotr Ciuman, Barbara Lipska
Abstract:
The aim of the presented research was to improve numerical predictions of the air parameter distribution in an actual natatorium through the selection of a calculation formula for the mass flux of moisture emitted from the pool. The selected correlation should ensure the best agreement between the numerical results and the measurements of these parameters in the facility. A numerical model of the natatorium was developed, with boundary conditions prepared on the basis of measurements carried out in the actual facility. Numerical calculations were carried out with ANSYS CFX software, with six formulas being implemented, which in various ways made the moisture emission dependent on the water surface temperature and the air parameters in the natatorium. The results of calculations with these formulas were compared for the distributions of specific humidity, velocity, and temperature in the facility. To select the best formula, the numerical results for these parameters in the occupied zone were validated by comparison with measurements carried out at selected points of this zone. Keywords: experimental validation, indoor swimming pool, moisture emission, natatorium, numerical calculations CFD, thermal and humidity conditions, ventilation
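To make the kind of correlation being compared concrete, here is one frequently quoted empirical formula for evaporation from a free water surface, where the moisture flux depends on the air speed and the humidity-ratio difference between saturated air at the water temperature and the hall air. It is only an illustrative example of this family of formulas, not necessarily one of the six tested in the paper, and the pool numbers are invented:

```python
def pool_moisture_flux(area_m2, x_sat_surface, x_air, air_speed_ms=0.1):
    """Empirical free-water-surface evaporation correlation (illustrative):
        g = (25 + 19 * v) * A * (x_s - x)   [kg/h]
    v   : air speed over the water surface, m/s
    x_s : humidity ratio of saturated air at the water temperature, kg/kg
    x   : humidity ratio of the hall air, kg/kg."""
    theta = 25.0 + 19.0 * air_speed_ms  # evaporation coefficient, kg/(m2*h)
    return theta * area_m2 * (x_sat_surface - x_air)

# Illustrative case: 250 m2 pool, warm water, moderately humid hall air
g = pool_moisture_flux(250.0, x_sat_surface=0.024, x_air=0.013)
```

In a CFD model of a natatorium, a flux like `g` (converted to kg/s) would be imposed as the moisture source boundary condition on the pool surface.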
2579 An Efficient Stud Krill Herd Framework for Solving Non-Convex Economic Dispatch Problem
Authors: Bachir Bentouati, Lakhdar Chaib, Saliha Chettih, Gai-Ge Wang
Abstract:
The economic dispatch (ED) problem is a basic problem of power system operation; its main goal is to find the most favorable generation dispatch for each unit, reducing the overall power generation cost while meeting all system limitations. A recently developed heuristic algorithm called Stud Krill Herd (SKH) is employed in this paper to treat non-convex ED problems. The Krill Herd algorithm is modified using a stud selection and crossover (SSC) operator to enhance solution quality and avoid local optima. SKH is demonstrated on two case study systems, composed of 13-unit and 40-unit test systems, to verify its performance and applicability in solving ED problems. In both systems, SKH successfully obtains the best fuel cost and distributes the load requirements among the online generators. The results showed that the proposed SKH method could reduce the total cost of generation and optimize the fulfillment of the load requirements. Keywords: stud krill herd, economic dispatch, crossover, stud selection, valve-point effect
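The non-convexity in such ED problems typically comes from the valve-point effect, a rectified-sine ripple added to the quadratic fuel cost. The sketch below shows that cost function and a toy random search standing in for the SKH metaheuristic; the two-unit coefficients and demand are invented, not the 13- or 40-unit benchmark data:

```python
import math
import random

UNITS = [  # (a, b, c, e, f, Pmin, Pmax) -- illustrative coefficients
    (150.0, 8.0, 0.012, 100.0, 0.084, 100.0, 300.0),
    (120.0, 9.0, 0.010,  80.0, 0.063,  50.0, 200.0),
]
DEMAND = 350.0  # MW

def fuel_cost(P):
    """Non-convex fuel cost: quadratic term plus the valve-point ripple
    |e * sin(f * (Pmin - P))| for each unit."""
    return sum(a + b * p + c * p * p + abs(e * math.sin(f * (pmin - p)))
               for p, (a, b, c, e, f, pmin, pmax) in zip(P, UNITS))

def random_search(iters=2000, seed=0):
    """Toy stand-in for the SKH metaheuristic: sample unit 1's output and
    let unit 2 absorb the slack so the power balance always holds exactly."""
    rng = random.Random(seed)
    lo1, hi1 = UNITS[0][5], UNITS[0][6]
    lo2, hi2 = UNITS[1][5], UNITS[1][6]
    best_p, best_cost = None, float("inf")
    for _ in range(iters):
        p1 = rng.uniform(lo1, hi1)
        p2 = DEMAND - p1
        if not lo2 <= p2 <= hi2:
            continue                    # infeasible sample -> discard
        cost = fuel_cost([p1, p2])
        if cost < best_cost:
            best_p, best_cost = [p1, p2], cost
    return best_p, best_cost

dispatch, cost = random_search()
```

Because the ripple term makes the landscape multi-modal, gradient-free population methods like SKH are a natural fit; the slack-unit repair above is one simple way to keep every candidate feasible.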
2578 Comparison of Receiver Operating Characteristic Curve Smoothing Methods
Authors: D. Sigirli
Abstract:
The Receiver Operating Characteristic (ROC) curve is a commonly used statistical tool for evaluating the diagnostic performance of screening and diagnostic tests with continuous or ordinal scale results, which aim to predict the probability of the presence or absence of a condition, usually a disease. When the test results are measured as numeric values, sensitivity and specificity can be computed across all possible threshold values which discriminate the subjects as diseased or non-diseased. There are infinitely many possible decision thresholds along the continuum of the test results. The ROC curve presents the trade-off between sensitivity and 1-specificity as the threshold changes. The empirical ROC curve, a non-parametric estimator of the ROC curve, is robust and represents the data accurately. However, especially for small sample sizes, it suffers from variability, and, being a step function, it can yield different false positive rates for a given true positive rate and vice versa. Moreover, since the true ROC curve is smooth while the empirical estimate is jagged, the empirical curve underestimates the true ROC curve. Because the true ROC curve is assumed to be smooth, several methods have been explored to smooth a ROC curve. These include using kernel estimates, using log-concave densities, fitting the parameters of a specified density function to the data by maximum-likelihood fitting of univariate distributions, or creating a probability distribution by fitting the specified distribution to the data and using smooth versions of the empirical distribution functions. In the present paper, we propose a smooth ROC curve estimation based on a boundary-corrected kernel function and compare the performances of ROC curve smoothing methods for diagnostic test results coming from different distributions in different sample sizes.
We performed a simulation study to compare the performances of the different methods for different scenarios with 1000 repetitions. The performance of the proposed method was typically better than that of the empirical ROC curve and only slightly worse than the binormal model when the underlying samples were in fact generated from the normal distribution. Keywords: empirical estimator, kernel function, smoothing, receiver operating characteristic curve
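Kernel smoothing of a ROC curve can be sketched by smoothing the two group CDFs with a Gaussian kernel and tracing (FPR, TPR) over a threshold grid. This is the plain (uncorrected) kernel version with invented scores and a fixed bandwidth, not the boundary-corrected estimator proposed in the paper:

```python
import math

def normal_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def kernel_cdf(x, sample, h):
    """Gaussian-kernel-smoothed estimate of the CDF at x (bandwidth h)."""
    return sum(normal_cdf((x - s) / h) for s in sample) / len(sample)

def smooth_roc(diseased, healthy, h=0.5, n_grid=201):
    """(FPR, TPR) pairs from kernel-smoothed CDFs of both groups:
    TPR(c) = 1 - F_diseased(c), FPR(c) = 1 - F_healthy(c)."""
    lo = min(diseased + healthy) - 3.0 * h
    hi = max(diseased + healthy) + 3.0 * h
    grid = [lo + i * (hi - lo) / (n_grid - 1) for i in range(n_grid)]
    return [(1.0 - kernel_cdf(c, healthy, h), 1.0 - kernel_cdf(c, diseased, h))
            for c in grid]

healthy  = [0.1, 0.4, 0.5, 0.8, 1.0, 1.2]   # illustrative test scores
diseased = [0.9, 1.3, 1.6, 1.8, 2.1, 2.5]
roc = smooth_roc(diseased, healthy)
# Trapezoidal area under the smoothed curve (FPR decreases along the grid).
auc = sum((roc[i][0] - roc[i + 1][0]) * (roc[i][1] + roc[i + 1][1]) / 2.0
          for i in range(len(roc) - 1))
```

Unlike the empirical step function, this curve is differentiable everywhere; a boundary-corrected kernel would further adjust `kernel_cdf` near the ends of the support.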