Search results for: data transfer optimization
26364 Computational Fluid Dynamics Analysis of a Biomass Burner Gas Chamber in OpenFOAM
Authors: Óscar Alfonso Gómez Sepúlveda, Julián Ernesto Jaramillo, Diego Camilo Durán
Abstract:
The global climate crisis has affected many aspects of human life, and in an effort to reverse its effects, there is a drive to optimize and improve equipment and plants that produce high CO₂ emissions, which can be achieved through numerical simulation. Such equipment includes biomass combustion chambers. The objective of this research is to visualize the thermal behavior of a gas chamber used in the production of vegetable extracts. The simulation is carried out in OpenFOAM, taking into account conservation of energy, turbulence, and radiation; for the purposes of the simulation, combustion is omitted and replaced by heat generation. In the results, the streamlines generated by the primary and secondary flows are analyzed to verify whether they produce the expected effect and whether the energy is used to the maximum. Radiation is included to compare its influence and to reduce the computational time needed for mesh analysis. An analysis with simplified geometries and experimental data corroborates the selection of the models, showing that the appropriate turbulence model is the standard k-ω. As verification, a general energy balance is compared with the results of the numerical analysis; the error is 1.67%, which is considered acceptable. Among the improvement options explored, the implementation of fins can increase heat transfer by up to 7.3%.
Keywords: CFD analysis, biomass, heat transfer, radiation, OpenFOAM
Procedia PDF Downloads 118
26363 Performance Evaluation of Distributed Deep Learning Frameworks in Cloud Environment
Authors: Shuen-Tai Wang, Fang-An Kuo, Chau-Yi Chou, Yu-Bin Fang
Abstract:
2016 became the year of the artificial intelligence explosion. AI technologies have matured to the point that most well-known tech giants are investing heavily to expand their AI capabilities. Machine learning is the science of getting computers to act without being explicitly programmed, and deep learning is a subset of machine learning that uses deep neural networks to learn features directly from data. Deep learning enables many machine learning applications that expand the field of AI. At present, deep learning frameworks are widely deployed on servers for deep learning applications in both academia and industry. Training deep neural networks involves many standard processes and algorithms, but performance can differ across frameworks. In this paper, we evaluate the running performance of two state-of-the-art distributed deep learning frameworks that parallelize training across multiple GPUs and multiple nodes in our cloud environment. We evaluate the training performance of the frameworks with the ResNet-50 convolutional neural network and analyze the factors that account for the performance differences between the two distributed frameworks. Through the experimental analysis, we identify overheads that could be further optimized. The main contribution is that the evaluation results provide directions for further optimization in both performance tuning and algorithmic design.
Keywords: artificial intelligence, machine learning, deep learning, convolutional neural networks
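The multi-GPU throughput comparison described in this abstract can be summarized with a simple scaling-efficiency metric. The sketch below is illustrative only: the GPU counts and images/sec figures are hypothetical assumptions, not the paper's measurements.

```python
def scaling_efficiency(throughput_1, throughput_n, n_workers):
    """Fraction of ideal linear speedup achieved when scaling
    from 1 worker to n_workers (1.0 = perfect scaling)."""
    if n_workers < 1 or throughput_1 <= 0:
        raise ValueError("need n_workers >= 1 and positive baseline throughput")
    return throughput_n / (n_workers * throughput_1)

# Hypothetical ResNet-50 images/sec figures for two frameworks on 4 GPUs
baseline = 210.0       # single-GPU throughput
framework_a = 760.0    # 4-GPU throughput, framework A
framework_b = 655.0    # 4-GPU throughput, framework B

eff_a = scaling_efficiency(baseline, framework_a, 4)
eff_b = scaling_efficiency(baseline, framework_b, 4)
# The gap between eff_a and eff_b is the kind of overhead
# (communication, input pipeline) such an evaluation aims to isolate.
```

The further each efficiency falls below 1.0, the larger the parallelization overhead that performance tuning could recover.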
Procedia PDF Downloads 211
26362 Calculation of Pressure-Varying Langmuir and Brunauer-Emmett-Teller Isotherm Adsorption Parameters
Authors: Trevor C. Brown, David J. Miron
Abstract:
Gas-solid physical adsorption methods are central to the characterization and optimization of effective surface area, pore size, and porosity for applications such as heterogeneous catalysis and gas separation and storage. Properties such as adsorption uptake, capacity, equilibrium constants, and Gibbs free energy depend on the composition and structure of both the gas and the adsorbent. However, challenges remain in accurately calculating these properties from experimental data. Gas adsorption experiments involve measuring the amounts of gas adsorbed over a range of pressures under isothermal conditions. Various constant-parameter models, such as the Langmuir and Brunauer-Emmett-Teller (BET) theories, are used to extract adsorbate and adsorbent properties from the isotherm data. These models typically do not provide accurate interpretations across the full range of pressures and temperatures. The Langmuir adsorption isotherm is a simple approximation for modelling equilibrium adsorption data and has been effective in estimating surface areas and catalytic rate laws, particularly for high-surface-area solids. The Langmuir isotherm assumes the systematic filling of identical adsorption sites up to monolayer coverage. The BET model is based on the Langmuir isotherm and allows for the formation of multiple layers; these additional layers do not interact with the first layer, and their energetics are equal to those of the adsorbate as a bulk liquid. The BET method is widely used to measure the specific surface area of materials. Both the Langmuir and BET models assume that the affinity of the gas for all adsorption sites is identical, so the calculated monolayer uptake and equilibrium constant are independent of coverage and pressure. Accurate representations of adsorption data have been achieved by extending the Langmuir and BET models to include pressure-varying uptake capacities and equilibrium constants.
These parameters are determined using a novel regression technique called flexible least squares for time-varying linear regression. For isothermal adsorption, the adsorption parameters are assumed to vary slowly and smoothly with increasing pressure. The flexible least squares for pressure-varying linear regression (FLS-PVLR) approach assumes two distinct types of discrepancy terms, dynamic and measurement, for all parameters in the linear equation used to simulate the data. Dynamic terms account for pressure variation in successive parameter vectors, and measurement terms account for differences between observed and theoretically predicted outcomes via linear regression. The resulting pressure-varying parameters are optimized by minimizing both the dynamic and the measurement residual squared errors. This methodology has been validated by simulating adsorption data for n-butane and isobutane on activated carbon at 298 K, 323 K, and 348 K, and for nitrogen on mesoporous alumina at 77 K, with pressure-varying Langmuir and BET adsorption parameters (equilibrium constants and uptake capacities). This modeling provides information on variations in the adsorbent (accessible surface area and micropore volume), the adsorbate (molecular areas and volumes), and the thermodynamics (Gibbs free energies) of the adsorption sites.
Keywords: Langmuir adsorption isotherm, BET adsorption isotherm, pressure-varying adsorption parameters, adsorbate and adsorbent properties and energetics
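For reference, the constant-parameter Langmuir model that the pressure-varying approach generalizes can be fitted by ordinary linear regression on its linearized form, P/q = 1/(K·q_m) + P/q_m. The sketch below recovers the uptake capacity q_m and equilibrium constant K from synthetic data; the numerical values are illustrative, not the paper's.

```python
def fit_langmuir(pressures, uptakes):
    """Least-squares fit of the linearized Langmuir isotherm
    P/q = 1/(K*qm) + P/qm, returning (qm, K)."""
    xs = pressures
    ys = [p / q for p, q in zip(pressures, uptakes)]  # P/q is linear in P
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    intercept = my - slope * mx
    qm = 1.0 / slope       # monolayer uptake capacity (slope = 1/qm)
    K = slope / intercept  # equilibrium constant (intercept = 1/(K*qm))
    return qm, K

# Synthetic data generated from qm = 2.5, K = 0.8 via q = qm*K*P / (1 + K*P)
P = [0.5, 1.0, 2.0, 4.0, 8.0]
q = [2.5 * 0.8 * p / (1 + 0.8 * p) for p in P]
qm_fit, K_fit = fit_langmuir(P, q)   # recovers ~2.5 and ~0.8
```

A pressure-varying fit replaces the single (qm, K) pair with a smoothly varying sequence of such pairs, one per pressure point, penalizing abrupt changes between neighbors.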
Procedia PDF Downloads 234
26361 Predicting Data Center Resource Usage Using Quantile Regression to Conserve Energy While Fulfilling the Service Level Agreement
Authors: Ahmed I. Alutabi, Naghmeh Dezhabad, Sudhakar Ganti
Abstract:
Data centers have grown in size and demand continuously over the last two decades. Planning for the deployment of resources has been shallow and has always resorted to over-provisioning. Data center operators try to maximize the availability of their services by allocating multiples of the needed resources. One resource that has been wasted, with little thought, is energy. In recent years, programmable resource allocation has paved the way for more efficient and robust data centers. In this work, we examine the predictability of resource usage in a data center environment. We use a number of models that cover a wide spectrum of machine learning categories. We then establish a framework to guarantee the client service level agreement (SLA). Our results show that using prediction can cut energy loss by up to 55%.
Keywords: machine learning, artificial intelligence, prediction, data center, resource allocation, green computing
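Quantile regression, named in the title, fits a conditional quantile rather than the mean by minimizing the pinball loss; predicting a high quantile of resource usage yields a provisioning level that is rarely exceeded, which is how prediction can conserve energy while protecting the SLA. A minimal sketch of the loss follows; the quantile value and usage numbers are illustrative assumptions, not the paper's data.

```python
def pinball_loss(y_true, y_pred, tau):
    """Average pinball (quantile) loss; minimized when y_pred tracks
    the tau-quantile of the conditional distribution of y_true."""
    total = 0.0
    for y, p in zip(y_true, y_pred):
        diff = y - p
        total += tau * diff if diff >= 0 else (tau - 1) * diff
    return total / len(y_true)

# Observed CPU usage (%) vs. two constant provisioning levels
usage = [40, 55, 48, 90, 52]
low = [50] * 5     # under-provisions the 90% spike
high = [95] * 5    # always safe but wasteful

# At tau = 0.9, under-prediction is penalized 9x more than over-prediction,
# so covering the spike matters more than trimming slack.
loss_low = pinball_loss(usage, low, 0.9)
loss_high = pinball_loss(usage, high, 0.9)
```

A fitted quantile regressor would sit between these extremes, tracking the 90th percentile of usage over time instead of a constant level.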
Procedia PDF Downloads 108
26360 Baseline Study for Performance Evaluation of New Generation Solar Insulation Films for Windows: A Test Bed in Singapore
Authors: Priya Pawar, Rithika Susan Thomas, Emmanuel Blonkowski
Abstract:
Due to the solar geometry of Singapore, which lies within the equatorial tropics, a great deal of thermal energy is transferred to the inside of buildings. With the changing face of economic development in cities like Singapore, more and more buildings are designed to be lightweight, using transparent construction materials such as glass. Increased demand for energy efficiency and reduced cooling loads makes it important for building designers and operators to adopt new, non-invasive technologies to achieve building energy efficiency targets. A real-time performance evaluation study was undertaken at the School of Art, Design and Media (SADM), Singapore, to determine the efficiency potential of a new generation solar insulation film. The building has a window-to-wall ratio (WWR) of 100% and is fitted with high-performance (low-emissivity) double-glazed units. The empirical data collected were then used to calibrate a computerized simulation model to understand the annual energy consumption under existing conditions (baseline performance). It was found that quantifying the correlations of parameters such as solar irradiance, solar heat flux, and outdoor air temperature is significantly important for determining the cooling load during a given testing period.
Keywords: solar insulation film, building energy efficiency, tropics, cooling load
Procedia PDF Downloads 193
26359 Prosperous Digital Image Watermarking Approach by Using DCT-DWT
Authors: Prabhakar C. Dhavale, Meenakshi M. Pawar
Abstract:
Every day, tons of data are embedded in digital media or distributed over the internet. The data are distributed in such a way that they can easily be replicated without error, putting the rights of their owners at risk. Even when encrypted for distribution, data can easily be decrypted and copied. One way to discourage illegal duplication is to insert information, known as a watermark, into potentially valuable data in such a way that it is impossible to separate the watermark from the data. These challenges have motivated researchers to carry out intense research in the field of watermarking. A watermark is a form, image, or text impressed onto paper that provides evidence of its authenticity; digital watermarking is an extension of the same concept. There are two types of watermarks: visible and invisible. In this project, we have concentrated on embedding watermarks in images. The main consideration for any watermarking scheme is its robustness to various attacks.
Keywords: watermarking, digital, DCT-DWT, security
Procedia PDF Downloads 423
26358 Machine Learning Data Architecture
Authors: Neerav Kumar, Naumaan Nayyar, Sharath Kashyap
Abstract:
Most companies see increasing adoption of machine learning (ML) applications across internal and external-facing use cases. ML applications vend output in either batch or real-time patterns. A complete batch ML pipeline architecture comprises data sourcing, feature engineering, model training, model deployment, and model output vending into a data store for downstream applications. Due to unclear role expectations, we have observed that scientists specializing in building and optimizing models invest significant effort in building the other components of the architecture, which we do not believe is the best use of scientists' bandwidth. We propose a system architecture, created using AWS services, that brings industry best practices to managing the workflow and simplifies model deployment and end-to-end data integration for an ML application. This narrows the scope of scientists' work to model building and refinement, while specialized data engineers take over deployment, pipeline orchestration, data quality, the data permission system, etc. The pipeline infrastructure is built and deployed as code (using Terraform, CDK, CloudFormation, etc.), which makes it easy to replicate and/or extend the architecture to other models used in an organization.
Keywords: data pipeline, machine learning, AWS, architecture, batch machine learning
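The batch pipeline stages enumerated in this abstract can be sketched as a simple composition, independent of any particular AWS service. Every stage implementation below is a placeholder invented for illustration, not the paper's system; only the stage boundaries mirror the architecture described.

```python
def source_data():
    """Data sourcing stage (placeholder: returns raw records)."""
    return [{"clicks": 3, "views": 10}, {"clicks": 7, "views": 20}]

def engineer_features(raw):
    """Feature engineering stage: derive click-through rate."""
    return [{"ctr": r["clicks"] / r["views"]} for r in raw]

def train_model(features):
    """Model training stage (placeholder: mean-CTR threshold 'model').
    This is the one stage the architecture leaves to scientists."""
    mean_ctr = sum(f["ctr"] for f in features) / len(features)
    return lambda f: "high" if f["ctr"] > mean_ctr else "low"

def vend_output(model, features, store):
    """Score each row and vend output into a data store (a dict here)."""
    store["scores"] = [model(f) for f in features]

# One batch run wiring the stages together
store = {}
raw = source_data()
features = engineer_features(raw)
model = train_model(features)
vend_output(model, features, store)
```

Keeping the stage interfaces this narrow is what lets data engineers own sourcing, orchestration, and vending while scientists only touch train_model.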
Procedia PDF Downloads 64
26357 Observation of a Phase Transition in Adsorbed Hydrogen at 101 Kelvin
Authors: Raina J. Olsen, Andrew K. Gillespie, John W. Taylor, Cristian I. Contescu, Peter Pfeifer, James R. Morris
Abstract:
While adsorbent surfaces such as graphite are known to increase the melting temperature of solid H2, this effect is normally rather small, raising it to 20 Kelvin (K) from 14 K in the bulk. An as-yet unidentified phase transition has been observed in a system of H2 adsorbed in a porous, locally graphitic Saran carbon with sub-nanometer pores, at temperatures (74-101 K) and pressures (> 76 bar) well above the critical point of bulk H2, using hydrogen adsorption and neutron scattering experiments. Adsorption data show a discontinuous pressure jump in the kinetics at 76 bar after nearly an hour of equilibration time, which is identified as an exothermic phase transition. This discontinuity is observed in the 87 K isotherm, but not the 77 K isotherm. At higher pressures, the measured isotherms show greater excess adsorption at 87 K than at 77 K. Inelastic neutron scattering measurements also show a striking phase transition, with the amount of high-angle scattering (corresponding to large momentum transfer and large effective mass) increasing by up to a factor of 5 in the novel phase. During the course of the neutron scattering experiment, three of these reversible spectral phase transitions were observed to occur in response only to changes in sample temperature. The novel phase was observed by neutron scattering only at high H2 pressure (123 bar and 187 bar) and temperatures between 74-101 K in the sample of interest, but not at low pressure (30 bar) or in a control activated carbon at 186 bar of H2 pressure. Based on several of the more unusual observations, such as the slow equilibration and the presence of both an upper and a lower temperature bound, a reasonable hypothesis is that this phase forms only in the presence of a high concentration of ortho-H2 (nuclear spin S=1). The increase in adsorption with temperature, at temperatures which cross the lower temperature bound observed by neutron scattering, indicates that this novel phase is denser.
Structural characterization data on the adsorbent show that it may support a commensurate solid phase denser than those known to exist on graphite at much lower temperatures. Whatever this phase is eventually proven to be, these results show that surfaces can have a more striking effect on hydrogen phases than previously thought.
Keywords: adsorbed phases, hydrogen, neutron scattering, nuclear spin
Procedia PDF Downloads 466
26356 Blind Hybrid ARQ Retransmissions with Different Multiplexing between Time and Frequency for Ultra-Reliable Low-Latency Communications in 5G
Authors: Mohammad Tawhid Kawser, Ishrak Kabir, Sadia Sultana, Tanjim Ahmad
Abstract:
A promising service category of 5G, popularly known as Ultra-Reliable Low-Latency Communications (URLLC), is devoted to providing users with the staunchest fail-safe connections in a split second. The reliability of data transfer offered by Hybrid ARQ (HARQ) should be employed, as URLLC applications are highly error-sensitive. However, the delay added by HARQ ACK/NACK feedback and retransmissions can degrade performance, as URLLC applications are highly delay-sensitive too. To improve latency while maintaining reliability, this paper proposes the use of blind transmissions of redundancy versions exploiting the frequency diversity of the wide bandwidth of 5G. The blind HARQ retransmissions proposed so far consider narrow-bandwidth cases, for example, dedicated short range communication (DSRC), shared channels for device-to-device (D2D) communication, etc., and thus do not gain much from frequency diversity. The proposal also combines blind and ACK/NACK-based retransmissions with different multiplexing options between time and frequency, depending on the current radio channel quality and the stringency of the latency requirements. The wide bandwidth of 5G justifies that the proposed blind retransmission, without waiting for ACK/NACK, is not palpably extravagant. A simulation is performed to demonstrate the latency improvement of the proposed scheme.
Keywords: 5G, URLLC, HARQ, latency, frequency diversity
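To see why blind retransmissions trade spectrum for latency, compare residual error probability and worst-case delay for k blind copies sent over independently faded frequency resources versus k ACK/NACK-driven attempts. The toy model below uses invented per-attempt error rates and timing values, not 5G numerology; it only illustrates the structure of the trade-off.

```python
def residual_error(p_err, k):
    """Probability that all k independent transmission attempts fail."""
    return p_err ** k

def blind_latency(tx_time, k):
    """k redundancy versions sent back-to-back on different frequency
    resources, without waiting for feedback between them."""
    return k * tx_time

def harq_latency(tx_time, feedback_time, k):
    """ACK/NACK-based HARQ: each retransmission waits for feedback."""
    return k * tx_time + (k - 1) * feedback_time

p_err, k = 0.1, 3      # assumed per-attempt error rate and attempt count
tx, fb = 0.125, 0.5    # assumed ms per transmission / per feedback cycle

blind = blind_latency(tx, k)       # 0.375 ms
harq = harq_latency(tx, fb, k)     # 1.375 ms
# Both schemes reach the same ~1e-3 residual error, but the blind scheme
# saves the two feedback round trips at the cost of extra spectrum.
```

The independence assumption is what the wide 5G bandwidth buys: copies placed on well-separated frequency resources fade roughly independently, so the product form of residual_error approximately holds.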
Procedia PDF Downloads 37
26355 A Comparison of Image Data Representations for Local Stereo Matching
Authors: André Smith, Amr Abdel-Dayem
Abstract:
The stereo matching problem, though present for several decades, continues to be an active area of research. The goal of this research is to find correspondences between elements in a pair of stereoscopic images. With these pairings, it is possible to infer the distance of objects within a scene relative to the observer. Advances in this field have led to experimentation with various techniques, from graph-cut energy minimization to artificial neural networks. At the basis of these techniques is a cost function, which evaluates the likelihood of a particular match between points in each image. While the cost is, at its core, based on comparing image pixel data, there is a general lack of consistency as to which image data representation to use. This paper presents an experimental analysis comparing the effectiveness of the more common image data representations. The goal is to determine how well these representations reduce the cost of the correct correspondence relative to other possible matches.
Keywords: colour data, local stereo matching, stereo correspondence, disparity map
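A common baseline cost function for local stereo matching is the sum of absolute differences (SAD) over pixel windows: for a good representation, the correct disparity yields a lower cost than competing ones. A minimal single-scanline sketch, with made-up pixel values (the paper compares richer representations, e.g. colour, but the cost mechanics are the same):

```python
def sad_cost(left, right, x, disparity, half_window=1):
    """Sum of absolute differences between a window centered at x in
    the left scanline and the same window shifted by `disparity`
    in the right scanline."""
    cost = 0
    for dx in range(-half_window, half_window + 1):
        cost += abs(left[x + dx] - right[x + dx - disparity])
    return cost

# One scanline pair: the right image is the left shifted by 2 pixels,
# so the true disparity at every textured pixel is 2
left_row = [10, 10, 80, 90, 85, 10, 10, 10]
right_row = [80, 90, 85, 10, 10, 10, 10, 10]

costs = {d: sad_cost(left_row, right_row, 3, d) for d in range(0, 3)}
best = min(costs, key=costs.get)   # disparity 2 gives the minimum cost
```

The paper's question is, in effect, which pixel representation maximizes the margin between costs[best] and the runner-up.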
Procedia PDF Downloads 370
26354 Business-Intelligence Mining of Large Decentralized Multimedia Datasets with a Distributed Multi-Agent System
Authors: Karima Qayumi, Alex Norta
Abstract:
The rapid generation of high volumes and a broad variety of data from the application of new technologies poses challenges for generating business intelligence. Most organizations and business owners need to extract data from multiple sources and apply analytical methods to develop their business. The recently decentralized data management environment therefore relies on a distributed computing paradigm. While data are stored in highly distributed systems, implementing distributed data-mining techniques is a challenge. The aim of these techniques is to gather knowledge from every domain and from all the datasets stemming from distributed resources. As agent technologies offer significant contributions to managing the complexity of distributed systems, we consider them for next-generation data-mining processes. To demonstrate agent-based business intelligence operations, we use agent-oriented modeling techniques to develop a new artifact for mining massive datasets.
Keywords: agent-oriented modeling (AOM), business intelligence model (BIM), distributed data mining (DDM), multi-agent system (MAS)
Procedia PDF Downloads 432
26353 Timing and Noise Data Mining Algorithm and Software Tool in Very Large Scale Integration (VLSI) Design
Authors: Qing K. Zhu
Abstract:
Very Large Scale Integration (VLSI) design has become very complex due to the continuous integration of millions of gates on one chip, following Moore's law. Designers encounter numerous report files during design iterations with timing and noise analysis tools. This paper presents our work using data-mining techniques combined with HTML tables to extract and represent critical timing/noise data. When this data-mining tool is applied in real designs, running speed is important; the software employs table look-up techniques to achieve reasonable running speed, as confirmed by performance testing. We added several advanced features for application to one industrial chip design.
Keywords: VLSI design, data mining, big data, HTML forms, web, VLSI, EDA, timing, noise
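The extract-and-represent flow described here can be sketched as filtering a timing report for violating paths and rendering them as an HTML table. The report format below ("path_name slack_ns" per line) is a hypothetical simplification, not the actual EDA report syntax the tool parses.

```python
def extract_violations(report_lines):
    """Parse lines of the (hypothetical) form 'path_name slack_ns' and
    keep only paths with negative slack, worst first."""
    rows = []
    for line in report_lines:
        name, slack = line.split()
        slack = float(slack)
        if slack < 0:             # negative slack = timing violation
            rows.append((name, slack))
    return sorted(rows, key=lambda r: r[1])

def to_html_table(rows):
    """Render the critical paths as a simple HTML table."""
    cells = "".join(f"<tr><td>{n}</td><td>{s}</td></tr>" for n, s in rows)
    return f"<table><tr><th>Path</th><th>Slack (ns)</th></tr>{cells}</table>"

report = ["clk_to_q1 0.12", "bus_path7 -0.45", "mem_rd3 -0.08"]
html = to_html_table(extract_violations(report))
# The table lists bus_path7 (-0.45) before mem_rd3 (-0.08);
# the passing path clk_to_q1 is filtered out entirely.
```

Building a dictionary of parsed rows up front, rather than re-scanning the report per query, is the kind of table look-up that keeps running speed reasonable on large reports.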
Procedia PDF Downloads 254
26352 Scenario Analysis to Assess the Competitiveness of Hydrogen in Securing the Italian Energy System
Authors: Gianvito Colucci, Valeria Di Cosmo, Matteo Nicoli, Orsola Maria Robasto, Laura Savoldi
Abstract:
The deployment of the hydrogen value chain is likely to be boosted in the near term by the energy security measures planned by European countries to face the recent energy crisis. In this context, some countries are recognized to have a crucial role in the geopolitics of hydrogen as importers, consumers, and exporters. According to the European Hydrogen Backbone initiative, Italy would be part of one of the five corridors that will shape the European hydrogen market. However, the targets set are very ambitious and require large investments to rapidly develop effective hydrogen policies; in this regard, scenario analysis is becoming increasingly important to support energy planning, and energy system optimization models appear to be suitable tools to carry out that kind of analysis quantitatively. This work aims to assess the competitiveness of hydrogen in contributing to Italian energy security in the coming years, under different price and import conditions, using the energy system model TEMOA-Italy. A wide spectrum of hydrogen technologies is included in the analysis, covering the production, storage, delivery, and end-use stages. National production from fossil fuels with and without CCS, as well as electrolysis and import of low-carbon hydrogen from North Africa, are the supply solutions that would compete with others, such as the natural gas, biomethane, and electricity value chains, to satisfy sectoral energy needs (transport, industry, buildings, agriculture). Scenario analysis is then used to study this competition under different price and import conditions.
The use of TEMOA-Italy allows the work to capture the interaction between the economy and technological detail, which is much needed in energy policy assessment, while the transparency of the analysis and of the results is ensured by the full accessibility of the TEMOA open-source modeling framework.
Keywords: energy security, energy system optimization models, hydrogen, natural gas, open-source modeling, scenario analysis, TEMOA
Procedia PDF Downloads 116
26351 Introduction of Electronic Health Records to Improve Data Quality in Emergency Department Operations
Authors: Anuruddha Jagoda, Samiddhi Samarakoon, Anil Jasinghe
Abstract:
In its simplest form, data quality can be defined as 'fitness for use', and it is a multi-dimensional concept. Emergency departments (EDs) require information to treat patients and, at the same time, are the primary source of information regarding accidents, injuries, emergencies, etc. They are also the starting point for various patient registries, databases, and surveillance systems. This interventional study was carried out to improve data quality at the ED of the National Hospital of Sri Lanka (NHSL), the premier trauma care centre in Sri Lanka, by introducing an e-health solution. The study consisted of three components. A research study was conducted to assess data quality in relation to five selected dimensions: accuracy, completeness, timeliness, legibility, and reliability. The intervention was to develop and deploy an electronic emergency department information system (eEDIS). Post-intervention assessment confirmed that all five dimensions of data quality had improved, with the most significant improvements in the accuracy and timeliness dimensions.
Keywords: electronic health records, electronic emergency department information system, emergency department, data quality
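Of the five dimensions, completeness is the most straightforward to operationalize: the fraction of required fields actually filled in across records. A small sketch follows; the field names and records are invented for illustration and are not taken from the NHSL study.

```python
REQUIRED_FIELDS = ["arrival_time", "triage_category", "diagnosis"]  # assumed

def completeness(records, required=REQUIRED_FIELDS):
    """Share of required fields that are non-empty across all records."""
    filled = total = 0
    for rec in records:
        for field in required:
            total += 1
            if rec.get(field):   # missing keys and empty strings count as gaps
                filled += 1
    return filled / total

# Hypothetical pre-intervention records with gaps
paper_era = [
    {"arrival_time": "08:10", "triage_category": "", "diagnosis": "fracture"},
    {"arrival_time": "08:42", "diagnosis": ""},
]
score = completeness(paper_era)   # 3 of 6 required fields filled -> 0.5
```

Comparing such a score before and after deploying a system like eEDIS (which can make required fields mandatory at entry) is one way to quantify the improvement the study reports.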
Procedia PDF Downloads 275
26350 Data Presentation of Lane-Changing Events Trajectories Using HighD Dataset
Authors: Basma Khelfa, Antoine Tordeux, Ibrahima Ba
Abstract:
We present a descriptive analysis of lane-changing events on multi-lane roads. The data come from the Highway Drone Dataset (HighD), which contains microscopic vehicle trajectories recorded on highways. This paper describes and analyses the role of the different parameters and their significance. Using the HighD data, we aim to find the most frequent reasons that motivate drivers to change lanes. We used the programming language R to process the data. We analyze the involvement and relationships of the variables of each parameter for the ego vehicle and the four vehicles surrounding it, i.e., distance, speed difference, time gap, and acceleration. This was studied according to the class of vehicle (car or truck) and the maneuver undertaken (overtaking or falling back).
Keywords: autonomous driving, physical traffic model, prediction model, statistical learning process
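The surrounding-vehicle parameters listed above are simple functions of the trajectory rows; for instance, the time gap to the preceding vehicle is the spacing divided by the ego speed. A small sketch (the paper's processing was done in R; the numeric values here are invented for illustration, not HighD records):

```python
def lane_change_features(ego, leader):
    """Derive the gap parameters for an ego vehicle and its leader.
    Each vehicle is a tuple (position_m, speed_mps, acceleration_mps2)."""
    ego_x, ego_v, ego_a = ego
    lead_x, lead_v, lead_a = leader
    distance = lead_x - ego_x               # spacing, m
    speed_diff = lead_v - ego_v             # negative = ego is closing in
    time_gap = distance / ego_v if ego_v > 0 else float("inf")
    return {"distance": distance,
            "speed_difference": speed_diff,
            "time_gap": time_gap,
            "acceleration": ego_a}

feats = lane_change_features(ego=(120.0, 30.0, 0.2), leader=(150.0, 27.0, 0.0))
# distance 30 m, speed difference -3 m/s, time gap 1.0 s:
# a slower leader and a short time gap are typical overtaking triggers.
```

Computing these features for the preceding, following, and adjacent-lane vehicles at each lane-change instant yields exactly the kind of table the descriptive analysis summarizes.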
Procedia PDF Downloads 261
26349 Development of Dye Sensitized Solar Window by Physical Parameters Optimization
Authors: Tahsin Shameem, Chowdhury Sadman Jahan, Mohammad Alam
Abstract:
Interest in net zero energy buildings has gained traction in recent years, following the need to match energy consumption with on-site generation and to reduce dependence on grid energy from large fossil-fuel plants. To this end, building-integrated photovoltaics are being studied in an attempt to use all exterior facades of a building to generate power. In this paper, we examine the physical parameters defining a dye-sensitized solar cell (DSSC) and discuss their impact on energy harvest. Following our discussion and experimental data obtained from the literature, we optimize these physical parameters to allow maximum light absorption for a given active layer thickness. We then modified a planar DSSC design with our optimized properties to allow adequate light transmission, which demonstrated a high fill factor and an external quantum efficiency (EQE) greater than 9% in computer-aided design and simulation. In conclusion, a DSSC-based solar window with such high output values, even with high light transmission through it, flags a promising future for this technology, and our work elicits the need for further study and practical experimentation.
Keywords: net zero energy building, integrated photovoltaics, dye sensitized solar cell, fill factor, External Quantum Efficiency
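The fill factor quoted above relates the cell's maximum power point to its open-circuit voltage and short-circuit current, FF = (V_mp · I_mp) / (V_oc · I_sc), and feeds the standard efficiency relation eta = V_oc · I_sc · FF / P_in. A quick check with illustrative numbers (not the paper's simulated values; currents in mA/cm², incident power in mW/cm²):

```python
def fill_factor(v_mp, i_mp, v_oc, i_sc):
    """Ratio of actual maximum power to the V_oc * I_sc envelope."""
    return (v_mp * i_mp) / (v_oc * i_sc)

def efficiency(v_oc, i_sc, ff, p_in):
    """Power conversion efficiency: eta = V_oc * I_sc * FF / P_in."""
    return v_oc * i_sc * ff / p_in

# Illustrative DSSC-like operating point under ~1-sun (100 mW/cm^2) input
ff = fill_factor(v_mp=0.55, i_mp=9.0, v_oc=0.70, i_sc=11.0)   # ~0.64
eta = efficiency(v_oc=0.70, i_sc=11.0, ff=ff, p_in=100.0)     # ~0.0495
```

The design tension the abstract describes is visible here: thinning the active layer to raise window transmission tends to cut i_sc, so the optimization must recover output through the other parameters.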
Procedia PDF Downloads 141
26348 Powering Profits: A Dynamic Approach to Sales Marketing and Electronics
Authors: Muhammad Awais Kiani, Maryam Kiani
Abstract:
This abstract explores the confluence of these two domains and highlights the key factors driving success in sales marketing for electronics. The abstract begins by delving into the ever-evolving landscape of consumer electronics, emphasizing how technological advancements and the growth of smart devices have revolutionized the way people interact with electronics. This paradigm shift has created tremendous opportunities for sales and marketing professionals to engage with consumers across various platforms and channels. Next, the abstract discusses the pivotal role of effective sales marketing strategies in the electronics industry. It highlights the importance of understanding consumer behavior, market trends, and competitive landscapes, and how this knowledge enables businesses to tailor their marketing efforts to specific target audiences. Furthermore, the abstract explores the significance of leveraging digital marketing techniques, such as social media advertising, search engine optimization, and influencer partnerships, to establish brand identity and drive sales in the electronics market. It emphasizes the power of storytelling and captivating content in engaging tech-savvy consumers. Additionally, the abstract emphasizes the role of customer relationship management (CRM) systems and data analytics in optimizing sales marketing efforts. It highlights the importance of leveraging customer insights and analyzing data to personalize marketing campaigns, enhance customer experience, and ultimately drive sales growth. Lastly, the abstract concludes by underlining the importance of adapting to the ever-changing landscape of the electronics industry. It encourages businesses to embrace innovation, stay informed about emerging technologies, and continuously evolve their sales marketing strategies to meet the evolving needs and expectations of consumers.
Overall, this abstract sheds light on the captivating realm of sales marketing in the electronics industry, emphasizing the need for creativity, adaptability, and a deep understanding of consumers to succeed in this rapidly evolving market.
Keywords: marketing industry, electronics, sales impact, e-commerce
Procedia PDF Downloads 74
26347 Plotting of an Ideal Logic versus Resource Outflow Graph through Response Analysis on a Strategic Management Case Study Based Questionnaire
Authors: Vinay A. Sharma, Shiva Prasad H. C.
Abstract:
The initial stages of any project are often observed to be in a mixed set of conditions. Setting up the project is a tough task, but taking the initial decisions is rather not complex, as some of the critical factors are yet to be introduced into the scenario. These simple initial decisions potentially shape the timeline and subsequent events that might later be plotted on it. Proceeding towards the solution for a problem is the primary objective in the initial stages. The optimization in the solutions can come later, and hence, the resources deployed towards attaining the solution are higher than what they would have been in the optimized versions. A ‘logic’ that counters the problem is essentially the core of the desired solution. Thus, if the problem is solved, the deployment of resources has led to the required logic being attained. As the project proceeds along, the individuals working on the project face fresh challenges as a team and are better accustomed to their surroundings. The developed, optimized solutions are then considered for implementation, as the individuals are now experienced, and know better of the consequences and causes of possible failure, and thus integrate the adequate tolerances wherever required. Furthermore, as the team graduates in terms of strength, acquires prodigious knowledge, and begins its efficient transfer, the individuals in charge of the project along with the managers focus more on the optimized solutions rather than the traditional ones to minimize the required resources. Hence, as time progresses, the authorities prioritize attainment of the required logic, at a lower amount of dedicated resources. For empirical analysis of the stated theory, leaders and key figures in organizations are surveyed for their ideas on appropriate logic required for tackling a problem. Key-pointers spotted in successfully implemented solutions are noted from the analysis of the responses and a metric for measuring logic is developed. 
A graph is plotted with the quantifiable logic on the Y-axis and the resources dedicated to the solutions of various problems on the X-axis. The dedicated resources are plotted over time, so the X-axis is also a measure of time. In the initial stages of the project, the graph is rather linear: the required logic is attained, but the consumed resources are also high. With time, the authorities focus on optimized solutions, since the logic attained through them is higher while the resources deployed are comparatively lower. Hence, the difference between consecutively plotted resources decreases and, as a result, the slope of the graph gradually increases. Overall, the graph takes a parabolic shape (beginning at the origin), as with each resource investment, ideally, the difference keeps decreasing and the logic attained through the solution keeps increasing. Even if the resource investment is higher, the managers and authorities ideally make sure that the investment is made in proportionally high logic for a larger problem; that is, ideally the slope of the graph increases with the plotting of each point.
Keywords: decision-making, leadership, logic, strategic management
Procedia PDF Downloads 108
26346 Evaluation of Golden Beam Data for the Commissioning of 6 and 18 MV Photons Beams in Varian Linear Accelerator
Authors: Shoukat Ali, Abdul Qadir Jandga, Amjad Hussain
Abstract:
Objective: The main purpose of this study is to compare the percent depth dose (PDD) and in-plane and cross-plane profiles of the Varian golden beam data with measured data for 6 and 18 MV photons, for the commissioning of the Eclipse treatment planning system. Introduction: Commissioning a treatment planning system requires an extensive acquisition of beam data for the clinical use of linear accelerators. Accurate dose delivery requires entering the PDDs, profiles, and dose-rate tables for open and wedged fields into the treatment planning system, enabling calculation of monitor units (MUs) and dose distributions. Varian offers a generic set of beam data as reference data; however, it is not recommended for clinical use. In this study, we compared the generic beam data with measured beam data to evaluate whether the generic data are reliable enough for clinical purposes. Methods and Material: PDDs and profiles of open and wedged fields were measured for different field sizes and at different depths, as per Varian's algorithm commissioning guideline. The measurements were performed with a PTW 3D scanning water phantom, a semiflex ionization chamber, and MEPHYSTO software. The online available Varian golden beam data were compared with the measured data to evaluate their accuracy for the commissioning of the Eclipse treatment planning system. Results: The deviation between measured and golden beam data was at most 2%. In PDDs, the deviation increases at deeper depths; similarly, the profile deviation increases with larger field sizes and increasing depths. Conclusion: The study shows that the percentage deviation between measured and golden beam data is within the acceptable tolerance and the golden beam data can therefore be used for the commissioning process; however, verification of a small subset of acquired data against the golden beam data should be mandatory before clinical use.
Keywords: percent depth dose, flatness, symmetry, golden beam data
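The core comparison described above is a point-by-point percent deviation between a measured curve and the golden-beam reference, checked against a tolerance. A minimal sketch follows; the depth-dose values and the helper names `percent_deviation` and `within_tolerance` are hypothetical illustrations, not Varian data or software.

```python
# Percent deviation of a measured PDD curve against a reference (golden) PDD,
# with a 2% tolerance check as in the study's acceptance criterion.

def percent_deviation(measured, reference):
    """Percent deviation of each measured point relative to the reference."""
    return [abs(m - r) / r * 100.0 for m, r in zip(measured, reference)]

def within_tolerance(measured, reference, tol_percent=2.0):
    return all(d <= tol_percent for d in percent_deviation(measured, reference))

# Hypothetical % dose at a few sample depths (deviation grows with depth,
# mirroring the trend reported in the abstract):
golden_pdd   = [100.0, 86.0, 66.0, 50.0, 38.0]
measured_pdd = [100.0, 86.5, 65.4, 49.5, 37.5]
```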
Procedia PDF Downloads 489
26345 Examining Electroencephalographic Activity Differences Between Goalkeepers and Forwards in Professional Football Players
Authors: Ruhollah Basatnia, Ali Reza Aghababa, Mehrdad Anbarian, Sara Akbari, Mohammad Khazaee
Abstract:
Introduction: The investigation of brain activity in sports has become a subject of interest for researchers, and several studies have examined the patterns or differences in brain activity during different sports situations. Previous studies have suggested that the pattern of cortical activity may differ between football positions, such as goalkeepers and other players. This study investigates the differences in electroencephalographic (EEG) activity between goalkeepers and forwards in professional football players. Methods: Fourteen goalkeepers and twelve forwards, all males aged 19-28 years, participated in the study. EEG activity was recorded while participants sat with their eyes closed for 5 minutes. The mean relative power of EEG activity for each frequency band was compared between the two groups using an independent-samples t-test. Findings: The study found significant differences in the relative power of EEG activity across frequency bands and electrodes; notably, significant between-group differences were observed for certain bands and electrodes. These findings suggest that EEG activity can serve as a sensory indicator of cognitive and performance differences between goalkeepers and forwards in football players. Discussion: The results of this study suggest that EEG activity can be used to identify cognitive and performance differences between goalkeepers and forwards in football players. However, further research is needed to establish the relationship between EEG activity and actual performance in the field. Future studies should investigate the potential influence of other factors, such as fatigue and stress, on the EEG activity of football players. Additionally, real-time EEG feedback could be explored as a tool for training and performance optimization in football players.
Further research is required to fully understand the potential of EEG activity as a sensory indicator of cognitive and performance differences between football player positions, and to explore its potential applications for training and performance optimization in football and other sports.
Keywords: football, brain activity, EEG, goalkeepers, forwards
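The statistical step described above, relative band power per subject followed by a between-group t-test, can be sketched with standard formulas. The subject values below are hypothetical, a real pipeline would first estimate band power from the EEG spectrum, and the Welch form of the independent-samples t statistic is used here because it avoids the equal-variance assumption; `relative_power` and `welch_t` are illustrative helper names, not from the study.

```python
import math

def relative_power(band_powers):
    """Fraction of total spectral power falling in each frequency band."""
    total = sum(band_powers.values())
    return {band: p / total for band, p in band_powers.items()}

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)  # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

# Hypothetical relative alpha power per subject in each group:
goalkeepers = [0.31, 0.29, 0.33, 0.30, 0.32]
forwards    = [0.25, 0.27, 0.24, 0.26, 0.28]
t = welch_t(goalkeepers, forwards)   # compare against a t critical value
```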
Procedia PDF Downloads 84
26344 Variable-Fidelity Surrogate Modelling with Kriging
Authors: Selvakumar Ulaganathan, Ivo Couckuyt, Francesco Ferranti, Tom Dhaene, Eric Laermans
Abstract:
Variable-fidelity surrogate modelling offers an efficient way to approximate function data available in multiple degrees of accuracy, each with a different computational cost. In this paper, a Kriging-based variable-fidelity surrogate modelling approach is introduced to approximate such deterministic data. Initially, individual Kriging surrogate models, enhanced with gradient data of different degrees of accuracy, are constructed. These gradient-enhanced Kriging surrogate models are then strategically coupled using a recursive CoKriging formulation to provide an accurate surrogate model for the highest-fidelity data. While, intuitively, gradient data are useful for enhancing the accuracy of surrogate models, the primary motivation behind this work is to investigate whether it is also worthwhile to incorporate gradient data of varying degrees of accuracy.
Keywords: Kriging, CoKriging, surrogate modelling, variable-fidelity modelling, gradients
Procedia PDF Downloads 558
26343 Robust Barcode Detection with Synthetic-to-Real Data Augmentation
Authors: Xiaoyan Dai, Hsieh Yisan
Abstract:
Barcode processing of captured images is a major challenge, as different shooting conditions can result in very different barcode appearances. This paper proposes deep learning-based barcode detection using synthetic-to-real data augmentation. We first augment the barcodes themselves; we then augment the images containing the barcodes, to generate a large variety of data close to the actual shooting environments. Comparisons with previous works and evaluations on our original data show that this approach achieves state-of-the-art performance on various real images. In addition, the system uses hybrid resolution for the barcode "scan" and is applicable to real-time applications.
Keywords: barcode detection, data augmentation, deep learning, image-based processing
Procedia PDF Downloads 170
26342 Thermoelectric Generators as Alternative Source for Electric Power
Authors: L. C. Ding, Bradley G. Orr, K. Rahauoi, S. Truza, A. Date, A. Akbarzadeh
Abstract:
Research on thermoelectrics has been a booming field for the last decade, owing to the large amount of heat available to be harvested and to the technology being eco-friendly and static in operation. This paper reports the performance of a thermoelectric generator (TEG) built from bulk bismuth telluride, Bi2Te3. The performance of TEGs attached to a plastic (polyethylene) sheet is then evaluated, in contrast to the common method of attaching the TEGs to a metal surface.
Keywords: electric power, heat transfer, renewable energy, thermoelectric generator
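A back-of-envelope view of TEG electrical output helps frame such performance studies: for a module with Seebeck coefficient S, temperature difference ΔT and internal resistance R_int, the load power is P = (S·ΔT)²·R_L/(R_int + R_L)², maximized at the matched load R_L = R_int. The module constants below are hypothetical placeholders, not the Bi2Te3 values measured in the paper.

```python
# Simple-model TEG output power versus load resistance; the sweep confirms
# that power peaks at the matched load R_L ~= R_int.

def teg_power(seebeck_v_per_k, delta_t, r_internal, r_load):
    """Electrical power (W) delivered to the load by an idealized TEG module."""
    v_open = seebeck_v_per_k * delta_t           # open-circuit voltage S*dT
    current = v_open / (r_internal + r_load)
    return current ** 2 * r_load

loads = [0.5 + 0.1 * i for i in range(60)]       # ohms, swept 0.5..6.4
powers = [teg_power(0.05, 50.0, 2.0, r) for r in loads]
best_load = loads[powers.index(max(powers))]     # expected near R_int = 2 ohms
```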
Procedia PDF Downloads 282
26341 Power Control of a Doubly-Fed Induction Generator Used in Wind Turbine by RST Controller
Authors: A. Boualouch, A. Frigui, T. Nasser, A. Essadki, A. Boukhriss
Abstract:
This work deals with vector control of the active and reactive powers of a doubly-fed induction generator (DFIG) used as a wind generator, by means of a polynomial RST controller. Control of the stator power transfer between the machine and the grid is achieved by acting on the rotor quantities, with the control law provided by the polynomial RST controller. The performance and robustness of the controller are compared with a PI controller and evaluated through simulation results in MATLAB/Simulink.
Keywords: DFIG, RST, vector control, wind turbine
Procedia PDF Downloads 658
26340 Characterization of Fatty Acid Glucose Esters as Os9BGlu31 Transglucosidase Substrates in Rice
Authors: Juthamath Komvongsa, Bancha Mahong, Kannika Phasai, Sukanya Luang, Jong-Seong Jeon, James Ketudat-Cairns
Abstract:
Os9BGlu31 is a rice transglucosidase that transfers glucosyl moieties to various acceptors, such as carboxylic acids and alcohols, including phenolic acids and flavonoids, in vitro. The role of Os9BGlu31 transglucosidase in rice plant metabolism has not been reported to date. Methanolic extracts of rice bran and flag leaves were found to contain substrates to which Os9BGlu31 could transfer glucose from a 4-nitrophenyl β-D-glucopyranoside donor. The semi-purified substrate from rice bran was found to contain oleic acid and linoleic acid, and the pure fatty acids were found to act as acceptor substrates for Os9BGlu31 transglucosidase to form 1-O-acyl glucose esters. Os9BGlu31 showed higher activity with oleic acid (18:1) and linoleic acid (18:2) than with stearic acid (18:0), and had both a higher kcat and a higher Km for linoleic than for oleic acid in the presence of 8 mM 4NPGlc donor. The transglucosidase reaction is reversible: flag leaves of Os9bglu31 knockout rice lines were found to have higher amounts of fatty acid glucose esters than wild-type control lines. These data indicate that fatty acid glucose esters act as glucosyl donor substrates for Os9BGlu31 transglucosidase in rice.
Keywords: fatty acid, fatty acid glucose ester, transglucosidase, rice flag leaf, homologous knockout lines, tandem mass spectrometry
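The kcat/Km comparison mentioned above follows standard Michaelis-Menten kinetics, where v = kcat·[E]·[S]/(Km + [S]) and kcat/Km is the catalytic efficiency. The sketch below uses hypothetical rate constants, not the measured Os9BGlu31 values, chosen only so that linoleic acid has both the higher kcat and the higher Km, as the abstract states.

```python
# Michaelis-Menten rate and catalytic efficiency for two acceptor substrates.

def mm_rate(kcat, km, enzyme_conc, substrate_conc):
    """Michaelis-Menten initial rate v = kcat*[E]*[S] / (Km + [S])."""
    return kcat * enzyme_conc * substrate_conc / (km + substrate_conc)

def catalytic_efficiency(kcat, km):
    return kcat / km

# Hypothetical constants (kcat in s^-1, Km in mM):
linoleic = {"kcat": 2.0, "km": 0.8}   # higher kcat and higher Km
oleic    = {"kcat": 1.2, "km": 0.4}

eff_linoleic = catalytic_efficiency(**linoleic)
eff_oleic    = catalytic_efficiency(**oleic)
```

With these placeholder numbers, oleic acid would have the higher kcat/Km despite the lower kcat, illustrating why the two parameters must be reported separately.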
Procedia PDF Downloads 366
26339 Oblique Radiative Solar Nano-Polymer Gel Coating Heat Transfer and Slip Flow: Manufacturing Simulation
Authors: Anwar Beg, Sireetorn Kuharat, Rashid Mehmood, Rabil Tabassum, Meisam Babaie
Abstract:
Nano-polymeric solar paints and sol-gels have emerged as a major new development in solar cell/collector coatings, offering significant improvements in durability, anti-corrosion behaviour and thermal efficiency. They also exhibit substantial viscosity variation with temperature, which can be exploited in solar collector designs. Modern manufacturing processes for such nano-rheological materials frequently employ stagnation flow dynamics at high temperature, which invokes radiative heat transfer. Motivated by the need to elaborate in further detail the nanoscale heat, mass and momentum characteristics of such sol-gels, the present article presents a mathematical and computational study of the steady, two-dimensional, non-aligned thermo-fluid boundary layer transport of copper metal-doped, water-based nano-polymeric sol-gels under radiative heat flux. To simulate real nano-polymer boundary interface dynamics, thermal slip is analysed at the wall, and a temperature-dependent viscosity is also considered. The Tiwari-Das nanofluid model is deployed, which features a volume fraction for the nanoparticle concentration; this approach also features a Maxwell-Garnett model for the nanofluid thermal conductivity. The conservation equations for mass, normal and tangential momentum, and energy (heat) are normalized via appropriate transformations to generate a coupled, non-linear system of ordinary differential equations as a boundary value problem. Numerical solutions are obtained via the stable, efficient Runge-Kutta-Fehlberg scheme with shooting quadrature in MATLAB. Validation of the solutions is achieved with a Variational Iterative Method (VIM) utilizing Lagrangian multipliers. The impact of the key emerging dimensionless parameters, i.e.
obliqueness parameter, radiation-conduction Rosseland number (Rd), thermal slip parameter (α), viscosity parameter (m), and nanoparticle volume fraction (ϕ), on the non-dimensional normal and tangential velocity components, temperature, wall shear stress, local heat flux, and streamline distributions is visualized graphically. Shear stress and temperature are boosted with increasing radiative effect, whereas local heat flux is reduced. Increasing the wall thermal slip parameter depletes temperatures. With a greater volume fraction of copper nanoparticles, the temperature and thermal boundary layer thickness are elevated. Streamlines are found to be skewed markedly towards the left for a positive obliqueness parameter.
Keywords: non-orthogonal stagnation-point heat transfer, solar nano-polymer coating, MATLAB numerical quadrature, Variational Iterative Method (VIM)
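The shooting procedure described above can be illustrated on a toy two-point boundary value problem. This is a simplified stand-in: the study solves a coupled nanofluid boundary-layer system with Runge-Kutta-Fehlberg, whereas the sketch below uses classical RK4 and bisection on y'' = -y with y(0) = 0, y(π/2) = 1, whose exact solution is sin x with y'(0) = 1.

```python
import math

def rk4_integrate(f, y0, x0, x1, n=200):
    """Classical RK4 for a system y' = f(x, y); returns the final state."""
    h = (x1 - x0) / n
    x, y = x0, list(y0)
    for _ in range(n):
        k1 = f(x, y)
        k2 = f(x + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
        k3 = f(x + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
        k4 = f(x + h, [yi + h * ki for yi, ki in zip(y, k3)])
        y = [yi + h / 6 * (a + 2 * b + 2 * c + d)
             for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]
        x += h
    return y

def shoot(slope):
    """End-point value y(pi/2) when integrating with trial slope y'(0)."""
    f = lambda x, y: [y[1], -y[0]]        # y'' = -y as a first-order system
    return rk4_integrate(f, [0.0, slope], 0.0, math.pi / 2)[0]

def solve_by_bisection(target=1.0, lo=0.0, hi=2.0, tol=1e-10):
    """Bisect on the unknown initial slope until the far boundary condition holds."""
    for _ in range(100):
        mid = (lo + hi) / 2
        if (shoot(mid) - target) * (shoot(lo) - target) <= 0:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return (lo + hi) / 2

slope = solve_by_bisection()   # exact answer is y'(0) = 1
```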
Procedia PDF Downloads 135
26338 Relevance Feedback within CBIR Systems
Authors: Mawloud Mosbah, Bachir Boucheham
Abstract:
We present here the results of a comparative study of techniques, available in the literature, for the relevance feedback mechanism in the case of short-term learning. Only one of the methods considered here, the k-nearest neighbours (KNN) algorithm, belongs to the data mining field; the rest belong purely to information retrieval and fall under three major axes: query shifting, feature weighting, and optimization of the parameters of the similarity metric. As a contribution, and in addition to the comparative purpose, we propose a new version of the KNN algorithm, referred to as incremental KNN, which is distinct from the original version in that the rating of a candidate target image is influenced not only by the seeds but also by the images already rated. The results presented here were obtained from experiments conducted on the Wang database for one iteration, using colour moments in the RGB space; this compact descriptor suits the efficiency requirements of interactive systems. The results obtained allow us to claim that the proposed algorithm performs well; it even outperforms a wide range of techniques available in the literature.
Keywords: CBIR, category search, relevance feedback, query point movement, standard Rocchio's formula, adaptive shifting query, feature weighting, original KNN, incremental KNN
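The incremental-KNN idea described above can be sketched in a few lines: a candidate image's relevance score is voted on not only by the initial seed images but by every image the user has rated so far in the session. The 2-D feature vectors below stand in for colour-moment descriptors, and all data and helper names are hypothetical.

```python
import math

def knn_score(candidate, rated, k=3):
    """Fraction of the k nearest rated images that are relevant.

    `rated` is a list of (feature_vector, is_relevant) pairs; in the
    incremental variant this list grows as the user rates more images,
    so every new rating immediately influences subsequent scores.
    """
    neighbours = sorted(rated, key=lambda item: math.dist(candidate, item[0]))[:k]
    return sum(1 for _, relevant in neighbours if relevant) / len(neighbours)

seeds = [((0.1, 0.2), True), ((0.2, 0.1), True), ((0.9, 0.8), False)]
score_seeds_only = knn_score((0.15, 0.15), seeds)    # 2 of 3 voters relevant
# A newly rated relevant image near the candidate joins the voter pool and
# immediately changes the score -- the incremental behaviour:
rated = seeds + [((0.12, 0.18), True)]
score_incremental = knn_score((0.15, 0.15), rated)
```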
Procedia PDF Downloads 280
26337 Present Status, Driving Forces and Pattern Optimization of Territory in Hubei Province, China
Abstract:
“National Territorial Planning (2016-2030)” was issued by the State Council of China in 2017. As an important initiative for putting it into effect, territorial planning at the provincial level makes overall arrangements for territorial development, resource and environmental protection, comprehensive renovation, and security system construction. Hubei province, as the pivot of the “Rise of Central China” national strategy, is now confronted with great opportunities and challenges in territorial development, protection, and renovation. The territorial spatial pattern has undergone a long evolution, influenced by multiple internal and external driving forces, and it is not clear what the main causes of its formation are or what would be effective ways of optimizing it. By analyzing land use data from 2016, this paper reveals the present status of territory in Hubei. Combined with economic and social data and construction information, the driving forces of the territorial spatial pattern are then analyzed. The research demonstrates that the three types of territorial space aggregate distinctively. The driving forces comprise four aspects: the natural background, which sets the stage for the main functions; population and economic factors, which generate agglomeration effects; transportation infrastructure construction, which leads to axial expansion; and significant provincial strategies, which reinforce the established path. On this basis, targeted strategies for optimizing the territorial spatial pattern are put forward. A hierarchical protection pattern should be established, based on development intensity control out of respect for nature. By optimizing the layout of population and industry and improving the transportation network, a polycentric, network-based development pattern could be established. These findings provide a basis for the Hubei Territorial Planning and a reference for future territorial planning in other provinces.
Keywords: driving forces, Hubei, optimizing strategies, spatial pattern, territory
Procedia PDF Downloads 105
26336 Numerical Solution of Portfolio Selecting Semi-Infinite Problem
Authors: Alina Fedossova, Jose Jorge Sierra Molina
Abstract:
SIP problems are part of non-classical optimization: problems in which the number of variables is finite but the number of constraints is infinite. These are semi-infinite programming problems. Most algorithms for semi-infinite programming reduce the semi-infinite problem to a finite one and solve it by classical methods of linear or nonlinear programming. Typically, some of the constraints or the objective function are nonlinear, so the problem often involves nonlinear programming. An investment portfolio is a set of instruments used to reach the specific purposes of investors. The risk of the entire portfolio may be less than the risks of its individual investments. For example, suppose we invest M euros in N shares for a specified period. Let y_i > 0 be the return at the end of the period on each euro invested in stock i (i = 1, ..., N). The goal is then to determine the amounts x_i to be invested in stock i, i = 1, ..., N, such that we maximize the end-of-period value yᵀx, where x = (x_1, ..., x_N) and y = (y_1, ..., y_N). For us, the optimal portfolio means the portfolio that is best in terms of the risk-return trade-off and that meets the investor's goals and risk tolerance. Therefore, investment goals and risk appetite are the factors that influence the choice of an appropriate portfolio of assets. The investment returns are uncertain; thus we have a semi-infinite programming problem. We solve a semi-infinite optimization problem of portfolio selection using outer approximation methods. This approach can be considered a development of the Eaves-Zangwill method, applying a multi-start technique in every iteration for the search of the relevant constraint parameters. The stochastic outer approximation method, successfully applied previously to robotics problems, Chebyshev approximation problems, air pollution, and others, is based on the optimality criteria of quasi-optimal functions.
As a result, we obtain a mathematical model and the optimal investment portfolio when the yields are not known from the beginning. Finally, we apply this algorithm to a specific case of a Colombian bank.
Keywords: outer approximation methods, portfolio problem, semi-infinite programming, numerical solution
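The outer-approximation idea described above, keep a finite working set of constraint parameters, solve the finite relaxation, then add the currently most-violated (worst-case) parameter and repeat, can be sketched on a toy max-min problem. The objective, grids, and grid-search "solver" below are illustrative assumptions, not the paper's portfolio model or its stochastic multi-start machinery.

```python
# Toy semi-infinite problem: maximize over x in [0,1] the worst case over
# scenarios s in [0,1] of f(x, s). The max-min solution for this f is x = 0.5.

def f(x, s):
    # payoff of decision x under scenario s; the worst case is the farthest s
    return 1.0 - (x - s) ** 2

def solve_sip(x_grid, s_grid, iterations=20):
    """Outer-approximation (exchange) loop with grid search as the subsolver."""
    working_set = [s_grid[0]]               # start with a single scenario
    best_x = x_grid[0]
    for _ in range(iterations):
        # solve the finite (relaxed) problem on the current working set
        best_x = max(x_grid, key=lambda x: min(f(x, s) for s in working_set))
        # separation step: find the worst-case scenario for best_x
        worst_s = min(s_grid, key=lambda s: f(best_x, s))
        if worst_s in working_set:          # no newly violated constraint
            break
        working_set.append(worst_s)
    return best_x

x_grid = [i / 100 for i in range(101)]
s_grid = [i / 100 for i in range(101)]
x_star = solve_sip(x_grid, s_grid)
```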
Procedia PDF Downloads 309
26335 Particle Size Analysis of Itagunmodi Southwestern Nigeria Alluvial Gold Ore Sample by Gaudin Schumann Method
Authors: Olaniyi Awe, Adelana R. Adetunji, Abraham Adeleke
Abstract:
Mining of alluvial gold ore by artisanal miners has been going on for decades at Itagunmodi, Southwestern Nigeria. In order to optimize the traditional panning gravity separation method commonly used in the area, a mineral particle size analysis study is critical. This study analyzed alluvial gold ore samples collected at five identified locations in the area, with a view to determining the ore particle size distributions. A measured 500 g of as-received alluvial gold ore sample was introduced into the uppermost sieve of an electric sieve shaker consisting of sieves arranged in order of decreasing nominal aperture (5600 μm, 3350 μm, 2800 μm, 355 μm, 250 μm, 125 μm and 90 μm) and operated for 20 minutes. The amount of material retained on each sieve was measured and tabulated for analysis. A screen analysis graph using the Gaudin-Schumann method was drawn for each of the screen tests on the alluvial samples. The study showed that the percentages of the fine particle size -125+90 μm fraction were 45.00%, 36.00%, 39.60%, 43.00% and 36.80% for the selected samples. These primary ore characterization results provide reference data for alluvial gold ore processing method selection, process performance measurement, and optimization.
Keywords: alluvial gold ore, sieve shaker, particle size, Gaudin Schumann
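The screen-analysis computation behind a Gaudin-Schumann plot can be sketched as follows: cumulative percent passing at each sieve aperture, then the distribution modulus m from a least-squares log-log fit of P = 100·(d/k)^m. The retained masses below are hypothetical, not the Itagunmodi measurements, and the helper names are illustrative.

```python
import math

def cumulative_passing(retained_g, pan_g):
    """% of total mass passing each sieve, masses listed coarsest sieve first."""
    total = sum(retained_g) + pan_g
    passing, retained_so_far = [], 0.0
    for mass in retained_g:
        retained_so_far += mass
        passing.append(100.0 * (total - retained_so_far) / total)
    return passing

def gs_modulus(apertures_um, passing_percent):
    """Distribution modulus m: least-squares slope of log10(P) vs log10(d)."""
    pts = [(math.log10(d), math.log10(p))
           for d, p in zip(apertures_um, passing_percent) if p > 0]
    n = len(pts)
    sx = sum(x for x, _ in pts)
    sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts)
    sxy = sum(x * y for x, y in pts)
    return (n * sxy - sx * sy) / (n * sxx - sx * sx)

apertures = [5600, 3350, 2800, 355, 250, 125, 90]        # μm, coarsest first
retained = [20.0, 30.0, 50.0, 100.0, 75.0, 150.0, 50.0]  # g, hypothetical
passing = cumulative_passing(retained, pan_g=25.0)
m = gs_modulus(apertures, passing)     # slope of the Gaudin-Schumann line
```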
Procedia PDF Downloads 64