Search results for: computer application
5270 GPU-Accelerated Triangle Mesh Simplification Using Parallel Vertex Removal
Authors: Thomas Odaker, Dieter Kranzlmueller, Jens Volkert
Abstract:
We present an approach to triangle mesh simplification designed to be executed on the GPU. We use a quadric error metric to calculate an error value for each vertex of the mesh and order all vertices based on this value. This step is followed by the parallel removal of a number of vertices with the lowest calculated error values. To allow for the parallel removal of multiple vertices, we use a set of per-vertex boundaries that prevent mesh foldovers even when simplification operations are performed on neighbouring vertices. We execute multiple iterations of the calculation of the vertex errors, ordering of the error values and removal of vertices until either a desired number of vertices remains in the mesh or a minimum error value is reached. This parallel approach speeds up the simplification process while maintaining mesh topology and avoiding foldovers at every step of the simplification.
Keywords: computer graphics, half edge collapse, mesh simplification, precomputed simplification, topology preserving
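As a rough CPU-side illustration of the cost-ordering step described above, the sketch below accumulates the standard quadric error per vertex and evaluates the half-edge collapse cost at a neighbouring vertex. The toy mesh, edge list, and NumPy implementation are our own assumptions, not the authors' GPU code.

```python
import numpy as np

def face_plane(v0, v1, v2):
    """Unit plane (a, b, c, d) of a triangle, where ax + by + cz + d = 0."""
    n = np.cross(v1 - v0, v2 - v0)
    n = n / np.linalg.norm(n)
    return np.append(n, -np.dot(n, v0))

def half_edge_collapse_costs(vertices, faces, edges):
    """Quadric error of removing a vertex by collapsing it into a
    neighbour: evaluate the vertex's accumulated quadric Q at the
    neighbour's homogeneous position, u^T Q u."""
    Q = np.zeros((len(vertices), 4, 4))
    for f in faces:
        p = face_plane(*vertices[f])
        Q[f] += np.outer(p, p)       # add the plane's quadric to its 3 vertices
    cost = np.full(len(vertices), np.inf)
    for v, u in edges:               # candidate collapse v -> u
        uh = np.append(vertices[u], 1.0)
        cost[v] = min(cost[v], uh @ Q[v] @ uh)
    return cost

verts = np.array([[0., 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0.2]])
tris = np.array([[0, 1, 2], [1, 3, 2]])
edges = [(0, 1), (1, 3), (2, 0), (3, 2)]
# Lowest-cost vertices are the ones removed in parallel each iteration.
print(half_edge_collapse_costs(verts, tris, edges).round(4))
```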
Procedia PDF Downloads 369
5269 Factors Predicting Individual Health among Pilgrims of Kurdistan County: An Application of Health Belief Model
Authors: Arsalan Ghaderi, Behzad Karami Matin, Abdolrahim Afkhamzadeh, Abouzar Keshavarzi, Parvin Nokhasi
Abstract:
Background: Lack of individual health, as one of the major health problems among pilgrims, can be followed by several complications. The main aim of this study was to determine factors predicting individual health among pilgrims of Kurdistan County, in the west of Iran; the health belief model (HBM) was applied as the theoretical framework. Methods: A cross-sectional study was conducted among 100 pilgrims who were referred to the Red Crescent of Kurdistan County, in the west of Iran, and who were randomly selected for participation in this study. A structured questionnaire was applied for collecting data, and data were analyzed by SPSS version 21 using bivariate correlations and linear regression statistical tests. Results: The mean age of respondents was 59.45 years [SD: 11.56], ranging from 50 to 73 years. The HBM predictor variables accounted for 47% of the variation in the outcome measure of individual health. The best predictors for individual health were perceived severity and cues to action. Conclusion: Based on our results, it seems that designing and implementing educational programs to increase perceived seriousness about the complications of poor individual health and to strengthen cues to action among pilgrims may be useful for promoting individual health among pilgrims.
Keywords: individual health, pilgrims, Iran, health belief model
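For readers unfamiliar with the reported statistic, the following purely synthetic sketch shows how a linear regression yields the "share of explained variance" figure the study reports (47%). The construct scores, coefficients, and variable names below are invented stand-ins, not the study's data.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 100  # the study surveyed 100 pilgrims

# Hypothetical HBM construct scores (e.g., 1-5 Likert means); purely synthetic.
X = rng.uniform(1, 5, size=(n, 4))   # severity, susceptibility, barriers, cues
y = 0.8 * X[:, 0] + 0.6 * X[:, 3] + rng.normal(0, 1, n)  # individual health

model = LinearRegression().fit(X, y)
print("R^2 =", round(model.score(X, y), 2))   # share of explained variance
print("coefficients:", model.coef_.round(2))  # largest ones = best predictors
```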
Procedia PDF Downloads 533
5268 Financial Sources and Instruments for Public Grants and Financial Facilities of SMEs in EU
Authors: Simeon Karafolas, Maciej Woźniak
Abstract:
Most public financing programs at national and regional level are funded from European Union sources. The EU can participate directly in a national or regional program (for example, the LEADER initiative, URBAN…) or indirectly by funding regional or national funds. Funds from the European Union are provided from the EU multiannual financial framework, from which the annual budget is programmed. The adjusted 2007-2013 program of the EU considered commitments of almost 1 trillion Euros for the EU-28 countries. Provisions of the new 2014-2020 program consider commitments of more than 1 trillion Euros. Sustainable growth, divided into Cohesion and Competitiveness for Growth and Employment, is one of the two principal categories; the other is the preservation and management of natural resources. Through this financing process, SMEs have benefited from EU and public sources by receiving grants for their investments. Most of the financial instruments are available indirectly through national financial intermediaries. Part of them is managed by the European Investment Fund. The paper focuses on public financing to SMEs by examining case studies on diverse forms of public help. It tries to assess the efficiency of the examined good practices and thereby draw some conclusions on the possibility of their application to other regions.
Keywords: DIFASS, grants, SMEs, public financing
Procedia PDF Downloads 312
5267 3D Reconstruction of Human Body Based on Gender Classification
Authors: Jiahe Liu, Hongyang Yu, Feng Qian, Miao Luo
Abstract:
SMPL-X is a powerful parametric human body model that includes male, neutral, and female models, with significant gender differences between these three models. During the process of 3D human body reconstruction, the correct selection of standard templates is crucial for obtaining accurate results. To address this issue, we developed an efficient gender classification algorithm to automatically select the appropriate template for 3D human body reconstruction. The key to this gender classification algorithm is the precise analysis of human body features. By using the SMPL-X model, the algorithm can detect and identify gender features of the human body, thereby determining which standard template should be used. The accuracy of this algorithm makes the 3D reconstruction process more accurate and reliable, as it can adjust model parameters based on individual gender differences. SMPL-X and the related gender classification algorithm have brought important advancements to the field of 3D human body reconstruction. By accurately selecting standard templates, they have improved the accuracy of reconstruction and have broad potential in various application fields. These technologies continue to drive the development of the 3D reconstruction field, providing us with more realistic and accurate human body models.
Keywords: gender classification, joint detection, SMPL-X, 3D reconstruction
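A minimal sketch of the selection step the abstract describes, with a deliberately naive stand-in classifier. The ratio rule, feature names, and thresholds are our assumptions, not the authors' algorithm; the chosen gender string would then select the corresponding SMPL-X template.

```python
# The classifier and feature extraction here are hypothetical placeholders.
from typing import Literal

def classify_gender(joint_features) -> Literal["male", "female", "neutral"]:
    # Placeholder: a real system would run a trained classifier on
    # joints / body features detected in the input.
    shoulder_hip_ratio = joint_features["shoulder_width"] / joint_features["hip_width"]
    if shoulder_hip_ratio > 1.15:
        return "male"
    if shoulder_hip_ratio < 1.0:
        return "female"
    return "neutral"

def pick_smplx_template(joint_features) -> str:
    gender = classify_gender(joint_features)
    # e.g. passed as smplx.create(model_dir, model_type="smplx", gender=gender)
    return f"SMPLX_{gender.upper()}.npz"

print(pick_smplx_template({"shoulder_width": 42.0, "hip_width": 34.0}))
```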
Procedia PDF Downloads 72
5266 Application of Electrical Resistivity Tomography to Image the Subsurface Structure of a Sinkhole, a Case Study in Southwestern Missouri
Authors: Shishay T. Kidanu
Abstract:
The study area is located in southwestern Missouri and is mainly underlain by Mississippian-age limestone, which is highly susceptible to karst processes. The area is known for the presence of various karst features like caves, springs and, more importantly, sinkholes. Sinkholes are one of the most common karst features and the primary hazard in karst areas. Investigating the subsurface structure and development mechanism of existing sinkholes makes it possible to understand their long-term impact and chance of reactivation, and also helps to provide effective mitigation measures. In this study, ERT (Electrical Resistivity Tomography), MASW (Multichannel Analysis of Surface Waves) and borehole control data have been used to image the subsurface structure and investigate the development mechanism of a sinkhole in southwestern Missouri. The study shows that the main process responsible for the development of the sinkhole is the downward piping of fine-grained soils. Furthermore, the study reveals that the sinkhole developed along a north-south oriented vertical joint set characterized by a vertical zone of water seepage and associated fine-grained soil piping into preexisting fractures.
Keywords: ERT, karst, MASW, sinkhole
Procedia PDF Downloads 217
5265 Molecular Docking Study of Quinazoline and Quinoline Derivatives against EGFR
Authors: Asli Faiza, Khamouli Saida
Abstract:
With the development of computer tools over the past 20 years, molecular modeling and, more precisely, molecular docking have very quickly entered the field of pharmaceutical research. EGFR is an enzyme involved in cancer. Our work consists of studying the inhibition of EGFR (1M17) with different inhibitors derived from quinazoline and quinoline by molecular docking. Ligands L148 and L177 are the best ligands for inhibiting the activity of 1M17, since they form stable complexes with this enzyme by binding well to the active site. The results obtained show that ligands L148 and L177 form weak interactions with the active-site residues of EGFR (1M17), which stabilize the complexes formed by these ligands and give better binding at the active site, with an RMSD of 1.9563 Å for L148 and 1.2483 Å for L177.
Keywords: docking, EGFR, quinazoline, quinoline, MOE
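For reference, the RMSD figures quoted above are root-mean-square deviations between atomic coordinate sets. A minimal sketch of the metric itself follows; the coordinates below are toy values, and a docking suite such as MOE computes this internally.

```python
import numpy as np

def rmsd(pose_a, pose_b):
    """Root-mean-square deviation between two equally ordered
    sets of atomic coordinates (N x 3 arrays), in the same units."""
    diff = np.asarray(pose_a) - np.asarray(pose_b)
    return np.sqrt((diff ** 2).sum(axis=1).mean())

# Toy example: a docked pose rigidly shifted from a reference pose.
ref = np.array([[0.0, 0, 0], [1.5, 0, 0], [3.0, 0, 0]])
docked = ref + np.array([0.5, 0.3, -0.2])
print(f"RMSD = {rmsd(ref, docked):.4f} Å")  # values around 2 Å or below
                                            # are usually considered good poses
```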
Procedia PDF Downloads 74
5264 Optimal Design of Propellant Grain Shape Based on Structural Strength Analysis
Authors: Chen Xiong, Tong Xin, Li Hao, Xu Jin-Sheng
Abstract:
Experimental and simulation research on the structural integrity of propellant grain in a solid rocket motor (SRM) with high volumetric fraction was conducted. First, by using SRM parametric modeling functions with Python, the secondary development tool of ABAQUS, three-dimensional parameterized modeling programs for star-shaped grain, wheel-shaped grain and wing cylindrical grain were accomplished. Then, the mechanical properties under different loads for star-shaped grain were obtained with the application of the automatically established finite element model in ABAQUS. Next, several optimization algorithms were introduced to optimize the star-shaped grain, wheel-shaped grain and wing cylindrical grain. After meeting the demands of burning surface changes and volumetric fraction, the optimum three-dimensional shapes of grain were obtained. Finally, by means of the parametric modeling functions, pressure data from the SRM's cold pressurization test was directly applied to the simulation of grain mechanical performance. The results verify the reliability and practicality of the parameterized modeling program for SRM.
Keywords: cold pressurization test, parametric modeling, structural integrity, propellant grain, SRM
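The optimization step can be pictured as a search over grain geometry parameters scored by a structural objective. The sketch below is a deliberately crude stand-in: the parameter names, the toy objective, and its constants are our assumptions and do not reproduce the paper's finite-element analysis or its optimization algorithms.

```python
import itertools

# Stand-in objective: NOT the paper's finite-element result, just a toy
# penalty combining a fictitious maximum strain with the deviation of a
# (hypothetical) volumetric loading fraction from a target value.
def toy_objective(n_points, fillet_r, web_thickness):
    volumetric_fraction = 0.9 - 0.02 * n_points + 0.1 * web_thickness
    max_strain = 0.30 / (fillet_r * web_thickness)      # fictitious model
    return max_strain + 5.0 * abs(volumetric_fraction - 0.88)

grid = itertools.product(range(5, 9),          # number of star points
                         [1.0, 1.5, 2.0],      # fillet radius (mm)
                         [0.4, 0.5, 0.6])      # relative web thickness
best = min(grid, key=lambda p: toy_objective(*p))
print("best (points, fillet, web):", best)
```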
Procedia PDF Downloads 366
5263 Application of Reception Theory to Analyze the Translation as a Continuous Reception
Authors: Mina Darabi Amin
Abstract:
In 1972, Hans Robert Jauss introduced reception theory, a version of reader-response criticism that invites literary critics to re-examine the relationship between the author, the work and the reader. The revealing of these relationships has shown that, besides the creation, the reception and the reading of the text have different levels which exempt it from a continuous reference to the meaning intended by the artist and could lead to a multiplicity of possible interpretations according to the ‘Horizon of Expectations’. This theory can be associated with another intellectual process called ‘translation’, a process that is always confronted by different levels of readers in the target language and different levels of reception by these readers. By adopting the perspective of reception theory in translation, we can set aside any one particular kind of translation and consider the initiation to a literary text, its translation and its reception as a continuous process. Just like the creation of the text, the translation and its reception are not made once and for all; they are confronted with different levels of reception and interpretation which are made and remade endlessly. After having known and crossed the first levels, the Horizons of Expectation can be extended and the reader can be initiated to the higher levels. On the other hand, we can say that faithful and free translation are not opposed to each other; rather, depending on the type of reception by the readers at a particular moment, the existence of both is necessary. In fact, it is the level of reception in readers and their Horizon of Expectations that determines the degree of fidelity and freedom of a translation.
Keywords: reception theory, reading, literary translation, horizons of expectation, reader
Procedia PDF Downloads 183
5262 Wireless Sensor Network to Help Low-Income Farmers Face Drought Impacts
Authors: Fantazi Walid, Ezzedine Tahar, Bargaoui Zoubeida
Abstract:
This research presents the main ideas for implementing an intelligent system composed of communicating wireless sensors measuring environmental data linked to drought indicators (such as air temperature, soil moisture, etc.). In addition, the setting up of a spatio-temporal database communicating with a web mapping application for real-time monitoring, operating 24 hours a day, 7 days a week, is proposed to allow the screening of the time evolution of the drought parameters and their extraction. This system thus helps detect surfaces affected by the phenomenon of drought. Spatio-temporal conceptual models seek to answer users who need to manage soil water content for irrigating, fertilizing or other activities pursuing crop yield augmentation. Effectively, spatio-temporal conceptual models enable users to obtain a readable diagram of data that is easy to apprehend. Based on socio-economic information, the system helps identify people impacted by the phenomena with the corresponding severity, especially as this information is accessible to farmers and stakeholders themselves. The study will be applied in the Siliana watershed, northern Tunisia.
Keywords: WSN, spatio-temporal database, GIS, web mapping, drought indicator
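A minimal sketch of the spatio-temporal storage idea: each sensor reading is stamped with time and location and converted into a simple drought indicator. The schema, the field-capacity constant, and the moisture-deficit formula are illustrative assumptions, not the system's actual design.

```python
import sqlite3
from datetime import datetime, timezone

# Illustrative spatio-temporal store for sensor readings.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE readings (
    node_id TEXT, ts TEXT, lat REAL, lon REAL,
    air_temp_c REAL, soil_moisture REAL, deficit REAL)""")

FIELD_CAPACITY = 0.35  # assumed volumetric soil-moisture reference

def store_reading(node_id, lat, lon, air_temp_c, soil_moisture):
    # Simple moisture-deficit indicator in [0, 1]; an assumption, not
    # the project's drought index.
    deficit = max(0.0, (FIELD_CAPACITY - soil_moisture) / FIELD_CAPACITY)
    db.execute("INSERT INTO readings VALUES (?, ?, ?, ?, ?, ?, ?)",
               (node_id, datetime.now(timezone.utc).isoformat(),
                lat, lon, air_temp_c, soil_moisture, deficit))

store_reading("WSN-07", 36.08, 9.37, 38.5, 0.11)  # Siliana-area coordinates
print(db.execute("SELECT node_id, deficit FROM readings").fetchall())
```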
Procedia PDF Downloads 497
5261 Introducing Future Smart Transport Solution for Women with Disabilities: A Review with Chongqing as the Focal Example
Authors: Xinyi Gao, Xiaoyun Feng, Ruijie Liu, Yumin Xia, Min Shao, Xinqing Wang
Abstract:
This paper outlines the travel challenges faced by disabled women, the lack of attention from society, and the studies around them, and chooses the Chongqing area as a case study to explore how terrain characteristics and city construction influence our subjects' travel choices. It also highlights future transport options and the necessity of addressing the difficult travel position of women with disabilities. This study focuses on the travel demands of women with disabilities, illustrating what their ideal method of travel would be. An analysis of related smart cities like Hong Kong illustrates the aspects to consider in the reconstruction of Chongqing. Finally, relying on current smart city modelling approaches, several design ideas for assistive tools are suggested for the safety of women with disabilities during travel.
Keywords: future smart city, disabled women, Chongqing, inclusive design, human-computer interaction
Procedia PDF Downloads 124
5260 Acceleration of Lagrangian and Eulerian Flow Solvers via Graphics Processing Units
Authors: Pooya Niksiar, Ali Ashrafizadeh, Mehrzad Shams, Amir Hossein Madani
Abstract:
There are many computationally demanding applications in science and engineering which need efficient algorithms implemented on high performance computers. Recently, Graphics Processing Units (GPUs) have drawn much attention as compared to traditional CPU-based hardware and have opened up new improvement venues in scientific computing. One particular application area is Computational Fluid Dynamics (CFD), in which mature CPU-based codes need to be converted to GPU-based algorithms to take advantage of this new technology. In this paper, numerical solutions of two classes of discrete fluid flow models via both CPU and GPU are discussed and compared. Test problems include an Eulerian model of a two-dimensional incompressible laminar flow case and a Lagrangian model of a two-phase flow field. The CUDA programming standard is used to employ an NVIDIA GPU with 480 cores, and a C++ serial code is run on a single core of an Intel quad-core CPU. Up to two orders of magnitude speed-up is observed on the GPU for a certain range of grid resolutions or particle numbers. As expected, the Lagrangian formulation is better suited for parallel computations on the GPU, although the Eulerian formulation shows significant speed-up too.
Keywords: CFD, Eulerian formulation, graphics processing units, Lagrangian formulation
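To illustrate why the Eulerian solver parallelizes so naturally, here is a NumPy sketch of a Jacobi-style pressure-Poisson update in which every interior cell is updated independently from the previous iterate, which is exactly the per-cell independence a CUDA kernel exploits with one thread per cell. This is a schematic CPU analogue under assumed grid sizes, not the paper's CUDA code.

```python
import numpy as np

def jacobi_poisson(rhs, h, iters=200):
    """Jacobi iterations for the 2D pressure-Poisson equation; each grid
    point updates from the previous iterate only, so all cells can be
    updated in parallel (one GPU thread per cell in a CUDA port)."""
    p = np.zeros_like(rhs)
    for _ in range(iters):
        # The whole right-hand side is evaluated from the old p before
        # assignment, so this is a true Jacobi sweep.
        p[1:-1, 1:-1] = 0.25 * (p[2:, 1:-1] + p[:-2, 1:-1] +
                                p[1:-1, 2:] + p[1:-1, :-2] -
                                h * h * rhs[1:-1, 1:-1])
    return p

rhs = np.zeros((64, 64))
rhs[32, 32] = 1.0                    # point source
p = jacobi_poisson(rhs, h=1.0 / 63)
print(round(float(p.min()), 6))
```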
Procedia PDF Downloads 423
5259 The History and Plausible Future of Assistive Technology and What It Might Mean for Singapore Students With Disabilities
Authors: Thomas Chong, Irene Victor
Abstract:
This paper discusses the history and plausible future of assistive technology and what it means for students with disabilities in Singapore, a country known worldwide for the high quality of its education. Over more than a century, students with disabilities have benefitted from relatively low-tech assistive technology (like eye-glasses, Braille, magnifiers and wheelchairs) to high-tech assistive technology including electronic mobility switches, alternative keyboards, computer-screen enlargers, text-to-speech readers, electronic sign-language dictionaries and signing avatars for individuals with hearing impairments. Driven by legislation, the use of assistive technology in many countries is becoming so ubiquitous that more and more students with disabilities are able to perform as well as, if not better than, their counterparts. Yet in many other learning environments where assistive technology is not affordable or mandated, the learning gaps can be quite significant. Without stronger legislation, Singapore may still have a long way to go in levelling the playing field for its students with disabilities.
Keywords: assistive technology, students with disabilities, disability laws in Singapore, inclusiveness
Procedia PDF Downloads 79
5258 Factors Affecting the Results of in vitro Gas Production Technique
Authors: O. Kahraman, M. S. Alatas, O. B. Citil
Abstract:
In the determination of the value of feeds used in ruminant nutrition, different methods are employed, such as in vivo, in vitro, in situ or in sacco. Generally, the most reliable results are obtained from in vivo studies. But because of disadvantages such as being difficult, laborious, expensive and time consuming, being hard to keep the experimental conditions under control, and requiring too many samples, in vitro techniques are preferred. The most widely used in vitro techniques are the two-stage digestion technique and the gas production technique. The in vitro gas production technique is based on the measurement of the CO2 released as a result of microbial fermentation of the feeds. In this review, the factors affecting the results obtained from the in vitro gas production technique (Hohenheim Feed Test) are discussed. Some factors must be taken into consideration when interpreting the findings obtained in these studies and also when comparing the findings reported by different researchers for the same feeds. These factors are discussed in 3 groups: factors related to the animal, factors related to the feeds, and factors related to differences in the application of the method. These factors and their effects on the results are explained. It can also be concluded that the routine use of the in vitro gas production technique in feed evaluation can contribute to comprehensive feed evaluation, but standardization is needed in this technique to attain more reliable results.
Keywords: in vitro, gas production technique, Hohenheim feed test, standardization
Procedia PDF Downloads 606
5257 N Doped Multiwall Carbon Nanotubes Growth over a Ni Catalyst Substrate
Authors: Angie Quevedo, Juan Bussi, Nestor Tancredi, Juan Fajardo-Díaz, Florentino López-Urías, Emilio Muñóz-Sandoval
Abstract:
In this work, we study the formation of carbon nanotubes (CNTs) by catalytic chemical vapor deposition (CCVD) over a catalyst with 20 % Ni supported on La₂Zr₂O₇ (Ni20LZO). The high carbon solubility of Ni makes it one of the most widely used metals in CNT synthesis. Nevertheless, Ni also undergoes sintering and coalescence at high temperature. These problems can be reduced by choosing a suitable support. We propose La₂Zr₂O₇ for this purpose, since the incorporation of Ni by co-precipitation and calcination at 900 °C allows a good dispersion and interaction of the active metal (in its oxidized form, NiO) with this support. The CCVD was performed using 1 g of Ni20LZO at 950 °C for 30 min in an Ar:H₂ atmosphere (2.5 L/min). The precursor, benzylamine, was added with a nebulizer-sprayer. An X-ray diffraction study shows the phase separation of NiO and La₂Zr₂O₇ after the calcination and the reduction to Ni after the synthesis. Raman spectra show D and G bands with an ID/IG ratio of 0.75. Elemental analysis verifies the incorporation of 1% of N. Thermogravimetric analysis shows that the oxidation process starts at around 450 °C. Future studies will determine the application potential of the samples.
Keywords: N doped carbon nanotubes, catalytic chemical vapor deposition, nickel catalyst, bimetallic oxide
Procedia PDF Downloads 168
5256 Effects of Plasma Treatment on Seed Germination
Authors: Yong Ho Jeon, Youn Mi Lee, Yong Yoon Lee
Abstract:
The effects of cold plasma treatment on the germination of various plant seeds were studied. The seeds of hot pepper, cucumber, tomato and arabidopsis were exposed to plasma generated in various devices. The germination speed was evaluated against an unexposed control. A positive effect on germination speed was observed in all tested seeds, but the effect strongly depended on the type of plasma device used (argon-DBD, surface-DBD or Marx generator), the time of exposure (6 s-10 min or 1-10 shots) and the kind of seeds. The SEM images showed that arrays of gold particles along the cell wall were observed on the surface of cucumber seeds that showed a germination-accelerating effect after plasma treatment, the same as in untreated seeds. However, when treated with high-dose plasma, gold particles were not arrayed at the seed surface, which seems to be due to surface etching. This may suggest that germination is not promoted by etching or damage of the surface caused by the plasma treatment. Seedling growth improvement was also observed with indirect plasma treatment. These results lead to the important conclusion that charged particles in the plasma play an essential role in plant germination, and indirect plasma treatment offers new perspectives for large-scale application.
Keywords: cold plasma, cucumber, germination, SEM
Procedia PDF Downloads 318
5255 Effects of Post-Emergence Herbicides on Soil Micro-Flora and Nitrogen Fixing Bacteria in Pea Field
Authors: Ali M. Zaid, Muftah Mayouf, Yahya Said Farouj
Abstract:
The effect of post-emergence herbicides on soil micro-flora and nitrogen-fixing bacteria was studied in a pea field. Pea (Pisum sativum) was grown and treated with one herbicide, or a mixture of two, from among several herbicides 2 weeks after sowing. Soil samples were collected 2 weeks after herbicide application. The average number of colony-forming units per gram of soil of bacteria, actinomycetes and fungi was determined. The average number of nodules per plant was obtained at the end of the growing season. The results of the study showed that MCPB, Bentazon, MCPB+Fluazifop-p-butyl, Bentazon+Fluazifop-p-butyl, Metribuzin, Fluazifop-p-butyl+Metribuzin, Cycloxydim, and Sethoxydim increased the population of soil fungi 4- to 10-fold compared with the control. The herbicides used showed no significant effects on nitrogen-fixing bacteria. The effects of the herbicides on soil bacteria and actinomycetes differed. The study showed that the use of herbicides can influence the biological balance of the soil microflora, which has an important role in soil fertility and the microbial ecosystem.
Keywords: herbicides, post emergence, nitrogen fixing bacteria, environmental systems
Procedia PDF Downloads 407
5254 High Temperature Oxidation Resistance of NiCrAl Bond Coat Produced by Spark Plasma Sintering as Thermal Barrier Coatings
Authors: Folorunso Omoniyi, Peter Olubambi, Rotimi Sadiku
Abstract:
Thermal barrier coating (TBC) systems are used in both aero engines and other gas turbines to offer oxidation protection to the superalloy substrate component. The present work demonstrates the ability of a new fabrication technique to rapidly develop new coating compositions and microstructures. The compact powders were prepared by a powder metallurgy method involving powder mixing, and the bond coat was synthesized through the application of Spark Plasma Sintering (SPS) at 1050 °C to produce fully dense (97%) NiCrAl bulk samples. The influence of sintering temperature on the hardness of NiCrAl, measured with a micro-Vickers hardness tester, was investigated. An oxidation test was carried out at 1100 °C for 20 h, 40 h, and 100 h. The resulting coat was characterized with optical microscopy, scanning electron microscopy (SEM), energy dispersive X-ray analysis (EDAX) and X-ray diffraction (XRD). Micro-XRD analysis after the oxidation test revealed the formation of protective and non-protective oxides.
Keywords: high-temperature oxidation, powder metallurgy, spark plasma sintering, thermal barrier coating
Procedia PDF Downloads 511
5253 Characterization of Internet Exchange Points by Using Quantitative Data
Authors: Yamba Dabone, Tounwendyam Frédéric Ouedraogo, Pengwendé Justin Kouraogo, Oumarou Sie
Abstract:
Reliable data transport over the Internet is one of the goals of researchers in the field of computer science. Data such as videos and audio files are becoming increasingly large. As a result, transporting them over the Internet is becoming difficult. Therefore, it has been important to establish a method to locally interconnect autonomous systems (AS) with each other to facilitate traffic exchange. It is in this context that Internet Exchange Points (IXPs) are set up to facilitate local and even regional traffic; they are now the lifeblood of the Internet. It is therefore important to think about the factors that can characterize IXPs. Quantifiable characteristics, in particular, can help determine the quality of an IXP, may give ISPs a clearer view of the exchange node, and may also convince other networks to connect to an IXP. To that end, we define new IXP characteristics: the attraction rate (τ_attr); the peering rate (τ_peer); the target rate of an IXP (Obj_att); the number of IXP links (N_link); the resistance rate (τ_eff); and the attraction failure rate (τ_f).
Keywords: characteristic, autonomous system, internet service provider, internet exchange point, rate
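The paper's exact formulas for these rates are not reproduced in the abstract, so the sketch below shows plausible ratio-style definitions purely as an assumption about how such metrics could be computed from membership and peering counts; treat every expression as illustrative, not as the authors' definitions.

```python
# Purely illustrative ratio definitions for the proposed metrics; the
# resistance rate (tau_eff) is omitted because no obvious ratio can be
# guessed from the abstract alone.
def ixp_metrics(members, candidates_in_region, peering_sessions,
                possible_sessions, target_members, links, failed_joins):
    return {
        "attraction_rate": members / candidates_in_region,       # tau_attr
        "peering_rate": peering_sessions / possible_sessions,    # tau_peer
        "target_rate": members / target_members,                 # Obj_att
        "n_links": links,                                        # N_link
        "failure_rate": failed_joins / (failed_joins + members), # tau_f
    }

print(ixp_metrics(members=28, candidates_in_region=60, peering_sessions=150,
                  possible_sessions=28 * 27 // 2, target_members=40,
                  links=34, failed_joins=5))
```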
Procedia PDF Downloads 101
5252 Model of a Context-Aware Middleware for Mobile Workers
Authors: Esraa Moustafa, Gaetan Rey, Stephane Lavirotte, Jean-Yves Tigli
Abstract:
With the development of the Internet of Things and the Web of Things, computing becomes more pervasive, invisible and present everywhere. In fact, in our environment, we are surrounded by multiple devices that deliver (web) services that meet the needs of users. However, the mobility of these devices, like that of the users, has important repercussions that challenge the software design of these applications, because the variability of the environment cannot be anticipated at design time. Thus, it is interesting to dynamically discover the environment and adapt the application during its execution to new contextual conditions. We therefore propose a model of a context-aware middleware that can address this issue through a monitoring service that is capable of reasoning, and observation channels capable of calculating the context at runtime. The monitoring service evaluates the pre-defined X-Query predicates in the context manager and uses Prolog to deduce the services needed in response. An independent observation channel for each different predicate is then dynamically generated by the monitoring service depending on the current state of the environment. Each channel sends its result directly to the context manager, which consequently calculates the context based on all the predicates' results while preserving the reactivity of the self-adaptive system.
Keywords: auto-adaptation, context-awareness, middleware, reasoning engine
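A minimal sketch of the monitoring loop: named predicates are re-evaluated against the current environment and each feeds its own observation channel. The predicate bodies and environment keys are invented for illustration; the actual middleware evaluates X-Query predicates and reasons with Prolog rather than plain Python.

```python
# Stand-in predicates; the real system would compile X-Query expressions.
predicates = {
    "user_is_mobile": lambda env: env["speed_mps"] > 1.0,
    "printer_nearby": lambda env: "printer" in env["visible_devices"],
}

channels = {}

def monitor(env):
    """Evaluate every predicate against the environment and push the
    result onto that predicate's own observation channel."""
    for name, pred in predicates.items():
        channel = channels.setdefault(name, [])  # one channel per predicate
        channel.append(pred(env))                # result sent to the manager
    # The context manager derives the current context from all results.
    return {name: ch[-1] for name, ch in channels.items()}

context = monitor({"speed_mps": 1.4, "visible_devices": {"printer", "tv"}})
print(context)  # the reasoner would select adapted services from this state
```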
Procedia PDF Downloads 253
5251 Evaluation of Long Term Evolution Mobile Signal Propagation Models and Vegetation Attenuation in the Livestock Department at Escuela Superior Politécnica de Chimborazo
Authors: Cinthia Campoverde, Mateo Benavidez, Victor Arias, Milton Torres
Abstract:
This article evaluates and compares three propagation models: the Okumura-Hata model, the Ericsson 9999 model, and the SUI model. The vegetation attenuation in the area is also taken into account. These mathematical models aim to predict the power loss between a transmitting antenna (Tx) and a receiving antenna (Rx). The study was conducted in the open areas of the Livestock Department at the Escuela Superior Politécnica de Chimborazo (ESPOCH) University, located in the city of Riobamba, Ecuador. The necessary parameters for each model were calculated, considering LTE technology. The transmitting antenna belongs to the mobile phone company "TUENTI" in Band 2, operating at a frequency of 1940 MHz. The reception power data in the area were empirically measured using the "Network Cell Info" application. A total of 170 samples were collected, distributed across 19 radii forming concentric circles around the transmitting antenna. The results demonstrate that the Okumura-Hata urban model provides the best fit to the measured data.
Keywords: propagation models, reception power, LTE, power losses, correction factor
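For concreteness, here is a sketch of the classical Okumura-Hata urban formula evaluated at the study's Band 2 frequency. The model is nominally specified for 150-1500 MHz, so applying it at 1940 MHz stretches its range (COST-231 Hata is the usual extension to 2 GHz); the EIRP and antenna heights below are assumed values, not the study's parameters.

```python
import math

def okumura_hata_urban(f_mhz, h_base_m, h_mobile_m, d_km, large_city=True):
    """Classical Okumura-Hata urban path loss in dB (f in MHz, heights
    in m, distance in km). Nominal validity: 150-1500 MHz."""
    if large_city and f_mhz >= 400:
        a_hm = 3.2 * (math.log10(11.75 * h_mobile_m)) ** 2 - 4.97
    else:
        a_hm = ((1.1 * math.log10(f_mhz) - 0.7) * h_mobile_m
                - (1.56 * math.log10(f_mhz) - 0.8))
    return (69.55 + 26.16 * math.log10(f_mhz)
            - 13.82 * math.log10(h_base_m) - a_hm
            + (44.9 - 6.55 * math.log10(h_base_m)) * math.log10(d_km))

# Received power = EIRP - path loss (values below are assumptions).
eirp_dbm = 43.0
for d in (0.1, 0.3, 0.5):
    pl = okumura_hata_urban(1940, h_base_m=30, h_mobile_m=1.5, d_km=d)
    print(f"d = {d:>4} km  ->  Prx = {eirp_dbm - pl:6.1f} dBm")
```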
Procedia PDF Downloads 87
5250 Separation of Rare-Earth Metals from E-Wastes
Authors: Gulsara Akanova, Akmaral Ismailova, Duisek Kamysbayev
Abstract:
The separation of rare-earth metals (REM) from a neodymium magnet has been widely studied in the last year. The waste computer hard-disk magnet contains 25.41 % neodymium, 64.09 % iron, and <1 % boron. For the separation of the rare-earth metals, the magnet was dissolved in nitric acid in open and closed systems. In the closed system, the magnet was dissolved in a microwave sample preparation system at different temperatures and pressures, and the dissolution process lasted 1 hour. In the open system, the acid dissolution of the magnet was conducted at room temperature, and the process lasted 30-40 minutes. To remove the iron in the magnet, oxalic acid was used, and the iron was precipitated as oxalates under both conditions. A sorption method is used for the separation of the rare-earth metals (Nd, Pr and Dy) from the magnet waste.
Keywords: dissolution of the magnet, neodymium magnet, rare earth metals, separation, sorption
Procedia PDF Downloads 213
5249 Quantum Decision Making with Small Sample for Network Monitoring and Control
Authors: Tatsuya Otoshi, Masayuki Murata
Abstract:
With the development and diversification of applications on the Internet, applications that require high responsiveness, such as video streaming, are becoming mainstream. Application responsiveness is not only a matter of communication delay but also a matter of the time required to grasp changes in network conditions. The trade-off between accuracy and measurement time is a challenge in network control. People make countless decisions all the time, and our decisions seem to resolve trade-offs between time and accuracy. When making decisions, people are known to make appropriate choices based on relatively small samples. Although there have been various studies on models of human decision-making, a model that integrates various cognitive biases, called "quantum decision-making," has recently attracted much attention. However, the modeling of small samples has not been examined much so far. In this paper, we extend the model of quantum decision-making to model decision-making with a small sample. In the proposed model, the state is updated by value-based probability amplitude amplification. By analytically obtaining a lower bound on the number of samples required for decision-making, we show that decision-making with a small number of samples is feasible.
Keywords: quantum decision making, small sample, MPEG-DASH, Grover's algorithm
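The value-based amplification proposed in the paper is not reproduced here, but the Grover-style mechanism it builds on has a closed-form success probability, sketched below: after k amplification rounds, the "good" state is measured with probability sin²((2k+1)θ), where sin(θ) is its initial amplitude.

```python
import math

def success_probability(n_states, k_iterations):
    """Probability of measuring the 'good' state after k rounds of
    Grover-style amplitude amplification: sin^2((2k + 1) * theta),
    where sin(theta) = 1/sqrt(n_states) for a uniform start."""
    theta = math.asin(1 / math.sqrt(n_states))
    return math.sin((2 * k_iterations + 1) * theta) ** 2

# With a small state space, very few iterations already concentrate the
# amplitude: the intuition behind fast decisions from little evidence.
for k in range(4):
    print(k, round(success_probability(n_states=8, k_iterations=k), 3))
```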
Procedia PDF Downloads 84
5248 Artificial Intelligence in the Design of High-Strength Recycled Concrete
Authors: Hadi Rouhi Belvirdi, Davoud Beheshtizadeh
Abstract:
The increasing demand for sustainable construction materials has led to a growing interest in high-strength recycled concrete (HSRC). Utilizing recycled materials not only reduces waste but also minimizes the depletion of natural resources. This study explores the application of artificial intelligence (AI) techniques to model and predict the properties of HSRC. In the past two decades, production levels in various industries and, consequently, the amount of waste have increased significantly. Continuing this trend will undoubtedly cause irreparable damage to the environment. For this reason, engineers have been constantly seeking practical solutions for recycling industrial waste in recent years. This research utilized the 90-day compressive strength results of high-strength recycled concrete. The recycled concrete was created by replacing sand with crushed glass and using glass powder instead of cement. Subsequently, a feedforward artificial neural network was employed to model the 90-day compressive strength results. The regression and error values obtained indicate that this network is suitable for modeling the compressive strength data.
Keywords: high-strength recycled concrete, feedforward artificial neural network, regression, construction materials
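A minimal sketch of the kind of feedforward network regression the study describes, trained on synthetic stand-in data. The mix variables, their ranges, and the strength formula are invented; the study's actual dataset and network architecture may differ.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Synthetic stand-in data: mix proportions -> 90-day compressive strength.
# Columns: glass-sand replacement %, glass-powder/cement %, w/c ratio.
X = rng.uniform([0, 0, 0.3], [50, 30, 0.6], size=(200, 3))
y = (80 - 0.15 * X[:, 0] - 0.25 * X[:, 1] - 60 * (X[:, 2] - 0.3)
     + rng.normal(0, 2, 200))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
net = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000,
                   random_state=0).fit(X_tr, y_tr)
print("R^2 on held-out mixes:", round(net.score(X_te, y_te), 3))
```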
Procedia PDF Downloads 22
5247 Application of FT-NIR Spectroscopy and Electronic Nose in On-line Monitoring of Dough Proofing
Authors: Madhuresh Dwivedi, Navneet Singh Deora, Aastha Deswal, H. N. Mishra
Abstract:
FT-NIR spectroscopy and an electronic nose were used to study the kinetics of dough proofing. Spectroscopy was conducted with an optical probe in diffuse reflectance mode. The dough leavening was carried out at different temperatures (25 and 35 °C) and constant RH (80%). Spectra were collected in the range of wavenumbers from 12,000 to 4,000 cm⁻¹ directly on the samples, every 5 min during proofing, for up to 2 hours. NIR spectra were corrected for the scatter effect, and a second-order derivative was applied to transform the spectra. Principal component analysis (PCA) was applied to the leavening process, and the process kinetics were calculated. PCA was performed on the data set and loadings were calculated. For leavening, four absorption zones (8,950-8,850, 7,200-6,800, 5,250-5,150 and 4,700-4,250 cm⁻¹) were involved in describing the process. Simultaneously, an electronic nose was also used to understand the development of odour compounds during fermentation. The electronic nose was able to differentiate the samples on the basis of aroma generation at different times during fermentation. In order to rapidly differentiate samples based on odor, a principal component analysis was performed and successfully demonstrated in this study. The results suggest that the electronic nose and FT-NIR spectroscopy can be utilized for the on-line quality control of the fermentation process during the leavening of bread dough.
Keywords: FT-NIR, dough, e-nose, proofing, principal component analysis
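A sketch of the chemometric pipeline the abstract outlines (second-derivative preprocessing followed by PCA), run on synthetic stand-in spectra; the band position, drift rate, and Savitzky-Golay settings are assumptions, not the study's parameters.

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)

# Synthetic spectra standing in for the 12,000-4,000 cm^-1 FT-NIR scans:
# one absorption band drifts with proofing time t (one scan every 5 min).
wn = np.linspace(4000, 12000, 600)
spectra = np.array([np.exp(-((wn - (5200 + 15 * t)) / 300) ** 2)
                    + rng.normal(0, 0.01, wn.size) for t in range(24)])

# Standard chemometric preprocessing: second-order Savitzky-Golay derivative.
d2 = savgol_filter(spectra, window_length=15, polyorder=3, deriv=2, axis=1)

scores = PCA(n_components=2).fit_transform(d2)
print(scores[:5].round(4))  # PC1 should track leavening time
```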
Procedia PDF Downloads 396
5246 Optimization of the Flexural Strength of Biocomposites Samples Reinforced with Resin for Engineering Applications
Authors: Stephen Akong Takim
Abstract:
This study focused on the optimization of the flexural strength of bio-composite samples of palm kernel, whelk, clam and periwinkle shells and bamboo fiber reinforced with resin for engineering applications. The aim of the study was to formulate different samples of bio-composite reinforced with resin for engineering applications and to evaluate the flexural strength of the fabricated composites. The hand lay-up technique was used to produce the composites by incorporating different percentage compositions of the shells/fiber (10%, 15%, 20%, 25% and 30%) into varied proportions of epoxy resin and catalyst. The cured samples, after 24 hours, were subjected to tensile, impact, flexural and water absorption tests. The experiments were designed using the Taguchi optimization method with an L25 (5x5) array, with five design parameters and five-level combinations, in Minitab 18 statistical software. The results showed that the average flexural strength was 114.87 MPa, compared with 72.33 MPa for the unreinforced bio-composite. The study recommends that agricultural waste, like palm kernel shells, whelk shells, clam shells, periwinkle shells and bamboo fiber, should be converted into important engineering applications.
Keywords: bio-composite, resin, palm kernel shells, whelk shells, periwinkle shells, bamboo fiber, Taguchi techniques, engineering application
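The Taguchi analysis scores each run of the L25 array with a signal-to-noise ratio; for a maximized response such as flexural strength, the standard "larger is better" form is SN = -10 log₁₀(mean(1/y²)), sketched below with invented replicate values.

```python
import numpy as np

def sn_larger_is_better(responses):
    """Taguchi 'larger is better' signal-to-noise ratio (dB) for the
    flexural-strength replicates of one run of the L25 array."""
    y = np.asarray(responses, dtype=float)
    return -10 * np.log10(np.mean(1.0 / y ** 2))

# Toy replicate sets (MPa) for three of the 25 runs; values invented.
for run, ys in {1: [98, 103], 2: [114, 117], 3: [71, 75]}.items():
    print(f"run {run}: S/N = {sn_larger_is_better(ys):.2f} dB")
```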
Procedia PDF Downloads 80
5245 An Algorithm for Removal of Noise from X-Ray Images
Authors: Sajidullah Khan, Najeeb Ullah, Wang Yin Chai, Chai Soo See
Abstract:
In this paper, we propose an approach to remove impulse and Poisson noise from X-ray images. Many filters have been used for impulse noise removal from color and gray-scale images, each with its own strengths and weaknesses, but X-ray images contain Poisson noise, and unfortunately there is no intelligent filter which can detect both impulse and Poisson noise in X-ray images. Our proposed filter uses an upgraded layer discrimination approach to detect both impulse- and Poisson-noise-corrupted pixels in X-ray images and then restores only those detected pixels with a simple, efficient and reliable one-line equation. Our proposed algorithm is very effective and much more efficient than existing filters used only for impulse noise removal. The proposed method uses a new, powerful and efficient noise detection method to determine whether the pixel under observation is corrupted or noise-free. Results from computer simulations are used to demonstrate the pleasing performance of our proposed method.
Keywords: X-ray image de-noising, impulse noise, Poisson noise, PRWF
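The paper's layer-discrimination detector is not reproduced here; the sketch below only mimics the detect-then-restore structure with a generic local-median rule, where the threshold and window size are arbitrary assumptions.

```python
import numpy as np

def restore_noisy_pixels(img, threshold=60):
    """Generic sketch: flag pixels far from their 3x3 local median, then
    replace only flagged pixels with that median. This mimics the
    detect-then-restore structure, not the paper's exact detector."""
    padded = np.pad(img, 1, mode="edge")
    windows = np.lib.stride_tricks.sliding_window_view(padded, (3, 3))
    medians = np.median(windows.reshape(*img.shape, 9), axis=-1)
    noisy = np.abs(img.astype(int) - medians) > threshold
    out = img.copy()
    out[noisy] = medians[noisy]          # the "one line" restoration step
    return out, noisy

img = np.full((5, 5), 120, dtype=np.uint8)
img[2, 2], img[0, 4] = 255, 0            # impulse-corrupted pixels
clean, mask = restore_noisy_pixels(img)
print(mask.sum(), clean[2, 2])           # 2 pixels detected and restored
```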
Procedia PDF Downloads 386
5244 Investigation of Microstructure of Differently Sub-Zero Treated Vanadis 6 Steel
Authors: J. Ptačinová, J. Ďurica, P. Jurči, M. Kusý
Abstract:
The ledeburitic tool steel Vanadis 6 has been subjected to sub-zero treatment (SZT) at -140 °C and -196 °C, for different durations of up to 48 h. The microstructure and hardness have been examined with reference to the same material after room-temperature quenching, using light microscopy, scanning electron microscopy, X-ray diffraction, and the Vickers hardness testing method. The microstructure of the material consists of a martensitic matrix with a certain amount of retained austenite, and of several types of carbides: eutectic carbides, secondary carbides, and small globular carbides. SZT reduces the retained austenite amount; this is more effective at -196 °C than at -140 °C. Conversely, the amount of small globular carbides increases more rapidly after SZT at -140 °C than after treatment at -196 °C. The hardness of the sub-zero treated material is higher than that of the conventionally treated steel when tempered at low temperature. Compressive hydrostatic stresses develop in the retained austenite due to the application of SZT, as a result of more complete martensitic transformation. This is also why the population density of small globular carbides is substantially increased by the SZT. In contrast, the hardness of sub-zero treated samples decreases more rapidly with tempering than that of conventionally treated steel, and in addition, the sub-zero treated material exhibits a loss of the secondary hardening peak.
Keywords: microstructure, Vanadis 6 tool steel, sub-zero treatment, carbides
Procedia PDF Downloads 165
5243 Times2D: A Time-Frequency Method for Time Series Forecasting
Authors: Reza Nematirad, Anil Pahwa, Balasubramaniam Natarajan
Abstract:
Time series data consist of successive data points collected over a period of time. Accurate prediction of future values is essential for informed decision-making in several real-world applications, including electricity load demand forecasting, lifetime estimation of industrial machinery, traffic planning, weather prediction, and the stock market. Due to their critical relevance and wide application, there has been considerable interest in time series forecasting in recent years. However, the proliferation of sensors and IoT devices, real-time monitoring systems, and high-frequency trading data introduces significant intricate temporal variations, rapid changes, noise, and non-linearities, making time series forecasting more challenging. Classical methods such as Autoregressive Integrated Moving Average (ARIMA) and Exponential Smoothing aim to extract pre-defined temporal variations, such as trends and seasonality. While these methods are effective for capturing well-defined seasonal patterns and trends, they often struggle with the more complex, non-linear patterns present in real-world time series data. In recent years, deep learning has made significant contributions to time series forecasting. Recurrent Neural Networks (RNNs) and their variants, such as Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) networks, have been widely adopted for modeling sequential data. However, they often suffer from locality, making it difficult to capture local trends and rapid fluctuations. Convolutional Neural Networks (CNNs), particularly Temporal Convolutional Networks (TCNs), leverage convolutional layers to capture temporal dependencies by applying convolutional filters along the temporal dimension. Despite their advantages, TCNs struggle to capture relationships between distant time points due to the locality of one-dimensional convolution kernels. Transformers have revolutionized time series forecasting with their powerful attention mechanisms, effectively capturing long-term dependencies and relationships between distant time points. However, the attention mechanism may struggle to discern dependencies directly from scattered time points due to intricate temporal patterns. Lastly, Multi-Layer Perceptrons (MLPs) have also been employed, with models like N-BEATS and LightTS demonstrating success. Despite this, MLPs often face high volatility and computational complexity challenges in long-horizon forecasting. To address the intricate temporal variations in time series data, this study introduces Times2D, a novel framework that integrates 2D spectrogram and derivative heatmap techniques in parallel. The spectrogram focuses on the frequency domain, capturing periodicity, while the derivative patterns emphasize the time domain, highlighting sharp fluctuations and turning points. This 2D transformation enables the utilization of powerful computer vision techniques to capture various intricate temporal variations. To evaluate the performance of Times2D, extensive experiments were conducted on standard time series datasets and compared with various state-of-the-art algorithms, including DLinear (2023), TimesNet (2023), Non-stationary Transformer (2022), PatchTST (2023), N-HiTS (2023), Crossformer (2023), MICN (2023), LightTS (2022), FEDformer (2022), FiLM (2022), SCINet (2022a), Autoformer (2021), and Informer (2021), under the same modeling conditions. The initial results demonstrated that Times2D achieves consistent state-of-the-art performance in both short-term and long-term forecasting tasks. Furthermore, the generality of the Times2D framework allows it to be applied to various tasks such as time series imputation, clustering, classification, and anomaly detection, offering potential benefits in any domain that involves sequential data analysis.
Keywords: derivative patterns, spectrogram, time series forecasting, times2D, 2D representation
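A rough sketch of the two parallel 2D views the framework is built on: a spectrogram for the frequency domain and stacked derivatives as a "heatmap" for the time domain. The toy series and the exact construction (window sizes, derivative orders) are our assumptions; the paper's actual transformation may differ.

```python
import numpy as np
from scipy.signal import spectrogram

rng = np.random.default_rng(3)

# A toy load-like series: daily cycle + trend + noise.
t = np.arange(1024)
x = np.sin(2 * np.pi * t / 24) + 0.002 * t + rng.normal(0, 0.1, t.size)

# Frequency view: a 2D spectrogram capturing periodicity.
freqs, times, Sxx = spectrogram(x, fs=1.0, nperseg=128, noverlap=64)

# Time view: stacked first/second differences as a 2-row "derivative
# heatmap" highlighting sharp fluctuations and turning points.
d1 = np.diff(x, n=1)
d2 = np.diff(x, n=2)
heatmap = np.vstack([d1[: d2.size], d2])

print(Sxx.shape, heatmap.shape)  # two 2D inputs for vision-style backbones
```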
Procedia PDF Downloads 48
5242 Short-Term Load Forecasting Based on Variational Mode Decomposition and Least Square Support Vector Machine
Authors: Jiangyong Liu, Xiangxiang Xu, Bote Luo, Xiaoxue Luo, Jiang Zhu, Lingzhi Yi
Abstract:
To address the non-linearity and high randomness of the original power load sequence, which degrade power load forecasting accuracy, a short-term load forecasting method is proposed. The method is based on a Least Square Support Vector Machine optimized by an Improved Sparrow Search Algorithm, combined with the Variational Mode Decomposition proposed in this paper. The application of the variational mode decomposition technique decomposes the raw power load data into a series of intrinsic mode function components, which reduces the complexity and instability of the raw data while overcoming modal confounding; the proposed improved sparrow search algorithm solves the problem of the difficult selection of learning parameters in the Least Square Support Vector Machine. Finally, comparison experiments show that the method can effectively improve prediction accuracy.
Keywords: load forecasting, variational mode decomposition, improved sparrow search algorithm, least square support vector machine
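A schematic of the decompose-then-forecast pipeline, with loud caveats: the moving-average split below is a crude placeholder for VMD's intrinsic mode functions, and scikit-learn's epsilon-SVR stands in for an ISSA-tuned least-squares SVM; per-mode forecasts are simply summed.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(4)
t = np.arange(400)
load = 50 + 10 * np.sin(2 * np.pi * t / 48) + rng.normal(0, 1, t.size)

# Placeholder decomposition: two crude band components standing in for
# the VMD intrinsic mode functions.
trend = np.convolve(load, np.ones(48) / 48, mode="same")
modes = [trend, load - trend]

def forecast_mode(mode, lags=24):
    """Fit one regressor per mode on lagged windows, predict one step."""
    X = np.array([mode[i - lags:i] for i in range(lags, mode.size)])
    y = mode[lags:]
    model = SVR(kernel="rbf", C=10.0, gamma="scale").fit(X[:-1], y[:-1])
    return model.predict(mode[-lags:].reshape(1, -1))[0]

print("next-step load =", round(sum(forecast_mode(m) for m in modes), 2))
```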
Procedia PDF Downloads 114
5241 A Complex Network Approach to Structural Inequality of Educational Deprivation
Authors: Harvey Sanchez-Restrepo, Jorge Louca
Abstract:
Equity and education are a major focus of government policies around the world due to their relevance for addressing the sustainable development goals launched by Unesco. In this research, we developed a primary analysis of a data set of more than one hundred educational and non-educational factors associated with learning, coming from a census-based large-scale assessment carried out in Ecuador on 1,038,328 students, their families, teachers, and school directors, throughout 2014-2018. Each participating student was assessed by a standardized computer-based test. Learning outcomes were calibrated through item response theory with a two-parameter logistic model to obtain raw scores, which were re-scaled and synthesized by a learning index (LI). Our objective was to develop a network for modelling educational deprivation and to analyze the structure of inequality gaps, as well as their relationship with socioeconomic status, school financing, and students' ethnicity. Results from the model show that 348,270 students did not develop the minimum skills (prevalence rate = 0.215) and that Afro-Ecuadorian, Montuvio and Indigenous students exhibited the highest prevalence, with 0.312, 0.278 and 0.226, respectively. Regarding the socioeconomic status (SES) of students, the modularity class shows clearly that the system is out of equilibrium: the first decile (the poorest) exhibits a prevalence rate of 0.386 while the rate for decile ten (the richest) is 0.080, showing an intense negative relationship between learning and SES given by R = -0.58 (p < 0.001). Another interesting and unexpected result is the average weighted degree (426.9) for both private and public schools attended by Afro-Ecuadorian students, groups that obtained the highest PageRank (0.426), pointing out that they suffer the highest educational deprivation due to discrimination, even when belonging to the richest decile. The model also found the factors which explain deprivation through the highest PageRank and the greatest degree of connectivity for the first decile; they are: financial bonus for attending school, computer access, internet access, number of children, living with at least one parent, book access, reading books, phone access, time for homework, teachers arriving late, paid work, positive expectations about schooling, and mother's education. These results provide very accurate and clear knowledge about the variables affecting the poorest students and the inequalities they produce, from which needs profiles might be defined, as well as actions on the factors that can be influenced. Finally, these results confirm that network analysis is fundamental for educational policy, especially when linking reliable microdata with social macro-parameters, because it allows us to infer how gaps in educational achievements are driven by students' context at the time of assigning resources.
Keywords: complex network, educational deprivation, evidence-based policy, large-scale assessments, policy informatics
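For reference, the two-parameter logistic (2PL) model used for calibration gives the probability of a correct response as P(θ) = 1 / (1 + e^(-a(θ - b))); a minimal sketch follows, with invented item parameters.

```python
import math

def p_correct_2pl(theta, a, b):
    """Two-parameter logistic IRT model: probability that a student of
    ability theta answers an item with discrimination a and difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# An easy, low-discrimination item vs. a hard, sharp item (values assumed).
for theta in (-1.0, 0.0, 1.0):
    print(theta,
          round(p_correct_2pl(theta, a=0.8, b=-1.0), 3),
          round(p_correct_2pl(theta, a=1.8, b=1.0), 3))
```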
Procedia PDF Downloads 128