Search results for: large language models
172 Automatic Distance Compensation for Robust Voice-based Human-Computer Interaction
Authors: Randy Gomez, Keisuke Nakamura, Kazuhiro Nakadai
Abstract:
Distant-talking voice-based HCI systems suffer from performance degradation due to the mismatch between the acoustic speech (runtime) and the acoustic model (training). The mismatch is caused by the change in the power of the speech signal as observed at the microphones. This change is greatly influenced by the change in distance, which affects the speech dynamics inside the room before the signal reaches the microphones. Moreover, as the speech signal is reflected, its acoustical characteristics are also altered by the room properties. In general, power mismatch due to distance is a complex problem. This paper presents a novel approach to dealing with distance-induced mismatch by intelligently sensing instantaneous voice power variation and compensating the model parameters. First, the distant-talking speech signal is processed through microphone array processing, and the corresponding distance information is extracted. Distance-sensitive Gaussian Mixture Models (GMMs), pre-trained to capture both speech power and room properties, are used to predict the optimal distance of the speech source. Consequently, pre-computed statistical priors corresponding to the optimal distance are selected to correct the statistics of the generic model, which was frozen during training. Thus, the model combinatorics are post-conditioned to match the power of the instantaneous speech acoustics at runtime. This results in an improved likelihood of predicting the correct speech command at farther distances. We experimented using real data recorded inside two rooms. The experimental evaluation shows that voice recognition using our method is more robust to the change in distance than the conventional approach. In our experiment, under the most acoustically challenging condition (i.e., Room 2 at 2.5 meters), our method achieved a 24.2% improvement in recognition performance over the best-performing conventional method.
Keywords: Human Machine Interaction, Human Computer Interaction, Voice Recognition, Acoustic Model Compensation, Acoustic Speech Enhancement.
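The compensation scheme above can be illustrated with a minimal sketch: one GMM per training distance scores the runtime features, the most likely distance is chosen, and a pre-computed prior is applied to the frozen generic-model statistics. This is not the authors' implementation; the features, distances, and offsets below are synthetic stand-ins.

```python
# Minimal sketch (not the paper's implementation): selecting a distance-specific
# GMM by likelihood and applying a pre-computed mean offset to a generic model.
# The feature extraction, distances, and offsets below are illustrative assumptions.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
distances = [0.5, 1.5, 2.5]  # metres (hypothetical training distances)

# Pre-train one GMM per distance on power-related features (synthetic here).
gmms = {}
for i, d in enumerate(distances):
    feats = rng.normal(loc=-10.0 * i, scale=2.0, size=(500, 1))  # stand-in for log-power features
    gmms[d] = GaussianMixture(n_components=4, random_state=0).fit(feats)

# Pre-computed statistic priors: per-distance offsets applied to the generic model means.
mean_offsets = {0.5: 0.0, 1.5: -8.0, 2.5: -16.0}  # illustrative values

def compensate(generic_means, runtime_feats):
    """Pick the most likely distance, then shift the generic model statistics."""
    best_d = max(distances, key=lambda d: gmms[d].score(runtime_feats))
    return best_d, generic_means + mean_offsets[best_d]

utterance = rng.normal(loc=-10.0, scale=2.0, size=(50, 1))  # runtime features
generic_means = np.zeros(1)                                  # frozen generic-model statistic
d_hat, adapted = compensate(generic_means, utterance)
print(f"estimated distance: {d_hat} m, adapted mean: {adapted}")
```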
171 Identifying a Drug Addict Person Using Artificial Neural Networks
Authors: Mustafa Al Sukar, Azzam Sleit, Abdullatif Abu-Dalhoum, Bassam Al-Kasasbeh
Abstract:
Use and abuse of drugs by teens is very common and can have dangerous consequences. Drugs contribute to physical and sexual aggression such as assault or rape. Some teenagers regularly use drugs to compensate for depression, anxiety or a lack of positive social skills. Teen smoking should not be minimized, because it can be a "gateway drug" to other drugs (marijuana, cocaine, hallucinogens, inhalants, and heroin). The combination of teenagers' curiosity, risk-taking behavior, and social pressure makes it very difficult to say no. This leads most teenagers to the question: "Will it hurt to try once?" Nowadays, technological advances are changing our lives very rapidly and providing many technologies that can help track the risk of drug abuse, such as smartphones, Wireless Sensor Networks (WSNs), the Internet of Things (IoT), etc. Such technologies may help with the early discovery of drug abuse in order to prevent an aggravation of the influence of drugs on the abuser. In this paper, we have developed a Decision Support System (DSS) for detecting drug abuse using an Artificial Neural Network (ANN); we used a Multilayer Perceptron (MLP) feed-forward neural network in developing the system. The input layer includes 50 variables, while the output layer contains one neuron which indicates whether the person is a drug addict. An iterative process is used to determine the number of hidden layers and the number of neurons in each one. We used multiple experimental models, all completed with the log-sigmoid transfer function. In particular, a 10-fold cross-validation scheme is used to assess the generalization of the proposed system. The experimental results show 98.42% classification accuracy for correct diagnosis in our system. The data were taken from 184 cases in Jordan according to a set of questions compiled from specialists, and the data were obtained through the families of drug abusers.
Keywords: Artificial Neural Network, Decision Support System, drug abuse, drug addiction, Multilayer Perceptron.
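A minimal sketch of the reported setup, on synthetic data: an MLP with a logistic (log-sigmoid) activation, 50 input variables, a single binary output, and 10-fold cross-validation. The hidden-layer size and the synthetic labels are assumptions; the paper's questionnaire data are not reproduced here.

```python
# Minimal sketch (synthetic data): an MLP classifier with a logistic (log-sigmoid)
# activation evaluated with 10-fold cross-validation, mirroring the setup described
# above. The 50 input variables and hidden-layer size are assumptions.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(184, 50))                 # 184 cases, 50 input variables (synthetic)
y = (X[:, :5].sum(axis=1) > 0).astype(int)     # binary label: addict / not addict (synthetic)

clf = MLPClassifier(hidden_layer_sizes=(10,),  # hidden size would be chosen iteratively
                    activation="logistic",     # log-sigmoid transfer function
                    max_iter=2000,
                    random_state=0)

scores = cross_val_score(clf, X, y, cv=10)     # 10-fold cross-validation
print(f"mean accuracy: {scores.mean():.3f}")
```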
170 Transforming Health Information from Manual to Digital (Electronic) World–Reference and Guide
Authors: S. Karthikeyan, Naveen Bindra
Abstract:
Introduction: The aim is to update ourselves on, and understand, the latest electronic formats available to health care providers and how they can be used and developed according to standards. The idea is to relate the manual keeping of patients' medical records to the maintenance of patients' electronic information in a health care setup, and, furthermore, to adopt the right technology for the organization so as to improve the quality and quantity of the health care we provide. Objective: The concept is to explain the terms Electronic Medical Record (EMR), Electronic Health Record (EHR) and Personal Health Record (PHR), and to select the best technology among the available electronic sources and software before implementation, so that the technology can be used by end users without any doubts or difficulties. The idea is also to examine the uses and barriers of EMR, EHR and PHR. Aim and Scope: The target is to enable health care providers such as physicians, nurses, therapists, medical bill reimbursement staff, insurances and government to assess patient information in an easy and systematic manner without diluting the confidentiality of patient information. Method: Health information technology can be implemented with the help of organizations that provide legal guidelines and support the health care provider. The main objective is to select correct, embedded and affordable database management software capable of generating large-scale data; the parallel need is to know which of the latest software is available in the market. Conclusion: The question lies in implementing the electronic information system with health care providers and organizations. Clinicians are the main users of the technology and enable us to "go paperless". Technology is changing from day to day and is very sound and up to date. Basically, the idea is to show how to store the data electronically in a safe and secure manner. All three formats exemplify the fact that an electronic format has its own benefits as well as barriers.
Keywords: Medical records, digital records, health information, electronic record system.
169 Performance Evaluation and Plugging Characteristics of Controllable Self-Aggregating Colloidal Particle Profile Control Agent
Authors: Zhiguo Yang, Xiangan Yue, Minglu Shao, Yang Yue, Tianqi Yue
Abstract:
In low permeability reservoirs, the reservoir pore throat is small and the micro-heterogeneity is prominent. Conventional microsphere profile control agents generally have good injectability but a poor plugging effect; conversely, profile control agents with a good plugging effect generally have poor injectability, which makes it difficult for the agent to realize deep profile control of the reservoir. To solve this problem, styrene and acrylamide were used as monomers in the laboratory, and emulsion polymerization was used to prepare the Controllable Self-Aggregating Colloidal Particle (CSA), which is rich in amide groups. The CSA microsphere dispersion solution, with a particle diameter smaller than the pore throat diameter, was injected into the reservoir to ensure that the profile control agent had good injectability. After the CSA microspheres disperse to the deep part of the reservoir and remain static for a certain period of time, they self-aggregate into large-sized particle clusters to achieve plugging of high-permeability channels. The CSA microsphere has the characteristic of low expansion and avoids shear fracture during migration. Transmission electron microscopy shows that CSA microspheres still maintain a regular, uniform spherical shape and core-shell heterogeneous structure after aging at 100 ºC for 35 days, demonstrating good thermal stability. The results of bottle tests showed that with increasing cation concentration, the aggregation time of CSA microspheres gradually shortened, and the influence of divalent cations was greater than that of monovalent ions. Physical simulation experiments show that CSA microspheres have good injectability, and the aggregated CSA particle clusters can produce effective plugging and migrate to the deep part of the reservoir for profile control.
Keywords: Heterogeneous reservoir, deep profile control, emulsion polymerization, colloidal particles, plugging characteristic.
168 Simulation of an Auto-Tuning Bicycle Suspension Fork with Quick Releasing Valves
Authors: Y. C. Mao, G. S. Chen
Abstract:
A bicycle's configuration is not as large as that of a motorcycle or automobile, yet it still constitutes a complicated dynamic system. People's requirements for comfort, controllability and safety grow higher as research and development technologies improve. The shock absorber affects the vehicle suspension performance enormously. The absorber takes in vibration energy and releases it at a suitable time, keeping the wheel in proper contact with the road surface and maintaining the stability of the vehicle chassis. Suspension design for mountain bicycles is more difficult than that for city bikes, since it encounters dynamic variations in road and loading conditions. Riders need a stiff damper as they push hard on the pedals when climbing, and a soft damper when they descend downhill. Various switchable shock absorbers are available on the market; however, riders have to manually switch them among soft, hard and lock positions. This study proposes a novel design of bicycle shock absorber, which provides automatic smooth tuning of the damping coefficient from a predetermined lower bound to a theoretically unlimited upper value. An automatic quick-releasing valve is included in this design so that it can release the peak pressure when the suspension fork runs into a square-wave type obstacle, preventing damage to the chassis and injury to the rider. This design achieves the automatic tuning process through innovative plunger valve and fluidic passage arrangements without any electronic devices. Theoretical modelling of the damper and spring is established in this study. Design parameters of the valves and fluidic passages are determined. Relations between the design parameters and shock absorber performance are discussed in this paper. The analytical results give direction to shock absorber manufacturing.
Keywords: Modelling, Simulation, Bicycle, Shock Absorber, Damping, Releasing Valve
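A crude, hedged sketch of the kind of model the abstract describes: a single-degree-of-freedom spring-damper excited by a square-wave obstacle, in which the damping coefficient drops once the damper force exceeds a release threshold, standing in for the quick-releasing valve. All parameters are assumed and the base-excitation model is deliberately simplified.

```python
# Minimal sketch: spring-damper with a pressure-release behaviour approximated by
# reducing the damping coefficient when the damper force exceeds a threshold.
# Parameters and the simplified base-excitation model are assumptions, not the paper's model.
m, k = 80.0, 20000.0          # sprung mass (kg) and spring rate (N/m), assumed
c_hi, c_lo = 2500.0, 600.0    # damping before/after the release valve opens (N*s/m)
f_release = 500.0             # damper force that opens the valve (N), assumed

dt, T = 1e-4, 1.0
x, v = 0.0, 0.0
road = lambda t: 0.05 if 0.2 <= t <= 0.6 else 0.0   # square-wave obstacle, 5 cm

peak_force, valve_opened = 0.0, False
for i in range(int(T / dt)):
    t = i * dt
    c, f_damp = c_hi, c_hi * v
    if abs(f_damp) > f_release:          # valve releases the peak pressure
        c, f_damp = c_lo, c_lo * v
        valve_opened = True
    f_spring = k * (x - road(t))
    a = -(f_spring + f_damp) / m         # neglects base velocity through the damper
    v += a * dt
    x += v * dt
    peak_force = max(peak_force, abs(f_damp))
print(f"valve opened: {valve_opened}, peak damper force: {peak_force:.0f} N")
```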
167 A Simple and Empirical Refraction Correction Method for UAV-Based Shallow-Water Photogrammetry
Authors: I GD Yudha Partama, A. Kanno, Y. Akamatsu, R. Inui, M. Goto, M. Sekine
Abstract:
The aerial photogrammetry of shallow water bottoms has the potential to be an efficient high-resolution survey technique for shallow water topography, thanks to the advent of convenient UAVs and automatic image processing techniques (Structure-from-Motion (SfM) and Multi-View Stereo (MVS)). However, it suffers from systematic overestimation of the bottom elevation due to light refraction at the air-water interface. In this study, we present an empirical method to correct for the effect of refraction after the usual SfM-MVS processing, using common software. The presented method utilizes the empirical relation between the measured true depth and the estimated apparent depth to generate an empirical correction factor. This correction factor is then used to convert the apparent water depth into a refraction-corrected (real-scale) water depth. To examine its effectiveness, we applied the method to two river sites and compared the RMS errors in the corrected bottom elevations with those obtained by three existing methods. The result shows that the presented method is more effective than two of the existing methods: the method that applies no correction factor and the method that uses the refractive index of water (1.34) as the correction factor. In comparison with the remaining existing method, which adds an offset term after calculating the correction factor, the presented method performs better in Site 2 and worse in Site 1. However, we found this linear regression method to be unstable when the training data used for calibration are limited. It also suffers from a large negative bias in the correction factor when the estimated apparent water depth is affected by noise, according to our numerical experiment. Overall, the accuracy of the refraction correction method depends on various factors such as the location, image acquisition, and GPS measurement conditions. The most effective method can be selected by statistical selection (e.g., leave-one-out cross-validation).
Keywords: Bottom elevation, multi-view stereo, river, structure-from-motion.
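A minimal sketch of the empirical correction idea on synthetic depths: the correction factor is the least-squares slope of true depth against apparent depth, and its RMS error is compared with the fixed refractive-index factor (1.34). The depth values and noise level are illustrative assumptions.

```python
# Minimal sketch (synthetic depths): estimating an empirical refraction-correction
# factor by regressing measured true depth on SfM-MVS apparent depth, then comparing
# it with the fixed refractive-index factor (1.34). Values are illustrative only.
import numpy as np

rng = np.random.default_rng(2)
apparent = rng.uniform(0.2, 1.5, size=30)                  # apparent water depths (m)
true = 1.34 * apparent + rng.normal(0, 0.03, size=30)      # synthetic "measured" depths

# Empirical correction factor: least-squares slope through the origin.
k = np.sum(apparent * true) / np.sum(apparent ** 2)

corrected_empirical = k * apparent
corrected_fixed = 1.34 * apparent                          # refractive-index method

rmse = lambda a, b: np.sqrt(np.mean((a - b) ** 2))
print(f"empirical factor k = {k:.3f}")
print(f"RMSE empirical: {rmse(corrected_empirical, true):.4f} m")
print(f"RMSE fixed 1.34: {rmse(corrected_fixed, true):.4f} m")
```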
166 Microstructure and Corrosion Behavior of Laser Welded Magnesium Alloys with Silver Nanoparticles
Authors: M. Ishak, K. Yamasaki, K. Maekawa
Abstract:
Magnesium alloys have gained increased attention in recent years in the automotive, electronics, and medical industries. This is because magnesium alloys have better properties than aluminum alloys and steels in respect of their low density and high strength-to-weight ratio. However, the main problems of magnesium alloy welding are crack formation and the appearance of porosity during solidification. This paper proposes a unique technique to weld two thin sheets of AZ31B magnesium alloy using a paste containing Ag nanoparticles. The paste, containing Ag nanoparticles of 5 nm average diameter and an organic solvent, was used to coat the surface of an AZ31B thin sheet. The coated sheet was heated at 100 °C for 60 s to evaporate the solvent. The dried sheet was set as the lower AZ31B sheet on the jig, and then lap fillet welding was carried out using a pulsed Nd:YAG laser in a closed box filled with argon gas. The characteristics of the microstructure and the corrosion behavior of the joints were analyzed by optical microscopy (OM), energy dispersive spectrometry (EDS), electron probe micro-analysis (EPMA), scanning electron microscopy (SEM), and an immersion corrosion test. The experimental results show that the wrought AZ31B magnesium alloy can be joined successfully using Ag nanoparticles. The Ag nanoparticle insert promotes grain refinement, a narrower HAZ width and a wider bond width compared to welds without an insert. The corrosion rate of AZ31B welded with Ag nanoparticles was reduced by up to 44% compared to the base metal. The improvement in corrosion resistance of AZ31B welded with Ag nanoparticles is due to the finer grains and the larger grain boundary area with high Al content. The β-phase Mg17Al12 could serve as an effective barrier and suppress further propagation of corrosion. Furthermore, the Ag distribution in the fusion zone provides much finer grains and may stabilize the magnesium solid solution, making it less soluble or less anodic in aqueous environments.
Keywords: Laser welding, magnesium alloys, nanoparticles, mechanical property
165 Systematic Identification and Quantification of Substrate Specificity Determinants in Human Protein Kinases
Authors: Manuel A. Alonso-Tarajano, Roberto Mosca, Patrick Aloy
Abstract:
Protein kinases participate in a myriad of cellular processes of major biomedical interest. The in vivo substrate specificity of these enzymes is a process determined by several factors, and despite several years of research on the topic, is still far from being totally understood. In the present work, we have quantified the contributions to the kinase substrate specificity of i) the phosphorylation sites and their surrounding residues in the sequence and of ii) the association of kinases to adaptor or scaffold proteins. We have used position-specific scoring matrices (PSSMs), to represent the stretches of sequences phosphorylated by 93 families of kinases. We have found negative correlations between the number of sequences from which a PSSM is generated and the statistical significance and the performance of that PSSM. Using a subset of 22 statistically significant PSSMs, we have identified specificity determinant residues (SDRs) for 86% of the corresponding kinase families. Our results suggest that different SDRs can function as positive or negative elements of substrate recognition by the different families of kinases. Additionally, we have found that human proteins with known function as adaptors or scaffolds (kAS) tend to interact with a significantly large fraction of the substrates of the kinases to which they associate. Based on this characteristic we have identified a set of 279 potential adaptors/scaffolds (pAS) for human kinases, which is enriched in Pfam domains and functional terms tightly related to the proposed function. Moreover, our results show that for 74.6% of the kinase–pAS association found, the pAS colocalize with the substrates of the kinases they are associated to. Finally, we have found evidence suggesting that the association of kinases to adaptors and scaffolds, may contribute significantly to diminish the in vivo substrate crossed-specificity of protein kinases. In general, our results indicate the relevance of several SDRs for both the positive and negative selection of phosphorylation sites by kinase families and also suggest that the association of kinases to pAS proteins may be an important factor for the localization of the enzymes with their set of substrates.
Keywords: Kinase, phosphorylation, substrate specificity, adaptors, scaffolds, cellular colocalization.
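A minimal sketch of the PSSM representation used above: aligned phosphosite peptides are turned into a position-wise log-odds matrix against a background distribution and used to score candidate sites. The toy peptides, uniform background, and pseudocount are assumptions, not the study's data.

```python
# Minimal sketch: building a position-specific scoring matrix (PSSM) from aligned
# phosphosite peptides and scoring a candidate site. The peptides, background
# frequencies, and pseudocount are illustrative assumptions.
import math
from collections import Counter

AA = "ACDEFGHIKLMNPQRSTVWY"
background = {a: 1.0 / len(AA) for a in AA}   # uniform background (assumption)

peptides = ["RRASLAG", "KRASVAG", "RKASLTG", "RRPSLAG"]   # toy aligned phosphosites
L = len(peptides[0])
pseudo = 1.0

# Log-odds PSSM: log2 of (smoothed position frequency / background frequency).
pssm = []
for pos in range(L):
    counts = Counter(p[pos] for p in peptides)
    col = {a: math.log2(((counts.get(a, 0) + pseudo / len(AA)) /
                         (len(peptides) + pseudo)) / background[a]) for a in AA}
    pssm.append(col)

def score(peptide):
    """Sum of per-position log-odds scores for a candidate phosphosite."""
    return sum(pssm[i][aa] for i, aa in enumerate(peptide))

print(f"score(RRASLAG) = {score('RRASLAG'):.2f}")
print(f"score(GGGGGGG) = {score('GGGGGGG'):.2f}")
```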
164 Study on Optimization of Air Infiltration at Entrance of a Commercial Complex in Zhejiang Province
Authors: Yujie Zhao, Jiantao Weng
Abstract:
In the past decade, with the rapid development of China's economy, the purchasing power and physical demands of residents have improved, which has resulted in the widespread emergence of public buildings such as large shopping malls. However, architects usually focus on the internal functions and circulation of these buildings, ignoring the impact of the environment on the subjective feelings of building users. In Zhejiang province alone, the infiltration of cold air in winter frequently occurs at the entrances of sizeable commercial complex buildings in operation, which affects the environmental comfort of the building lobby and internal public spaces. At present, to reduce these adverse effects, active equipment is usually added, such as air curtains to block air exchange or additional heating air conditioners. From the perspective of energy consumption, the infiltration of cold air at the entrance increases the heat consumption of indoor heating equipment, which indirectly causes considerable economic losses over the whole winter heating stage. Therefore, it is of considerable significance to explore suitable entrance forms for improving the environmental comfort of commercial buildings and saving energy. In this paper, a commercial complex in Hangzhou with an apparent cold air infiltration problem is selected as the research object to establish a model. The environmental parameters of the building entrance, including temperature, wind speed, and infiltration air volume, are obtained by Computational Fluid Dynamics (CFD) simulation, from which the heat consumption caused by natural air infiltration in winter and the resulting potential economic loss are estimated as the objective metric. This study finally obtains the optimization direction for the building entrance form of the commercial complex by comparing the simulation results with those of other local commercial complex projects with different entrance forms. The conclusions will guide the entrance design of the same type of commercial complex in this area.
Keywords: Air infiltration, commercial complex, heat consumption, CFD simulation.
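The link between a CFD-derived infiltration air flow and the heating cost can be sketched with the standard sensible-heat relation Q = ρ·V̇·c_p·ΔT; all numerical inputs below (flow rate, temperatures, operating hours, energy price) are illustrative assumptions rather than values from the study.

```python
# Minimal sketch: estimating the heating load and seasonal cost caused by cold-air
# infiltration at an entrance, from an infiltration air-flow rate such as one obtained
# from a CFD simulation. All numerical inputs are illustrative assumptions.
rho = 1.2                 # air density, kg/m^3
cp = 1005.0               # specific heat of air, J/(kg*K)
flow = 2.5                # infiltration air-flow rate, m^3/s (e.g. from CFD)
t_in, t_out = 20.0, 2.0   # indoor / outdoor winter temperatures, degC

q_watts = rho * flow * cp * (t_in - t_out)   # instantaneous sensible heat loss, W
hours = 10 * 90                              # 10 h/day over a 90-day heating season
energy_kwh = q_watts * hours / 1000.0
price = 0.12                                 # energy price, currency units per kWh
print(f"heat loss: {q_watts / 1000:.1f} kW, seasonal energy: {energy_kwh:.0f} kWh, "
      f"cost: {energy_kwh * price:.0f}")
```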
163 Comparison of Data Reduction Algorithms for Image-Based Point Cloud Derived Digital Terrain Models
Authors: M. Uysal, M. Yilmaz, I. Tiryakioğlu
Abstract:
A Digital Terrain Model (DTM) is a digital numerical representation of the Earth's surface. DTMs have been applied to a diverse range of tasks, such as urban planning, military applications, glacier mapping, and disaster management. To express the Earth's surface as a mathematical model, an infinite number of point measurements would be needed. Since this is impossible, points at regular intervals are measured to characterize the Earth's surface and generate a DTM of the Earth. Hitherto, classical measurement techniques and the photogrammetry method have had widespread use in the construction of DTMs. At present, RADAR, LiDAR, and stereo satellite images are also used for DTM construction. In recent years, especially because of its advantages, Airborne Light Detection and Ranging (LiDAR) has seen increased use in DTM applications. A 3D point cloud is created with LiDAR technology by obtaining numerous point data. Recently, however, with developments in image mapping methods, the use of unmanned aerial vehicles (UAV) for photogrammetric data acquisition has increased DTM generation from image-based point clouds. The accuracy of a DTM depends on various factors such as the data collection method, the distribution of elevation points, the point density, the properties of the surface, and the interpolation method. In this study, the random data reduction method is compared for DTMs generated from image-based point cloud data. The original image-based point cloud data set (100%) is reduced to a series of subsets by using a random algorithm, representing 75, 50, 25 and 5% of the original data set. Over the ANS campus of Afyon Kocatepe University as the test area, the DTM constructed from the original image-based point cloud data set is compared with DTMs interpolated from the reduced data sets by the Kriging interpolation method. The results show that the random data reduction method can be used to reduce image-based point cloud datasets to the 50% density level while still maintaining the quality of the DTM.
Keywords: DTM, unmanned aerial vehicle, UAV, random, Kriging.
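A minimal sketch of the random reduction step described above: an image-based point cloud is subsampled to the 75/50/25/5% density levels before interpolation. The synthetic points stand in for a real SfM/MVS cloud, and the Kriging interpolation and DTM comparison are omitted.

```python
# Minimal sketch: random reduction of an image-based point cloud to fixed density
# levels (75/50/25/5%), as used in the comparison above. Synthetic points only.
import numpy as np

rng = np.random.default_rng(3)
cloud = rng.uniform(0, 100, size=(10000, 3))        # x, y, z points (synthetic)

def random_reduce(points, fraction, seed=0):
    """Keep a random subset containing `fraction` of the original points."""
    r = np.random.default_rng(seed)
    n_keep = int(len(points) * fraction)
    idx = r.choice(len(points), size=n_keep, replace=False)
    return points[idx]

for frac in (0.75, 0.50, 0.25, 0.05):
    subset = random_reduce(cloud, frac)
    print(f"{int(frac * 100):>3d}% subset: {len(subset)} points")
```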
162 Evaluation of Natural Drainage Flow Pattern, Necessary for Flood Control, Using Digitized Topographic Information: A Case Study of Bayelsa State Nigeria
Authors: Collins C. Chiemeke
Abstract:
The need to evaluate and understand the natural drainage pattern in a flood-prone and fast-developing environment is of paramount importance. This information will go a long way toward helping town planners to determine the drainage pattern, road networks and areas where prominent structures are to be located. This research work was carried out with the aim of studying the Bayelsa landscape topography using digitized topographic information and of modeling the natural drainage flow pattern that will aid the understanding and construction of workable drainages. To achieve this, digitized information on elevation and coordinate points was extracted from a global imagery map. The extracted information was modeled into 3D surfaces. The result revealed that the average elevation for Bayelsa State is 12 m above sea level. The highest elevation is 28 m, and the lowest elevation is 0 m, along the coastline. In Yenagoa, the capital city of Bayelsa, where a detailed survey was carried out, the average elevation is 15 m, the highest elevation is 25 m and the lowest is 3 m above mean sea level. The regional elevation in Bayelsa shows a gradual decrease from the North Eastern zone to the South Western zone. Yenagoa shows an observed elevation lineament, where a low depression is flanked by high elevation that runs from the North East to the South West. Hence, future drainages in Yenagoa should be directed from the high elevations, from the South East toward the North West and from the North West toward the South East, to the point of convergence at the center, which flows from the South East toward the North West. When Bayelsa is considered on a regional scale, the flow pattern is from the North East to the South West, and also from North to South. It is recommended that, in the event of any large drainage construction at municipal scale, it should be directed from North East to South West or from North to South. Secondly, a detailed survey should be carried out to ascertain the local topography and the drainage pattern before the design and construction of any drainage system in any part of Bayelsa.
Keywords: Bayelsa, Digitized Topographic Information, Drainage, Flood.
161 Time Temperature Dependence of Long Fiber Reinforced Polypropylene Manufactured by Direct Long Fiber Thermoplastic Process
Authors: K. A. Weidenmann, M. Grigo, B. Brylka, P. Elsner, T. Böhlke
Abstract:
In order to reduce fuel consumption, the weight of automobiles has to be reduced. Fiber reinforced polymers offer the potential to reach this aim because of their high stiffness to weight ratio. Additionally, the use of fiber reinforced polymers in automotive applications has to allow for an economic large-scale production. In this regard, long fiber reinforced thermoplastics made by direct processing offer both mechanical performance and processability in injection moulding and compression moulding. The work presented in this contribution deals with long glass fiber reinforced polypropylene directly processed in compression moulding (D-LFT). For the use in automotive applications both the temperature and the time dependency of the materials properties have to be investigated to fulfill performance requirements during crash or the demands of service temperatures ranging from -40 °C to 80 °C. To consider both the influence of temperature and time, quasistatic tensile tests have been carried out at different temperatures. These tests have been complemented by high speed tensile tests at different strain rates. As expected, the increase in strain rate results in an increase of the elastic modulus which correlates to an increase of the stiffness with decreasing service temperature. The results are in good accordance with results determined by dynamic mechanical analysis within the range of 0.1 to 100 Hz. The experimental results from different testing methods were grouped and interpreted by using different time temperature shift approaches. In this regard, Williams-Landel-Ferry and Arrhenius approach based on kinetics have been used. As the theoretical shift factor follows an arctan function, an empirical approach was also taken into consideration. It could be shown that this approach describes best the time and temperature superposition for glass fiber reinforced polypropylene manufactured by D-LFT processing.
Keywords: Composite, long fiber reinforced thermoplastics, mechanical properties, dynamic mechanical analysis, time temperature superposition.
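The two shift approaches named above can be sketched as follows; the WLF constants, activation energy, and reference temperature are illustrative assumptions, not the fitted values from the study.

```python
# Minimal sketch: Williams-Landel-Ferry (WLF) and Arrhenius time-temperature shift
# factors. C1, C2, Ea and Tref are illustrative assumptions, not fitted values.
import math

def wlf_log_aT(T, Tref, C1=17.44, C2=51.6):
    """log10 of the WLF shift factor (valid only above the singularity at Tref - C2)."""
    return -C1 * (T - Tref) / (C2 + (T - Tref))

def arrhenius_log_aT(T, Tref, Ea=120e3, R=8.314):
    """log10 of the Arrhenius shift factor (temperatures in kelvin)."""
    return (Ea / (math.log(10) * R)) * (1.0 / T - 1.0 / Tref)

Tref = 23.0 + 273.15
for T_c in (0.0, 23.0, 80.0):
    T = T_c + 273.15
    print(f"T = {T_c:>5.1f} degC  WLF log aT = {wlf_log_aT(T, Tref):+.2f}  "
          f"Arrhenius log aT = {arrhenius_log_aT(T, Tref):+.2f}")
```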
160 Applying Participatory Design for the Reuse of Deserted Community Spaces
Authors: Wei-Chieh Yeh, Yung-Tang Shen
Abstract:
The concept of community building started in 1994 in Taiwan. After years of development, it fostered the notion of local residents actively participating in community issues as co-operators instead of subordinates. Participatory design gives participants more control in the decision-making process, helps to reduce the friction caused by arguments and assists in bringing different parties to consensus. This results in an increase in the efficiency of projects run in the community. Therefore, the participation of local residents is key to the success of community building. This study applied participatory design to develop plans for the reuse of deserted spaces in the community, from the first stage of brainstorming design ideas, through making creative models to be employed later, to the final stage of construction. By conducting a series of participatory design activities, it aimed to integrate the different opinions of residents, develop a sense of belonging and reach a consensus. Besides this, it also aimed at building the residents' awareness of their responsibility for the environment and related issues of sustainable development. By reviewing relevant literature and understanding the history of related studies, the study formulated a theory. It took the "2012-2014 Changhua County Community Planner Counseling Program" as a case study to investigate the implementation process of participatory design. Research data were collected by document analysis, participant observation and in-depth interviews. After examining the three elements of "design participation", "construction participation" and "follow-up maintenance participation" in the case, the study arrived at a promising conclusion: maintenance works were carried out better compared to common public works. Besides this, maintenance costs were lower. Moreover, the works that residents were involved in were more creative. Most importantly, the community characteristics could be easily recognized.
Keywords: Participatory design, Deserted spaces, Community building, Reuse.
159 Best Combination of Design Parameters for Buildings with Buckling-Restrained Braces
Authors: Ángel de J. López-Pérez, Sonia E. Ruiz, Vanessa A. Segovia
Abstract:
Building vulnerability due to seismic activity has been studied extensively since the middle of the last century. As a solution to the structural and non-structural damage caused by intense ground motions, several seismic energy dissipating devices, such as buckling-restrained braces (BRB), have been proposed. BRBs have been shown to be effective in concentrating a large portion of the energy transmitted to the structure by the seismic ground motion. A design approach for buildings with BRB elements, based on a seismic displacement-based formulation, has recently been proposed by the coauthors of this paper. It is a practical and easy design method which simplifies the work of structural engineers. The method is used here for the design of the structure-BRB damper system. The objective of the present study is to extend and apply a methodology to find the best combination of design parameters for multiple-degree-of-freedom (MDOF) structural frame – BRB systems, taking into account simultaneously: 1) initial costs and 2) an adequate engineering demand parameter. The design parameters considered here are the stiffness ratio (α = Kframe/Ktotal) and the strength ratio (γ = Vdamper/Vtotal), where K represents structural stiffness and V structural strength, and the subscripts "frame", "damper" and "total" represent the structure without dampers, the BRB dampers and the total frame-damper system, respectively. The selection of the best combination of design parameters α and γ is based on an initial cost analysis and on the structural dynamic response of the structural frame-damper system. The methodology is applied to a 12-story, 5-bay steel building with BRB located on the intermediate soil of Mexico City, and the best combination of design parameters α and γ is found for this building.
Keywords: Best combination of design parameters, BRB, buildings with energy dissipating devices, buckling-restrained braces, initial costs.
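A minimal sketch of the two design parameters defined above: given assumed totals, α and γ determine how stiffness and strength are split between the bare frame and the BRB dampers, forming the candidate grid a designer would then evaluate against initial cost and the demand parameter. The totals and grid values are assumptions, not the study's figures.

```python
# Minimal sketch: splitting total stiffness/strength between frame and BRB dampers
# from alpha = Kframe/Ktotal and gamma = Vdamper/Vtotal. Totals are assumed values.
K_total = 5.0e5    # total lateral stiffness, kN/m (assumed)
V_total = 8.0e3    # total lateral strength, kN (assumed)

def allocate(alpha, gamma):
    """Return frame and damper stiffness/strength for one (alpha, gamma) candidate."""
    K_frame = alpha * K_total
    K_damper = (1.0 - alpha) * K_total
    V_damper = gamma * V_total
    V_frame = (1.0 - gamma) * V_total
    return K_frame, K_damper, V_frame, V_damper

for alpha in (0.25, 0.50, 0.75):
    for gamma in (0.25, 0.50, 0.75):
        Kf, Kd, Vf, Vd = allocate(alpha, gamma)
        print(f"alpha={alpha:.2f} gamma={gamma:.2f}  "
              f"Kframe={Kf:.0f} Kdamper={Kd:.0f}  Vframe={Vf:.0f} Vdamper={Vd:.0f}")
```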
158 Computer Models of the Vestibular Head Tilt Response, and Their Relationship to EVestG and Meniere's Disease
Authors: Daniel Heibert, Brian Lithgow, Kerry Hourigan
Abstract:
This paper attempts to explain response components of Electrovestibulography (EVestG) using a computer simulation of a three-canal model of the vestibular system. EVestG is a potentially new diagnostic method for Meniere's disease. EVestG is a variant of Electrocochleography (ECOG), which has been used as a standard method for diagnosing Meniere's disease: it can be used to measure the SP/AP ratio, where an SP/AP ratio greater than 0.4-0.5 is indicative of Meniere's disease. In EVestG, an applied head tilt replaces the acoustic stimulus of ECOG. The EVestG output is also an SP/AP-type plot, where SP is the summing potential and AP is the action potential amplitude. AP is thought of as being proportional to the size of a population of afferents in an excitatory neural firing state. A simulation of the fluid volume displacement in the vestibular labyrinth in response to various types of head tilts (ipsilateral, backwards and horizontal rotation) was performed, and a simple neural model based on these simulations was developed. The simple neural model shows that the change in firing rate of the utricle is much larger in magnitude than the change in firing rates of all three semi-circular canals following a head tilt (except in a horizontal rotation). The data suggest that the change in utricular firing rate is at minimum 2-3 orders of magnitude larger than the changes in firing rates of the canals during ipsilateral/backward tilts. Based on these results, the neural response recorded by the electrode in our EVestG recordings is expected to be dominated by the utricle in ipsilateral/backward tilts (it is important to note that the effect of the saccule and of efferent signals was not taken into account in this model). If the utricle response dominates the EVestG recordings as the modeling results suggest, then EVestG has the potential to diagnose utricular hair cell damage due to a viral infection (which has been cited as one possible cause of Meniere's disease).
Keywords: Diagnostic, endolymph hydrops, Meniere's disease, modeling.
157 Exploring Socio-Economic Barriers of Green Entrepreneurship in Iran and Their Interactions Using Interpretive Structural Modeling
Authors: Younis Jabarzadeh, Rahim Sarvari, Negar Ahmadi Alghalandis
Abstract:
Entrepreneurship at both the individual and organizational level is one of the most important driving forces in economic development and leads to growth and competition, job generation and social development. Especially in developing countries, the role of entrepreneurship in economic and social prosperity is emphasized even more. But the effect of global economic development on the environment is undeniable, especially in negative ways, and there is a need to rethink current business models and the way entrepreneurs act when introducing new businesses, so as to address and embed environmental issues in order to achieve sustainable development. In this paper, green or sustainable entrepreneurship is addressed in Iran to identify the challenges and barriers entrepreneurs in the economic and social sectors face in developing green business solutions. Sustainable or green entrepreneurship has been gaining interest among scholars in recent years, and addressing its challenges and barriers needs much more attention to fill the gap in the literature and facilitate the path those entrepreneurs are pursuing. This research comprises two main phases: qualitative and quantitative. In the qualitative phase, after a thorough literature review, the fuzzy Delphi method is utilized to verify those challenges and barriers by gathering a panel of experts and surveying them. In this phase, several other contextually related factors were added to the list of identified barriers and challenges mentioned in the literature. Then, in the quantitative phase, Interpretive Structural Modeling is applied to construct a network of interactions among the barriers identified in the previous phase. Again, a panel of subject matter experts comprising academic and industry experts was surveyed. The results of this study can be used by policymakers in both the public and industry sectors to introduce more systematic solutions to eliminate those barriers and help entrepreneurs overcome the challenges of sustainable entrepreneurship. It also contributes to the literature as the first research of this type dealing with the barriers of sustainable entrepreneurship and exploring their interaction.
Keywords: Green entrepreneurship, barriers, Fuzzy Delphi Method, interpretive structural modeling.
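The core ISM step mentioned above can be sketched as follows: a binary influence matrix among barriers is closed transitively into a reachability matrix, and the barriers are then partitioned into levels. The 4-barrier matrix is a toy example, not the experts' data.

```python
# Minimal sketch: Interpretive Structural Modeling (ISM) reachability and level
# partitioning on a toy influence matrix among barriers (not the study's data).
import numpy as np

# adj[i, j] = True if barrier i influences barrier j (toy input)
adj = np.array([[1, 1, 0, 0],
                [0, 1, 1, 0],
                [0, 0, 1, 1],
                [0, 0, 0, 1]], dtype=bool)

# Transitive closure (Warshall's algorithm) -> final reachability matrix.
reach = adj.copy()
n = len(reach)
for k in range(n):
    for i in range(n):
        if reach[i, k]:
            reach[i] |= reach[k]

# Level partitioning: a barrier belongs to the current level when its reachability
# set equals the intersection of its reachability and antecedent sets.
remaining, level = set(range(n)), 1
while remaining:
    current = []
    for i in remaining:
        reach_set = {j for j in remaining if reach[i, j]}
        ante_set = {j for j in remaining if reach[j, i]}
        if reach_set == reach_set & ante_set:
            current.append(i)
    print(f"level {level}: barriers {sorted(current)}")
    remaining -= set(current)
    level += 1
```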
156 Identifying Game Variables from Students’ Surveys for Prototyping Games for Learning
Authors: N. Ismail, O. Thammajinda, U. Thongpanya
Abstract:
Games-based learning (GBL) has become increasingly important in teaching and learning. This paper explains the first two phases (analysis and design) of a GBL development project, ending up with a prototype design based on students’ and teachers’ perceptions. The two phases are part of a full cycle GBL project aiming to help secondary school students in Thailand in their study of Comprehensive Sex Education (CSE). In the course of the study, we invited 1,152 students to complete questionnaires and interviewed 12 secondary school teachers in focus groups. This paper found that GBL can serve students in their learning about CSE, enabling them to gain understanding of their sexuality, develop skills, including critical thinking skills and interact with others (peers, teachers, etc.) in a safe environment. The objectives of this paper are to outline the development of GBL variables from the research question(s) into the developers’ flow chart, to be responsive to the GBL beneficiaries’ preferences and expectations, and to help in answering the research questions. This paper details the steps applied to generate GBL variables that can feed into a game flow chart to develop a GBL prototype. In our approach, we detailed two models: (1) Game Elements Model (GEM) and (2) Game Object Model (GOM). There are three outcomes of this research – first, to achieve the objectives and benefits of GBL in learning, game design has to start with the research question(s) and the challenges to be resolved as research outcomes. Second, aligning the educational aims with engaging GBL end users (students) within the data collection phase to inform the game prototype with the game variables is essential to address the answer/solution to the research question(s). Third, for efficient GBL to bridge the gap between pedagogy and technology and in order to answer the research questions via technology (i.e. GBL) and to minimise the isolation between the pedagogists “P” and technologist “T”, several meetings and discussions need to take place within the team.
Keywords: Games-based learning, design, engagement, pedagogy, preferences, prototype, variables.
155 Design, Fabrication and Evaluation of MR Damper
Authors: A. Ashfak, A. Saheed, K. K. Abdul Rasheed, J. Abdul Jaleel
Abstract:
This paper presents the design, fabrication and evaluation of a magneto-rheological (MR) damper. Semi-active control devices have received significant attention in recent years because they offer the adaptability of active control devices without requiring the associated large power sources. MR dampers are semi-active control devices that use MR fluids to produce controllable damping. They potentially offer highly reliable operation and can be viewed as fail-safe in that they become passive dampers if the control hardware malfunctions. The advantages of MR dampers over conventional dampers are that they are simple in construction, offer a compromise between high-frequency isolation and natural-frequency isolation, provide semi-active control, use very little power, have a very quick response, have few moving parts, have relaxed tolerances, and interface directly with electronics. Magneto-rheological (MR) fluids are controllable fluids belonging to the class of active materials that have the unique ability to change their dynamic yield stress when acted upon by an electric or magnetic field, while maintaining a relatively constant viscosity. This property can be utilized in an MR damper, where the damping force is changed by magnetically changing the rheological properties of the fluid. MR fluids have a higher dynamic yield stress than electro-rheological (ER) fluids and a broader operational temperature range. The objective of this paper was to study the application of an MR damper to vibration control, design the vibration damper using MR fluids, and test and evaluate its performance. In this paper the rheology of and the theory behind MR fluids and their use in vibration control were studied. Then an MR vibration damper suitable for a vehicle suspension was designed and fabricated using the MR fluid. The MR damper was tested using a dynamic test rig, and the results were obtained in the form of force vs. velocity and force vs. displacement plots. The results were encouraging and greatly inspire further research on the topic.
Keywords: Magneto-rheological fluid, MR damper, semi-active controller, electro-rheological fluid.
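A common way to summarize the kind of force vs. velocity behaviour reported above is the Bingham-type model F = c·v + f_c·sign(v), with a field-dependent yield force. This is a generic textbook idealization, not the damper characterized in the paper, and the coefficients below are illustrative assumptions.

```python
# Minimal sketch: a Bingham-type force model often used to describe MR dampers,
# F = c*v + f_c*sign(v), with a yield force that grows with the coil current.
# Coefficients are assumptions, not measurements from the paper's test rig.
import numpy as np

c_visc = 800.0                       # viscous coefficient, N*s/m (assumed)

def yield_force(current):            # field-dependent yield force, N (assumed linear)
    return 150.0 + 900.0 * current   # current in ampere

def damper_force(velocity, current):
    return c_visc * velocity + yield_force(current) * np.sign(velocity)

v = np.linspace(-0.5, 0.5, 5)        # piston velocity, m/s
for amp in (0.0, 0.5, 1.0):
    print(f"I = {amp:.1f} A:", np.round(damper_force(v, amp), 1))
```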
154 A Framework for an Automated Decision Support System for Selecting Safety-Conscious Contractors
Authors: Rawan A. Abdelrazeq, Ahmed M. Khalafallah, Nabil A. Kartam
Abstract:
Selection of competent contractors for construction projects is usually accomplished through competitive bidding or negotiated contracting, in which the contract bid price is the basic criterion for selection. The evaluation of a contractor's safety performance is still not a typical criterion in the selection process, despite the existence of various safety prequalification procedures. There is a critical need for practical and automated systems that enable owners and decision makers to evaluate contractor safety performance, among other important contractor selection criteria. These systems should ultimately favor safety-conscious contractors, selected by virtue of their past good safety records and current safety programs. This paper presents an exploratory sequential mixed-methods approach to develop a framework for an automated decision support system that evaluates contractor safety performance based on a multitude of indicators and metrics that have been identified through a comprehensive review of construction safety research and a survey distributed to domain experts. The framework is developed in three phases: (1) determining the indicators that depict contractor current and past safety performance; (2) soliciting input from construction safety experts regarding the identified indicators, their metrics, and their relative significance; and (3) designing a decision support system using relational database models to integrate the identified indicators and metrics into a system that assesses and rates the safety performance of contractors. The proposed automated system is expected to hold several advantages, including: (1) reducing the likelihood of selecting contractors with poor safety records; (2) enhancing the odds of completing the project safely; and (3) encouraging contractors to exert more effort to improve their safety performance and practices in order to increase their bid-winning opportunities, which can lead to significant safety improvements in the construction industry. This should prove useful to decision makers and researchers alike, and should help improve the safety record of the construction industry.
Keywords: Construction safety, contractor selection, decision support system, relational database.
153 Ideal Disinfectant Characteristics According Data in Published Literature
Authors: Saimir Heta, Ilma Robo, Rialda Xhizdari, Kers Kapaj
Abstract:
The stability of an ideal disinfectant should remain constant regardless of changes in the atmospheric conditions of the environment where it is kept. If conditions such as temperature or humidity change, it will also be necessary to consider possible changes in the holding materials, such as plastic or glass bottles, with the aim of protecting the disinfectant from, for example, excessive lighting of the environment, which can translate into an increase in the temperature of the disinfectant as a fluid. In this study, an attempt was made to find the most recently published data about the best possible combination of disinfectants indicated for use after dental procedures. This purpose was realized by comparing the basic literature studied by dentistry students with the most recent data published in the literature on this topic. Each disinfectant is represented by a number called the disinfectant constant, which different factors can increase or reduce, and which remains a specific statistic for a specific disinfectant. Changes in the atmospheric conditions of the environment where the disinfectant is deposited and stored are known to affect the stability of the disinfectant as a fluid; this fact is known and even cited in the leaflets accompanying the manufactured boxes of disinfectants. This care, given in the form of advice, concerns not only the preservation of the disinfectant but also its application, in order to achieve the desired clinical result. Aldehydes have the highest constant among the types of disinfectants, followed by acids. The lowest value of the constant belongs to the class of glycols, preceded by the halogens, a class with some representatives used in disinfection. The classes of phenols and acids have almost the same intervals of constants. If the goal were to find the ideal disinfectant among the large variety of disinfectants produced, a good starting point would be to find a fixed, unchanging element on the basis of which the properties of different disinfectants can be compared. Based on the results of this study, the role of the constant specific to each disinfectant is highlighted. Finding an ideal disinfectant, like finding the ideal medication or antibiotic, is an ongoing but so far unattainable goal.
Keywords: Different disinfectants, phenols, aldehydes, specific constant, dental procedures.
152 Organization of the Purchasing Function for Innovation
Authors: Jasna Prester, Ivana Rašić Bakarić, Božidar Matijević
Abstract:
Innovations not only contribute to the competitiveness of the company but also have positive effects on revenues. On average, product innovations account for 14 percent of companies' sales. Innovation management has changed substantially during the last decade because of the growing reliance on external partners. As a consequence, a new task for purchasing arises, as firms need to understand which suppliers actually have high potential to contribute to the innovativeness of the firm and which do not. Proper organization of the purchasing function is important, since the majority of manufacturing companies deal with substantial material costs which pass through the purchasing function. In the past the purchasing function was largely seen as a transaction-oriented, clerical function, but today purchasing is the intermediary with supply chain partners contributing to innovations, be they product or process innovations. Therefore, the purchasing function has to be organized differently to enable the firm's innovation potential. However, innovations are inherently risky. There are behavioral risks (that some partner will take advantage of the other party), technological risks in terms of the complexity of products, manufacturing processes and incoming materials, and finally market risks, which ultimately judge the value of the innovation. These risks are investigated in this work. Specifically, technological risks, which deal with the complexity of the products and processes, are investigated more thoroughly. Buying components or such high-end technologies necessitates careful investigation of technical features and is therefore usually conducted by a team of experts. It is therefore hypothesized that the higher the technological risk, the higher will be the centralization of the purchasing function as an interface with other supply chain members. The main contribution of this research lies in the fact that the analysis was performed on a large data set of 1,493 companies from 25 countries, collected in the GMRG 4 survey. Most analyses of the purchasing function are done by case studies of innovative firms; therefore, this study contributes empirical evaluations that can be generalized.
Keywords: Purchasing function organization, innovation, technological risk, GMRG 4 survey.
151 Wind Energy Status in Turkey
Authors: Mustafa Engin Başoğlu, Bekir Çakir
Abstract:
Since a large part of electricity is generated using fossil-based resources, energy is an important agenda item for countries. In this context, renewable energy sources are an alternative to conventional sources due to the depletion of fossil resources, increasing awareness of climate change and global warming concerns. Solar, wind and hydropower are the main renewable energy sources. Among them, since the installed capacity of wind power increased approximately eightfold between 2008 and November 2014, wind energy is a promising source for Turkey. Furthermore, the signing of the Kyoto Protocol can be accepted as a milestone for Turkey's energy policy. The Turkish Government announced Vision 2023 (energy targets by 2023) in the 2010-2014 Strategic Plan prepared by the Ministry of Energy and Natural Resources (MENR). The energy targets in this plan can be summarized as follows: the share of renewable energy sources is to reach 30% of total electricity generation by 2023; the installed capacity of wind energy is to reach 20 GW by 2023; other renewable energy sources such as solar, hydropower and geothermal are encouraged with new incentive mechanisms; and dependence on foreign energy is to be reduced for sustainability and energy security. On the other hand, since Turkey is surrounded by sea on three sides, its wind energy potential is suitable for wind power applications. As of November 2014, the total installed capacity of wind power plants is 3.51 GW, and wind power plants with a total capacity of 1.16 GW are under construction. The Turkish government also encourages locally manufactured equipment. In this context, one of the projects funded by the private sector, universities and TUBİTAK, named MILRES, is an important project aimed at promoting the use of wind energy in electricity generation. Within this project, a wind turbine with 500 kW power has been produced and will be installed at the beginning of 2015. After that, using the experience obtained from the first phase of the project, a wind turbine with 2.5 MW power will be manufactured on an industrial scale.
Keywords: Wind energy, wind speed, Vision 2023, MILRES (national wind energy system), wind energy potential, Turkey.
150 Efficacy of Gamma Radiation on the Productivity of Bactrocera oleae Gmelin (Diptera: Tephritidae)
Authors: Mehrdad Ahmadi, Mohamad Babaie, Shiva Osouli, Bahareh Salehi, Nadia Kalantaraian
Abstract:
The olive fruit fly, Bactrocera oleae Gmelin (Diptera: Tephritidae), is one of the most serious pests in olive orchards in the olive-growing provinces of Iran. The females lay eggs in green olive fruit, and the larvae hatch inside the fruit, where they feed upon the fruit matter. One of the main ecologically friendly and species-specific systems of pest control is the sterile insect technique (SIT), which is based on the release of large numbers of sterilized insects. The objective of our work was to develop a SIT program against B. oleae using gamma radiation, for laboratory and field trials in Iran. Oviposition by females mated with irradiated males is one of the main parameters used to determine the success of SIT. To determine the sterilizing dose, pupae were exposed to 0 to 160 Gy of gamma radiation. The main factor in SIT is the productivity of females which are mated by irradiated males. The adults emerging from irradiated pupae were mated with untreated adults of the same age by confining them inside transparent cages. The fecundity of irradiated males mated with non-irradiated females decreased with increasing radiation dose. It was observed that the number of eggs and the percentage of egg hatching were significantly (P < 0.05) affected in IM x NF crosses compared with NM x NF crosses in the F1 generation at all doses. Also, the statistical analysis showed a significant difference (P < 0.05) in the mean number of eggs laid between irradiated and non-irradiated females crossed with irradiated males, which suggests that the males were susceptible to gamma radiation. The egg hatching percentage declined markedly with the increase of the radiation dose of the treated males in mating trials, which demonstrates that the egg hatch rate was dose dependent. Our results indicate that gamma radiation affects the longevity of irradiated B. oleae larvae (established from irradiated pupae) and significantly increases their larval duration. The results show that gamma radiation and SIT can be used successfully against olive fruit flies.
Keywords: Fertility, olive fruit fly, radiation, SIT.
149 Modeling Stress-Induced Regulatory Cascades with Artificial Neural Networks
Authors: Maria E. Manioudaki, Panayiota Poirazi
Abstract:
Yeast cells live in a constantly changing environment that requires the continuous adaptation of their genomic program in order to sustain their homeostasis, survive and proliferate. Due to the advancement of high throughput technologies, there is currently a large amount of data such as gene expression, gene deletion and protein-protein interactions for S. Cerevisiae under various environmental conditions. Mining these datasets requires efficient computational methods capable of integrating different types of data, identifying inter-relations between different components and inferring functional groups or 'modules' that shape intracellular processes. This study uses computational methods to delineate some of the mechanisms used by yeast cells to respond to environmental changes. The GRAM algorithm is first used to integrate gene expression data and ChIP-chip data in order to find modules of coexpressed and co-regulated genes as well as the transcription factors (TFs) that regulate these modules. Since transcription factors are themselves transcriptionally regulated, a three-layer regulatory cascade consisting of the TF-regulators, the TFs and the regulated modules is subsequently considered. This three-layer cascade is then modeled quantitatively using artificial neural networks (ANNs) where the input layer corresponds to the expression of the up-stream transcription factors (TF-regulators) and the output layer corresponds to the expression of genes within each module. This work shows that (a) the expression of at least 33 genes over time and for different stress conditions is well predicted by the expression of the top layer transcription factors, including cases in which the effect of up-stream regulators is shifted in time and (b) identifies at least 6 novel regulatory interactions that were not previously associated with stress-induced changes in gene expression. These findings suggest that the combination of gene expression and protein-DNA interaction data with artificial neural networks can successfully model biological pathways and capture quantitative dependencies between distant regulators and downstream genes.
Keywords: Gene modules, artificial neural networks, yeast, stress.
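As a minimal sketch of the modeling idea described above (predicting downstream module-gene expression from the expression of upstream TF-regulators with an ANN), the following Python snippet trains a small multi-output network on synthetic data. The dimensions, the synthetic expression matrices and the single hidden layer are assumptions for illustration; they do not reproduce the GRAM pipeline or the authors' network.

    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Hypothetical expression data: 120 time/condition samples,
    # 5 upstream TF-regulators (inputs) and 8 module genes (outputs).
    X = rng.normal(size=(120, 5))                          # TF-regulator expression
    W = rng.normal(size=(5, 8))
    Y = np.tanh(X @ W) + 0.1 * rng.normal(size=(120, 8))   # module gene expression

    X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)

    # One hidden layer stands in for the intermediate TF layer of the cascade.
    model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
    model.fit(X_tr, Y_tr)
    print("R^2 on held-out samples:", round(model.score(X_te, Y_te), 3))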
Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1464
148 Atherosclerosis Prevalence within Populations of the Southeastern United States
Authors: Samuel P. Prahlow, Anthony Sciuva, Katherine Bombly, Emily Wilson, Shiv Dhiman, Savita Arya
Abstract:
A prevalence cohort study of atherosclerotic lesions in cadavers was performed to better understand and characterize the prevalence of atherosclerosis among Georgia residents represented in the Philadelphia College of Osteopathic Medicine (PCOM) - Georgia body donor program. We procured specimens from cadavers used for medical student, physical therapy student, and biomedical science student anatomical dissection at PCOM - South Georgia and PCOM - Georgia. Tissues were prepared as hematoxylin and eosin (H&E) stained histological slides by Colquitt Regional Medical Center Laboratory Services. One section from each of the following arteries was taken after cadaveric dissection at the site of most calcification palpated grossly (if present): left anterior descending coronary artery, left internal carotid artery, abdominal aorta, splenic artery, and hepatic artery. All specimens were graded and categorized according to the American Heart Association’s Modified and Conventional Standards for Atherosclerotic Lesions using x4, x10, and x40 microscopic magnification. Our study cohort included 22 cadavers, 16 female and 6 male. The average age was 72.54 years and the median age was 72, with a range of 52 to 90 years. A cause of death listing vascular and/or cardiovascular causes was present on 6 of the 22 death certificates. Nineteen of 22 (86%) cadavers had at least a single artery graded > 5. Of these, only 5 of 19 (26%) had a vascular or cardiovascular cause of death reported. Malignancy was listed as a cause of death on 7 (32%) of the death certificates. The average atherosclerosis grades of the common hepatic, splenic, and left internal carotid arteries (2.15, 3.05, and 3.36, respectively) were lower than those of the left anterior descending artery and the abdominal aorta (5.16 and 5.86, respectively). This prevalence study characterizes atherosclerosis found in five medium and large systemic arteries in cadavers from the state of Georgia.
Keywords: Atherosclerosis, cardiovascular, histology, pathology.
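The prevalence figures quoted above follow from simple proportions; the short sketch below reproduces that arithmetic from the counts reported in the abstract (22 cadavers, 19 with at least one artery graded > 5, 5 of those with a vascular or cardiovascular cause of death).

    # Reproducing the prevalence arithmetic reported in the abstract.
    cohort = 22
    severe = 19            # cadavers with at least one artery graded > 5
    cvd_cod_in_severe = 5  # of those, death certificates listing (cardio)vascular causes

    print(f"Severe atherosclerosis prevalence: {severe / cohort:.0%}")                              # ~86%
    print(f"(Cardio)vascular cause of death among severe cases: {cvd_cod_in_severe / severe:.0%}")  # ~26%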
Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 564
147 The Importance of Changing the Traditional Mode of Higher Education in Bangladesh: Creating Huge Job Opportunities for Home and Abroad
Authors: M. M. Shahidul Hassan, Omiya Hassan
Abstract:
Bangladesh has set its goal to reach upper middle-income country status by 2024. To attain this status, the country must satisfy the World Bank requirement of achieving a minimum Gross National Income (GNI). The number of young job seekers in the country is increasing, and university graduates are looking for decent jobs. The vital issue for the country is therefore to understand how GNI and jobs can be increased. The objective of this paper is to address these issues and find ways to create more job opportunities for youths at home and abroad, which will increase the country’s GNI. The paper studies the proportions of different goods Bangladesh exports, as well as the percentage of employment in different sectors. The data used for this analysis have been collected from the available literature, plotted and analyzed. These studies lead to the conclusion that growth in sectors such as agriculture, ready-made garments (RMG), jute and fisheries is declining, and that the business community is not interested in setting up capital-intensive industries. Under these circumstances, the country needs to explore other business opportunities to achieve a higher economic growth rate. Knowledge can substitute for physical resources. Since the country has a large youth population, higher education will play a key role in economic development. It now needs graduates with higher-order skills and innovative quality. This demands changes in university curricula and in teaching and assessment methods, so that young generations function as active learners and creators. By bringing these changes to higher education, a knowledge-based society can be created. The application of such knowledge and creativity will then become a commodity of Bangladesh, which will help it reach its goal of becoming an upper middle-income country.
Keywords: Bangladesh, economic sectors, economic growth, higher education, knowledge-based economy, massification of higher education, teaching and learning, universities’ role in society.
Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 979
146 Demonstration of Land Use Changes Simulation Using Urban Climate Model
Authors: Barbara Vojvodikova, Katerina Jupova, Iva Ticha
Abstract:
Cities in their historical evolution have always adapted their internal structure to the needs of society (for example, protective city walls lost their defensive function after the classicism era, became unnecessary, were demolished and gave space for new features such as roads, museums or parks). Today it is necessary to modify the internal structure of the city in order to minimize the impact of climate change on the environment of the population. This article discusses results of the urban climate model owned by VITO, obtained as part of the European Union's Horizon 2020 project Pan-European Urban Climate Services Climate-Fit city (grant agreement No 730004). The model was applied to changes in land use and land cover in cities in relation to urban heat islands (UHI). The task of the application was to evaluate possible land use change scenarios in connection with city requirements and ideas. Two pilot areas in the Czech Republic were selected: Ostrava and Hodonín. The paper demonstrates the application of the model to various possible future development scenarios and assesses their suitability or unsuitability depending on the resulting temperature increase. Cities that are preparing to reconstruct public space are interested in eliminating, as early as the assignment phase, proposals that would lead to an increase in temperature stress. If they have an evaluation showing that some type of design is unsuitable, they can exclude it in the proposal phase. Therefore, especially when applying the model at the local level with 1 m spatial resolution, it was necessary to show which types of proposals would, if implemented, create a significant heat island; such proposals are considered unsuitable. The model shows that a building itself can create a shaded place and thus contribute to the reduction of the UHI. If the protection of existing greenery is approached sensitively, new construction may not pose a significant problem. More massive interventions leading to the reduction of existing greenery create a new heat island.
Keywords: Heat islands, land use, urban climate model.
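To illustrate the kind of scenario screening described above (not the VITO model itself), the toy sketch below compares a current and a proposed 1 m land-cover grid using crude, assumed per-class temperature offsets; the class codes, offsets and the 0.1 K threshold are invented for illustration only.

    import numpy as np

    # Toy scenario comparison: 0 = greenery, 1 = paved surface, 2 = building.
    offset_K = {0: 0.0, 1: 2.0, 2: 1.0}            # assumed per-class offsets (illustrative)

    rng = np.random.default_rng(1)
    current = rng.integers(0, 3, size=(100, 100))   # hypothetical 100 m x 100 m area at 1 m resolution
    proposal = current.copy()
    proposal[40:60, 40:60] = 1                      # scenario: pave over a 20 m x 20 m block

    def mean_offset(grid):
        return float(np.mean(np.vectorize(offset_K.get)(grid)))

    delta = mean_offset(proposal) - mean_offset(current)
    verdict = "unsuitable" if delta > 0.1 else "acceptable"
    print(f"Scenario raises the area-mean offset by {delta:.2f} K -> {verdict} under this toy criterion")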
Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 839
145 Improved Computational Efficiency of Machine Learning Algorithms Based on Evaluation Metrics to Control the Spread of Coronavirus in the UK
Authors: Swathi Ganesan, Nalinda Somasiri, Rebecca Jeyavadhanam, Gayathri Karthick
Abstract:
The COVID-19 crisis presents a substantial and critical hazard to worldwide health. Since the occurrence of the disease in late January 2020 in the UK, the number of people confirmed to have acquired the illness has increased tremendously across the country, and the number of individuals affected is undoubtedly considerably high. The purpose of this research is to develop a predictive machine learning (ML) model that can forecast COVID-19 cases within the UK. This study concentrates on statistical data collected from 31st January 2020 to 31st March 2021 in the United Kingdom. Information on total COVID-19 cases registered, new cases encountered on a daily basis, total deaths registered, and deaths per day due to coronavirus is collected from the World Health Organization (WHO). Data preprocessing is carried out to identify any missing values, outliers, or anomalies in the dataset. The data are split in an 8:2 ratio for training and testing purposes to forecast future new COVID-19 cases. Support Vector Machine (SVM), Random Forest (RF), and linear regression (LR) algorithms are chosen to study model performance in the prediction of new COVID-19 cases. The statistical performance of the models in predicting new COVID-19 cases is evaluated using metrics such as the r-squared value and the mean squared error. RF outperformed the other two ML algorithms, with a training accuracy of 99.47% and a testing accuracy of 98.26% when n = 30. The mean squared error obtained for RF is 4.05e11, which is lower than that of the other predictive models used in this study. From the experimental analysis, the RF algorithm performs more effectively and efficiently in predicting new COVID-19 cases, which could help the health sector take relevant measures to control the spread of the virus.
Keywords: COVID-19, machine learning, supervised learning, unsupervised learning, linear regression, support vector machine, random forest.
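A minimal sketch of the workflow named in the abstract (an 8:2 train/test split, LR, SVM and RF regressors, and evaluation by r-squared and mean squared error) is given below. The daily case series is synthetic and the hyperparameters are assumptions; the snippet does not use the WHO data or reproduce the reported accuracies.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.svm import SVR
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import mean_squared_error, r2_score

    # Hypothetical daily new-case series standing in for the WHO data (not real figures).
    rng = np.random.default_rng(0)
    days = np.arange(426)                           # roughly 31 Jan 2020 - 31 Mar 2021
    cases = 5000 + 4000 * np.sin(days / 40) + rng.normal(0, 300, size=days.size)

    X, y = days.reshape(-1, 1), cases
    split = int(0.8 * len(X))                       # 8:2 train/test split, as in the study
    X_tr, X_te, y_tr, y_te = X[:split], X[split:], y[:split], y[split:]

    for name, model in [("LR", LinearRegression()),
                        ("SVM", SVR()),
                        ("RF", RandomForestRegressor(n_estimators=30, random_state=0))]:
        model.fit(X_tr, y_tr)
        pred = model.predict(X_te)
        print(name, "R2:", round(r2_score(y_te, pred), 3),
              "MSE:", round(mean_squared_error(y_te, pred), 1))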
Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 172
144 A Multi-Criteria Decision Method for the Recruitment of Academic Personnel Based on the Analytical Hierarchy Process and the Delphi Method in a Neutrosophic Environment
Authors: Antonios Paraskevas, Michael Madas
Abstract:
For a university to maintain its international competitiveness in education, it is essential to recruit high-quality academic staff, as they constitute its most valuable asset. This selection plays a significant role in achieving strategic objectives, particularly by emphasizing a firm commitment to an exceptional student experience and to innovative, high-quality teaching and learning practices. In this vein, the appropriate selection of academic staff is a very important factor in the competitiveness, efficiency and reputation of an academic institution. Within this framework, our work presents a comprehensive methodological concept that emphasizes the multi-criteria nature of the problem and shows how decision makers could utilize our approach in order to reach an appropriate judgment. The conceptual framework introduced in this paper is built upon a hybrid neutrosophic method based on the Neutrosophic Analytical Hierarchy Process (N-AHP), which uses the theory of neutrosophic sets and is considered suitable for the significant degree of ambiguity and indeterminacy observed in the decision-making process. To this end, our framework extends the N-AHP by incorporating the Neutrosophic Delphi Method (N-DM). By applying the N-DM, we can take into consideration the importance of each decision-maker and their preferences per evaluation criterion. To the best of our knowledge, the proposed model stands out within the related literature as one of the few studies to employ the N-DM in the context of academic staff selection. As a case study, we applied our method to a real problem of academic personnel selection, with the main goal of enhancing the algorithm proposed in previous work and thereby addressing the inherent ineffectiveness that becomes apparent in traditional multi-criteria decision-making methods when dealing with such situations. As a further result, we show that our method demonstrates greater applicability and reliability when compared to other decision models.
Keywords: Analytical Hierarchy Process, Delphi Method, Multi-criteria decision making methods, neutrosophic set theory, personnel recruitment.
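For readers unfamiliar with the AHP core of the method, the sketch below computes criteria weights and a consistency ratio from a single crisp pairwise-comparison matrix. It is a simplified classical-AHP illustration under assumed criteria (research, teaching, service) and assumed judgments; it does not implement the neutrosophic N-AHP/N-DM extension proposed in the paper.

    import numpy as np

    # Hypothetical pairwise comparisons of three criteria for academic staff selection.
    A = np.array([[1.0, 3.0, 5.0],
                  [1/3, 1.0, 2.0],
                  [1/5, 1/2, 1.0]])

    # Principal eigenvector of the comparison matrix gives the criteria weights.
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(np.real(eigvals))
    w = np.real(eigvecs[:, k])
    w = w / w.sum()

    # Saaty consistency check (random index RI = 0.58 for a 3x3 matrix).
    lam_max = np.real(eigvals[k])
    ci = (lam_max - 3) / (3 - 1)
    cr = ci / 0.58
    print("Weights (research, teaching, service):", np.round(w, 3))
    print("Consistency ratio:", round(cr, 3), "(acceptable if < 0.10)")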
Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 38
143 Microscopic Analysis of Interfacial Transition Zone of Cementitious Composites Prepared by Various Mixing Procedures
Authors: Josef Fládr, Jiří Němeček, Veronika Koudelková, Petr Bílý
Abstract:
Mechanical parameters of cementitious composites differ quite significantly based on the composition of the cement matrix; they are also influenced by mixing times and procedure. The research presented in this paper was aimed at identifying differences in the microstructure of normal strength (NSC) and differently mixed high strength (HSC) cementitious composites. A scanning electron microscopy (SEM) investigation together with energy dispersive X-ray spectroscopy (EDX) phase analysis of NSC and HSC samples was conducted. The interfacial transition zone (ITZ) between the aggregate and the cement matrix was evaluated: its volume share, thickness, porosity and composition were studied. In the case of HSC, samples obtained by several different mixing procedures were compared in order to find the most suitable procedure. In the case of NSC, an ITZ was identified around 40-50% of aggregate grains and its thickness typically ranged between 10 and 40 µm. Higher porosity and a lower share of clinker were observed in this area as a result of the increased water-to-cement ratio (w/c) and the lack of fine particles improving the grading curve of the aggregate. A typical ITZ with lower Ca content was observed in only one HSC sample, where it developed around less than 15% of aggregate grains; its typical thickness was similar to that of the ITZ in NSC (between 5 and 40 µm). In the remaining four HSC samples, no ITZ was observed. In general, the share of ITZ in HSC samples was found to be significantly smaller than in NSC samples. As the ITZ is the weakest part of the material, this result explains to a large extent the improved mechanical properties of HSC compared to NSC. Based on the comparison of ITZ characteristics in HSC samples prepared by different mixing procedures, the most suitable mixing procedure from the point of view of ITZ properties was identified.
Keywords: Energy dispersive X-ray spectroscopy, high strength concrete, interfacial transition zone, mixing procedure, normal strength concrete, scanning electron microscopy.
Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1274