Search results for: covering machine
606 TimeTune: Personalized Study Plans Generation with Google Calendar Integration
Authors: Chevon Fernando, Banuka Athuraliya
Abstract:
The purpose of this research is to provide a solution to students' time management, which often becomes a problem because students must balance their studies with personal commitments. "TimeTune," an AI-based study planner that arranges study timeframes by combining modern machine learning algorithms with calendar applications, is presented as a solution. The research focuses on the development of LSTM models that connect to the Google Calendar API to build learning paths fitted to an individual student's daily life and study history. A key finding of this research is the successful construction of an LSTM model that predicts optimal study times and, by integrating real-time Google Calendar data, generates personalized, customized timetables automatically. The methodology encompasses Agile development practices and Object-Oriented Analysis and Design (OOAD) principles, focusing on user-centric design and iterative development. By adopting this method, students can significantly reduce the stress associated with poor study habits and time management. In conclusion, "TimeTune" represents an advance in personalized education technology: its combination of machine learning algorithms and calendar integration promises to help students manage their studies while maintaining a balanced academic and personal life and reducing stress.
Keywords: personalized learning, study planner, time management, calendar integration
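As a rough illustration of the approach described above, the sketch below trains a small Keras LSTM on synthetic per-hour features to score study-time suitability; the feature set, array shapes, and the calendar write-back note are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
import tensorflow as tf

# Hypothetical setup: for each student-day, 24 hourly feature vectors
# (e.g., calendar-busy flag, past-study indicator, hour encoding) are
# scored for study-time suitability. Real data would come from the
# Google Calendar API and the student's study history.
rng = np.random.default_rng(0)
X = rng.random((500, 24, 3)).astype("float32")  # 500 days, 24 hours, 3 features
y = rng.random((500, 24)).astype("float32")     # target suitability per hour

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(24, 3)),
    tf.keras.layers.Dense(24, activation="sigmoid"),  # one score per hour
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

# The highest-scoring free hour could then be written back to the
# calendar (e.g., events().insert in google-api-python-client).
scores = model.predict(X[:1], verbose=0)[0]
print("suggested study hour:", int(scores.argmax()))
```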
Procedia PDF Downloads 49
605 Image Recognition Performance Benchmarking for Edge Computing Using Small Visual Processing Unit
Authors: Kasidis Chomrat, Nopasit Chakpitak, Anukul Tamprasirt, Annop Thananchana
Abstract:
Internet of Things (IoT) devices and edge computing have become some of the most discussed innovations, with the potential to improve and disrupt traditional business and industry alike. Recent challenges such as the COVID-19 pandemic have endangered workforces and business processes, and the drastically changed business landscape left in its aftermath now faces a looming global energy crisis, global warming, and increasingly heated global politics that threaten a new Cold War. Against this background, emerging technologies such as edge computing and purpose-designed visual processing units present great opportunities for business. The literature reviewed examines how the Internet of Things and this disruptive wave affect current business and how businesses need to adapt to changes in the market and the world. An example benchmark test of consumer-market devices, namely Internet of Things devices equipped with edge computing hardware, illustrates how such devices can increase efficiency and reduce the risks posed by current and looming crises. Throughout the paper, we explain the technologies that underpin this transition and the reasons they will change traditional practice, through brief introductions to cloud computing, edge computing, and the Internet of Things, and how they will lead into the future.
Keywords: internet of things, edge computing, machine learning, pattern recognition, image classification
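The benchmarking mentioned in the abstract can be illustrated with a minimal latency and throughput harness; the `infer` stub and frame dimensions below are placeholders for a real VPU-backed classifier, not the authors' test setup.

```python
import statistics
import time

import numpy as np

def infer(frame):
    """Stand-in for a VPU-accelerated classifier; a real benchmark would
    call the device runtime (e.g., a TFLite or OpenVINO invocation)."""
    return (frame * 0.5).sum()  # dummy work

frames = [np.random.rand(224, 224, 3).astype("float32") for _ in range(100)]

# Warm-up runs are excluded so one-off initialization cost does not skew results.
for f in frames[:10]:
    infer(f)

latencies_ms = []
for f in frames:
    t0 = time.perf_counter()
    infer(f)
    latencies_ms.append((time.perf_counter() - t0) * 1000.0)

print(f"median latency: {statistics.median(latencies_ms):.2f} ms")
print(f"throughput: {1000.0 / statistics.mean(latencies_ms):.1f} FPS")
```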
Procedia PDF Downloads 156
604 Predictive Analytics Algorithms: Mitigating Elementary School Drop Out Rates
Authors: Bongs Lainjo
Abstract:
Educational institutions and authorities that are mandated to run education systems in various countries need to implement a curriculum that considers the possibility and existence of elementary school dropouts. This research focuses on elementary school dropout rates and the ability to replicate various predictive models carried out globally on selected elementary schools. The study was carried out by comparing classical case studies in Africa, North America, South America, Asia, and Europe. Some of the reasons put forward for children dropping out include the notion of being successful in life without necessarily going through the education process. This mentality is compounded by a demanding curriculum that does not accommodate all students and has led to poor school attendance and truancy, which in turn lead to dropouts. The focus of this study is on developing a model that school administrations can systematically implement to prevent possible dropout scenarios. At the elementary level, especially in the lower grades, a child's perception of education can be changed relatively easily so that they focus on the better future their parents desire. To deal effectively with the elementary school dropout problem, the strategies that are put in place need to be studied, and predictive models need to be installed in every educational system with a view to helping prevent an imminent school dropout just before it happens. In the competency-based curriculum that most advanced nations are trying to implement, education systems take a wholesome view of learning that reduces the dropout rate.
Keywords: elementary school, predictive models, machine learning, risk factors, data mining, classifiers, dropout rates, education system, competency-based curriculum
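A minimal sketch of the kind of dropout-risk classifier envisioned above is shown below; the feature names, the synthetic labels, and the choice of logistic regression are illustrative assumptions, not the study's actual model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Hypothetical per-pupil features (all names illustrative): attendance
# rate, normalized average grade, household-income proxy, distance to school.
rng = np.random.default_rng(1)
n = 1000
X = np.column_stack([
    rng.uniform(0.4, 1.0, n),  # attendance rate
    rng.uniform(0.0, 1.0, n),  # normalized average grade
    rng.uniform(0.0, 1.0, n),  # income proxy
    rng.uniform(0.0, 1.0, n),  # distance to school
])
# Synthetic label: low attendance and low grades raise dropout risk.
p = 1.0 / (1.0 + np.exp(-(3.0 - 4.0 * X[:, 0] - 2.0 * X[:, 1])))
y = rng.binomial(1, p)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```

In practice the predicted probabilities, not just the hard labels, would be thresholded so that a school administration can flag at-risk pupils early.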
Procedia PDF Downloads 175
603 Numerical Simulation of the Flowing of Ice Slurry in Seawater Pipe of Polar Ships
Authors: Li Xu, Huanbao Jiang, Zhenfei Huang, Lailai Zhang
Abstract:
In recent years, as a consequence of global warming, the sea-ice extent of the Arctic has decreased markedly, and the Arctic passage has attracted the attention of the shipping industry. Ice crystals present in the seawater of the Arctic passage enter the ship's seawater system with the intake water and have been found to block the seawater pipe; in serious cases this can cause cooler failure, auxiliary machine errors, and even paralysis of the ship's power system. To reduce the effect of high temperature on auxiliary equipment, the seawater system uses the external ice-water mixture in the cooling cycle, so the distribution of ice crystals in the seawater pipe must be determined. Since an ice slurry is a solid-liquid two-phase system, the flow of the ice-water mixture is complex and diverse. In this paper, the flow of ice slurry in a seawater pipe is simulated with fluid dynamics simulation software based on the k-ε turbulence model. As the ice packing fraction is a key factor affecting the distribution of ice crystals, its influence on the flow of the ice slurry is analyzed. The simulation results show that when the ice packing fraction is relatively large, the distribution of ice crystals in the flowing seawater is uneven, which increases the possibility of blockage. The results provide scientific forecasting methods for the formation of ice blockages in seawater piping systems and have important significance for the operating reliability of polar ships in the future.
Keywords: ice slurry, seawater pipe, ice packing fraction, numerical simulation
Procedia PDF Downloads 367
602 Electricity Price Forecasting: A Comparative Analysis with Shallow-ANN and DNN
Authors: Fazıl Gökgöz, Fahrettin Filiz
Abstract:
Electricity prices have sophisticated features such as high volatility, nonlinearity, and high frequency that make forecasting quite difficult. Electricity prices are volatile but not random, so it is possible to identify patterns in the historical data. Intelligent decision-making requires accurate price forecasting for market traders, retailers, and generation companies. So far, many shallow-ANN (artificial neural network) models have been published in the literature and have shown adequate forecasting results. In recent years, neural networks with many hidden layers, referred to as DNNs (deep neural networks), have been used in the machine learning community. The goal of this study is to investigate the electricity price forecasting performance of shallow-ANN and DNN models for the Turkish day-ahead electricity market. The forecasting accuracy of the models has been evaluated with publicly available data from the Turkish day-ahead electricity market. Both shallow-ANN and DNN approaches can give successful results in forecasting problems. Historical load, price, and weather temperature data are used as the input variables for the models. The data set includes power consumption measurements gathered between January 2016 and December 2017 with one-hour resolution. Forecasting studies have been carried out comparatively with shallow-ANN and DNN models for the Turkish electricity market in this time period. The main contribution of this study is the investigation of different shallow-ANN and DNN models in the field of electricity price forecasting. All models are compared with regard to their MAE (Mean Absolute Error) and MSE (Mean Squared Error) results. The DNN models give better forecasting performance compared to the shallow-ANNs. The best five MAE results for the DNN models are 0.346, 0.372, 0.392, 0.402, and 0.409.
Keywords: deep learning, artificial neural networks, energy price forecasting, Turkey
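A toy comparison in the spirit of the study, using scikit-learn multilayer perceptrons on synthetic data; the layer sizes and the input construction are assumptions, not the paper's architectures.

```python
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

# Synthetic stand-in for hourly (load, lagged price, temperature) inputs.
rng = np.random.default_rng(0)
X = rng.random((5000, 3))
y = 2.0 * X[:, 0] + np.sin(6.0 * X[:, 1]) + 0.3 * X[:, 2] + rng.normal(0, 0.1, 5000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "shallow-ANN": MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
    "DNN": MLPRegressor(hidden_layer_sizes=(64, 64, 32), max_iter=2000, random_state=0),
}
for name, m in models.items():
    pred = m.fit(X_tr, y_tr).predict(X_te)
    print(name,
          "MAE:", round(mean_absolute_error(y_te, pred), 3),
          "MSE:", round(mean_squared_error(y_te, pred), 3))
```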
Procedia PDF Downloads 294
601 Vibration Transmission across Junctions of Walls and Floors in an Apartment Building: An Experimental Investigation
Authors: Hugo Sampaio Libero, Max de Castro Magalhaes
Abstract:
The perception of sound radiated from a building floor is greatly influenced by the rooms in which it is immersed and by the positions of both listener and source. The main question that remains unanswered relates to the influence of the source position on the sound power radiated by a complex wall-floor system in buildings. This research investigates vibration transmission across walls and floors in buildings. It is primarily based on the determination of the vibration reduction index via experimental tests. Knowledge of this parameter may help in predicting noise and vibration propagation in building components. First, the physical mechanisms involved in vibration transmission across structural junctions are described, and an experimental setup is built to aid this investigation. The experimental tests show that the vibration generated in the walls and floors is directly related to their size and boundary conditions, and that the position of the vibration source can affect the overall vibration spectrum significantly. Second, the characteristics of the noise spectra inside the rooms due to an impact source (tapping machine) are presented. Conclusions are drawn for the general trends of the vibration and noise spectra of the structural components and rooms, respectively. In summary, the aim of this paper is to investigate the vibro-acoustic behavior of building floors and walls under floor impact excitation, applied at distinct positions on the slab. The analysis highlights the main physical characteristics of the vibration transmission mechanism.
Keywords: vibration transmission, vibration reduction index, impact excitation, experimental tests
Procedia PDF Downloads 93
600 Time Series Simulation by Conditional Generative Adversarial Net
Authors: Rao Fu, Jie Chen, Shutian Zeng, Yiping Zhuang, Agus Sudjianto
Abstract:
Generative Adversarial Nets (GANs) have proved to be a powerful machine learning tool in image data analysis and generation. In this paper, we propose to use a Conditional Generative Adversarial Net (CGAN) to learn and simulate time series data. The conditions include both categorical and continuous variables with different auxiliary information. Our simulation studies show that CGAN has the capability to learn different types of normal and heavy-tailed distributions, as well as the dependence structures of different time series. It also has the capability to generate conditional predictive distributions consistent with the training data distributions. We also provide an in-depth discussion of the rationale behind GANs and of neural networks as hierarchical splines, to establish a clear connection with existing statistical methods of distribution generation. In practice, CGAN has a wide range of applications in market risk and counterparty risk analysis: it can be applied to learn historical data and generate scenarios for the calculation of Value-at-Risk (VaR) and Expected Shortfall (ES), and it can also predict the movement of market risk factors. We present a real data analysis, including backtesting, to demonstrate that CGAN can outperform Historical Simulation (HS), a popular method in market risk analysis for calculating VaR. CGAN can also be applied in economic time series modeling and forecasting. In this regard, we include an example of hypothetical shock analysis for economic models and the generation of potential CCAR scenarios by CGAN at the end of the paper.
Keywords: conditional generative adversarial net, market and credit risk management, neural network, time series
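A compact sketch of the conditional GAN structure described above, using the classic Keras pattern of a discriminator frozen inside the combined model (the trainable flag is captured at compile time); the dimensions and data are synthetic placeholders rather than the paper's configuration.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

latent_dim, cond_dim, series_len = 8, 2, 24

def make_generator():
    z = layers.Input((latent_dim,)); c = layers.Input((cond_dim,))
    h = layers.Dense(64, activation="relu")(layers.Concatenate()([z, c]))
    return tf.keras.Model([z, c], layers.Dense(series_len)(h))

def make_discriminator():
    x = layers.Input((series_len,)); c = layers.Input((cond_dim,))
    h = layers.Dense(64, activation="relu")(layers.Concatenate()([x, c]))
    return tf.keras.Model([x, c], layers.Dense(1, activation="sigmoid")(h))

gen, disc = make_generator(), make_discriminator()
disc.compile(optimizer="adam", loss="binary_crossentropy")

# Freeze the discriminator inside the combined model only: the flag is
# baked in when the combined model is compiled.
disc.trainable = False
z_in, c_in = layers.Input((latent_dim,)), layers.Input((cond_dim,))
gan = tf.keras.Model([z_in, c_in], disc([gen([z_in, c_in]), c_in]))
gan.compile(optimizer="adam", loss="binary_crossentropy")

# One illustrative training step on synthetic conditioned series.
rng = np.random.default_rng(0)
real = rng.normal(size=(32, series_len)).astype("float32")
cond = rng.random((32, cond_dim)).astype("float32")
z = rng.normal(size=(32, latent_dim)).astype("float32")
fake = gen.predict([z, cond], verbose=0)

disc.train_on_batch([real, cond], np.ones((32, 1)))   # discriminator update
disc.train_on_batch([fake, cond], np.zeros((32, 1)))
gan.train_on_batch([z, cond], np.ones((32, 1)))       # generator update
```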
Procedia PDF Downloads 144
599 Semantic Indexing Improvement for Textual Documents: Contribution of Classification by Fuzzy Association Rules
Authors: Mohsen Maraoui
Abstract:
With the aim of improving natural language processing applications such as information retrieval, machine translation, and lexical disambiguation, we focus on a statistical approach to semantic indexing for multilingual text documents based on a conceptual network formalism. We propose to use this formalism as an indexing language to represent descriptive concepts and their weighting; these concepts represent the content of the document. Our contribution is based on two steps. In the first step, we propose the extraction of index terms using the multilingual lexical resource EuroWordNet (EWN). In the second step, we pass from the representation of index terms to the representation of index concepts through the conceptual network formalism. This network is generated using the EWN resource and passes through a classification step based on a fuzzy association rules model, in an attempt to discover non-taxonomic or contextual relations between the concepts of a document. These are latent relations buried in the text and carried by the semantic context of the co-occurrence of concepts in the document. Our proposed indexing approach can be applied to text documents in various languages because it is based on a linguistic method adapted to the language through a multilingual thesaurus. We then apply the same statistical process regardless of the language in order to extract the significant concepts and their associated weights. We show that the proposed indexing approach provides encouraging results.
Keywords: concept extraction, conceptual network formalism, fuzzy association rules, multilingual thesaurus, semantic indexing
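As a stand-in for the fuzzy association rules step, the sketch below mines crisp rules from a toy document-by-concept incidence matrix with mlxtend; the concepts, the data, and the thresholds are invented for illustration, and a fuzzy variant would replace the boolean incidences with membership degrees.

```python
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

# Toy document-by-concept matrix (True = concept occurs in the document).
docs = pd.DataFrame(
    [[1, 1, 0, 1], [1, 1, 1, 0], [0, 1, 1, 1], [1, 1, 0, 0], [1, 0, 1, 1]],
    columns=["bank", "finance", "river", "water"],
).astype(bool)

frequent = apriori(docs, min_support=0.4, use_colnames=True)
rules = association_rules(frequent, metric="confidence", min_threshold=0.6)

# Rules such as {river} -> {water} suggest contextual (non-taxonomic)
# links between concepts that can enrich the conceptual network.
print(rules[["antecedents", "consequents", "support", "confidence"]])
```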
Procedia PDF Downloads 141
598 Effect of Nitrogen-Based Cryotherapy on the Calf Muscle Spasticity in Stroke Patients
Authors: Engi E. I. Sarhan, Usama M. Rashad, Ibrahim M. I. Hamoda, Mohammed K. Mohamed
Abstract:
Background: This study aimed to determine the effect of nitrogen-based cryotherapy on calf muscle spasticity in stroke patients. Patients were selected from the outpatient neurology clinic of Al-Mansoura General Hospital, Al-Mansoura University. Subjects and methods: Thirty stroke patients of both sexes, aged 45 to 60 years, were divided randomly into two equal groups: the study group (A) received nitrogen-based cryotherapy, a selective physical therapy program, and an ankle-foot orthosis (AFO), whereas patients in the control group (B) received the same program and AFO only. The treatment was given three times per week for four weeks in both groups. We assessed calf muscle spasticity before and after treatment subjectively using the modified Ashworth scale (MAS) and objectively by measuring the H/M ratio on an electromyography machine. We also assessed ankle dorsiflexion range of motion (ROM) objectively using two-dimensional (2D) motion analysis. Results: After treatment, there was a highly significant improvement in the study group compared to the control group in the MAS score, no significant difference between the groups in the H/M ratio, and a highly significant improvement in the study group compared to the control group in the 2D motion analysis findings. Conclusion: This modality is considered effective in reducing calf muscle spasticity and improving ankle dorsiflexion of the affected limb.
Keywords: ankle foot orthosis, nitrogen-based cryotherapy, stroke, spasticity
Procedia PDF Downloads 202
597 Techno-Economic Assessment of Distributed Heat Pumps Integration within a Swedish Neighborhood: A Cosimulation Approach
Authors: Monica Arnaudo, Monika Topel, Bjorn Laumert
Abstract:
Within the Swedish context, the current trend of relatively low electricity prices promotes the electrification of the energy infrastructure. The residential heating sector takes part in this transition through a proposed switch from a centralized district heating system towards a distributed heat pump-based setting. When it comes to urban environments, two issues arise. The first, seen from an electricity-sector perspective, is that existing networks are limited with regard to their installed capacities; additional electric loads, such as heat pumps, can cause severe overloads on crucial network elements. The second, seen from a heating-sector perspective, is that indoor comfort conditions can become difficult to maintain when the operation of the heat pumps is limited by the risk of overloading the distribution grid. Furthermore, the uncertainty of future electricity market prices introduces an additional variable. This study aims at assessing the extent to which distributed heat pumps can penetrate an existing heat energy network while respecting the technical limitations of the electricity grid and the thermal comfort levels in the buildings. In order to account for the multi-disciplinary nature of this research question, a cosimulation modeling approach was adopted, in which each energy technology is modeled in its customized simulation environment. As part of the cosimulation methodology, a steady-state power flow analysis in pandapower was used to model the electrical distribution grid, a thermal balance model of a reference building was implemented in EnergyPlus to account for space heating, and a fluid-cycle model of a heat pump was implemented in JModelica to account for the actual heating technology. With the models in place, different scenarios based on forecasted electricity market prices were developed for both present and future conditions of Hammarby Sjöstad, a neighborhood located in the south-east of Stockholm (Sweden). For each scenario, the technical and comfort conditions were assessed. Additionally, the average cost of heat generation was estimated in terms of the levelized cost of heat; this indicator enables a techno-economic comparison among the different scenarios. In order to evaluate the levelized cost of heat, a yearly performance simulation of the energy infrastructure was implemented. The scenarios based on current electricity prices show that distributed heat pumps can replace the district heating system by covering up to 30% of the heating demand. By lowering the minimum accepted indoor temperature of the apartments by 2°C, this level of penetration can increase to 40%. In the future scenarios, if electricity prices increase, as is most likely within the next decade, the penetration of distributed heat pumps may be limited to 15%. In terms of the levelized cost of heat, residential heat pump technology becomes competitive only in a scenario of decreasing electricity prices; in that case, the district heating system has an average cost of heat generation 7% higher than the distributed heat pump option.
Keywords: cosimulation, distributed heat pumps, district heating, electrical distribution grid, integrated energy systems
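The levelized cost of heat comparison can be sketched with the standard discounted-cost formulation (analogous to the levelized cost of electricity); the figures below are illustrative, not the study's data.

```python
def levelized_cost_of_heat(capex, annual_opex, annual_heat_mwh,
                           lifetime_years, discount_rate):
    """LCOH = discounted lifetime costs / discounted lifetime heat output."""
    disc_costs = capex + sum(
        annual_opex / (1.0 + discount_rate) ** t
        for t in range(1, lifetime_years + 1))
    disc_heat = sum(
        annual_heat_mwh / (1.0 + discount_rate) ** t
        for t in range(1, lifetime_years + 1))
    return disc_costs / disc_heat  # currency units per MWh of heat

# Illustrative residential heat pump: 12,000 upfront, 900/year to run,
# 15 MWh of heat per year over 20 years at a 5% discount rate.
print(round(levelized_cost_of_heat(12000, 900, 15, 20, 0.05), 2), "per MWh")
```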
Procedia PDF Downloads 151
596 Defining Death and Dying in Relation to Information Technology and Advances in Biomedicine
Authors: Evangelos Koumparoudis
Abstract:
The definition of death is a deep philosophical question, and no single meaning can be ascribed to it. This essay focuses on the ontological, epistemological, and ethical aspects of death and dying in view of technological progress in information technology and biomedicine. It starts with the ad hoc 1968 Harvard committee, which proposed irreversible coma as the criterion for the definition of death, and then turns to the debate over the whole-brain death formula, which emphasizes the integrated function of the organism, and the higher-brain formula, which takes consciousness and personality as essential human characteristics. It continues with the contribution of information technology to personalized and precision medicine and to anti-aging measures aimed at life prolongation. It also touches on the possibility of creating human-machine hybrids and how this raises ontological and ethical issues concerning the "cyborgization" of human beings and a conception of the organism and personhood based on a post/transhumanist essence; furthermore, it asks whether sentient AI capable of autonomous decision-making, which might even surpass human intelligence (singularity, superintelligence), deserves moral or legal personhood. Finally, there is the question as to whether death and dying should be redefined at a transcendent level, which is reinforced by already-existing technologies of virtual "after-life" and the possibility of uploading human minds. In the last section, I refer to the current and future applications of nanomedicine in diagnostics, therapeutics, implants, and tissue engineering, as well as the aspiration to "immortality" through cryonics. The definition of death is reformulated, since age and disease elimination may be realized and the criterion of irreversibility may be challenged.
Keywords: death, posthumanism, infomedicine, nanomedicine, cryonics
Procedia PDF Downloads 73
595 Absorptive Capabilities in the Development of Biopharmaceutical Industry: The Case of Bioprocess Development and Research Unit, National Polytechnic Institute
Authors: Ana L. Sánchez Regla, Igor A. Rivera González, María del Pilar Monserrat Pérez Hernández
Abstract:
The ability of an organization to identify and obtain useful information from external sources, assimilate it, and transform and apply it to generate products or services with added value is called absorptive capacity. Absorptive capabilities give firms market opportunities and help them attain a leading position with respect to their competitors. The Bioprocess Development and Research Unit (UDIBI) is a research and development (R&D) laboratory that belongs to the National Polytechnic Institute (IPN), a higher education institution in Mexico. The UDIBI was created to carry out R&D activities for Transferon®, a biopharmaceutical product developed and patented by IPN. The evolution of its competences and of its scientific and technological platform led UDIBI to expand its scope by providing technological services (preclinical studies and biocompatibility evaluation) to the national pharmaceutical and biopharmaceutical industries. The relevance of this study is that those industries are classified as being of high scientific and technological intensity, and yet, after a review of the state of the art, there is only one study of absorptive capabilities in the biopharmaceutical industry with a scope similar to this research; in the case of Mexico, there is none. In addition, UDIBI belongs to a public university, and its operation does not depend on the federal budget but on the income generated by its external technological services; this represents a highly remarkable case in the context of Mexico's public higher education. This doctoral research (2015-2019) takes the form of a case study whose main objective is to identify and analyze the absorptive capabilities that characterize the UDIBI and that have allowed it to become one of only two laboratories authorized by the Mexican sanitary authority to perform biocomparability studies for biopharmaceutical products. The fieldwork is divided into two phases. In the first phase, 15 interviews were conducted with UDIBI personnel, covering management, heads of services, project leaders, and laboratory staff. The interviews were structured by a questionnaire designed to combine open questions with, to a lesser extent, questions answered on a Likert-type rating scale. From the information obtained in this phase, a scientific article was prepared (now under review), and presentation proposals were submitted to various academic forums. The second stage will consist of an ethnographic study within the organization lasting about three months. In addition, interviews are planned with external actors around the UDIBI (suppliers, advisors, IPN officials), including contact with an academic specialized in absorptive capacities who will comment on this thesis. The initial findings point in two directions: i) institutional, technological, and organizational management elements exist that encourage and/or limit the creation of absorptive capacities in this scientific and technological laboratory, and ii) UDIBI has created a set of multiple knowledge transfer mechanisms that have allowed it to build a large base of prior knowledge.
Keywords: absorptive capabilities, biopharmaceutical industry, high research and development intensity industries, knowledge management, transfer of knowledge
Procedia PDF Downloads 226
594 Effect of Fiber Orientation on the Mechanical Properties of Fabricated Plate Using Basalt Fiber
Authors: Sharmili Routray, Kishor Chandra Biswal
Abstract:
The use of corrosion-resistant fiber reinforced polymer (FRP) reinforcement is beneficial in structures, particularly those exposed to deicing salts and/or located in highly corrosive environments. Generally, glass, carbon, and aramid fibers are used for strengthening structures. Because low-weight, high-strength materials are needed, a suitable low-cost substitute must be found. Recent developments in fiber production technology allow the strengthening of structures using basalt fiber, which is made from basalt rock. Basalt fiber offers a good range of thermal performance, high tensile strength, resistance to acids, good electromagnetic properties, chemical inertness, and resistance to corrosion, radiation, UV light, vibration, and impact loading. This investigation focuses on the effect of basalt fiber content and fiber orientation on the mechanical properties of the fabricated composites. Specimens were prepared with unidirectional basalt fabric as the reinforcing material and epoxy resin as the matrix of the polymer composite. Different fiber orientations were used, and fabrication was carried out by the hand lay-up process. The variation of the properties with an increasing number of fiber plies in the composites was also studied. Specimens were subjected to tensile testing, and composite failure was examined with an INSTRON (SATEC) universal testing machine of 600 kN capacity. The average tensile strength and modulus of elasticity of the BFRP plates were determined from the test program.
Keywords: BFRP, fabrication, Fiber Reinforced Polymer (FRP), strengthening
Procedia PDF Downloads 292
593 Local Interpretable Model-agnostic Explanations (LIME) Approach to Email Spam Detection
Authors: Rohini Hariharan, Yazhini R., Blessy Maria Mathew
Abstract:
Detecting email spam is a very important task in the era of digital technology, which needs effective ways of curbing unwanted messages. This paper presents an approach aimed at making email spam categorization algorithms transparent, reliable, and more trustworthy by incorporating Local Interpretable Model-agnostic Explanations (LIME). Our technique provides interpretable explanations for specific classifications of emails to help users understand the model's decision-making process. In this study, we developed a complete pipeline that incorporates LIME into the spam classification framework and allows creating simplified, interpretable models tailored to individual emails. LIME identifies influential terms, pointing out the key elements that drive classification results, thus reducing the opacity inherent in conventional machine learning models. Additionally, we suggest a visualization scheme for displaying keywords that improves users' understanding of categorization decisions. We test our method on a diverse email dataset and compare its performance with various baseline models, such as Gaussian Naive Bayes, Multinomial Naive Bayes, Bernoulli Naive Bayes, Support Vector Classifier, K-Nearest Neighbors, Decision Tree, and Logistic Regression. Our testing results show that our model surpasses all other models, achieving an accuracy of 96.59% and a precision of 99.12%.
Keywords: text classification, LIME (local interpretable model-agnostic explanations), stemming, tokenization, logistic regression
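A minimal sketch of the described pipeline using the lime package together with a small scikit-learn classifier; the tiny training set is obviously illustrative, and the real study would use a full email corpus and preprocessing (stemming, tokenization).

```python
from lime.lime_text import LimeTextExplainer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_texts = ["win a free prize now", "meeting at noon tomorrow",
               "claim your free reward", "project report attached"]
train_labels = [1, 0, 1, 0]  # 1 = spam, 0 = ham

pipe = make_pipeline(TfidfVectorizer(), MultinomialNB())
pipe.fit(train_texts, train_labels)

explainer = LimeTextExplainer(class_names=["ham", "spam"])
exp = explainer.explain_instance(
    "free prize waiting, claim now",
    pipe.predict_proba,   # LIME perturbs the text and queries this function
    num_features=4)
print(exp.as_list())      # (term, weight) pairs that drive the spam verdict
```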
Procedia PDF Downloads 48
592 Analysis of the Level of Production Failures by Implementing New Assembly Line
Authors: Joanna Kochanska, Dagmara Gornicka, Anna Burduk
Abstract:
The article examines the process of implementing a new assembly line in a manufacturing enterprise in the household appliances industry. At the initial stages of the project, it was decided that one of its foundations should be the concept of lean management; because of that, eliminating as many errors as possible in the first phases of the line's operation was emphasized. During the start-up of the line, all production losses were identified and documented, from serious machine failures, through any unplanned downtime, to micro-stops and quality defects. Over the six-week line start-up period, all errors resulting from problems in various areas were analyzed. These areas included, among others, production, logistics, quality, and organization. The aim of the work was to analyze the occurrence of production failures during the initial phase of starting up the line and to propose a method for determining their critical level once the line is fully functional. The repeatability of production losses in various areas and at different levels at this early stage of implementation was examined using statistical process control methods. Based on a Pareto analysis, the weakest points were identified in order to focus improvement actions on them. The next step was to examine the effectiveness of the actions undertaken to reduce the level of recorded losses. Based on the obtained results, a method was proposed for determining the critical failure level in the studied areas. The developed coefficient can be used as an alarm in case of an imbalance of production caused by an increased failure level in production and production-support processes during the standardized operation of the line.
Keywords: production failures, level of production losses, new production line implementation, assembly line, statistical process control
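The Pareto step described above can be sketched in a few lines of pandas; the loss categories and counts are invented for illustration.

```python
import pandas as pd

# Illustrative loss counts per area logged during the start-up period.
losses = pd.Series(
    {"machine failure": 42, "quality defect": 35, "micro-stop": 20,
     "logistics delay": 12, "organization": 6},
    name="events").sort_values(ascending=False)

pareto = losses.to_frame()
pareto["cum_share"] = losses.cumsum() / losses.sum()
print(pareto)

# The "vital few": the categories covering roughly 80% of recorded losses.
vital_few = pareto[pareto["cum_share"] <= 0.8].index.tolist()
print("focus improvement actions on:", vital_few)
```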
Procedia PDF Downloads 131
591 Influence of Drier Autumn Conditions on Weed Control Based on Soil Active Herbicides
Authors: Juergen Junk, Franz Ronellenfitsch, Michael Eickermann
Abstract:
Appropriate weed management in autumn is a prerequisite for an economically successful harvest in the following year. In Luxembourg, oilseed rape, wheat, and barley are sown from August until October, accompanied by chemical weed control with soil-active herbicides, depending on the state of the weeds and the meteorological conditions. Regular analyses of ground and surface water have found high levels of contamination by transformation products of the respective herbicide compounds in Luxembourg. The ideal conditions for incorporating soil-active herbicides are single rain events. Weed control may be reduced if application is made when weeds are under drought stress, or if repeated light rain events are followed by dry spells, because the herbicides then tend to bind tightly to the soil particles. These effects have been frequently reported in Luxembourg over recent years. In the framework of a multisite long-term field experiment (EFFO), weed monitoring, plant observations, and corresponding meteorological measurements were conducted. Long-term time series (1947-2016) from the SYNOP station Findel-Airport (WMO ID = 06590) showed a decrease in the number of days with precipitation. As the total precipitation amount has not changed significantly, this indicates a trend towards rain events of higher intensity. All analyses are based on decades (10-day periods) of September and October of each individual year. To assess future meteorological conditions for Luxembourg, two different approaches were applied. First, multi-model ensembles from the CORDEX experiments (spatial resolution ~12.5 km; transient projections until 2100) were analyzed for two different Representative Concentration Pathways (RCP8.5 and RCP4.5), covering the time span from 2005 until 2100. The multi-model ensemble approach allows the uncertainties to be quantified and the differences between the two emission scenarios to be assessed. Second, to assess smaller-scale differences within the country, a high-resolution projection with the COSMO-CLM model was used (spatial resolution 1.3 km). To accommodate the higher computational demands caused by the increased spatial resolution, only 10-year time slices were simulated (reference period 1991-2000; near future 2041-2050; far future 2091-2100). Statistically significant trends towards higher air temperatures were projected for the near future compared to the reference period: +1.6 K for September and +1.3 K for October, rising to +5.3 K and +4.3 K, respectively, in the far future. Precipitation simultaneously decreased by 9.4 mm (September) and 5.0 mm (October) in the near future, and by 49 mm (September) and 10 mm (October) in the far future. Besides the monthly values, the decades were analyzed for the two future time periods of the COSMO-CLM model; for all decades of September and October, the number of days with precipitation decreased in the projected near and far future. Changes in meteorological variables such as air temperature and precipitation have already induced transformations in the weed communities (composition, late emergence, etc.) of arable ecosystems in Europe. Therefore, adaptations of agronomic practices as well as effective weed control strategies must be developed to maintain crop yield.
Keywords: CORDEX projections, dry spells, ensembles, weed management
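The precipitation-day trend analysis can be illustrated with a simple least-squares significance test; the synthetic series below merely mimics the structure of the Findel-Airport record and is not the actual data.

```python
import numpy as np
from scipy import stats

# Hypothetical annual counts of September precipitation days, 1947-2016.
years = np.arange(1947, 2017)
rng = np.random.default_rng(42)
rain_days = 14.0 - 0.03 * (years - 1947) + rng.normal(0.0, 1.5, years.size)

res = stats.linregress(years, rain_days)
print(f"trend: {res.slope:.3f} days/year, p-value: {res.pvalue:.4f}")
# A significant negative slope with unchanged totals points to fewer but
# more intense rain events, which is unfavorable for soil-active herbicides.
```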
Procedia PDF Downloads 235
590 Advances and Challenges in Assessing Students’ Learning Competencies in 21st Century Higher Education
Authors: O. Zlatkin-Troitschanskaia, J. Fischer, C. Lautenbach, H. A. Pant
Abstract:
In 21st century higher education (HE), the diversity among students has increased in recent years due to internationalization and higher mobility. Offering and providing equal and fair opportunities based on students' individual skills and abilities, instead of their social or cultural background, is one of the major aims of HE. In this context, valid, objective, and transparent assessments of students' preconditions and academic competencies in HE are required. However, as analyses of the current state of research and practice show, a substantial research gap on assessment practices in HE still exists, calling for the development of effective solutions. These demands lead to significant conceptual and methodological challenges. Funded by the German Federal Ministry of Education and Research, the research program 'Modeling and Measuring Competencies in Higher Education – Validation and Methodological Challenges' (KoKoHs) focuses on addressing these challenges in HE assessment practice by modeling and validating objective test instruments. Comprising 16 cross-university collaborative projects, the Germany-wide research program contributes to bridging the research gap in current assessment research and practice by concentrating on practical and policy-related challenges of assessment in HE. In this paper, we present a differentiated overview of existing assessments in HE at the national and international level. Based on the state of research, we describe the theoretical and conceptual framework of the KoKoHs program as well as the results of the validation studies, including their key outcomes. More precisely, this includes an insight into more than 40 developed assessments covering a broad range of transparent and objective methods for validly measuring domain-specific and generic knowledge and skills in five major study areas (Economics, Social Science, Teacher Education, Medicine, and Psychology). Computer-, video-, and simulation-based instruments have been applied and validated to measure over 20,000 students at the beginning, middle, and end of their (bachelor and master) studies at more than 300 HE institutions throughout Germany, or during their practical training phase, traineeship, or occupation. Focusing on the validity of the assessments, all test instruments have been analyzed comprehensively using a broad range of methods, observing the validity criteria of the Standards for Educational and Psychological Testing developed by the American Educational Research Association, the American Psychological Association, and the National Council on Measurement in Education. The results of the developed assessments presented in this paper provide valuable outcomes for predicting students' skills and abilities at the beginning and end of their studies, as well as their learning development and performance. This allows for a differentiated view of the diversity among students. Based on these research results, practical implications and recommendations are formulated; in particular, appropriate and effective learning opportunities can be created to support students' learning development, promote their individual potential, and reduce knowledge and skill gaps. Overall, the presented research on competency assessment is highly relevant to national and international HE practice.
Keywords: 21st century skills, academic competencies, innovative assessments, KoKoHs
Procedia PDF Downloads 142
589 Quantification Model for Capability Evaluation of Optical-Based in-Situ Monitoring System for Laser Powder Bed Fusion (LPBF) Process
Authors: Song Zhang, Hui Wang, Johannes Henrich Schleifenbaum
Abstract:
Due to the increasing demand for quality assurance and reliability in additive manufacturing, the development of advanced in-situ monitoring systems is required to monitor process anomalies as input for further process control. Optical monitoring systems, such as CMOS and NIR cameras, have proved to be effective ways to monitor geometrical distortion and exceptional thermal distribution. Therefore, many studies and applications focus on the ability of optical monitoring systems to detect various types of defects. However, the capability of the monitoring setup itself is usually not quantified. In this study, a quantification model for evaluating the capability of monitoring setups for LPBF machines, based on monitoring data acquired from a designed test artifact, is presented, and the design of the relevant test artifacts is discussed. The monitoring setup is evaluated based on its hardware properties, the location of its integration, and the light conditions. The methodology for processing the data to quantify the capability in each respect is discussed. The minimal detectable feature size of the monitoring setup in the application is estimated by quantifying its resolution and accuracy. The quantification model is validated using a CCD camera-based monitoring system for LPBF machines in the laboratory with different setups. The results show that the model quantifies the monitoring system's performance, which makes it possible to evaluate monitoring systems that share the same concept but differ in setup for the LPBF process, and provides direction for improving the setups.
Keywords: data processing, in-situ monitoring, LPBF process, optical system, quantification model, test artifact
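A back-of-the-envelope sketch of the resolution-driven detectability bound referred to above; the field of view, pixel count, and the three-pixel detection rule are assumptions, not the study's values.

```python
def min_detectable_size_mm(fov_mm, sensor_pixels, pixels_per_feature=3):
    """A feature must span several pixels (here 3, an assumption) on the
    build plane to be reliably detected by the camera."""
    pixel_size_mm = fov_mm / sensor_pixels
    return pixels_per_feature * pixel_size_mm

# Illustrative CCD setup: 120 mm field of view imaged across 2048 pixels.
print(f"minimal detectable size: {min_detectable_size_mm(120.0, 2048):.3f} mm")
```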
Procedia PDF Downloads 197
588 Effect of B2O3 Addition on Sol-gel Synthesized 45S5 Bioglass
Abstract:
Ceramics or glass-ceramics that bond with nearby bone tissue and promote possible bone ingrowth are known as bioactive. The most extensively used glass in this context is 45S5, a silica-based bioglass mostly explored in the field of tissue engineering as a scaffold for bone repair. Nowadays, borate-based bioglasses are also being utilized in the orthopedic area, largely due to their superior bioactivity in forming a bond with bone. In the present study, an attempt has been made to observe the effect of B2O3 addition to 45S5 glass and to assess its consequences for the thermal, mechanical, and biological properties. B2O3 was added at 1, 2.5, and 5 wt% with a simultaneous reduction in the silica content of the 45S5 composition. The borate-based bioglass was synthesized by the sol-gel route. The synthesized powders were thermally analyzed by DSC-TG and then calcined at 600ºC for 2 hours. The calcined powders were pressed into pellets and sintered at 850ºC with a holding time of 2 hours. Phase and microstructural analyses of the as-synthesized and calcined powders and of the sintered samples were carried out using XRD and FESEM, respectively. The formation of a hydroxyapatite layer was assessed by immersing the sintered samples in simulated body fluid (SBF), and the mechanical properties of the sintered samples were tested with a universal testing machine (UTM). The sintered samples showed the presence of a sodium calcium silicate phase, while hydroxyapatite formed on the SBF-immersed samples. Hydroxyapatite formation was more pronounced in the borate-based glass samples than in 45S5.
Keywords: 45S5 bioglass, bioactive, borate, hydroxyapatite, sol-gel synthesis
Procedia PDF Downloads 256
587 Integrated Intensity and Spatial Enhancement Technique for Color Images
Authors: Evan W. Krieger, Vijayan K. Asari, Saibabu Arigela
Abstract:
Video imagery captured for real-time security and surveillance applications is typically acquired in complex lighting conditions. These less-than-ideal conditions can result in imagery with underexposed or overexposed regions, and the video is often too low in resolution for certain applications. The purpose of security and surveillance video is to allow accurate conclusions to be drawn from the images it contains; therefore, if poor lighting and low resolution occur in the captured video, the ability to make accurate conclusions based on the received information is reduced. We propose a solution to this problem that uses image preprocessing to improve these images before use in a particular application. The proposed algorithm integrates an intensity enhancement algorithm with a super resolution technique. The intensity enhancement portion consists of a nonlinear inverse sine transformation and an adaptive contrast enhancement. The super resolution section is a single-image super resolution technique: a Fourier phase feature based method that uses a machine learning approach with kernel regression. The proposed technique intelligently integrates these algorithms to produce a high-quality output while being more efficient than their sequential use. This integration is accomplished by performing the proposed algorithm on the intensity image produced from the original color image. After enhancement and super resolution, a color restoration technique is employed to obtain a color image with improved visibility.
Keywords: dynamic range compression, multi-level Fourier features, nonlinear enhancement, super resolution
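A toy sketch of the two-stage idea (a nonlinear intensity stretch followed by upscaling); the log-based transfer curve and the bicubic zoom merely stand in for the authors' inverse sine transformation and kernel-regression super resolution.

```python
import numpy as np
from scipy import ndimage

def enhance_and_upscale(gray, gain=4.0, scale=2):
    """Lift dark regions with a nonlinear transfer curve, then upscale.
    Both steps are simplified placeholders for the paper's pipeline."""
    x = gray.astype(np.float64) / 255.0
    enhanced = np.log1p(gain * x) / np.log1p(gain)   # illustrative curve
    return ndimage.zoom(enhanced, scale, order=3)    # bicubic interpolation

frame = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
print(enhance_and_upscale(frame).shape)  # (128, 128)
```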
Procedia PDF Downloads 554
586 Quantification of Effect of Linear Anionic Polyacrylamide on Seepage in Irrigation Channels
Authors: Hamil Uribe, Cristian Arancibia
Abstract:
In Chile, water for irrigation and hydropower generation is delivered essentially through unlined earthen channels, which have high seepage losses, and traditional seepage-abatement technologies are very expensive. The goals of this work were to quantify water loss in unlined channels and to select reaches in which to evaluate the use of linear anionic polyacrylamide (LA-PAM) to reduce seepage losses. The study was carried out in the Maule Region, in the central area of Chile. Water users indicated reaches with potential seepage losses, 45 km of channels in total, whose flow varied between 1.07 and 23.6 m³ s⁻¹. According to seepage measurements, 4 reaches, 4.5 km in total, were selected for LA-PAM application. One to four LA-PAM applications were performed at rates of 11 kg ha⁻¹, taking the wetted-perimeter area as the basis of calculation. Because the channels are large, a motorboat moving against the current was used for the applications, with a seeder machine distributing the granulated polymer evenly on the water surface. Water flow was measured (StreamPro ADCP) upstream and downstream in the selected reaches to estimate seepage losses before and after LA-PAM application. Weekly measurements were made to quantify the treatment effect and its duration, and water turbidity and temperature were measured in each case. The channels showed variable losses of up to 13.5%; channels showing water gains were not treated with LA-PAM. In all cases, the LA-PAM effect was positive, reducing average losses from 8% to 3.1%. Water loss was thus confirmed, and it proved possible to reduce seepage through LA-PAM applications, provided losses were known and correctly determined when applying the polymer. This could increase irrigation security in critical periods, especially under drought conditions.
Keywords: canal seepage, irrigation, polyacrylamide, water management
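The application-rate arithmetic implied by the abstract can be sketched directly; the reach geometry below is invented for illustration.

```python
def lapam_dose_kg(wetted_perimeter_m, reach_length_m, rate_kg_per_ha=11.0):
    """Polymer mass for one application: the 11 kg/ha rate is applied
    over the wetted-perimeter area of the treated reach."""
    area_ha = wetted_perimeter_m * reach_length_m / 10_000.0
    return rate_kg_per_ha * area_ha

def seepage_loss_pct(q_in_m3s, q_out_m3s):
    """Loss between an upstream and a downstream flow measurement."""
    return 100.0 * (q_in_m3s - q_out_m3s) / q_in_m3s

# Illustrative reach: 6 m wetted perimeter over 1.2 km of channel.
print(f"{lapam_dose_kg(6.0, 1200.0):.1f} kg of LA-PAM per application")
print(f"{seepage_loss_pct(10.0, 9.2):.1f} % seepage loss")
```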
Procedia PDF Downloads 176
585 Utilizing Laser Cutting Method in Men's Custom-Made Casualwear
Authors: M A. Habit, S. A. Syed-Sahil, A. Bahari
Abstract:
Laser cutting is a manufacturing process that uses a laser to cut materials. It provides extreme accuracy and a clean cut; CO2 lasers dominate this application due to their good-quality beam combined with high output power. Laser cutters are small in scale and limited in the sizes of material they can cut, which makes them more appropriate for custom-made products. The same laser cutting machine is also capable of cutting fine materials such as fine silk, cotton, leather, and polyester. A lack of exploration and knowledge of this technology has caused many designers not to use the laser cutting method in their collections. The objectives of this study are: 1) to identify the potential of the laser cutting technique in custom-made garments for men's casual wear; 2) to experiment with the laser cutting technique in custom-made garments; and 3) to offer guidelines and formulas for men's custom-made casualwear designs with aesthetic value. To achieve these objectives, this research was conducted using mixed methods: interviews with two (2) local experts in the apparel manufacturing industry, telephone interviews with five (5) local emerging fashion designers, and questionnaires distributed to one hundred (100) respondents around the Klang Valley to gauge their understanding and awareness of laser cutting technology. The experiment was conducted using natural and man-made fibers. In conclusion, all of the objectives were achieved in producing custom-made men's casualwear, and the production of these attires will help to educate and enhance innovation in fine technology. It will thereby foster linkage and collaboration between design experts and manufacturing companies.
Keywords: custom-made, fashion, laser cut, men's wear
Procedia PDF Downloads 444
584 A Longitudinal Exploration into Computer-Mediated Communication Use (CMC) and Relationship Change between 2005-2018
Authors: Laurie Dempsey
Abstract:
Relationships are considered beneficial for emotional wellbeing, happiness, and physical health. However, they are also complicated: individuals engage in a multitude of complex and volatile relationships during their lifetime, where the change to or ending of these dynamics can be deeply disruptive. As the internet is further integrated into everyday life and relationships are increasingly mediated, Media Studies' and Sociology's research interests intersect and converge. This study longitudinally explores how relationship change over time corresponds with the developing UK technological landscape between 2005 and 2018. Since the early 2000s, the use of computer-mediated communication (CMC) in the UK has dramatically reshaped interaction. Its use has compelled individuals to renegotiate how they consider their relationships: some argue it has allowed vast networks to be accumulated and strengthened; others contend that it has eroded the core values and norms associated with communication, damaging relationships. This research collaborated with the UK media regulator Ofcom, utilising the longitudinal dataset from their Adult Media Lives study to explore how relationships and CMC use developed over time. This is a unique qualitative dataset covering 2005-2018, in which the same 18 participants partook in annual in-home filmed depth interviews. The raw video footage of the interviews was examined year on year to consider how the same people changed their reported behaviour and outlooks towards their relationships, and how this coincided with CMC featuring more prominently in their everyday lives. Each interview was transcribed, thematically analysed, and coded using NVivo 11 software. The study allowed for a comprehensive exploration of these individuals' changing relationships over time, as participants grew older, experienced marriages or divorces, conceived and raised children, or lost loved ones. It found that as technology developed between 2005 and 2018, everyday CMC use was increasingly normalised and incorporated into relationship maintenance. It played a crucial role in altering relationship dynamics, even factoring in the breakdown of several ties. Three key relationships were identified as being shaped by CMC use: parent-child, extended family, and friendships. Over the years, there were substantial instances of relationship conflict: for parents renegotiating their dynamic with their child as they tried both to restrict and to encourage their child's technology use; for estranged family members 'forced' together in the online sphere; and for friendships compelled to display their relationship publicly on social media for fear of social exclusion. However, it was also evident that CMC acted as a crucial lifeline for these participants, providing opportunities to strengthen and maintain their bonds by previously unachievable means, across both time and distance. A longitudinal study of this length and nature utilising the same participants does not currently exist, so it provides crucial insight into how and why relationship dynamics alter over time. This topical piece of research draws together Sociology and Media Studies, illustrating how the UK's changing technological landscape can reshape one of the most basic human compulsions. The collaboration with Ofcom allows for insight that can be utilised in both academia and policymaking alike, making this research relevant and impactful across a range of academic fields and industries.
Keywords: computer mediated communication, longitudinal research, personal relationships, qualitative data
Procedia PDF Downloads 123
583 The Impact of Technology on Physics Development
Authors: Fady Gaml Malk Mossad
Abstract:
These days, distance training that makes use of internet technology is used widely all over the world to overcome geographical and time-based problems in education. Graphics, animation, and other auxiliary visual resources help students to understand topics easily. In particular, theoretical courses that are quite hard to understand, such as physics and chemistry, require visual material for students to grasp the subjects clearly. In this study, physics applications for a physics laboratory course were developed. All the facilities of web-based educational technology were used for students in laboratory studies to avoid making mistakes and to learn physics subjects better. Android is a mobile operating system (OS) based on the Linux kernel and currently developed by Google. With a user interface based on direct manipulation, Android is designed mainly for touchscreen mobile devices such as smartphones and tablet computers, with specialized user interfaces for televisions (Android TV), cars (Android Auto), and wrist watches (Android Wear). Nowadays, almost everyone uses a smartphone. The smartphone seems to be a must-have item because it has many benefits, including, of course, many benefits for education, such as lesson summaries in e-book form. However, this article is not about lesson summaries; it is about physics practicals based on Android. We therefore explain our idea for physics practicals based on Android, and as an outcome, we hope that many students will come to enjoy studying physics and will always remember physics phenomena through Android-based practical work.
Keywords: physics education, laboratory, web-based education, distance education, android, smartphone, physics practical
Procedia PDF Downloads 15
582 Prediction of Damage to Cutting Tools in an Earth Pressure Balance Tunnel Boring Machine EPB TBM: A Case Study L3 Guadalajara Metro Line (Mexico)
Authors: Silvia Arrate, Waldo Salud, Eloy París
Abstract:
The wear of cutting tools is one of the most decisive elements when planning tunneling works, programming the maintenance stops, and sizing the optimum stock of spare parts over the course of the excavation. Being able to predict the behavior of cutting tools can give a very competitive advantage in terms of costs and excavation performance, optimized to the needs of the TBM itself. The remarkable evolution of data science in recent years makes it possible to analyze the key and most critical parameters of the machinery in order to know how the cutting head is performing against the excavated ground. Taking Metro Line 3 of Guadalajara in Mexico as a case study, the feasibility of using specific energy together with data science applied to parameters such as torque, penetration, and contact force is developed to predict the behavior and status of the cutting tools. The results obtained with both techniques are analyzed and verified against the wear and the field conditions observed during the excavation, in order to determine their effectiveness and predictive capacity. In conclusion, the possibilities and improvements offered by digital tools and programmed calculation algorithms for analyzing the wear of cutting head elements, compared with purely empirical methods, allow early detection of possible damage to cutting tools, which is reflected in optimized excavation performance and a significant improvement in costs and deadlines.
Keywords: cutting tools, data science, prediction, TBM, wear
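One common way to compute the specific energy mentioned above is Teale's formulation for rotary excavation; the sketch below adapts it to a TBM with invented EPB figures, and the project may define the indicator differently.

```python
import math

def specific_energy_mj_m3(thrust_kn, torque_knm, rpm, advance_mm_min,
                          head_diameter_m):
    """Excavation specific energy after Teale (1965), adapted to a TBM:
    SE = F/A + 2*pi*N*T / (A * ROP). At constant ground conditions, a
    rising SE can flag blunted or damaged cutting tools."""
    area_m2 = math.pi * head_diameter_m ** 2 / 4.0
    rop_m_min = advance_mm_min / 1000.0
    thrust_term = thrust_kn / area_m2                              # kJ/m^3
    torque_term = 2.0 * math.pi * rpm * torque_knm / (area_m2 * rop_m_min)
    return (thrust_term + torque_term) / 1000.0                    # MJ/m^3

# Illustrative EPB figures, not Line 3 data.
print(f"{specific_energy_mj_m3(18000, 9000, 1.8, 35, 6.5):.1f} MJ/m^3")
```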
Procedia PDF Downloads 49581 Adapting Hazard Analysis and Critical Control Points (HACCP) Principles to Continuing Professional Education
Authors: Yaroslav Pavlov
Abstract:
In the modern world, ensuring quality has become increasingly important in various fields of human activity. One universal approach to quality management, proven effective in the food industry, is the HACCP (Hazard Analysis and Critical Control Points) concept. Based on the principle of preventing potential hazards to consumers at all stages of production, from raw materials to the final product, HACCP offers a systematic approach to identifying and assessing risks and managing critical control points (CCPs). Initially used primarily in food production, it was later effectively adapted to the food service sector. Implementing HACCP provides organizations with a reliable foundation for improving food safety, covering all links in the food chain from producer to consumer, making it an integral part of modern quality management systems. The main principles of HACCP (hazard identification, CCP determination, effective monitoring procedures, corrective actions, regular verification, and documentation) are universal and can be adapted to other areas. The adaptation of the HACCP concept to continuing professional education (CPE) is relevant, with certain reservations. Specifically, it is reasonable to abandon the term 'hazards', as deviations at CCPs do not pose dangers, unlike in food production. However, the approach through CCP analysis and the use of HACCP's main principles is promising for educational services, primarily because it allows key CCPs to be identified from the value creation model of a specific educational organization, and consequently allows efforts to be focused on specific CCPs in order to manage the quality of educational services. This methodology can be called the Analysis of Critical Points in Educational Services (ACPES). ACPES offers a similar approach to managing the quality of educational services, focusing on preventing and eliminating potential risks that could negatively affect the educational process and learners' achievement of their educational goals, and could ultimately lead students to reject the organization's educational services. ACPES adapts proven HACCP principles to educational services, enhancing the effectiveness of quality management and student satisfaction. It covers the identification of potential problems at all stages of the educational process, from initial interest to graduation and career development. In ACPES, the term 'hazards' is replaced with 'problematic areas', reflecting the specific nature of the educational environment. Special attention is paid to determining CCPs: the stages where corrective measures can most effectively prevent or minimize the risk of failing educational goals. The ACPES principles align with HACCP's principles, adjusted for the specificities of CPE. The learner's journey map method (a variation of the Customer Journey Map, CJM) can be used to overcome the difficulty of formalizing the 'production chain' of educational services. The CJM provides a comprehensive understanding of the learner's experience at each stage, facilitating targeted and effective quality management. Thus, integrating the learner's journey map into ACPES significantly extends the methodology's capabilities, ensuring a comprehensive understanding of the educational process and forming an effective quality management system focused on meeting learners' needs and expectations.Keywords: quality management, continuing professional education, customer journey map, HACCP
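To make the adapted principles concrete, here is a minimal sketch of how ACPES-style critical points might be encoded and monitored in software; the stage names, metrics, thresholds, and corrective actions are all hypothetical, not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class CriticalPoint:
    name: str               # stage of the learner's journey
    metric: str             # what is monitored at this stage
    threshold: float        # minimum acceptable value of the metric
    corrective_action: str  # response when the threshold is not met

def monitor(points, measurements):
    """Check each critical point; the returned log doubles as documentation."""
    log = []
    for p in points:
        value = measurements.get(p.name)
        ok = value is not None and value >= p.threshold
        log.append((p.name, p.metric, value, "ok" if ok else p.corrective_action))
    return log

# Hypothetical critical points along a learner's journey
points = [
    CriticalPoint("enrolment", "conversion rate", 0.30, "revise onboarding"),
    CriticalPoint("module 1", "completion rate", 0.80, "add tutor support"),
    CriticalPoint("graduation", "goal attainment", 0.90, "review curriculum"),
]
print(monitor(points, {"enrolment": 0.25, "module 1": 0.85, "graduation": 0.92}))
```

The returned log reflects the documentation principle: every check, whether it passes or fails, is recorded together with the corrective action it triggers.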
Procedia PDF Downloads 38580 Mean Nutrient Intake and Nutrient Adequacy Ratio in India: Occurrence of Hidden Hunger in Indians
Authors: Abha Gupta, Deepak K. Mishra
Abstract:
The focus of food security studies in India has been on the adequacy of calories and its linkage with poverty levels. India, currently undergoing a massive demographic and epidemiological transition, has demonstrated a decline in average physical activity with increased mechanization and urbanization. The food consumption pattern is also changing, with a decreasing intake of coarse cereals and a marginal increase in the consumption of fruits, vegetables and meat products, resulting in a nutrition transition in the country. However, deficiency of essential micronutrients such as vitamins and minerals is rampant, despite their growing importance in combating lifestyle and other modern diseases. Calorie-driven studies can hardly tackle the complex problem of malnutrition. This paper fills this research lacuna and analyses the mean intake of major and micro-nutrients among different socio-economic groups and the adequacy of these nutrients against the recommended dietary allowance. For this purpose, a cross-sectional survey covering 304 households selected through proportional stratified random sampling was conducted in six villages of Aligarh district in the state of Uttar Pradesh, India. Data on the quantity consumed of 74 food items, grouped into 10 food categories, with a recall period of seven days were collected from the households and converted into energy, protein, fat, carbohydrate, calcium, iron, thiamine, riboflavin, niacin and vitamin C using the standard guidelines of the National Institute of Nutrition. These converted nutrients were compared with the recommended norms given by the National Nutrition Monitoring Bureau. Per capita nutrient adequacy was calculated by dividing the mean household nutrient intake by the household size and then comparing it with the recommended norm. Findings demonstrate that the main sources of both macro- and micro-nutrients are cereals, followed by milk, edible oil and sugar items. The share of meat in providing essential nutrients is very low owing to the largely vegetarian diet. Vegetables, pulses, nuts, fruits and dry fruits are a poor source of most nutrients. Further analysis shows that the intake of most nutrients is higher than the recommended norm. Riboflavin is the only vitamin whose intake is below the standard norm. The poor, labourers, small farmers, Muslims and scheduled castes demonstrate a comparatively lower intake of all nutrients than their counterpart groups, though they still obtain macro- and micro-nutrients significantly above the norm. A major reason for the higher intake of most nutrients across all socio-economic groups is the monotonous diet based on cereals and milk: most nutrients draw their major share from cereals, particularly wheat, and from milk. It can be concluded that although the intake of most nutrients in the diet of the rural population is adequate, their sources are mainly cereals and milk products, depicting a monotonous diet. Hence, more efforts are needed to diversify the diet by giving greater focus to the production of other food items, particularly fruits, vegetables and pulse products. Awareness among the population, better accessibility, and the incorporation of food items other than cereals into government social safety programmes are other measures to improve food security in India.Keywords: hidden hunger, India, nutrients, recommended norm
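The per capita adequacy computation described above reduces to a simple ratio, sketched below; the household figures and the riboflavin allowance used here are illustrative assumptions, not survey data.

```python
def nutrient_adequacy_ratio(household_intake, household_size, rda):
    """Per capita intake divided by the recommended allowance;
    values below 1 indicate inadequate intake of the nutrient."""
    per_capita = household_intake / household_size
    return per_capita / rda

# Hypothetical 5-member household: riboflavin consumed over a 7-day
# recall period (mg), against an assumed RDA of 1.4 mg/day
nar = nutrient_adequacy_ratio(38.5, 5, rda=1.4 * 7)
print(round(nar, 2))  # ~0.79, i.e. below the norm, as found for riboflavin
```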
Procedia PDF Downloads 317579 Challenges for Reconstruction: A Case Study from 2015 Gorkha, Nepal Earthquake
Authors: Hari K. Adhikari, Keshab Sharma, K. C. Apil
Abstract:
The Gorkha, Nepal earthquake of moment magnitude (Mw) 7.8 hit the central region of Nepal on April 25, 2015, with the epicenter about 77 km northwest of Kathmandu Valley. This paper aims to explore the challenges of reconstruction in the rural earthquake-stricken areas of Nepal. The earthquake significantly affected the livelihood of people and the overall economy of Nepal, causing severe damage and destruction in central Nepal, including the nation's capital. A large part of the affected area is difficult to access, with rugged terrain and scattered settlements, which posed unique challenges to reconstruction and rehabilitation efforts on a massive scale. About 800 thousand buildings were affected, leaving 8 million people homeless. Reconstructing up to 800 thousand houses is an arduous task for Nepal against the background of its turbulent political scenario and weak governance. Despite the significant number of actors involved in the reconstruction process, no appreciable relief has reached the ground, which is reflected in the frustration of affected people. The 2015 Gorkha earthquake is one of the most devastating disasters in the modern history of Nepal. To the best of our knowledge, there is no comprehensive study on post-disaster reconstruction in modern Nepal that integrates the information necessary to deal with the challenges and opportunities of reconstruction. The study was conducted using the qualitative content analysis method. Thirty engineers and ten social mobilizers working on reconstruction, as well as more than a hundred local social workers, local party leaders, and earthquake victims, were selected arbitrarily. Information was collected through semi-structured interviews with open-ended questions, focus group discussions, and field notes, with no prior assumptions. The authors also reviewed academic and practitioner literature on the challenges of post-earthquake reconstruction in developing countries, covering the 2001 Gujarat earthquake, the 2005 Kashmir earthquake, the 2003 Bam earthquake and the 2010 Haiti earthquake, which involve building typologies and economic, political, geographical, and geological conditions very similar to Nepal's. Secondary data were collected from reports, action plans, and reflection papers of governmental entities, non-governmental organizations, private-sector businesses, and online news. This study concludes that inaccessibility, the absence of local government, weak governance, weak infrastructure, lack of preparedness, knowledge gaps, manpower shortages, etc. are the key challenges of reconstruction after the 2015 earthquake in Nepal. After scrutinizing the different challenges and issues, the study suggests that good governance, integrated information, public participation, and short-term and long-term strategies to tackle technical issues are crucial factors for timely, quality reconstruction in the context of Nepal. The sample collected for this study is relatively small and may not be fully representative of the stakeholders involved in reconstruction. However, the key findings are ones that need to be recognized by academics, governments, and implementing agencies, and considered in the implementation of post-disaster reconstruction programs in developing countries.Keywords: Gorkha earthquake, reconstruction, challenges, policy
Procedia PDF Downloads 411578 Design of UV Based Unicycle Robot to Disinfect Germs and Communicate With Multi-Robot System
Authors: Charles Koduru, Parth Patel, M. Hassan Tanveer
Abstract:
In this paper, a system in which a team of robots communicates to sanitize a germ-contaminated environment is proposed. We introduce the capabilities of a (most likely heterogeneous) team of robots: a wheeled robot named ROSbot 2.0, which carries a mounted LiDAR and a Kinect sensor, and a modified prototype of a unicycle-drive Roomba robot called the UV robot. The UV robot uses ultrasonic sensors to avoid obstacles and is equipped with an ultraviolet light system to disinfect and kill germs such as bacteria and viruses. In addition, the UV robot carries disinfectant spray to target hidden surfaces that the ultraviolet light is unable to reach. Using the sensors of the ROSbot 2.0, the system creates a 3-D model of the environment, which is used to plan how the ultraviolet robot will disinfect it. Together, this proposed system is known as the RME assistive robot device, or RME system, which communicates between a navigation robot and a germ-disinfecting robot operated by a user. The RME system includes a human-machine interface that allows the user to control certain features of each robot. This approach makes the cleaning process more rapid and efficient, as the UV robot disinfects areas simply by moving around the environment with its ultraviolet light system on to kill germs. The RME system can be used in many settings, including public offices, stores, airports, hospitals, and schools, and will remain beneficial even after the COVID-19 pandemic. Kennesaw State University will continue research in the fields of robotics, engineering, and technology and play its role in serving humanity.Keywords: multi-robot system, assistive robots, COVID-19 pandemic, ultraviolet technology
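As a rough illustration of the UV robot's roam-and-disinfect behavior described above, here is a minimal control-loop sketch; the sensor interface, safe distance, and motion commands are hypothetical stand-ins, since the abstract does not publish any control code.

```python
import random

SAFE_DISTANCE_CM = 40  # assumed clearance before the robot turns away

def read_ultrasonic():
    """Stand-in for the UV robot's ultrasonic sensors (front, left, right)."""
    return {side: random.uniform(10, 200) for side in ("front", "left", "right")}

def control_step(uv_lamp_on=True):
    """One cycle of the assumed obstacle-avoidance loop."""
    dist = read_ultrasonic()
    if dist["front"] < SAFE_DISTANCE_CM:
        # turn toward the side with more free space
        command = "turn_left" if dist["left"] > dist["right"] else "turn_right"
    else:
        command = "forward"
    return command, uv_lamp_on  # lamp stays on while roaming

for _ in range(5):
    print(control_step())
```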
Procedia PDF Downloads 187577 Task Scheduling and Resource Allocation in Cloud-based on AHP Method
Authors: Zahra Ahmadi, Fazlollah Adibnia
Abstract:
Task scheduling and optimal resource allocation in the cloud must account for the dynamic nature of tasks and the heterogeneity of resources. Applications based on scientific workflows, which are characterized by high processing and storage demands, are among the most widely used in this field. To increase their efficiency, it is necessary to schedule the tasks properly and select the best virtual machine in the cloud. The goals of the system are effective factors in task scheduling and resource selection, which depend on various criteria such as time, cost, current workload and processing power. Multi-criteria decision-making methods are a good choice in this field. In this research, a new method of task scheduling and resource allocation in a heterogeneous environment, based on a modified AHP algorithm, is proposed. In this method, input tasks are scheduled according to two criteria: execution time and size. Resource allocation combines the AHP algorithm with a first-come, first-served policy. Resources are prioritized by the criteria of main memory size, processor speed and bandwidth. To modify the AHP algorithm, this system uses the Linear Max-Min and Linear Max normalization methods, which have a great impact on the ranking and are the best choice for the algorithm. The simulation results show a decrease in the average response time, turnaround time and execution time of input tasks in the proposed method compared to similar baseline methods.Keywords: hierarchical analytical process, work prioritization, normalization, heterogeneous resource allocation, scientific workflow
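To illustrate the two normalization schemes named above and their effect on ranking, here is a minimal sketch; the virtual-machine data and criterion weights are hypothetical, and in the actual method the weights would come from AHP pairwise comparisons.

```python
def linear_max(values):
    """Linear Max normalization: divide each value by the column maximum."""
    m = max(values)
    return [v / m for v in values]

def linear_max_min(values):
    """Linear Max-Min normalization: rescale each column onto [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

# Hypothetical VMs scored on memory (GB), CPU speed (GHz), bandwidth (Mbps)
vms = {"vm1": [4, 2.4, 100], "vm2": [8, 3.0, 250], "vm3": [16, 2.0, 150]}
weights = [0.5, 0.3, 0.2]  # assumed AHP-derived criterion weights

columns = list(zip(*vms.values()))
normalized = [linear_max_min(list(col)) for col in columns]
scores = {name: sum(w * normalized[j][i] for j, w in enumerate(weights))
          for i, name in enumerate(vms)}
print(sorted(scores.items(), key=lambda kv: -kv[1]))  # highest score first
```

Swapping linear_max_min for linear_max changes the normalized scores and hence can change the final ranking, which is why the choice of normalization matters for the modified AHP.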
Procedia PDF Downloads 146