Search results for: combining forecasts
621 High School STEM Curriculum and Example of Laboratory Work That Shows How Microcomputers Can Help in the Understanding of Physical Concepts
Authors: Jelena Slugan, Ivica Ružić
Abstract:
We are witnessing the rapid development of technologies that change the world around us. However, curricula and teaching processes are often slow to adapt to this change; it takes time, money and expertise to implement technology in the classroom. Therefore, the University of Split, Croatia, partnered with the local Marko Marulić High School and created the project "Modern competence in modern high schools", as part of which five different curricula for STEM areas were developed. One of the curricula combines information technology with physics. The main idea was to teach students how to use different circuits and microcomputers to explore nature and physical phenomena. As a result, using electrical circuits, students are able to recreate in the classroom the phenomena that they observe every day in their environment. So far, high school students have had very little opportunity to perform experiments independently, and in particular those physics experiments did not involve ICT. This project is therefore of great importance, because students will finally get a chance to develop in line with modern technologies. This paper presents new methods of teaching physics that help students develop experimental skills through the study of the deterministic nature of physical laws. Students learn how to formulate hypotheses, model physical problems using electronic circuits and evaluate their results. While doing so, they also acquire useful problem-solving skills.
Keywords: ICT in physics, curriculum, laboratory activities, STEM (science, technology, engineering, mathematics)
Procedia PDF Downloads 201
620 Examining Audiology Students' Clinical Reasoning Skills When Using Virtual Audiology Cases Aided with No Collaboration, Live Collaboration, and Virtual Collaboration
Authors: Ramy Shaaban
Abstract:
The purpose of this study was to examine the difference in students' clinical reasoning skills when using virtual audiology cases with and without collaborative assistance, drawing on learning approaches and computer-based learning models important to clinical reasoning: Situated Learning Theory, Social Development Theory, Scaffolding, and Collaborative Learning. A quasi-experimental design was conducted at two United States universities to examine whether there is a significant difference in clinical reasoning skills between three treatment groups using the IUP Audiosim software. Two computer-based audiology case simulations were developed, and participants were randomly placed into three groups: no collaboration, virtual collaboration, and live collaboration. The clinical reasoning data were analyzed using one-way ANOVA and Tukey post hoc analyses. The results show that there was a significant difference in clinical reasoning skills between the three treatment groups. The score obtained by the no-collaboration group was significantly lower than the scores obtained by the virtual and live collaboration groups. Collaboration, whether virtual or in person, has a positive effect on students' clinical reasoning. These results with audiology students indicate that combining collaboration models with scaffolding and embedding situated learning and social development theories into the design of future virtual patients has the potential to improve students' clinical reasoning skills.
Keywords: clinical reasoning, virtual patients, collaborative learning, scaffolding
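As a rough illustration of the analysis described above (one-way ANOVA followed by a Tukey post hoc test), the Python sketch below uses hypothetical clinical reasoning scores for the three groups; the group sizes and values are assumptions, not data from the study.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical clinical reasoning scores for the three treatment groups
no_collab = np.array([62, 58, 65, 60, 59, 63])
virtual_collab = np.array([74, 71, 78, 69, 75, 72])
live_collab = np.array([76, 73, 79, 71, 77, 74])

# One-way ANOVA across the three groups
f_stat, p_value = f_oneway(no_collab, virtual_collab, live_collab)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Tukey HSD post hoc comparison to see which pairs of groups differ
scores = np.concatenate([no_collab, virtual_collab, live_collab])
groups = (["no"] * len(no_collab) + ["virtual"] * len(virtual_collab)
          + ["live"] * len(live_collab))
print(pairwise_tukeyhsd(scores, groups, alpha=0.05))
```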
Procedia PDF Downloads 215
619 First and Second Law Analysis of the Reheat Organic Rankine Cycle
Authors: E. Moradimaram, H. Sayehvand
Abstract:
In recent years, the increasing use of fossil fuels has led to various environmental problems, including urban pollution, ozone layer depletion and acid rain. Moreover, with the increased number of industrial centers and higher consumption of these fuels, the depletion of fossil energy reserves has become more evident. Considering the environmental pollution caused by fossil fuels and their limited availability, renewable sources can be considered the main substitute for non-renewable resources. One technology for exploiting such sources is the Organic Rankine Cycle (ORC). These cycles, while offering high safety, have low maintenance requirements. Combining the ORC with other components, such as an ejector or a reheater, increases overall cycle efficiency. In this study, an ejector and a reheater are used to improve the thermal efficiency (η_th), exergy efficiency (η_ex) and net output power (W_net); therefore, the ORC with reheater (RORC) is proposed. A computational program has been developed in Engineering Equation Solver (EES) to calculate the required thermodynamic parameters. In this program, a first- and second-law analysis of the RORC is conducted and compared with the ORC with ejector (EORC). R245fa is selected as the working fluid, and water is chosen as the low-temperature heat source, with a temperature of 95 °C and a mass flow rate of 1 kg/s. The pressures of the second evaporator and the reheater are optimized for maximum exergy efficiency. The environment is at 298.15 K and 101.325 kPa. The results indicate that the thermodynamic parameters of the RORC are improved compared to the EORC.
Keywords: Organic Rankine Cycle (ORC), Organic Rankine Cycle with Reheater (RORC), Organic Rankine Cycle with Ejector (EORC), exergy efficiency
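For reference, the first- and second-law performance metrics mentioned above can be written out as a small Python sketch; the state-point values used here are placeholders chosen only to show the calculation (with the exergy of the heat input simplified to a constant-temperature Carnot factor), not results from the paper.

```python
# Minimal sketch of the first- and second-law efficiency definitions used above.
# All numerical values below are illustrative placeholders.
T0 = 298.15              # dead-state (environment) temperature, K
T_source = 95 + 273.15   # low-temperature heat source, K

Q_in = 350.0             # heat input to the cycle, kW (assumed)
W_turbine = 40.0         # turbine work output, kW (assumed)
W_pump = 2.5             # pump work input, kW (assumed)

W_net = W_turbine - W_pump                  # net output power
eta_th = W_net / Q_in                       # first-law (thermal) efficiency
exergy_in = Q_in * (1.0 - T0 / T_source)    # exergy of the heat input (simplified)
eta_ex = W_net / exergy_in                  # second-law (exergy) efficiency

print(f"W_net = {W_net:.1f} kW, eta_th = {eta_th:.3f}, eta_ex = {eta_ex:.3f}")
```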
Procedia PDF Downloads 164
618 Urban Growth Analysis Using Multi-Temporal Satellite Images, Non-stationary Decomposition Methods and Stochastic Modeling
Authors: Ali Ben Abbes, Imed Riadh Farah, Vincent Barra
Abstract:
Remotely sensed data are a significant source for monitoring and updating land use/cover databases. Nowadays, change detection in urban areas has been a subject of intensive research. Timely and accurate data on the spatio-temporal changes of urban areas are therefore required. The data extracted from multi-temporal satellite images are usually non-stationary; in fact, the changes evolve in both time and space. This paper proposes a methodology for change detection in urban areas by combining a non-stationary decomposition method and stochastic modeling. We consider as input to our methodology a sequence of satellite images I1, I2, …, In at different periods (t = 1, 2, ..., n). Firstly, a preprocessing of the multi-temporal satellite images (e.g., radiometric, atmospheric and geometric corrections) is applied. The systematic study of global urban expansion in our methodology can be approached in two ways: the first considers the urban area as a single object as opposed to non-urban areas (e.g., vegetation, bare soil and water), with the objective of extracting the urban mask; the second aims to obtain deeper knowledge of the urban area by distinguishing different types of tissue within it. In order to validate our approach, we used a database of Tres Cantos, Madrid, in Spain, derived from Landsat over the period from January 2004 to July 2013 by collecting two frames per year at a spatial resolution of 25 meters. The obtained results show the effectiveness of our method.
Keywords: multi-temporal satellite image, urban growth, non-stationary, stochastic model
Procedia PDF Downloads 430
617 The Application of Sequence Stratigraphy to the Sajau (Pliocene) Coal Distribution in Berau Basin, Northeast Kalimantan, Indonesia
Authors: Ahmad Helman Hamdani, Diana Putri Hamdiana
Abstract:
The Sajau coal measures of the Berau Basin, northeastern Kalimantan, were deposited within a range of facies associations spanning a spectrum of settings from fluvial to marine. The transitional to terrestrial coal measures are dominated by siliciclastics, but they also contain three laterally extensive marine bands (mudstone). These bands act as marker horizons that enable correlation between fully marine and terrestrial facies. Examination of this range of facies and their sedimentology has enabled the development of a high-resolution sequence stratigraphic framework. Set against the established backdrop of the third-order Sajau transgression, nine fourth-order sequences are recognized. Results show that, in the composite sequences, peat accumulation predominantly correlates in transitional areas with the early transgressive sequence sets (TSS) and highstand sequence sets (HSS), while in more landward areas it correlates with the middle TSS to late HSS. Differences in peat accumulation regimes within the sequence stratigraphic framework are attributed to variations in subsidence and background siliciclastic input rates in different depositional settings, with these combining to produce differences in the rate of accommodation change. The preservation of coal resources in the middle to late HSS in this area was most likely related to the rise of the regional base level throughout the Sajau.
Keywords: sequence stratigraphy, coal, Pliocene, Berau basin
Procedia PDF Downloads 466
616 Designing Floor Planning in 2D and 3D with an Efficient Topological Structure
Authors: V. Nagammai
Abstract:
Very-large-scale integration (VLSI) is the process of creating an integrated circuit (IC) by combining thousands of transistors into a single chip. Advances in technology increase the complexity of IC manufacturing, which may vary the power consumption and increase the size and latency. Topology defines the number of connections within a network. In this project, the NoC topology is generated using the Atlas tool, which increases performance and, in turn, makes the determination of constraints effective. Routing is performed by the XY routing algorithm with wormhole flow control. In NoC topology generation, the values of power, area and latency are predetermined. In previous work, placement, routing and shortest path evaluation were performed using the floor planning with cluster reconstruction and path allocation algorithm (FCRPA), which accounted for four 3x3 switches, six 4x4 switches, and two 5x5 switches. The use of the 4x4 and 5x5 switches increases the power consumption and area of the block. To avoid this problem, this paper uses one 8x8 switch and four 3x3 switches. The paper uses IPRCA, which consists of three steps: placement, clustering, and shortest path evaluation. Placement is performed using min-cut placement, clustering is performed using a cluster generation algorithm, and the shortest path is evaluated using Dijkstra's algorithm. The power consumption of each block is determined. The experimental results show that area, power, and wire length are improved simultaneously.
Keywords: application specific noc, b* tree representation, floor planning, t tree representation
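Since the shortest path step above relies on Dijkstra's algorithm, a minimal Python sketch of that step is given below; the example switch graph and its edge weights are assumed purely for illustration and are not the NoC studied in the paper.

```python
import heapq

def dijkstra(graph, source):
    """Shortest path distances from source over a weighted adjacency dict."""
    dist = {node: float("inf") for node in graph}
    dist[source] = 0.0
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue  # stale heap entry
        for v, w in graph[u].items():
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

# Hypothetical NoC switch graph: edge weights could model hop latency.
noc = {
    "S0": {"S1": 1, "S2": 2},
    "S1": {"S0": 1, "S3": 1},
    "S2": {"S0": 2, "S3": 1},
    "S3": {"S1": 1, "S2": 1},
}
print(dijkstra(noc, "S0"))
```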
Procedia PDF Downloads 394
615 Identification of Factors Affecting Technical Efficiency of Sugar Cane Farming in East Java
Authors: Noor Rizkiyah, Djoko Koestiono, Budi Setiawan, Nuhfil Hanani
Abstract:
This research aims to identify the factors that affect sugar cane production, the level of technical efficiency of ratoon sugar cane farming, and the factors that affect technical inefficiency. The research was carried out in Malang, East Java, using proportionally stratified non-random sampling, obtaining 172 sugar cane farming households classified by ratooning level, i.e., ratooning I (3-4 ratoon cycles), ratooning II (5-10 ratoon cycles), and ratooning III (more than 10 ratoon cycles). The method used is the stochastic production frontier approach with maximum likelihood estimation (MLE). The analysis shows that the factors affecting ratoon sugar cane production are land, the use of ZA and Petroganik fertilizers, the use of replanting seeds, and labor. The average technical efficiency of ratoon sugar cane farmers is 0.78, which is categorized as not yet technically efficient. The factors that influence technical inefficiency are age, number of dependents, and the family's ratooning frequency. Although not yet technically efficient, sugar cane farmers continue to cultivate by ratooning; however, repeated ratooning results in a decrease in sugar cane production. The feasibility analysis gives an R/C ratio of 1.15 for ratoon sugar cane farming, so it is worth pursuing. Thus, improved technology and a better combination of inputs are needed so that technical efficiency can be achieved and the farming becomes more worthwhile to organise.
Keywords: technical efficiency, production, sugarcane, frontier
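For readers unfamiliar with the method, the stochastic production frontier behind the reported technical efficiency scores can be stated compactly as below; the Cobb-Douglas functional form and the half-normal inefficiency distribution are standard assumptions used here for illustration, not details taken from this abstract.

```latex
% Stochastic production frontier (Cobb-Douglas form assumed for illustration)
\ln y_i = \beta_0 + \sum_{k=1}^{K} \beta_k \ln x_{ik} + v_i - u_i,
\qquad v_i \sim N(0,\sigma_v^2), \quad u_i \sim N^{+}(0,\sigma_u^2)

% Technical efficiency of farm i (the reported sample average is 0.78)
TE_i = \exp(-u_i) \in (0,1]
```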
Procedia PDF Downloads 173
614 The Effects of Prolonged Social Media Use on Student Health: A Focus on Computer Vision Syndrome, Hand Pain, Headaches, and Mental Status
Authors: Augustine Ndudi Egere, Shehu Adamu, Esther Ishaya Solomon
Abstract:
As internet accessibility and smartphone use continue to increase in Nigeria, Africa's most populous country, social media platforms have become ubiquitous, causing students in the 18-25 age bracket to spend more time on social media. The research investigated the impact of prolonged social media use on the physical health of students, with a specific focus on computer vision syndrome, hand pain, headaches and mental status. The study adopted a mixed-methods approach, combining quantitative surveys to gather statistical data on usage patterns and symptoms with qualitative interviews into the experiences and perceptions of medical practitioners concerning the cases under study within the geopolitical region. The results were analyzed using regression analysis. It was observed that there is a significant correlation between social media usage by students in the study age bracket and computer vision syndrome, hand pain, headaches and general mental status. The research concludes by providing valuable insights into potential interventions and strategies to mitigate the adverse effects of excessive social media use on student well-being and recommends, among others, that educational institutions, parents, and students themselves collaborate to implement strategies aimed at promoting responsible and balanced use of social media.
Keywords: social media, student health, computer vision syndrome, hand pain, headaches, mental status
Procedia PDF Downloads 47
613 Investigating the Socio-ecological Impacts of Sea Level Rise on Coastal Rural Communities in Ghana
Authors: Benjamin Ankomah-Asare, Richard Adade
Abstract:
Sea level rise (SLR) poses a significant threat to coastal communities globally. Ghana has over the years implemented protective measures, such as the construction of groynes and revetments, to serve as barriers against sea waves in major cities and towns and to prevent coastal erosion and flooding. For vulnerable rural coastal communities, planned retreat is often proposed; however, relocation costs are often underestimated, as losses of future social and cultural value are not always adequately taken into account. Through a mixed-methods approach combining qualitative interviews, surveys, and spatial analysis, the study examined the experiences of coastal rural communities in Ghana and assessed the effectiveness of relocation strategies in addressing the socio-economic and environmental challenges posed by sea level rise. The study revealed the devastating consequences of sea level rise on these communities, including increased flooding, erosion, and saltwater intrusion into freshwater sources. Moreover, it highlighted the adaptive capacities within these communities and how factors such as infrastructure, economic activities, cultural heritage, and governance structures shape their resilience in the face of environmental change. While relocation can be an effective strategy for reducing the risks associated with sea level rise, the study recommends that proper implementation of this adaptation strategy be coupled with community-led planning, participatory decision-making, and targeted support for vulnerable groups.
Keywords: sea level rise, relocation, socio-ecological impacts, rural communities
Procedia PDF Downloads 54
612 Using Monte Carlo Model for Simulation of Rented Housing in Mashhad, Iran
Authors: Mohammad Rahim Rahnama
Abstract:
The study employs the Monte Carlo method to simulate rented housing in Mashhad, the second largest city in Iran. A total of 334 rental residential units in Mashhad, including both apartments and houses (villas), were randomly selected from advertisements placed in Khorasan newspapers during July and August 2015. In order to simulate the monthly rent price, a rent index was calculated by combining the mortgage and the rent price. In the next step, the relation between floor area and the number of bedrooms for each unit, for both apartments and houses (villas), was estimated through multivariate regression using SPSS and coded in XML. The initial model was called using the simulation feature in SPSS and simulated using triangular and binomial algorithms. The findings revealed that the average simulated rent index was 548.5$ per month. Calculating the sensitivity of the rent index to the number of bedrooms, we found, firstly, that 97% of units have three bedrooms and, secondly, that as the number of bedrooms increases from one to three, for rent prices of less than 200$ the percentage of units having one bedroom decreases from 10% to 0, whereas for units with a rent price of more than 571.4$ the percentage increases from 37% to 48%. In light of these findings, it becomes clear that planning the construction of rental residential units, overseeing rent prices, and granting subsidies to rental units, particularly apartments with two bedrooms, represent a sound policy for regulating the residential market in Mashhad.
Keywords: Mashhad, Monte Carlo, simulation, rent price, residential unit
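As a minimal illustration of this kind of Monte Carlo simulation, the Python sketch below draws a rent index from a triangular distribution and a bedroom count from a binomial distribution; the distribution parameters are invented for the example and are not the SPSS model fitted in the study.

```python
import numpy as np

rng = np.random.default_rng(seed=42)
n_units = 334  # same sample size as the study; all parameters below are assumed

# Rent index drawn from a triangular distribution (min, mode, max in $/month)
rent_index = rng.triangular(left=150.0, mode=500.0, right=1200.0, size=n_units)

# Number of bedrooms drawn from a binomial distribution, shifted to 1..3
bedrooms = 1 + rng.binomial(n=2, p=0.7, size=n_units)

print(f"average simulated rent index: {rent_index.mean():.1f} $/month")
for b in (1, 2, 3):
    share = (bedrooms == b).mean() * 100
    cheap = ((bedrooms == b) & (rent_index < 200)).mean() * 100
    print(f"{b}-bedroom units: {share:.1f}% of sample, {cheap:.1f}% under 200$")
```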
Procedia PDF Downloads 277
611 Enhancing the Performance of Automatic Logistic Centers by Optimizing the Assignment of Material Flows to Workstations and Flow Racks
Authors: Sharon Hovav, Ilya Levner, Oren Nahum, Istvan Szabo
Abstract:
In modern large-scale logistic centers (e.g., big automated warehouses), complex logistic operations performed by human staff (pickers) need to be coordinated with the operations of automated facilities (robots, conveyors, cranes, lifts, flow racks, etc.). The efficiency of advanced logistic centers strongly depends on optimizing picking technologies in sync with the facility/product layout, as well as on the optimal distribution of material flows (products) in the system. The challenge is to develop a mathematical operations research (OR) tool that will optimize system cost-effectiveness. In this work, we propose a model that describes an automatic logistic center consisting of a set of workstations located at several galleries (floors), with each station containing a known number of flow racks. The requirements of each product and the working capacity of stations served by a given set of workers (pickers) are assumed to be predetermined. The goal of the model is to maximize system efficiency. The proposed model includes two echelons. The first is the setting of the (optimal) number of workstations needed to create the total processing/logistic system, subject to picker capacities. The second echelon deals with the assignment of the products to the workstations and flow racks, aimed at achieving maximal throughput of picked products over the entire system given picker capacities and budget constraints. The solutions to the problems at the two echelons interact to balance the overall load in the flow racks and maximize overall efficiency. We have developed an operations research model within each echelon. In the first echelon, the problem of calculating the optimal number of workstations is formulated as a non-standard bin-packing problem with capacity constraints for each bin. The problem arising in the second echelon is presented as a constrained product-workstation-flow rack assignment problem with non-standard min-max criteria, in which the workload maximum is calculated across all workstations in the center and the exterior minimum is calculated across all possible product-workstation-flow rack assignments. The OR problems arising in each echelon are proved to be NP-hard. Consequently, we find and develop heuristic and approximation solution algorithms based on exploiting and improving local optima. The logistic center (LC) model considered in this work is highly dynamic and is recalculated periodically based on updated demand forecasts that reflect market trends, technological changes, seasonality, and the introduction of new items. The suggested two-echelon approach and the min-max balancing scheme are shown to work effectively on illustrative examples and real-life logistic data.
Keywords: logistics center, product-workstation, assignment, maximum performance, load balancing, fast algorithm
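The first-echelon problem described above is a bin-packing variant; the short Python sketch below shows a plain first-fit-decreasing heuristic for packing product workloads into capacity-limited workstations. It is a generic illustration under assumed numbers, not the authors' algorithm.

```python
def first_fit_decreasing(workloads, capacity):
    """Assign workloads (e.g., picking hours per product) to the fewest
    workstations such that no workstation exceeds its picker capacity."""
    stations = []  # each entry is the remaining capacity of one workstation
    assignment = {}
    for item, load in sorted(workloads.items(), key=lambda kv: -kv[1]):
        for i, remaining in enumerate(stations):
            if load <= remaining:
                stations[i] -= load
                assignment[item] = i
                break
        else:  # no open workstation fits this load: open a new one
            stations.append(capacity - load)
            assignment[item] = len(stations) - 1
    return assignment, len(stations)

# Hypothetical picking workloads per product family (hours per shift)
demo = {"A": 3.5, "B": 2.0, "C": 4.0, "D": 1.5, "E": 2.5}
print(first_fit_decreasing(demo, capacity=6.0))
```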
Procedia PDF Downloads 228
610 Modeling and Estimating Reserve of the Ali Javad Porphyry Copper-Gold Deposit, East Azerbaijan, Iran
Authors: Behzad Hajalilou, Nasim Hajalilou, Saeid Ansari
Abstract:
The study area is located in East Azerbaijan province, north of Ahar city, within the 1:100,000 Varzgan geological map sheet. This region is located in the Central Iran zone. The Ali Javad porphyry copper-gold deposit formed in a magmatic complex containing intrusive masses of granodiorite and quartz monzonite that penetrate the Eocene volcanic aggregate. The most important mineralization includes primary oxide minerals (magnetite), sulfides (pyrite, chalcopyrite, molybdenite, bornite, chalcocite, covellite), secondary oxide or hydroxide minerals (hematite, goethite, limonite), and carbonates (malachite and azurite). The mineralization occurs as vein-veinlet and disseminated systems. The alterations observed in the region include intermediate argillic, advanced argillic, phyllic, silicic, propylitic, chloritic and potassic. The 3D mineralization model of Ali Javad was produced with DATAMINE software, based on the study of 700 polished sections from 32 boreholes drilled in the region. This model is fully compatible with the Lowell and Guilbert model for the mineralization of quartz monzonite-type porphyry copper deposits. The estimated reserve of the Ali Javad deposit is 81.5 million tons at 0.75 percent copper, and for gold, 8.37 million tons at 1.8 ppm.
Keywords: porphyry copper, mineralization, Ali Javad, modeling, reserve estimation
Procedia PDF Downloads 220
609 The Impact of Data Science on Geography: A Review
Authors: Roberto Machado
Abstract:
We conducted a systematic review using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses methodology, analyzing 2,996 studies and synthesizing 41 of them to explore the evolution of data science and its integration into geography. By employing optimization algorithms, we accelerated the review process, significantly enhancing the efficiency and precision of literature selection. Our findings indicate that data science has developed over five decades, facing challenges such as the diversified integration of data and the need for advanced statistical and computational skills. In geography, the integration of data science underscores the importance of interdisciplinary collaboration and methodological innovation. Techniques like large-scale spatial data analysis and predictive algorithms show promise in natural disaster management and transportation route optimization, enabling faster and more effective responses. These advancements highlight the transformative potential of data science in geography, providing tools and methodologies to address complex spatial problems. The relevance of this study lies in the use of optimization algorithms in systematic reviews and the demonstrated need for deeper integration of data science into geography. Key contributions include identifying specific challenges in combining diverse spatial data and the necessity for advanced computational skills. Examples of connections between these two fields encompass significant improvements in natural disaster management and transportation efficiency, promoting more effective and sustainable environmental solutions with a positive societal impact.
Keywords: data science, geography, systematic review, optimization algorithms, supervised learning
Procedia PDF Downloads 34
608 Modelling and Numerical Analysis of Thermal Non-Destructive Testing on Complex Structure
Authors: Y. L. Hor, H. S. Chu, V. P. Bui
Abstract:
Composite materials are widely used to replace conventional materials, especially in the aerospace industry, to reduce the weight of devices. They are formed by combining reinforcing materials via adhesive bonding to produce a bulk material with altered macroscopic properties. In bulk composites, degradation may occur at the microscopic scale, in each individual reinforcing fiber layer or especially in the matrix layer, in forms such as delamination, inclusions, disbonds, voids, cracks, and porosity. In this paper, we focus on the detection of defects in the matrix layer, where the adhesion between the composite plies is in contact but coupled only through a weak bond. Such adhesive defects are tested through various nondestructive methods. Among them, pulsed phase thermography (PPT) has shown advantages in improved sensitivity, large-area coverage, and high-speed testing. The aim of this work is to develop an efficient numerical model to study the application of PPT to the nondestructive inspection of weak bonding in composite materials. The resulting thermal evolution field comprises internal reflections between the interfaces of the defects and the specimen, and the key features of the defects present in the material can be obtained by investigating the thermal evolution of the field distribution. Computational simulation of such inspections has allowed the techniques to be improved and applied to various inspections, such as materials with high thermal conductivity and more complex structures.
Keywords: pulsed phase thermography, weak bond, composite, CFRP, computational modelling, optimization
Procedia PDF Downloads 176
607 The Neutrophil-to-Lymphocyte Ratio after Surgery for Hip Fracture: A New, Simple, and Objective Score to Predict Postoperative Mortality
Authors: Philippe Dillien, Patrice Forget, Harald Engel, Olivier Cornu, Marc De Kock, Jean Cyr Yombi
Abstract:
Introduction: Hip fracture commonly precedes death in elderly people. Identification of high-risk patients may contribute to targeting patients in whom optimal management, resource allocation and trial efficiency are needed. The aim of this study is to construct a predictive score of mortality after hip fracture on the basis of the objective prognostic factors available: neutrophil-to-lymphocyte ratio (NLR), age, and sex. C-reactive protein (CRP) is also considered as an alternative to the NLR. Patients and methods: After IRB approval, we analyzed our prospective database including 286 consecutive patients with hip fracture. A score was constructed combining age (1 point per decade above 74 years), sex (1 point for males), and NLR at postoperative day +5 (1 point if >5). A receiver operating characteristic (ROC) curve analysis was performed. Results: Of the 286 patients included, 235 were analyzed (72 males and 163 females, 30.6%/69.4%), with a median age of 84 (range: 65 to 102) years and a mean NLR value of 6.47 +/- 6.07. At one year, 82/280 patients died (29.3%). Graphical analysis and the log-rank test confirm a highly statistically significant difference (P<0.001). Performance analysis shows an AUC of 0.72 [95%CI 0.65-0.79]. CRP shows no advantage over NLR. Conclusion: We have developed a score based on age, sex and the NLR to predict the risk of mortality at one year in elderly patients after surgery for a hip fracture. After external validation, it may be included in clinical practice as well as in clinical research to stratify the risk of postoperative mortality.
Keywords: neutrophil-to-lymphocyte ratio, hip fracture, postoperative mortality, medical and health sciences
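The scoring rule stated in the methods (one point per decade of age above 74 years, one point for male sex, one point for a day-5 NLR above 5) translates directly into a small Python helper; the example patient values and the rounding of partial decades are assumptions of this sketch.

```python
import math

def hip_fracture_mortality_score(age_years, is_male, nlr_day5):
    """Risk score described in the abstract: 1 point per decade of age
    above 74 years, 1 point for male sex, 1 point if the postoperative
    day-5 neutrophil-to-lymphocyte ratio (NLR) exceeds 5.
    How partial decades are rounded is an assumption of this sketch."""
    score = max(0, math.ceil((age_years - 74) / 10))
    score += 1 if is_male else 0
    score += 1 if nlr_day5 > 5 else 0
    return score

# Hypothetical patient: 84-year-old woman with a day-5 NLR of 6.4
print(hip_fracture_mortality_score(84, is_male=False, nlr_day5=6.4))  # -> 2
```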
Procedia PDF Downloads 414
606 The Potential Impact of Big Data Analytics on Pharmaceutical Supply Chain Management
Authors: Maryam Ziaee, Himanshu Shee, Amrik Sohal
Abstract:
Big Data Analytics (BDA) in supply chain management has recently drawn the attention of academics and practitioners. Big data refers to a massive amount of data from different sources, in different formats, generated at high speed through transactions in business environments and supply chain networks. Traditional statistical tools and techniques find it difficult to analyse such massive data. BDA can assist organisations in capturing, storing, and analysing data, specifically in the field of supply chain. Currently, there is a paucity of research on BDA in the pharmaceutical supply chain context. In this research, the Australian pharmaceutical supply chain was selected as the case study. This industry is highly significant, since the right medicine must reach the right patients, at the right time, in the right quantity, in good condition, and at the right price to save lives. However, drug shortages remain a substantial problem for hospitals across Australia, with implications for patient care, staff resourcing, and expenditure. Furthermore, a massive volume and variety of data is generated at high speed from multiple sources in the pharmaceutical supply chain, which needs to be captured and analysed to benefit operational decisions at every stage of supply chain processes. As the pharmaceutical industry lags behind other industries in using BDA, this raises the question of whether the use of BDA can improve transparency across the pharmaceutical supply chain by enabling the partners to make informed decisions across their operational activities. This presentation explores the impacts of BDA on supply chain management. An exploratory qualitative approach was adopted to analyse data collected through interviews. This study also explores the BDA potential in the whole pharmaceutical supply chain rather than focusing on a single entity. Twenty semi-structured interviews were undertaken with top managers in fifteen organisations (five pharmaceutical manufacturers, five wholesalers/distributors, and five public hospital pharmacies) to investigate their views on the use of BDA. The findings revealed that BDA can enable pharmaceutical entities to have improved visibility over the whole supply chain and also the market; it enables entities, especially manufacturers, to monitor consumption and the demand rate in real time and make accurate demand forecasts, which reduces drug shortages. Timely and precise decision-making can allow the entities to source and manage their stocks more effectively. This can likely address the drug demand at hospitals and respond to unanticipated issues such as drug shortages. Earlier studies explore BDA in the context of clinical healthcare; however, this presentation investigates the benefits of BDA in the Australian pharmaceutical supply chain. Furthermore, this research enhances managers' insight into the potential of BDA at every stage of supply chain processes and helps to improve decision-making in their supply chain operations. The findings will turn the rhetoric of data-driven decision-making into a reality where managers may opt for analytics for improved decision-making in the supply chain processes.
Keywords: big data analytics, data-driven decision, pharmaceutical industry, supply chain management
Procedia PDF Downloads 108
605 Creative Application of Cognitive Linguistics and Communicative Methods to Eliminate Common Learners' Mistakes in Academic Essay Writing
Authors: Ekaterina Lukianchenko
Abstract:
This article sums up a six-year experience of teaching English as a foreign language to over 900 university students at MGIMO (Moscow University of International Relations, Russia), all of them native speakers of Russian aged 16 to 23. By combining a modern communicative approach to teaching with cognitive linguistics theories, one can deal more effectively with deeply rooted mistakes that particular students have and that conventional methods have failed to eliminate. If language items are understood as concepts and frames, and classroom activities as meaningful parts of language competence development, this might help to solve such problems as incorrect use of words, unsuitable register, and confused tenses, as well as logical or structural mistakes, and even certain psychological issues concerning essay writing. Along with classic teaching methods, such classroom practice includes plenty of interaction between students: playing special classroom games aimed at eliminating particular mistakes, working in pairs and groups, and integrating all skills in one class. The main conclusion that the author of the experiment draws is that academic essay writing classes demand a balanced plan. This should not only include writing as such, but additionally feature elements of listening, reading, and speaking activities specifically chosen according to the skills and language students will need to write the particular type of essay.
Keywords: academic essay writing, creative teaching, cognitive linguistics, competency-based approach, communicative language teaching, frame, concept
Procedia PDF Downloads 297
604 Robust Shrinkage Principal Component Parameter Estimator for Combating Multicollinearity and Outliers’ Problems in a Poisson Regression Model
Authors: Arum Kingsley Chinedu, Ugwuowo Fidelis Ifeanyi, Oranye Henrietta Ebele
Abstract:
The Poisson regression model (PRM) is a nonlinear model that belongs to the exponential family of distributions. The PRM is suitable for studying count variables using appropriate covariates, and it sometimes experiences the problems of multicollinearity in the explanatory variables and outliers in the response variable. This study aims to address the problems of multicollinearity and outliers jointly in a Poisson regression model. We developed an estimator called the robust modified jackknife PCKL parameter estimator by combining the principal component estimator, the modified jackknife KL estimator and the transformed M-estimator to address both problems in a PRM. The superiority conditions for this estimator were established, and the properties of the estimator were also derived. The estimator inherits the characteristics of the combined estimators, thereby making it efficient in addressing both problems. It will also be of immediate interest to the research community and advances this study in terms of novelty compared to other studies undertaken in this area. The performance of the estimator (robust modified jackknife PCKL) was compared with that of other existing estimators using the mean squared error (MSE) as a performance evaluation criterion, through a Monte Carlo simulation study and the use of real-life data. The results of the analytical study show that the estimator outperformed the other existing estimators by having the smallest MSE across all sample sizes, different levels of correlation, percentages of outliers and different numbers of explanatory variables.
Keywords: jackknife modified KL, outliers, multicollinearity, principal component, transformed M-estimator
Procedia PDF Downloads 67
603 Forecasting Thermal Energy Demand in District Heating and Cooling Systems Using Long Short-Term Memory Neural Networks
Authors: Kostas Kouvaris, Anastasia Eleftheriou, Georgios A. Sarantitis, Apostolos Chondronasios
Abstract:
To achieve the objective of almost zero-carbon energy solutions by 2050, the EU needs to accelerate the development of integrated, highly efficient and environmentally friendly solutions. In this direction, district heating and cooling (DHC) emerges as a viable and more efficient alternative to conventional, decentralized heating and cooling systems, enabling a combination of more efficient renewable and competitive energy supplies. In this paper, we develop a forecasting tool for near real-time local weather and thermal energy demand predictions for an entire DHC network. In this fashion, we are able to extend the functionality and to improve the energy efficiency of the DHC network by predicting and adjusting the heat load that is distributed from the heat generation plant to the connected buildings by the heat pipe network. Two case studies are considered: one for Vransko, Slovenia and one for Montpellier, France. The data consist of i) local weather data, such as humidity, temperature, and precipitation, ii) weather forecast data, such as the outdoor temperature and iii) DHC operational parameters, such as the mass flow rate and the supply and return temperatures. The external temperature is found to be the most important energy-related variable for space conditioning, and thus it is used as an external parameter for the energy demand models. For the development of the forecasting tool, we use state-of-the-art deep neural networks and, more specifically, recurrent networks with long short-term memory cells, which are able to capture complex non-linear relations among temporal variables. Firstly, we develop models to forecast outdoor temperatures for the next 24 hours using local weather data for each case study. Subsequently, we develop models to forecast thermal demand for the same period, taking under consideration past energy demand values as well as the predicted temperature values from the weather forecasting models. The contributions to the scientific and industrial community are three-fold, and the empirical results are highly encouraging. First, we are able to predict future thermal demand levels for the two locations under consideration with minimal errors. Second, we examine the impact of the outdoor temperature on the predictive ability of the models and how the accuracy of the energy demand forecasts decreases with the forecast horizon. Third, we extend the relevant literature with a new dataset of thermal demand and examine the performance and applicability of machine learning techniques to solve real-world problems. Overall, the solution proposed in this paper is in accordance with EU targets, providing an automated smart energy management system, decreasing human errors and reducing excessive energy production.
Keywords: machine learning, LSTMs, district heating and cooling system, thermal demand
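A minimal sketch of the kind of LSTM forecaster described above is shown below using the Keras API; the window length, feature count, layer sizes, and dummy data are assumptions for illustration, not the architecture or datasets reported by the authors.

```python
import numpy as np
import tensorflow as tf

# Assumed shapes: 24 past hourly steps of 3 features (temperature, humidity,
# past thermal demand) used to predict the next 24 hours of thermal demand.
window, n_features, horizon = 24, 3, 24

model = tf.keras.Sequential([
    tf.keras.Input(shape=(window, n_features)),
    tf.keras.layers.LSTM(64),          # long short-term memory cells
    tf.keras.layers.Dense(horizon),    # one output per forecast hour
])
model.compile(optimizer="adam", loss="mse")

# Dummy training data standing in for the Vransko / Montpellier datasets
X = np.random.rand(512, window, n_features).astype("float32")
y = np.random.rand(512, horizon).astype("float32")
model.fit(X, y, epochs=2, batch_size=32, verbose=0)

print(model.predict(X[:1]).shape)  # -> (1, 24)
```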
Procedia PDF Downloads 143
602 Building an Absurdist Approach to the Philosophy of Science: Combining Camus and Feyerabend
Authors: Robert Herold
Abstract:
This project aims to begin building out a new approach within the philosophy of science that is based on a combination of insights from Albert Camus and Paul Feyerabend. This approach will be labeled an absurdist approach, as it uses for its foundation the philosophy of the absurd as discussed by Camus. While Camus didn't directly discuss the philosophy of science, nor did he offer his own views on the subject in any substantial way, that doesn't mean that his work doesn't have applications within the philosophy of science. In fact, as is argued throughout the piece, much of the work done by Paul Feyerabend stems from a similar metaphysical and epistemological foundation as that of Camus. This foundation is the notion of the absurd and the inability of us as humans to reach some sort of objective truth. In modern times, both Camus and Feyerabend have been largely pushed to the wayside, though Feyerabend has undoubtedly received the more unfair treatment of the two, and this serves more as a hindrance than anything else. Many of the claims and arguments made by both Camus and Feyerabend have not been truly refuted; they have simply been pushed aside by pointing to supposed contradictions or inconsistencies. However, while it would be a monumental task to attempt to discuss all of this past work, perhaps it might be better to move beyond both Camus and Feyerabend and chart a new path. This is the overall goal of this paper. This research will demonstrate that not only are the philosophies of Camus and Feyerabend surprisingly similar and able to mesh well together, they are also able to form something that is truly more than the sum of its parts. While the task of actually building out an approach is a monumental undertaking, the plan is to use this project as a jumping-off point. As such, this paper will start by examining some of the main claims made by both Camus and Feyerabend. Once this is done, it will begin weaving them together and demonstrating where the links between the two philosophies lie. The study will end by building out the very beginning foundations of the absurdist approach to the philosophy of science.
Keywords: philosophy, philosophy of science, albert camus, paul feyerabend
Procedia PDF Downloads 252
601 Analysis of Urban Rail Transit Station's Accessibility Reliability: A Case Study of Hangzhou Metro, China
Authors: Jin-Qu Chen, Jie Liu, Yong Yin, Zi-Qi Ju, Yu-Yao Wu
Abstract:
Increases in travel fares and station failures have a huge impact on passengers' travel. The accessibility reliability of Urban Rail Transit (URT) stations under increasing travel fares and station failures is analyzed in this paper. Firstly, passengers' travel paths are reconstructed based on stochastic user equilibrium and Automatic Fare Collection (AFC) data. Secondly, station importance is calculated by combining the LeaderRank algorithm and the Ratio of Station-Affected Passenger Volume (RSAPV), and station accessibility evaluation indicators are then proposed based on the analysis of passengers' travel characteristics. Thirdly, station accessibility under different scenarios is measured, and the rate of accessibility change is proposed as the indicator of station accessibility reliability. Finally, the accessibility of Hangzhou metro stations is analyzed using the formulated models. The results show that Jinjiang station and Liangzhu station are the most important and the most convenient stations in the Hangzhou metro, respectively. Station failure has a huge impact on station accessibility, whereas an increase in travel fare does not. Stations on Hangzhou metro Line 1 have relatively poor accessibility reliability, and Fengqi Road station's accessibility reliability is the weakest. For the Hangzhou metro operating department, constructing new metro lines around Line 1 and preferentially protecting Line 1's stations can effectively improve the accessibility reliability of the Hangzhou metro.
Keywords: automatic fare collection data, AFC, station's accessibility reliability, stochastic user equilibrium, urban rail transit, URT
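To make the station-importance step concrete, here is a compact Python sketch of the LeaderRank idea (a ground node linked to every station, followed by a random-walk style iteration); the toy station graph and iteration count are assumptions, and the study additionally combines this ranking with RSAPV.

```python
def leaderrank(adj, iters=100):
    """adj: dict mapping each station to the set of its out-neighbours."""
    nodes = list(adj)
    n = len(nodes)
    g = "_ground"
    # Ground node is bidirectionally connected to every station
    out = {u: set(adj[u]) | {g} for u in nodes}
    out[g] = set(nodes)
    score = {u: 1.0 for u in nodes}
    score[g] = 0.0
    for _ in range(iters):
        new = {u: 0.0 for u in out}
        for u, nbrs in out.items():
            share = score[u] / len(nbrs)  # spread score over out-links
            for v in nbrs:
                new[v] += share
        score = new
    # Redistribute the ground node's score evenly over the real stations
    return {u: score[u] + score[g] / n for u in nodes}

# Toy undirected station graph written as symmetric directed edges
stations = {"A": {"B"}, "B": {"A", "C", "D"}, "C": {"B", "D"}, "D": {"B", "C"}}
print(sorted(leaderrank(stations).items(), key=lambda kv: -kv[1]))
```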
Procedia PDF Downloads 136
600 A Picture is worth a Billion Bits: Real-Time Image Reconstruction from Dense Binary Pixels
Authors: Tal Remez, Or Litany, Alex Bronstein
Abstract:
The pursuit of smaller pixel sizes at ever increasing resolution in digital image sensors is mainly driven by the stringent price and form-factor requirements of sensors and optics in the cellular phone market. Recently, Eric Fossum proposed a novel concept of an image sensor with dense sub-diffraction limit one-bit pixels (jots), which can be considered a digital emulation of silver halide photographic film. This idea has been recently embodied as the EPFL Gigavision camera. A major bottleneck in the design of such sensors is the image reconstruction process, producing a continuous high dynamic range image from oversampled binary measurements. The extreme quantization of the Poisson statistics is incompatible with the assumptions of most standard image processing and enhancement frameworks. The recently proposed maximum-likelihood (ML) approach addresses this difficulty, but suffers from image artifacts and has impractically high computational complexity. In this work, we study a variant of a sensor with binary threshold pixels and propose a reconstruction algorithm combining an ML data fitting term with a sparse synthesis prior. We also show an efficient hardware-friendly real-time approximation of this inverse operator. Promising results are shown on synthetic data as well as on HDR data emulated using multiple exposures of a regular CMOS sensor.
Keywords: binary pixels, maximum likelihood, neural networks, sparse coding
Procedia PDF Downloads 204
599 Combined Synchrotron Radiography and Diffraction for in Situ Study of Reactive Infiltration of Aluminum into Iron Porous Preform
Authors: S. Djaziri, F. Sket, A. Hynowska, S. Milenkovic
Abstract:
The use of Fe-Al based intermetallics as an alternative to Cr/Ni based stainless steels is very promising for industrial applications that use parts made of critical raw materials under extreme conditions. However, the development of advanced Fe-Al based intermetallics with appropriate mechanical properties presents several challenges that involve appropriate processing and microstructure control. A processing strategy is being developed which aims at producing a net-shape porous Fe-based preform that is infiltrated with molten Al or an Al alloy. In the present work, porous Fe-based preforms produced by two different methods (selective laser melting (SLM) and the Kochanek process (KE)) are studied during infiltration with molten aluminum. With the objective of elucidating the mechanisms underlying the formation of Fe-Al intermetallic phases during infiltration, an in-house furnace has been designed for in situ observation of infiltration at synchrotron facilities, combining x-ray radiography (XR) and x-ray diffraction (XRD) techniques. The feasibility of this approach has been demonstrated, and information about the propagation of the melt flow front has been obtained. In addition, reactive infiltration has been achieved, in which a bi-phased intermetallic layer was identified between the solid Fe and the liquid Al. In particular, a tongue-like Fe₂Al₅ phase adhering to the Fe and a needle-like Fe₄Al₁₃ phase adhering to the Al were observed. The growth of the intermetallic compound was found to depend on the temperature gradient along the preform as well as on the reaction time, which will be discussed in view of the different results obtained.
Keywords: combined synchrotron radiography and diffraction, Fe-Al intermetallic compounds, in-situ molten Al infiltration, porous solid Fe preforms
Procedia PDF Downloads 226
598 Hybrid Advanced Oxidative Pretreatment of Complex Industrial Effluent for Biodegradability Enhancement
Authors: K. Paradkar, S. N. Mudliar, A. Sharma, A. B. Pandit, R. A. Pandey
Abstract:
The study explores a hybrid combination of hydrodynamic cavitation (HC) and subcritical wet air oxidation (WAO) based pretreatment of complex industrial effluent to selectively enhance biodegradability (without major COD destruction) and thereby facilitate enhanced downstream processing via anaerobic or aerobic biological treatment. Advanced oxidation-based techniques can be less efficient as standalone options, and a hybrid approach combining HC and WAO can lead to a synergistic effect, since both options are based on a common free radical mechanism. HC can be used to create initial turbulence and generate hotspots that initiate the free radical attack, and this agitated mixture can then be subjected to less intense WAO, since the initial heating (to overcome the activation energy) is provided by HC alone. A lab-scale venturi-based hydrodynamic cavitation setup and a wet air oxidation reactor, with biomethanated distillery wastewater (BMDWW) as a model effluent, were examined to establish the proof of concept. The results indicated that, for a desirable biodegradability index (BOD:COD, BI) enhancement (up to 0.4), the standalone cavitation pretreatment condition was 5 bar and 88 min reaction time, with a COD reduction of 36% and a BI enhancement of up to 0.27 (initial BI: 0.17). The optimum standalone WAO condition was 150 °C, 6 bar and 30 minutes, with 31% COD reduction and a BI of 0.33. The hybrid pretreatment (combined cavitation + WAO) worked out to be 23.18 min of HC (at 5 bar) followed by 30 min of WAO at 150 °C and 6 bar, at which around 50% of the COD was retained, yielding a BI of 0.55. FTIR and NMR analyses of the pretreated effluent indicated dissociation and/or reorientation of the complex organic compounds in the untreated effluent into simpler organic compounds after pretreatment.
Keywords: hybrid, hydrodynamic cavitation, wet air oxidation, biodegradability index
Procedia PDF Downloads 618
597 Optimal Harmonic Filters Design of Taiwan High Speed Rail Traction System
Authors: Ying-Pin Chang
Abstract:
This paper presents a method combining particle swarm optimization with nonlinear time-varying evolution and orthogonal arrays (PSO-NTVEOA) for the planning of harmonic filters for the high speed railway traction system with specially connected transformers in unbalanced three-phase power systems. The objective is to simultaneously minimize the cost of the filter, the filter loss, and the total harmonic distortion of currents and voltages at each bus. An orthogonal array experiment is first conducted to obtain the initial solution set. The set is then treated as the initial training sample. Next, the PSO-NTVEOA method parameters are determined by using matrix experiments with an orthogonal array, in which a minimal number of experiments approximates the effect of full factorial experiments. This PSO-NTVEOA method is then applied to design optimal harmonic filters in the Taiwan High Speed Rail (THSR) traction system, where both rectifiers and inverters with IGBTs are used. From the results of the illustrative examples, the feasibility of the PSO-NTVEOA for designing an optimal passive harmonic filter for the THSR system is verified, and the design approach can greatly reduce the harmonic distortion. Three design schemes are compared: the V-V connection suppressing the 3rd-order harmonic, and the Scott and Le Blanc connections, whose harmonic improvement is better than that of the V-V connection.
Keywords: harmonic filters, particle swarm optimization, nonlinear time-varying evolution, orthogonal arrays, specially connected transformers
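For readers unfamiliar with the optimizer, the following Python sketch shows a plain particle swarm optimization loop on a toy cost function; it omits the nonlinear time-varying evolution and orthogonal-array steps that define the PSO-NTVEOA variant, and all parameters and the stand-in cost function are assumed.

```python
import numpy as np

def pso(cost, dim, n_particles=30, iters=200, bounds=(-5.0, 5.0), seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))   # particle positions
    v = np.zeros_like(x)                          # particle velocities
    pbest = x.copy()
    pbest_val = np.apply_along_axis(cost, 1, x)
    gbest = pbest[pbest_val.argmin()].copy()
    w, c1, c2 = 0.7, 1.5, 1.5                     # inertia and acceleration terms
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        val = np.apply_along_axis(cost, 1, x)
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], val[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# Toy stand-in for a filter design cost (e.g., cost + loss + THD penalties)
print(pso(lambda z: float(np.sum(z**2)), dim=4))
```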
Procedia PDF Downloads 393
596 Language Inequalities in the Algerian Public Space: A Semiotic Landscape Analysis
Authors: Sarah Smail
Abstract:
Algeria has been subject to countless conquests and invasions that have resulted in a diverse linguistic repertoire. The sociolinguistic situation of the country makes linguistic landscape analysis pertinent. This, in fact, has led to the growth of diverse linguistic landscape studies that mainly focus on identifying the sociolinguistic situation of the country through the analysis of shop names. The present research adds to the existing literature by offering another perspective on the analysis of signs by combining the physical and digital semiotic landscapes. The powerful oil, gas and agri-food industries in Algeria make it interesting to focus on the commodification of natural resources in order to identify the language and semiotic resources deployed in the Algerian public scene, as well as the visibility of linguistic inequalities and minorities in the business domain. The study discusses the semiotic landscape of three trade cities, Bejaia, Setif and Hassi-Messaoud, in addition to interviews conducted with business owners and graphic designers and questionnaires administered to business employees. Moreover, the study relies on Gorter's multilingual inequalities in public space (MIPS) model (2021) and Irvine and Gal's language ideology and linguistic differentiation (2000). The preliminary results demonstrate the sociolinguistic injustice existing in the business domain, e.g., the exclusion of the official languages, the dominance of foreign languages, and the excessive use of the Roman script.
Keywords: semiotic landscaping, digital scapes, language commodification, linguistic inequalities, business signage
Procedia PDF Downloads 109
595 The Reasons and the Practical Benefits Behind the Motivation of Businesses to Participate in the Dual Education System (DLS)
Authors: Ainur Bulasheva
Abstract:
During the last decade, the dual learning system (DLS) has been actively introduced in various industries in Kazakhstan at the vocational, post-secondary, and higher education levels. It is a relatively new practice-oriented approach to training qualified personnel in Kazakhstan, officially introduced in 2012. Dual learning was adopted from the German vocational education and training system, combining practical training through part-time work in production with training in an educational institution. The DLS policy has increasingly focused on decreasing youth unemployment and the shortage of mid-level professionals by providing incentives for employers to become involved in this system. By participating directly in the educational process, the enterprise strives to train its future personnel to meet fast-changing market demands. This study examines the effectiveness of the DLS from the perspective of employers to understand the motivations of businesses to participate (invest) in this program. Becker's human capital theory, which predicts that employers will invest in training their workers (in our case, dual students) when they expect that the return on investment will be greater than the cost, acts as a starting point. Further extensions of this theory will be considered to understand the investment intentions of businesses. By comparing the perceptions of DLS employers and non-dual practices, this study determines the efficiency of the promoted training approach for enterprises in the Kazakhstani agri-food industry.
Keywords: vocational and technical education, dual education, human capital theory, agri-food industry
Procedia PDF Downloads 70
594 Thermo-Mechanical Behavior of Steel-Wood Connections of Wooden Structures Under the Effect of a Fire
Authors: Ahmed Alagha, Belkacem Lamri, Abdelhak Kada.
Abstract:
Steel-wood assemblies often have complex geometric configurations whose overall behavior under the effect of a fire is conditioned by the thermal response resulting from combining the two materials, steel and wood, whose thermal characteristics are greatly influenced by high temperatures. The objective of this work is to study the thermal behavior of a steel-wood connection, with or without insulating material, subjected to the ISO 834 standard fire model. The analysis is developed analytically using the Eurocode approach and numerically by the finite element method through the ANSYS calculation code. The design of the connections is evaluated at room temperature for the cases of single shear and double shear. The thermal behavior of the connections is simulated in the transient state, taking into account heat transfer by convection and by radiation. The variation of temperature as a function of time is evaluated at different positions in the connections, accounting for the heat produced and the formation of the char layer. The results concern the temperature distributions in the connection elements as a function of the duration of the fire. The results of the thermal analysis show that the temperature increases rapidly and reaches more than 260 °C in the steel material after an hour of exposure to fire. The temperature development in the wood material is different from that in steel because of its thermal properties: wood heats up on the outside and burns, and its surface can reach very high temperatures at certain points.
Keywords: Eurocode 5, finite elements, ISO834, simple shear, thermal behaviour, wood-steel connection
Procedia PDF Downloads 86
593 A Collective Intelligence Approach to Safe Artificial General Intelligence
Authors: Craig A. Kaplan
Abstract:
If AGI proves to be a “winner-take-all” scenario where the first company or country to develop AGI dominates, then the first AGI must also be the safest. The safest, and fastest, path to Artificial General Intelligence (AGI) may be to harness the collective intelligence of multiple AI and human agents in an AGI network. This approach has roots in seminal ideas from four of the scientists who founded the field of Artificial Intelligence: Allen Newell, Marvin Minsky, Claude Shannon, and Herbert Simon. Extrapolating key insights from these founders of AI, and combining them with the work of modern researchers, results in a fast and safe path to AGI. The seminal ideas discussed are: 1) Society of Mind (Minsky), 2) Information Theory (Shannon), 3) Problem Solving Theory (Newell & Simon), and 4) Bounded Rationality (Simon). Society of Mind describes a collective intelligence approach that can be used with AI and human agents to create an AGI network. Information Theory helps address the critical issue of how an AGI system will increase its intelligence over time. Problem Solving Theory provides a universal framework that AI and human agents can use to communicate efficiently, effectively, and safely. Bounded Rationality helps us better understand not only the capabilities of superintelligent AGI but also how humans can remain relevant in a world where the intelligence of AGI vastly exceeds that of its human creators. Each key idea can be combined with recent work in the fields of Artificial Intelligence, Machine Learning, and Large Language Models to accelerate the development of a working, safe AGI system.
Keywords: AI Agents, Collective Intelligence, Minsky, Newell, Shannon, Simon, AGI, AGI Safety
Procedia PDF Downloads 92
592 Optical Emission Studies of Laser-Produced Lead Plasma: Measurements of Transition Probabilities of the 6P7S → 6P2 Transition Array
Authors: Javed Iqbal, R. Ahmed, M. A. Baig
Abstract:
We present new data on the optical emission spectra of laser-produced lead plasma using a pulsed Nd:YAG laser at 1064 nm (pulse energy 400 mJ, pulse width 5 ns, 10 Hz repetition rate) in conjunction with a set of miniature spectrometers covering the spectral range from 200 nm to 720 nm. Well-resolved structure due to the 6p7s → 6p² transition array of neutral lead and a few multiplets of singly ionized lead has been observed. The electron temperatures have been calculated in the range (9000 - 10800) ± 500 K using four methods: the two-line ratio, the Boltzmann plot, the Saha-Boltzmann plot and the Morrata method, whereas the electron number densities have been determined in the range (2.0 – 8.0) ± 0.6 × 10¹⁶ cm⁻³ using the Stark-broadened line profiles of neutral lead lines, singly ionized lead lines and the hydrogen Hα line. The full widths at half maximum (FWHM) of a number of neutral and singly ionized lead lines have been extracted by Lorentzian fits to the experimentally observed line profiles. Furthermore, branching fractions have been deduced for eleven lines of the 6p7s → 6p² transition array in lead, whereas the absolute values of the transition probabilities have been calculated by combining the experimental branching fractions with the lifetimes of the excited levels. The new results are compared with the existing data, showing good agreement.
Keywords: LIBS, plasma parameters, transition probabilities, branching fractions, stark width
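To illustrate how an electron temperature is extracted from a Boltzmann plot, and how an absolute transition probability follows from a branching fraction and a level lifetime, here is a short Python sketch; the level energies, weights, transition probabilities, and lifetime in it are placeholders generated self-consistently for the example, not the measured lead data.

```python
import numpy as np

k_B = 8.617333262e-5  # Boltzmann constant, eV/K

# Hypothetical upper-level energies (eV), statistical weights, transition
# probabilities (s^-1) and wavelengths (nm) for a few emission lines.
E   = np.array([4.33, 4.38, 5.71, 5.97])
g   = np.array([3, 5, 5, 7])
A   = np.array([1.5e8, 0.9e8, 2.0e8, 1.2e8])
lam = np.array([368.3, 363.9, 405.8, 357.2])

# Synthetic line intensities following a Boltzmann distribution at T_true.
T_true = 10000.0  # K, assumed
I = (g * A / lam) * np.exp(-E / (k_B * T_true))

# Boltzmann plot: ln(I * lam / (g * A)) vs E has slope -1/(k_B * T_e).
slope, _ = np.polyfit(E, np.log(I * lam / (g * A)), 1)
print(f"recovered electron temperature ~ {-1.0 / (k_B * slope):.0f} K")

# Absolute transition probability from a branching fraction and a lifetime:
# A_ki = BF_ki / tau_k, here with BF = 0.4 and tau = 5.8 ns (both assumed).
print(f"A_ki ~ {0.4 / 5.8e-9:.2e} s^-1")
```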
Procedia PDF Downloads 284