Search results for: virtual machine
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3831

471 Improving Fingerprinting-Based Localization System Using Generative AI

Authors: Getaneh Berie Tarekegn

Abstract:

A precise localization system is crucial for many artificial intelligence Internet of Things (AI-IoT) applications in the era of smart cities. Their applications include traffic monitoring, emergency alarming, environmental monitoring, location-based advertising, intelligent transportation, and smart health care. The most common method for providing continuous positioning services in outdoor environments is a global navigation satellite system (GNSS). Due to non-line-of-sight, multipath, and weather conditions, GNSS systems do not perform well in dense urban, urban, and suburban areas. This paper proposes a generative AI-based positioning scheme for large-scale wireless settings using fingerprinting techniques. We present a semi-supervised deep convolutional generative adversarial network (S-DCGAN)-based radio map construction method for real-time device localization. It also employs a reliable signal fingerprint feature extraction method based on t-distributed stochastic neighbor embedding (t-SNE), which extracts dominant features while eliminating noise from hybrid WLAN and long-term evolution (LTE) fingerprints. The proposed scheme reduced the workload of site surveying required to build the fingerprint database by up to 78.5% and significantly improved positioning accuracy. The results show that the average positioning error of GAILoc is less than 0.39 m, and more than 90% of the errors are less than 0.82 m. According to the numerical results, SRCLoc improves positioning performance and reduces radio map construction costs significantly compared to traditional methods.
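
As an illustration of the fingerprint feature extraction step described above, the following is a minimal sketch (not the authors' implementation) of reducing hybrid WLAN/LTE received-signal-strength fingerprints with t-SNE using scikit-learn; the array shapes and values are synthetic placeholders.

```python
# Minimal sketch of t-SNE-based fingerprint feature extraction (illustrative only).
# Assumes a hypothetical matrix of RSS fingerprints: one row per reference point,
# one column per WLAN access point or LTE cell.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
rss = rng.uniform(-100, -40, size=(500, 64))   # 500 reference points, 64 WLAN/LTE sources

# Standardise, then embed into a low-dimensional feature space that suppresses noise
# while preserving the local neighbourhood structure of the radio map.
rss_scaled = StandardScaler().fit_transform(rss)
features = TSNE(n_components=2, perplexity=30, init="pca", random_state=0).fit_transform(rss_scaled)

print(features.shape)  # (500, 2) low-dimensional fingerprint features
```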

Keywords: location-aware services, feature extraction technique, generative adversarial network, long short-term memory, support vector machine

Procedia PDF Downloads 30
470 E-Business Role in the Development of the Economy of Sultanate of Oman

Authors: Mairaj Salim, Asma Zaheer

Abstract:

Oman has accomplished as much or more than its fellow Gulf monarchies, despite starting from scratch considerably later, having less oil income to utilize, dealing with a larger and more rugged geography, and resolving a bitter civil war along the way. Of course, Oman's progress in the past 30-plus years has not been without problems and missteps, but the balance is squarely on the positive side of the ledger. Oil has been the driving force of the Omani economy since Oman began commercial production in 1967. The oil industry supports the country's high standard of living and is primarily responsible for its modern and expansive infrastructure, including electrical utilities, telephone services, roads, public education and medical services. In addition to extensive oil reserves, Oman also has substantial natural gas reserves, which are expected to play a leading role in the Omani economy in the twenty-first century. To reduce the country's dependence on oil revenues, the government is restructuring the economy by directing investment to non-oil activities. Since the start of the 21st century, IT has changed the way tasks are performed. To manage affairs for the benefit of organizations and the economy, the Omani government has adopted e-business technologies for development. E-business is important because it allows:
• Transformation of old economy relationships (vertical/linear relationships) to new economy relationships characterized by end-to-end relationship management solutions (integrated or extended relationships)
• Facilitation and organization of networks, in which small firms depend on 'partner' firms for supplies and product distribution to meet customer demands
• SMEs to outsource back-end processes or cost centers, enabling the SME to focus on their core competence
• ICT to connect, manage and integrate processes internally and externally
• SMEs to join networks and enter new markets, through shortened supply chains, to increase market share, customers and suppliers
• SMEs to take up the benefits of e-business to reduce costs, increase customer satisfaction, improve client referral and attract quality partners
• New business models of collaboration for SMEs to increase their skill base
• SMEs to enter the virtual trading arena and increase their market reach
A national strategy for the advancement of information and communication technology (ICT) has been worked out, mainly to introduce e-government, e-commerce, and a digital society. An information technology complex, KOM (Knowledge Oasis Muscat), has been established, consisting of sections for information technology, incubator services, a shopping center for technology software and hardware, ICT colleges, e-government services and other relevant services. All these efforts play a vital role in the development of the Omani economy.

Keywords: ICT, ITA, CRM, SCM, ERP, KOM, SMEs, e-commerce and e-business

Procedia PDF Downloads 230
469 Interactive IoT-Blockchain System for Big Data Processing

Authors: Abdallah Al-ZoubI, Mamoun Dmour

Abstract:

The spectrum of IoT devices is becoming widely diversified, entering almost all possible fields and finding applications in industry, health, finance, logistics, and education, to name a few. The number of active IoT endpoint sensors and devices exceeded the 12 billion mark in 2021 and is expected to reach 27 billion in 2025, with over $34 billion in total market value. This sheer rise in the number and use of IoT devices brings with it considerable concerns regarding data storage, analysis, manipulation and protection. IoT blockchain-based systems have recently been proposed as a decentralized solution for large-scale data storage and protection. COVID-19 has actually accelerated the desire to utilize IoT devices, as it impacted both demand and supply and significantly affected several regions due to logistic reasons such as supply chain interruptions, shortage of shipping containers and port congestion. An IoT-blockchain system is proposed to handle big data generated by a distributed network of sensors and controllers in an interactive manner. The system is designed on the Ethereum platform and uses smart contracts, programmed in Solidity, to execute and manage data generated by IoT sensors and devices such as the Raspberry Pi 4, Raspbian, and add-on hardware security modules. The proposed system runs a number of applications hosted by a local machine used to validate transactions. It then sends data to the rest of the network through the InterPlanetary File System (IPFS) and Ethereum Swarm, forming a closed IoT ecosystem run by blockchain in which a number of distributed IoT devices can communicate and interact, thus forming a closed, controlled environment. A prototype has been deployed with three IoT handling units distributed over a wide geographical space in order to examine its feasibility, performance and costs. Initial results indicated that big IoT data retrieval and storage is feasible and interactivity is possible, provided that certain conditions of cost, speed and throughput are met.
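
As a hedged illustration of how an IoT gateway might record a reading on an Ethereum node, the sketch below uses web3.py against a local development chain; the contract address, ABI and the storeReading function are hypothetical placeholders, not the contracts of the system described in the abstract.

```python
# Illustrative sketch only: record an IoT sensor reading on an Ethereum dev node via web3.py.
# The contract address, ABI and the storeReading(...) function are hypothetical.
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("http://127.0.0.1:8545"))  # local Ethereum node (e.g. a dev chain)

contract_address = "0x0000000000000000000000000000000000000000"  # placeholder address
abi = [{
    "name": "storeReading", "type": "function", "stateMutability": "nonpayable",
    "inputs": [{"name": "sensorId", "type": "uint256"},
               {"name": "value", "type": "int256"}],
    "outputs": [],
}]
contract = w3.eth.contract(address=contract_address, abi=abi)

# Send one temperature reading (scaled to an integer) as a transaction signed by
# the node's first unlocked account (typical on a development chain).
tx_hash = contract.functions.storeReading(42, 2150).transact({"from": w3.eth.accounts[0]})
receipt = w3.eth.wait_for_transaction_receipt(tx_hash)
print("stored in block", receipt.blockNumber)
```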

Keywords: IoT devices, blockchain, Ethereum, big data

Procedia PDF Downloads 120
468 Short Text Classification Using Part of Speech Feature to Analyze Students' Feedback of Assessment Components

Authors: Zainab Mutlaq Ibrahim, Mohamed Bader-El-Den, Mihaela Cocea

Abstract:

Students' textual feedback can hold unique patterns and useful information about the learning process; it can hold information about the advantages and disadvantages of teaching methods, assessment components, facilities, and other aspects of teaching. The results of analysing such feedback can form a key point for institutions' decision makers to advance and update their systems accordingly. This paper proposes a data mining framework for analysing end-of-unit general textual feedback using the part of speech (PoS) feature with four machine learning algorithms: support vector machines, decision tree, random forest, and naive Bayes. The proposed framework has two tasks: first, to use the above algorithms to build an optimal model that automatically classifies the whole data set into two subsets, one tailored to assessment practices (assessment related) and the other containing the non-assessment-related data; second, to use the same algorithms to build an optimal model for the whole data set and the new data subsets to automatically detect their sentiment. The significance of this paper is to compare the performance of the above four algorithms using the part of speech feature against the performance of the same algorithms using the n-grams feature. The paper follows the Knowledge Discovery and Data Mining (KDDM) framework to construct the classification and sentiment analysis models, which involves understanding the assessment domain, cleaning and pre-processing the data set, selecting and running the data mining algorithm, interpreting mined patterns, and consolidating the discovered knowledge. The experimental results show that models using both features performed very well on the first task. However, on the second task, models that used the part of speech feature underperformed in comparison with models that used unigram and bigram features.
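
The following is a minimal sketch of the part-of-speech feature idea compared in the paper, using NLTK and scikit-learn; the feedback snippets, labels and classifier settings are toy placeholders, not the study's data set or tuned models.

```python
# Minimal sketch: replace each comment by its sequence of PoS tags, vectorise the tag
# n-grams, and feed them to the four classifier families named in the abstract.
import nltk
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import MultinomialNB
from sklearn.model_selection import cross_val_score

# Resource names vary slightly across NLTK versions; downloading both variants is harmless.
for res in ("punkt", "punkt_tab", "averaged_perceptron_tagger", "averaged_perceptron_tagger_eng"):
    nltk.download(res, quiet=True)

feedback = ["The exam was too long and stressful",
            "Great lectures and helpful tutorials",
            "Coursework deadlines clashed with other units",
            "The lab facilities were excellent"]
labels = [1, 0, 1, 0]  # 1 = assessment related, 0 = non-assessment related (toy labels)

def to_pos_string(text):
    # Replace each token by its PoS tag, e.g. "DT NN VBD RB JJ CC JJ".
    return " ".join(tag for _, tag in nltk.pos_tag(nltk.word_tokenize(text)))

pos_docs = [to_pos_string(t) for t in feedback]
X = CountVectorizer(ngram_range=(1, 2)).fit_transform(pos_docs)

for clf in (LinearSVC(), DecisionTreeClassifier(), RandomForestClassifier(), MultinomialNB()):
    scores = cross_val_score(clf, X, labels, cv=2)
    print(type(clf).__name__, scores.mean())
```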

Keywords: assessment, part of speech, sentiment analysis, student feedback

Procedia PDF Downloads 119
467 Chemical and Physical Properties and Biocompatibility of Ti–6Al–4V Produced by Electron Beam Rapid Manufacturing and Selective Laser Melting for Biomedical Applications

Authors: Bing–Jing Zhao, Chang-Kui Liu, Hong Wang, Min Hu

Abstract:

Electron beam rapid manufacturing (EBRM) and selective laser melting (SLM) are additive manufacturing processes that use 3D CAD data as a digital information source and energy in the form of a high-power laser beam or electron beam to create three-dimensional metal parts by fusing fine metallic powders together. Objective: The present study was conducted to evaluate the mechanical properties, the phase transformation, the corrosivity and the biocompatibility of Ti-6Al-4V produced by EBRM, SLM and forging. Method: Ti-6Al-4V alloy standard test pieces were manufactured by EBRM, SLM and forging according to AMS4999, GB/T228 and ISO 10993. The mechanical properties were analyzed using a universal testing machine. The phase transformation was analyzed by X-ray diffraction and scanning electron microscopy. The corrosivity was analyzed by an electrochemical method. The biocompatibility was analyzed by co-culturing with mesenchymal stem cells and evaluated by scanning electron microscopy (SEM) and an alkaline phosphatase assay (ALP) to assess cell adhesion and differentiation, respectively. Results: The mechanical properties, the phase transformation, the corrosivity and the biocompatibility of Ti-6Al-4V produced by EBRM and SLM were similar to those of the forged material and met the mechanical property requirements of the AMS4999 standard. The EBRM product showed an α-phase microstructure, in contrast to the α′-phase microstructure of the SLM product. Mesenchymal stem cell adhesion and differentiation were good. Conclusion: The properties of the Ti-6Al-4V alloy manufactured by the EBRM and SLM techniques can meet the medical standard according to this study, but further study is needed before they can be applied widely in clinical practice.

Keywords: 3D printing, Electron Beam Rapid Manufacturing (EBRM), Selective Laser Melting (SLM), Computer Aided Design (CAD)

Procedia PDF Downloads 435
466 Revolutionizing Autonomous Trucking Logistics with Customer Relationship Management Cloud

Authors: Sharda Kumari, Saiman Shetty

Abstract:

Autonomous trucking is just one of the numerous significant shifts impacting fleet management services. The Society of Automotive Engineers (SAE) has defined six levels of vehicle automation that have been adopted internationally, including by the United States Department of Transportation. On public highways in the United States, organizations are testing driverless vehicles with at least Level 4 automation which indicates that a human is present in the vehicle and can disable automation, which is usually done while the trucks are not engaged in highway driving. However, completely driverless vehicles are presently being tested in the state of California. While autonomous trucking can increase safety, decrease trucking costs, provide solutions to trucker shortages, and improve efficiencies, logistics, too, requires advancements to keep up with trucking innovations. Given that artificial intelligence, machine learning, and automated procedures enable people to do their duties in other sectors with fewer resources, CRM (Customer Relationship Management) can be applied to the autonomous trucking business to provide the same level of efficiency. In a society witnessing significant digital disruptions, fleet management is likewise being transformed by technology. Utilizing strategic alliances to enhance core services is an effective technique for capitalizing on innovations and delivering enhanced services. Utilizing analytics on CRM systems improves cost control of fuel strategy, fleet maintenance, driver behavior, route planning, road safety compliance, and capacity utilization. Integration of autonomous trucks with automated fleet management, yard/terminal management, and customer service is possible, thus having significant power to redraw the lines between the public and private spheres in autonomous trucking logistics.

Keywords: autonomous vehicles, customer relationship management, customer experience, autonomous trucking, digital transformation

Procedia PDF Downloads 77
465 Optimizing Energy Efficiency: Leveraging Big Data Analytics and AWS Services for Buildings and Industries

Authors: Gaurav Kumar Sinha

Abstract:

In an era marked by increasing concerns about energy sustainability, this research endeavors to address the pressing challenge of energy consumption in buildings and industries. This study delves into the transformative potential of AWS services in optimizing energy efficiency. The research is founded on the recognition that effective management of energy consumption is imperative for both environmental conservation and economic viability. Buildings and industries account for a substantial portion of global energy use, making it crucial to develop advanced techniques for analysis and reduction. This study sets out to explore the integration of AWS services with big data analytics to provide innovative solutions for energy consumption analysis. Leveraging AWS's cloud computing capabilities, scalable infrastructure, and data analytics tools, the research aims to develop efficient methods for collecting, processing, and analyzing energy data from diverse sources. The core focus is on creating predictive models and real-time monitoring systems that enable proactive energy management. By harnessing AWS's machine learning and data analytics capabilities, the research seeks to identify patterns, anomalies, and optimization opportunities within energy consumption data. Furthermore, this study aims to propose actionable recommendations for reducing energy consumption in buildings and industries. By combining AWS services with metrics-driven insights, the research strives to facilitate the implementation of energy-efficient practices, ultimately leading to reduced carbon emissions and cost savings. The integration of AWS services not only enhances the analytical capabilities but also offers scalable solutions that can be customized for different building and industrial contexts. The research also recognizes the potential for AWS-powered solutions to promote sustainable practices and support environmental stewardship.

Keywords: energy consumption analysis, big data analytics, AWS services, energy efficiency

Procedia PDF Downloads 40
464 Low Enrollment in Civil Engineering Departments: Challenges and Opportunities

Authors: Alaa Yehia, Ayatollah Yehia, Sherif Yehia

Abstract:

There is a recurring issue of low enrollments across many civil engineering departments in postsecondary institutions. While there have been moments where enrollments begin to increase, civil engineering departments across the Middle East have found themselves facing low enrollments, at around 60%, over the last five years. There are many reasons that could be attributed to this decline, such as low entry-level salaries, over-saturation of civil engineering graduates in the job market, and a lack of construction projects due to an impending or current recession. However, this recurring problem also alludes to an intrinsic issue with the curriculum. The societal shift to the usage of high technology such as machine learning (ML) and artificial intelligence (AI) demands individuals who are proficient at utilizing it. Therefore, existing curricula must adapt to this change in order to provide an education that is suitable for potential and current students. In order to provide potential solutions to this issue, this paper considers two possible implementations of high technology in the civil engineering curriculum. The first approach is to implement a course that introduces applications of high technology in civil engineering contexts, while the other approach is to intertwine applications of high technology throughout the degree. Both approaches, however, should meet the requirements of accreditation agencies. In addition to the proposed improvement in the civil engineering curriculum, different pedagogical practices must be adopted as well. The passive learning approach might not be appropriate for Gen Z students; current students, now more than ever, need to be introduced to engineering topics and practice through different learning methods to ensure they will have the necessary skills for the job market. Different learning methods that incorporate high technology applications, like AI, must be integrated throughout the curriculum to make the civil engineering degree more attractive to prospective students. Moreover, the paper provides insight into the importance and approach of adapting the civil engineering curriculum to address the current low enrollment crisis that civil engineering departments globally, but specifically in the Middle East, are facing.

Keywords: artificial intelligence (AI), civil engineering curriculum, high technology, low enrollment, pedagogy

Procedia PDF Downloads 133
463 Cold Formed Steel Sections: Analysis, Design and Applications

Authors: A. Saha Chaudhuri, D. Sarkar

Abstract:

In steel construction, there are two families of structural members: hot rolled steel and cold formed steel. Cold formed steel sections include steel sheet, strip, plate or flat bar. Cold formed steel sections are manufactured in a roll forming machine or by press brake or bending operations. Cold formed steel (CFS) is also known as light gauge steel (LGS). As cold formed steel is a sustainable material, it is widely used in green building. Cold formed steel can be recycled and reused with no degradation in structural properties. Cold formed steel structures can earn credits for green building ratings such as LEED and similar programs. Cold formed steel construction satisfies international demand for better, more efficient and affordable buildings. Cold formed steel sections are used in buildings, car bodies, railway coaches, various types of equipment, storage racks, grain bins, highway products, transmission towers, transmission poles, drainage facilities, bridge construction, etc. Various shapes of cold formed steel sections are available, such as C sections, Z sections, I sections, T sections, angle sections, hat sections, box sections, square hollow sections (SHS), rectangular hollow sections (RHS), circular hollow sections (CHS), etc. In building construction, cold formed steel is used as eave struts, purlins, girts, studs, headers, floor joists, braces, diaphragms and covering for roofs, walls and floors. Cold formed steel has a high strength-to-weight ratio and high stiffness. Cold formed steel is non-shrinking and non-creeping at ambient temperature; it is termite proof and rot proof. CFS is a durable, dimensionally stable and non-combustible material. CFS is economical in transportation and handling. At present, cold formed steel has become a competitive building material. In this paper, present research related to these applications is reviewed, and the use of CFS as a blast resistant structural system is examined.

Keywords: cold formed steel sections, applications, present research review, blast resistant design

Procedia PDF Downloads 123
462 Analysis of an IncResU-Net Model for R-Peak Detection in ECG Signals

Authors: Beatriz Lafuente Alcázar, Yash Wani, Amit J. Nimunkar

Abstract:

Cardiovascular diseases (CVDs) are the leading cause of death globally, and around 80% of sudden cardiac deaths are due to arrhythmias or irregular heartbeats. The majority of these pathologies are revealed by either short-term or long-term alterations in the electrocardiogram (ECG) morphology. The ECG is the main diagnostic tool in cardiology. It is a non-invasive, pain-free procedure that measures the heart's electrical activity and allows the detection of abnormal rhythms and underlying conditions. A cardiologist can diagnose a wide range of pathologies based on alterations in the ECG's form, but human interpretation is subjective and prone to error. Moreover, ECG records can be quite prolonged in time, which can further complicate visual diagnosis and delay disease detection. In this context, deep learning methods have risen as a promising strategy to extract relevant features and eliminate individual subjectivity in ECG analysis. They facilitate the computation of large sets of data and can provide early and precise diagnoses. Therefore, the cardiology field is one of the areas that can benefit most from the implementation of deep learning algorithms. In the present study, a deep learning algorithm is trained following a novel approach, using a combination of different databases as the training set. The goal of the algorithm is to achieve the detection of R-peaks in ECG signals. Its performance is further evaluated on ECG signals with different origins and features to test the model's ability to generalize its outcomes. The performance of the model for detection of R-peaks in clean and noisy ECGs is presented. The model is able to detect R-peaks in the presence of various types of noise and on data it has not been trained on. It is expected that this approach will increase the effectiveness and capacity of cardiologists to detect divergences in the normal cardiac activity of their patients.
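
To make the general idea concrete, the following is a minimal sketch of a 1D encoder-decoder that outputs a per-sample R-peak probability; it is a simplified stand-in for, not a reproduction of, the IncResU-Net architecture in the paper, and the window length, filter sizes and training data are assumptions.

```python
# Minimal sketch of a 1D encoder-decoder for R-peak detection (illustrative only).
# Input: a 1-lead ECG window of 2048 samples; output: per-sample R-peak probability.
from tensorflow.keras import layers, Model

def tiny_unet_1d(length=2048):
    inp = layers.Input(shape=(length, 1))

    # Encoder
    c1 = layers.Conv1D(16, 9, padding="same", activation="relu")(inp)
    p1 = layers.MaxPooling1D(2)(c1)
    c2 = layers.Conv1D(32, 9, padding="same", activation="relu")(p1)
    p2 = layers.MaxPooling1D(2)(c2)

    # Bottleneck
    b = layers.Conv1D(64, 9, padding="same", activation="relu")(p2)

    # Decoder with skip connections back to the encoder feature maps
    u2 = layers.UpSampling1D(2)(b)
    c3 = layers.Conv1D(32, 9, padding="same", activation="relu")(layers.Concatenate()([u2, c2]))
    u1 = layers.UpSampling1D(2)(c3)
    c4 = layers.Conv1D(16, 9, padding="same", activation="relu")(layers.Concatenate()([u1, c1]))

    out = layers.Conv1D(1, 1, activation="sigmoid")(c4)  # per-sample R-peak probability
    return Model(inp, out)

model = tiny_unet_1d()
model.compile(optimizer="adam", loss="binary_crossentropy")
model.summary()
```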

Keywords: arrhythmia, deep learning, electrocardiogram, machine learning, R-peaks

Procedia PDF Downloads 153
461 Information Visualization Methods Applied to Nanostructured Biosensors

Authors: Osvaldo N. Oliveira Jr.

Abstract:

The control of molecular architecture inherent in some experimental methods to produce nanostructured films has had great impact on devices of various types, including sensors and biosensors. The self-assembled monolayer (SAM) and electrostatic layer-by-layer (LbL) techniques, for example, are now routinely used to produce tailored architectures for biosensing where biomolecules are immobilized with long-lasting preserved activity. Enzymes, antigens, antibodies, peptides and many other molecules serve as the molecular recognition elements for detecting an equally wide variety of analytes. The principles of detection are also varied, including electrochemical methods, fluorescence spectroscopy and impedance spectroscopy. In this presentation an overview will be provided of biosensors made with nanostructured films to detect antibodies associated with tropical diseases and HIV, in addition to detection of analytes of medical interest such as cholesterol and triglycerides. Because large amounts of data are generated in the biosensing experiments, use has been made of computational and statistical methods to optimize performance. Multidimensional projection techniques such as Sammon's mapping have been shown to be more efficient than traditional multivariate statistical analysis in identifying small concentrations of anti-HIV antibodies and for distinguishing between blood serum samples of animals infected with two tropical diseases, namely Chagas' disease and Leishmaniasis. Optimization of biosensing may include a combination of another information visualization method, the parallel coordinate technique, with artificial intelligence methods in order to identify the most suitable frequencies for reaching higher sensitivity using impedance spectroscopy. Also discussed will be the possible convergence of technologies, through which machine learning and other computational methods may be used to treat data from biosensors within an expert system for clinical diagnosis.
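
For readers unfamiliar with the projection method mentioned above, the following is a minimal numerical sketch of Sammon's mapping obtained by directly minimising the Sammon stress; the synthetic data stand in for real biosensor (e.g. impedance-spectroscopy) measurements, and the optimiser choice is an assumption rather than the presenter's procedure.

```python
# Minimal sketch of Sammon's mapping: project high-dimensional sensing data to 2-D by
# minimising Sammon's stress with a general-purpose optimiser, starting from a PCA layout.
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.distance import pdist
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 20))          # 60 samples, 20 sensing features (synthetic)
D = np.maximum(pdist(X), 1e-12)        # pairwise distances in the original space

def sammon_stress(y_flat):
    # Sammon stress: (1 / sum D_ij) * sum (D_ij - d_ij)^2 / D_ij over all pairs.
    d = np.maximum(pdist(y_flat.reshape(-1, 2)), 1e-12)
    return np.sum((D - d) ** 2 / D) / np.sum(D)

y0 = PCA(n_components=2).fit_transform(X).ravel()   # PCA initialisation
res = minimize(sammon_stress, y0, method="L-BFGS-B")
Y = res.x.reshape(-1, 2)                            # 2-D Sammon projection
print("final stress:", res.fun)
```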

Keywords: clinical diagnosis, information visualization, nanostructured films, layer-by-layer technique

Procedia PDF Downloads 309
460 Quality Assessment of New Zealand Mānuka Honeys Using Hyperspectral Imaging Combined with Deep 1D-Convolutional Neural Networks

Authors: Hien Thi Dieu Truong, Mahmoud Al-Sarayreh, Pullanagari Reddy, Marlon M. Reis, Richard Archer

Abstract:

New Zealand mānuka honey is a honeybee product derived mainly from Leptospermum scoparium nectar. The potent antibacterial activity of mānuka honey derives principally from methylglyoxal (MGO), in addition to the hydrogen peroxide and other lesser activities present in all honey. MGO is formed from dihydroxyacetone (DHA) unique to L. scoparium nectar. Mānuka honey also has an idiosyncratic phenolic profile that is useful as a chemical marker. Authentic mānuka honey is highly valuable, but almost all honey is formed from natural mixtures of nectars harvested by a hive over a time period. Once diluted by other nectars, mānuka honey irrevocably loses value. We aimed to apply hyperspectral imaging to honey frames before bulk extraction to minimise the dilution of genuine mānuka by other honey and ensure authenticity at the source. This technology is non-destructive and suitable for an industrial setting. Chemometrics using linear Partial Least Squares (PLS) and Support Vector Machine (SVM) showed limited efficacy in interpreting chemical footprints due to large non-linear relationships between predictor and predictand in a large sample set, likely due to honey quality variability across geographic regions. Therefore, an advanced modelling approach, one-dimensional convolutional neural networks (1D-CNN), was investigated for analysing hyperspectral data for extraction of biochemical information from honey. The 1D-CNN model showed superior prediction of honey quality (R² = 0.73, RMSE = 2.346, RPD = 2.56) to PLS (R² = 0.66, RMSE = 2.607, RPD = 1.91) and SVM (R² = 0.67, RMSE = 2.559, RPD = 1.98). Classification of mono-floral mānuka honey from multi-floral and non-mānuka honey exceeded 90% accuracy for all models tried. Overall, this study reveals the potential of HSI and deep learning modelling for automating the evaluation of honey quality in frames.
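
As a hedged illustration of the modelling approach, the sketch below shows a generic 1D-CNN regressor mapping a spectrum to a quality attribute; the band count, layer sizes and data are placeholders and do not reproduce the architecture or dataset reported in the abstract.

```python
# Minimal sketch of a 1D-CNN regressor mapping a hyperspectral reflectance spectrum to a
# honey quality attribute (e.g. MGO or DHA concentration). All values are synthetic.
import numpy as np
from tensorflow.keras import layers, models

n_bands = 224                                   # hypothetical number of spectral bands
X = np.random.rand(300, n_bands, 1)             # 300 spectra (synthetic)
y = np.random.rand(300)                         # target quality attribute (synthetic)

model = models.Sequential([
    layers.Input(shape=(n_bands, 1)),
    layers.Conv1D(32, 7, activation="relu"),
    layers.MaxPooling1D(2),
    layers.Conv1D(64, 5, activation="relu"),
    layers.MaxPooling1D(2),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1),                            # regression output
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
```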

Keywords: mānuka honey, quality, purity, potency, deep learning, 1D-CNN, chemometrics

Procedia PDF Downloads 110
459 Teachers of the Pandemic: Retention, Resilience, and Training

Authors: Theoni Soublis

Abstract:

The COVID-19 pandemic created a severe interruption in teaching and learning in K-12 schools. It is essential that educational researchers, teachers, and administrators understand the long-term effects that COVID-19 had on a variety of stakeholders in education. This investigation aims to analyze the research since the beginning of the pandemic that focuses specifically on teacher retention, resilience, and training. The results of this investigation will help to inform future research in order to better understand how the institution of education can continue to be prepared, and to better prepare, for future significant shifts in the modalities of instruction. The results of this analysis will directly impact the field of education as they will broaden the scope of understanding regarding how COVID-19 impacted teaching and learning. The themes that emerge from the data analysis will directly inform policy makers, administrators, and researchers about how to best implement training and curriculum design in order to support teacher effectiveness in the classroom. Educational researchers have written about how teacher morale plummeted and how many teachers reported early burnout and higher stress levels. Teachers' stress and anxiety soared during the COVID-19 pandemic, but so have their resilience and dedication to the field of education. This research aims to understand how public-school teachers overcame teaching obstacles presented to them during COVID-19. Research has been conducted to identify a variety of information regarding the impact the pandemic has had on K-12 teachers, students, and families. This research aims to understand how teachers continued to pursue their teaching objectives without significant training in effective online instruction methods. Not many educators had even heard of the video conferencing platform Zoom before the spring of 2020. Researchers are interested in understanding how teachers used their expertise, prior knowledge, and training to institute immediate and effective online learning environments, what types of relationships teachers built with students while teaching 100% remotely, and how relationships with students changed while teaching remotely. Furthermore, did the teacher-student relationship propel teachers' resolve to be successful while teaching during a pandemic? Recent world events have significantly impacted the field of public-school teaching. The pandemic forced teachers to shift their paradigm about how to maintain high academic expectations, meet state curriculum standards, and assess students' learning gains to make data-informed decisions, while simultaneously adapting modes of instruction through multiple outlets with little to no training on remote, synchronous, asynchronous, virtual, and hybrid teaching. While it would be very interesting to study how teaching positively impacted students' learning during the pandemic, I am more interested in understanding how teachers stayed the course and maintained their mental health while dealing with the stress and pressure of teaching during COVID-19.

Keywords: teacher retention, COVID-19, teacher education, teacher morale

Procedia PDF Downloads 61
458 Normalized P-Laplacian: From Stochastic Game to Image Processing

Authors: Abderrahim Elmoataz

Abstract:

More and more contemporary applications involve data in the form of functions defined on irregular and topologically complicated domains (images, meshes, point clouds, networks, etc.). Such data are not organized as familiar digital signals and images sampled on regular lattices. However, they can be conveniently represented as graphs where each vertex represents measured data and each edge represents a relationship (connectivity or certain affinities or interactions) between two vertices. Processing and analyzing these types of data is a major challenge for both the image and machine learning communities. Hence, it is very important to transfer to graphs and networks many of the mathematical tools which were initially developed on usual Euclidean spaces and proven to be efficient for many inverse problems and applications dealing with usual image and signal domains. Historically, the main tools for the study of graphs or networks come from combinatorial and graph theory. In recent years, there has been increasing interest in the investigation of one of the major mathematical tools for signal and image analysis, namely partial differential equations (PDEs) and variational methods on graphs. The normalized p-Laplacian operator has been recently introduced to model a stochastic game called tug-of-war with noise. Part of the interest in this class of operators arises from the fact that it includes, as particular cases, the infinity Laplacian, the mean curvature operator and the traditional Laplacian operators, which have been extensively used to model and solve problems in image processing. The purpose of this paper is to introduce and study a new class of normalized p-Laplacians on graphs. The introduction is based on the extension of p-harmonious functions, introduced as a discrete approximation for both the infinity Laplacian and p-Laplacian equations. Finally, we propose to use these operators as a framework for solving many inverse problems in image processing.
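
For orientation, one commonly used formulation of the operators mentioned above is recalled below (conventions and the exact dependence of the weights on p vary across the literature, so this is a representative form rather than the paper's own definitions).

```latex
% Continuum identity: the normalized (game-theoretic) p-Laplacian mixes the
% 2-Laplacian and the normalized infinity Laplacian:
\[
  \Delta_p^{N} u \;=\; \frac{1}{p}\,\Delta u \;+\; \frac{p-2}{p}\,\Delta_\infty^{N} u,
  \qquad
  \Delta_\infty^{N} u \;=\; \Big\langle D^2u\,\tfrac{\nabla u}{|\nabla u|},\,\tfrac{\nabla u}{|\nabla u|}\Big\rangle .
\]
% Discrete p-harmonious (tug-of-war with noise) mean-value property at a graph vertex x
% with neighbourhood N(x), for weights alpha, beta >= 0, alpha + beta = 1, determined by p:
\[
  u(x) \;=\; \frac{\alpha}{2}\Big(\max_{y\sim x} u(y) + \min_{y\sim x} u(y)\Big)
        \;+\; \frac{\beta}{|N(x)|}\sum_{y\sim x} u(y).
\]
```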

Keywords: normalized p-laplacian, image processing, stochastic game, inverse problems

Procedia PDF Downloads 490
457 Comparison between the Performances of Different Boring Bars in the Internal Turning of Long Overhangs

Authors: Wallyson Thomas, Zsombor Fulop, Attila Szilagyi

Abstract:

Impact dampers are mainly used in the metal-mechanical industry in operations that generate too much vibration in the machining system. Internal turning processes become unstable during the machining of deep holes, in which the tool holder is used with long overhangs (high length-to-diameter ratios). Devices coupled with active dampers are expensive and require the use of advanced electronics. On the other hand, passive impact dampers (PID, particle impact dampers) are cheaper alternatives that are easier to adapt to the machine's fixation system, since in this case a cavity filled with particles is simply added to the structure of the tool holder. The cavity dimensions and the diameter of the spheres are pre-determined. Thus, when passive dampers are employed during the machining process, the vibration is transferred from the tip of the tool to the structure of the boring bar, where it is absorbed by the fixation system. This work proposes to compare the behaviour of a conventional solid boring bar and a boring bar with a passive impact damper in turning, using the highest possible L/D (length-to-diameter) ratio of the tool and an Easy Fix fixation system (also called a split bushing holding system). It also intends to optimize the impact absorption parameters, such as the filling percentage of the cavity and the diameter of the spheres. The test specimens were made of hardened material and machined in a computer numerical control (CNC) lathe. The laboratory tests showed that when the cavity of the boring bar is totally filled with minimally spaced spheres of the largest diameter, the gain in absorption made it possible to obtain, with an L/D equal to 6, the same surface roughness obtained when using the solid boring bar with an L/D equal to 3.4. The use of the passive particle impact damper therefore resulted in increased static stiffness and reduced deflection of the tool.

Keywords: active damper, fixation system, hardened material, passive damper

Procedia PDF Downloads 189
456 Development of Innovative Nuclear Fuel Pellets Using Additive Manufacturing

Authors: Paul Lemarignier, Olivier Fiquet, Vincent Pateloup

Abstract:

In line with the strong desire of nuclear energy players to have ever more effective products in terms of safety, research programs on E-ATF (Enhanced Accident Tolerant Fuels) that are more resilient, particularly to the loss of coolant, have been launched in all countries with nuclear power plants. Among the multitude of solutions being developed internationally, the French Alternative Energies and Atomic Energy Commission (CEA) and its partners are investigating a promising solution: CERMET (CERamic-METal) type fuel pellets made of a matrix of fissile material, uranium dioxide (UO₂), which has a low thermal conductivity, and a metallic phase with a high thermal conductivity to improve heat evacuation. Work has focused on the development by powder metallurgy of micro-structured CERMETs, characterized by networks of metallic phase embedded in the UO₂ matrix. Other types of macro-structured CERMETs, based on concepts proposed by thermal simulation studies, have been developed with a metallic phase with a specific geometry to optimize heat evacuation. This solution could not be developed using traditional processes, so additive manufacturing, which revolutionizes traditional design principles, is used to produce these innovative prototype concepts. At CEA Cadarache, work is first carried out on a non-radioactive surrogate material, alumina, in order to acquire skills and to develop the equipment, in particular the robocasting machine, an additive manufacturing technique selected for its simplicity and for the possibility of optimizing the paste formulations. A manufacturing chain was set up, covering paste production, the 3D printing of pellets, and the associated thermal post-treatment. The work leading to the first macro-structured alumina/molybdenum CERMETs will be presented. This work was carried out with the support of Framatome and EDF.

Keywords: additive manufacturing, alumina, CERMET, molybdenum, nuclear safety

Procedia PDF Downloads 56
455 Innovative Screening Tool Based on Physical Properties of Blood

Authors: Basant Singh Sikarwar, Mukesh Roy, Ayush Goyal, Priya Ranjan

Abstract:

This work combines two bodies of knowledge: the biomedical basis of blood stain formation and the fluid-mechanics insight that such stain formation depends heavily on the physical properties of blood. Moreover, biomedical research indicates that different patterns in blood stains are robust indicators of the donor's health or lack thereof. Based on these valuable insights, an innovative screening tool is proposed which can act as an aid in the diagnosis of diseases such as anemia, hyperlipidaemia, tuberculosis, blood cancer, leukemia, malaria, etc., with enhanced confidence in the proposed analysis. To realize this technique, simple, robust and low-cost micro-fluidic devices, a micro-capillary viscometer and a pendant drop tensiometer are designed and proposed to be fabricated to measure the viscosity, surface tension and wettability of various blood samples. Once prognosis and diagnosis data have been generated, automated linear and nonlinear classifiers are applied for automated reasoning and presentation of results. A support vector machine (SVM) classifies data in a linear fashion. Discriminant analysis and nonlinear embeddings are coupled with nonlinear manifold detection in the data, and decisions are made accordingly. In this way, physical properties can be used, with linear and non-linear classification techniques, for screening of various diseases in humans and cattle. Experiments are carried out to validate the physical property measurement devices. This framework can be further developed into a real-life portable disease screening and diagnostics tool. Small-scale production of screening and diagnostic devices is proposed to carry out independent tests.
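
To make the classification step concrete, the following is a minimal sketch of a linear SVM separating healthy from at-risk samples using the three physical properties named above; the feature values, class means and thresholds are synthetic placeholders, not measurements from the devices described in the abstract.

```python
# Minimal sketch: classify blood samples as healthy vs. at-risk from three measured
# physical properties. All numbers below are synthetic placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(7)
# Features: [viscosity (mPa·s), surface tension (mN/m), contact angle (deg)] -- synthetic
healthy = rng.normal([4.0, 55.0, 30.0], [0.4, 2.0, 4.0], size=(100, 3))
at_risk = rng.normal([6.0, 48.0, 45.0], [0.6, 2.5, 5.0], size=(100, 3))
X = np.vstack([healthy, at_risk])
y = np.array([0] * 100 + [1] * 100)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))   # linear SVM, as in the abstract
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```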

Keywords: blood, physical properties, diagnostic, nonlinear, classifier, device, surface tension, viscosity, wettability

Procedia PDF Downloads 357
454 Measuring the Unmeasurable: A Project of High Risk Families Prediction and Management

Authors: Peifang Hsieh

Abstract:

The prevention of child abuse has aroused serious concern in Taiwan because of the disparity between the increasing number of reported child abuse cases, which doubled over the past decade, and the scarcity of social workers. New Taipei City has the largest population in Taiwan, and over 70% of its 4 million citizens belong to migrant families, in which the needs of children can easily be neglected due to insufficient support from relatives and communities. The city therefore sees urgency for a social support system that preemptively identifies and reaches out to families at high risk of child abuse, so as to offer timely assistance and preventive measures to safeguard the welfare of the children. Big data analysis is the inspiration. As it was clear that high-risk families of child abuse have certain characteristics in common, New Taipei City decided to consolidate detailed background information from the departments of social affairs, education, labor, and health (for example, the status of parents' employment and health, and whether they are imprisoned, fugitives or under substance abuse), and to cross-reference it for accurate and prompt identification of the high-risk families in need. 'The Service Center for High-Risk Families' (SCHF) was established to integrate data across departments. By utilizing the machine learning 'random forest' method to build a risk prediction model that can detect early the families very likely to have child abuse occurrences, the SCHF marks high-risk families red, yellow, or green to indicate the urgency of intervention, so that the families concerned can be provided with timely services. The accuracy and recall rates of the above model were 80% and 65%, respectively. This prediction model can not only improve the child abuse prevention process by helping social workers differentiate the risk level of newly reported cases, which may significantly reduce their workload, but can also be referenced for future policy-making.
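
The sketch below illustrates the general shape of such a risk-tiering model: a random forest estimates the probability that a family is high risk and cases are flagged red, yellow or green by probability thresholds. The features, synthetic labels and thresholds are hypothetical stand-ins, not the SCHF's actual data or cut-offs.

```python
# Minimal sketch of a random-forest risk model with red/yellow/green tiers (illustrative).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 1000
# Hypothetical cross-departmental indicators (employment, health, imprisonment, substance abuse, ...)
X = rng.integers(0, 2, size=(n, 6)).astype(float)
y = (X[:, :3].sum(axis=1) + rng.normal(0, 0.5, n) > 2).astype(int)  # synthetic label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

proba = rf.predict_proba(X_te)[:, 1]                      # predicted risk probability
tier = np.where(proba >= 0.7, "red", np.where(proba >= 0.4, "yellow", "green"))
print("flagged red:", (tier == "red").sum(), "of", len(tier))
```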

Keywords: child abuse, high-risk families, big data analysis, risk prediction model

Procedia PDF Downloads 115
453 Uplift Segmentation Approach for Targeting Customers in a Churn Prediction Model

Authors: Shivahari Revathi Venkateswaran

Abstract:

Segmenting customers plays a significant role in churn prediction. It helps the marketing team with proactive and reactive customer retention. For reactive retention, the retention team reaches out to customers who have already shown intent to disconnect by giving them special offers. For proactive retention, the marketing team uses a churn prediction model, which ranks each customer from 1 to 100 according to their propensity to churn/disconnect. The churn prediction model is built using an XGBoost model. However, with the churn rank alone, the marketing team can only reach out to customers based on their individual ranks. Profiling different groups of customers and framing different marketing strategies for targeted groups of customers is not possible with churn ranks alone. For this, customers must be grouped into different segments based on their profiles, such as demographics and other non-controllable attributes. This helps the marketing team frame different offer groups for the targeted audience and prevent them from disconnecting (proactive retention). For segmentation, machine learning approaches like k-means clustering will not form unique customer segments that contain customers with the same attributes. This paper proposes an alternative approach that finds all the combinations of unique segments that can be formed from the user attributes and then identifies the segments that have uplift (a churn rate higher than the baseline churn rate). For this, search algorithms like fast search and recursive search are used. Further, for each segment, all customers can be targeted using their individual churn ranks from the churn prediction model. Finally, a UI (user interface) is developed for the marketing team to interactively search for the meaningful segments that are formed and target the right set of audience for future marketing campaigns to prevent them from disconnecting.
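
As a simplified illustration of the uplift-segment idea (an exhaustive enumeration rather than the fast/recursive search used in the paper), the sketch below forms every attribute combination, computes each segment's churn rate, and keeps segments whose rate exceeds the baseline; column names, data and the minimum segment size are hypothetical.

```python
# Minimal sketch: enumerate attribute-combination segments and keep those with uplift
# (churn rate above the baseline). All data and thresholds are synthetic placeholders.
from itertools import combinations
import numpy as np
import pandas as pd

rng = np.random.default_rng(11)
n = 5000
df = pd.DataFrame({
    "region":  rng.choice(["east", "west"], n),
    "tenure":  rng.choice(["<1y", "1-3y", "3y+"], n),
    "plan":    rng.choice(["basic", "premium"], n),
    "churned": rng.binomial(1, 0.12, n),
})
baseline = df["churned"].mean()
attrs = ["region", "tenure", "plan"]

uplift_segments = []
for k in range(1, len(attrs) + 1):
    for cols in combinations(attrs, k):
        rates = df.groupby(list(cols))["churned"].agg(["mean", "size"])
        lifted = rates[(rates["mean"] > baseline) & (rates["size"] >= 200)]
        uplift_segments.extend((cols, idx, row["mean"]) for idx, row in lifted.iterrows())

# Show the five segments with the highest churn rate (largest uplift over baseline).
for cols, values, rate in sorted(uplift_segments, key=lambda s: -s[2])[:5]:
    values = values if isinstance(values, tuple) else (values,)
    print(dict(zip(cols, values)), round(rate, 3))
```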

Keywords: churn prediction modeling, XGBoost model, uplift segments, proactive marketing, search algorithms, retention, k-means clustering

Procedia PDF Downloads 50
452 Motivational Profiles of the Entrepreneurial Career in Spanish Businessmen

Authors: Magdalena Suárez-Ortega, M. Fe. Sánchez-García

Abstract:

This paper focuses on the analysis of the motivations that lead people to undertake and consolidate their business. It is addressed from the framework of planned behavior theory, which recognizes the importance of the social environment and cultural values, both in the decision to undertake business and in business consolidation. Similarly, it is also based on theories of career development, which emphasize the importance of career management competencies and their connections to other vital aspects of people, including their roles within their families and other personal activities. This connects directly with the impact of entrepreneurship on the career and the professional-personal project of each individual. This study is part of the project titled Career Design and Talent Management (Ministry of Economy and Competitiveness of Spain, State Plan 2013-2016 Excellence, Ref. EDU2013-45704-P). The aim of the study is to identify and describe entrepreneurial competencies and motivational profiles in a sample of 248 Spanish entrepreneurs, considering both the consolidated profile and the profile in transition (n = 248). In order to obtain the information, the Questionnaire of Motivation and Conditioners of the Entrepreneurial Career (MCEC) was applied. This consists of 67 items and includes four scales (E1, conflicts in conciliation; E2, satisfaction with the career path; E3, motivations to undertake; E4, guidance needs). Cluster analysis (a mixed method combining k-means clustering with a hierarchical method) was carried out, characterizing the group profiles according to the categorical variables (chi-square, p = 0.05) and the quantitative variables (ANOVA). The results have allowed us to characterize three motivational profiles in relation to motivation, the degree of conciliation between personal and professional life, the degree of conflict in conciliation, levels of career satisfaction and guidance needs (in the entrepreneurial project and life-career). The first profile is formed by extrinsically motivated entrepreneurs who are professionally satisfied and without conflict between vital roles. The second profile acts with intrinsic motivation, also associated with family models, and although it shows satisfaction with the professional career, it experiences a high conflict between family and professional life. The third is composed of entrepreneurs with high extrinsic motivation, professional dissatisfaction and, at the same time, conflict in their professional life as an effect of personal roles. Ultimately, the analysis has allowed us to link the kinds of entrepreneurs to different levels of motivation, satisfaction, needs and articulation of professional and personal life, showing characterizations associated with the use of time for leisure and the care of the family. Associations related to gender, age, activity sector, environment (rural, urban, virtual), and the use of time for domestic tasks were not identified. The model obtained and its implications for the design of training and guidance actions for entrepreneurs are also discussed.

Keywords: motivation, entrepreneurial career, guidance needs, life-work balance, job satisfaction, assessment

Procedia PDF Downloads 279
451 Comparison of Feedforward Back Propagation and Self-Organizing Map for Prediction of Crop Water Stress Index of Rice

Authors: Aschalew Cherie Workneh, K. S. Hari Prasad, Chandra Shekhar Prasad Ojha

Abstract:

Due to the increase in water scarcity, the crop water stress index (CWSI) is receiving significant attention these days, especially in arid and semiarid regions, for quantifying water stress and effective irrigation scheduling. Nowadays, machine learning techniques such as neural networks are being widely used to determine the CWSI. In the present study, the performance of two artificial neural networks, namely Self-Organizing Maps (SOM) and Feed Forward-Back Propagation Artificial Neural Networks (FF-BP-ANN), is compared in determining the CWSI of a rice crop. Irrigation field experiments with varying degrees of irrigation were conducted at the irrigation field laboratory of the Indian Institute of Technology, Roorkee, during the growing season of the rice crop. The CWSI of rice was computed empirically by measuring key meteorological variables (relative humidity, air temperature, wind speed, and canopy temperature) and crop parameters (crop height and root depth). The empirically computed CWSI was compared with the SOM- and FF-BP-ANN-predicted CWSI. The upper and lower CWSI baselines were computed using multiple regression analysis. The regression analysis showed that the lower CWSI baseline for rice is a function of crop height (h), air vapor pressure deficit (AVPD), and wind speed (u), whereas the upper CWSI baseline is a function of crop height (h) and wind speed (u). The performance of SOM and FF-BP-ANN was compared by computing the Nash-Sutcliffe efficiency (NSE), index of agreement (d), root mean squared error (RMSE), and coefficient of correlation (R²). It is found that the FF-BP-ANN performs better than the SOM in predicting the CWSI of the rice crop.
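
For context, the sketch below shows the general form of the empirical CWSI computation used as the reference in such studies: the canopy-air temperature difference is normalised between a lower (non-stressed) and an upper (fully stressed) baseline that, as in the abstract, depend on crop height, vapour pressure deficit and wind speed. The regression coefficients are hypothetical placeholders, not the ones fitted in the study.

```python
# Minimal sketch of the empirical CWSI:
#   CWSI = (dT - dT_lower) / (dT_upper - dT_lower), with dT = Tcanopy - Tair.
# Baseline coefficients below are hypothetical placeholders.
import numpy as np

def cwsi(t_canopy, t_air, avpd, wind, height,
         lower_coef=(1.5, -2.0, -0.3, 0.2),    # hypothetical: a0 + a1*AVPD + a2*h + a3*u
         upper_coef=(4.0, -0.5, 0.3)):          # hypothetical: b0 + b1*h + b2*u
    dT = t_canopy - t_air
    a0, a1, a2, a3 = lower_coef
    b0, b1, b2 = upper_coef
    dT_lower = a0 + a1 * avpd + a2 * height + a3 * wind   # non-water-stressed baseline
    dT_upper = b0 + b1 * height + b2 * wind               # fully stressed baseline
    return np.clip((dT - dT_lower) / (dT_upper - dT_lower), 0.0, 1.0)

# Example: one afternoon observation (synthetic values)
print(cwsi(t_canopy=31.0, t_air=29.0, avpd=1.8, wind=1.2, height=0.6))
```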

Keywords: artificial neural networks, crop water stress index, canopy temperature, prediction capability

Procedia PDF Downloads 86
450 Digital Twins in the Built Environment: A Systematic Literature Review

Authors: Bagireanu Astrid, Bros-Williamson Julio, Duncheva Mila, Currie John

Abstract:

Digital Twins (DT) are an innovative concept of cyber-physical integration of data between an asset and its virtual replica. They originated in established industries such as manufacturing and aviation and have garnered increasing attention as a potentially transformative technology within the built environment. With the potential to support decision-making, real-time simulations, forecasting abilities and managing operations, DT do not fall under a singular scope. This makes defining and leveraging the potential uses of DT a potentially missed opportunity. Despite its recognised potential in established industries, literature on DT in the built environment remains limited. Inadequate attention has been given to the implementation of DT in construction projects, as opposed to its operational stage applications. Additionally, the absence of a standardised definition has resulted in inconsistent interpretations of DT in both industry and academia. There is a need to consolidate research to foster a unified understanding of the DT. Such consolidation is indispensable to ensure that future research is undertaken with a solid foundation. This paper aims to present a comprehensive systematic literature review on the role of DT in the built environment. To accomplish this objective, a review and thematic analysis were conducted, encompassing relevant papers from the last five years. The identified papers are categorised based on their specific areas of focus, and the content of these papers was translated into a thorough classification of DT. In characterising DT and the associated data processes identified, this systematic literature review has identified six DT opportunities specifically relevant to the built environment: facilitating collaborative procurement methods; supporting net-zero and decarbonization goals; supporting Modern Methods of Construction (MMC) and off-site manufacturing (OSM); providing increased transparency and stakeholder collaboration; supporting complex decision making (real-time simulations and forecasting abilities); and seamless integration with the Internet of Things (IoT), data analytics and other DT. Finally, a discussion of each area of research is provided. A table of definitions of DT across the reviewed literature is provided, seeking to delineate the current state of DT implementation in the built environment context. Gaps in knowledge are identified, as well as research challenges and opportunities for further advancements in the implementation of DT within the built environment. This paper critically assesses the existing literature to identify the potential of DT applications, aiming to harness the transformative capabilities of data in the built environment. By fostering a unified comprehension of DT, this paper contributes to advancing the effective adoption and utilisation of this technology, accelerating progress towards the realisation of smart cities, decarbonisation, and other envisioned roles for DT in the construction domain.

Keywords: built environment, design, digital twins, literature review

Procedia PDF Downloads 51
449 Reactivation of Hydrated Cement and Recycled Concrete Powder by Thermal Treatment for Partial Replacement of Virgin Cement

Authors: Gustave Semugaza, Anne Zora Gierth, Tommy Mielke, Marianela Escobar Castillo, Nat Doru C. Lupascu

Abstract:

The generation of Construction and Demolition Waste (CDW) has increased enormously worldwide due to the growing need for construction, renovation, and demolition of structures. Several studies have investigated the use of CDW materials in the production of new concrete and reported lower mechanical properties for the resulting concrete. Many other researchers have considered the possibility of using Hydrated Cement Powder (HCP) to replace part of the Ordinary Portland Cement (OPC), but only very few have investigated the use of Recycled Concrete Powder (RCP) from CDW. The partial replacement of OPC in making new concrete is intended to decrease the CO₂ emissions associated with OPC production. However, the RCP and HCP need treatment before they can be used to produce new concrete of the required mechanical properties. The thermal treatment method has been proven to improve HCP properties before use. Previous research has stated that, for using HCP in concrete, optimum results are achievable by heating HCP to between 400°C and 800°C. The optimum heating temperature depends on the type of cement used to make the Hydrated Cement Specimens (HCS), the crushing and heating method of the HCP, and the curing method of the Rehydrated Cement Specimens (RCS). This research assessed the quality of the recycled materials using techniques such as X-ray Diffraction (XRD), Differential Scanning Calorimetry (DSC) and thermogravimetry (TG), Scanning Electron Microscopy (SEM), and X-ray Fluorescence (XRF). These recycled materials were thermally pretreated at different temperatures from 200°C to 1000°C. Additionally, the research investigated to what extent the thermally treated recycled cement could partially replace OPC and whether the new concrete produced would achieve the required mechanical properties. The mechanical properties were evaluated on the RCS, obtained by mixing the Dehydrated Cement Powder and Dehydrated Recycled Powder (DCP and DRP) with water (w/c = 0.6 and w/c = 0.45). The research used a compressive testing machine for compressive strength testing, and a three-point bending test was used to assess the flexural strength.

Keywords: hydrated cement powder, dehydrated cement powder, recycled concrete powder, thermal treatment, reactivation, mechanical performance

Procedia PDF Downloads 127
448 Evaluation of the Impact of Telematics Use on Young Drivers’ Driving Behaviour: A Naturalistic Driving Study

Authors: WonSun Chen, James Boylan, Erwin Muharemovic, Denny Meyer

Abstract:

In Australia, drivers aged between 18 and 24 have remained at high risk of road fatality over the last decade. Despite the successful implementation of the Graduated Licensing System (GLS) that supports young drivers in the early phases of driving, the road fatality statistics for these drivers remain high. In response to these statistics, studies conducted in Australia prior to the start of the COVID-19 pandemic demonstrated the benefits of using telematics devices for improving driving behaviour. However, the impact of COVID-19 lockdowns on young drivers' driving behaviour has emerged as a global concern. Therefore, this naturalistic study aimed to evaluate and compare the driving behaviour (such as acceleration, braking, and speeding) of young drivers with the adoption of in-vehicle telematics devices. Forty-two drivers aged between 18 and 30 and residing in the Australian state of Victoria participated in this study during the period from May to October 2022. All participants drove with the telematics devices during the first 30-day period. At the start of the second 30-day period, twenty-one participants were randomised to an intervention group in which they were provided with an additional telematics ray device that gave visual feedback to the drivers, especially when they engaged in aggressive driving behaviour. The remaining twenty-one participants continued their driving journeys without the extra telematics ray device (control group). These data enabled the assessment of changes in the driving behaviour of the young drivers using a machine learning approach in Python. Results are expected to show that participants from the intervention group improve their driving behaviour compared to those from the control group. Furthermore, the telematics data enable the assessment and quantification of such improvements in driving behaviour. The findings from this study are anticipated to shed some light on the development of customised campaigns and interventions to further address the high road fatality rate among young drivers in Australia.

Keywords: driving behaviour, naturalistic study, telematics data, young drivers

Procedia PDF Downloads 101
447 Formulation Development, Process Optimization and Comparative Study of Poorly Compressible Drugs Ibuprofen and Acetaminophen Using Direct Compression and Top Spray Granulation Techniques

Authors: Abhishek Pandey

Abstract:

Ibuprofen and Acetaminophen are widely used as prescription and non-prescription medicines. Ibuprofen is mainly used in the treatment of mild to moderate pain related to headache, migraine, and postoperative conditions, and in the management of spondylitis, osteoarthritis, and rheumatoid arthritis. Acetaminophen is used as an analgesic and antipyretic drug. Ibuprofen has a high tendency to stick to the punches of a tablet press, while Acetaminophen is not ordinarily compressible into tablets because its crystals are very hard and brittle and fracture easily when compressed, producing capping and lamination defects; the wet granulation method is therefore used to make them compressible. The aim of this study was to prepare Ibuprofen and Acetaminophen tablets by direct compression and by the top spray granulation technique. In this investigation, tablets were prepared using directly compressible grade excipients: dibasic calcium phosphate, lactose anhydrous (DCL21), and microcrystalline cellulose (Avicel PH 101). In order to obtain the best or optimized formulation, nine different formulations were generated; among them, batches F7, F8, and F9 showed good results within the acceptable limits. Formulation F7 was selected as the optimized product on the basis of the dissolution study. Further, directly compressible granules of both drugs were prepared using the top spray granulation technique in a fluidized bed processor and compressed. In order to obtain the best product, process optimization was carried out by performing four trials in which parameters such as inlet air temperature, spray rate, peristaltic pump rpm, % LOD, granule properties, blending time, and hardness were optimized. Batch T3 was selected as the optimized batch on the basis of physical and chemical evaluation. Finally, the formulations prepared by both techniques were compared.

Keywords: direct compression, top spray granulation, process optimization, blending time

Procedia PDF Downloads 338
446 Arabic Light Word Analyser: Roles with Deep Learning Approach

Authors: Mohammed Abu Shquier

Abstract:

This paper introduces a word segmentation method using the novel BP-LSTM-CRF architecture for processing semantic output training. The objective of morphological analysis tools is to link a formal morpho-syntactic description to a lemma, along with morpho-syntactic information, a vocalized form, a vocalized analysis with morpho-syntactic information, and a list of paradigms. A key objective is to continuously enhance the proposed system through an inductive learning approach that considers semantic influences. The system is currently under construction and development based on data-driven learning. To evaluate the tool, an experiment on homograph analysis was conducted. The tool also encompasses the assumption of deep binary segmentation hypotheses, the arbitrary choice of trigram or n-gram continuation probabilities, language limitations, and morphology for both Modern Standard Arabic (MSA) and Dialectal Arabic (DA), which together provide the justification for updating this system. Most Arabic word analysis systems are based on phonotactic morpho-syntactic analysis of a word using lexical rules, as mainly employed in MENA language technology tools, without taking contextual or semantic morphological implications into account. Therefore, it is necessary to have an automatic analysis tool that takes into account the word sense and not only the morpho-syntactic category. Moreover, such systems are also based on statistical/stochastic models. These stochastic models, such as HMMs, have shown their effectiveness in different NLP applications: part-of-speech tagging, machine translation, speech recognition, etc. As an extension, we focus on language modeling using a Recurrent Neural Network (RNN); given that morphological analysis coverage is very low for dialectal Arabic, it is important to investigate in depth how dialect data influence the accuracy of these approaches by developing dialectal morphological processing tools, showing that dialectal variability can help improve analysis.
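To make the neural segmentation idea concrete, the sketch below shows a character-level bidirectional LSTM tagger for binary (B/I) word segmentation in PyTorch. It is an illustrative simplification, not the authors' BP-LSTM-CRF system: the CRF transition layer is omitted and replaced by a per-character softmax, and the vocabulary, tags, and toy training pair are assumptions for demonstration only.

```python
import torch
import torch.nn as nn

# Minimal character-level BiLSTM segmenter (CRF layer omitted; per-character softmax instead)
char_vocab = {"<pad>": 0, "ا": 1, "ل": 2, "ك": 3, "ت": 4, "ب": 5}
TAGS = {"B": 0, "I": 1}          # B = begins a segment, I = continues it

class BiLSTMSegmenter(nn.Module):
    def __init__(self, vocab_size, emb_dim=32, hidden=64, n_tags=2):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_tags)

    def forward(self, char_ids):              # (batch, seq_len) -> (batch, seq_len, n_tags)
        h, _ = self.lstm(self.emb(char_ids))
        return self.out(h)

# Toy example: "الكتب" segmented as prefix "ال" + stem "كتب" -> tags B I B I I
chars = torch.tensor([[1, 2, 3, 4, 5]])
tags = torch.tensor([[0, 1, 0, 1, 1]])

model = BiLSTMSegmenter(len(char_vocab))
optim = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

for _ in range(50):                            # tiny demonstration training loop
    optim.zero_grad()
    logits = model(chars)
    loss = loss_fn(logits.view(-1, 2), tags.view(-1))
    loss.backward()
    optim.step()

print(model(chars).argmax(-1))                 # predicted B/I tag per character
```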

Keywords: NLP, DL, ML, analyser, MSA, RNN, CNN

Procedia PDF Downloads 19
445 Modeling of the Dynamic Characteristics of a Spindle with Experimental Validation

Authors: Jhe-Hao Huang, Kun-Da Wu, Wei-Cheng Shih, Jui-Pin Hung

Abstract:

This study presents an investigation of the dynamic characteristics of a spindle tool system using experimental and finite element modeling approaches. As is well known, machining stability is largely determined by the dynamic characteristics of the spindle tool system. Therefore, understanding the factors affecting the dynamic behavior of a spindle tooling system is a prerequisite for controlling the final machining performance of a machine tool. To this end, a physical spindle unit was employed to assess the dynamic characteristics by vibration tests. Then, a three-dimensional finite element model of a high-speed spindle system integrated with the tool holder was created to simulate the dynamic behavior. To model the angular contact bearings, a series of spring elements was introduced between the inner and outer rings. The spring constant can be represented by the contact stiffness of the rolling bearing based on Hertz theory. The interface characteristics between the spindle nose and the tool holder taper can be quantified by comparing the measurements and predictions. According to the results obtained from experiments and finite element predictions, the vibration behavior of the spindle is dominated by the bending deformation of the spindle shaft in different modes, which is in turn determined by the stiffness of the bearings in the spindle housing. Also, the spindle unit with the tool holder shows a different dynamic behavior from that of the spindle without the tool holder. This indicates that the interface property between the tool holder and the spindle nose plays a dominant role in the dynamic characteristics of the spindle tool system. Overall, the dynamic behavior of the spindle with and without the tool holder can be successfully investigated through the finite element model proposed in this study. The prediction accuracy is determined by the modeling of the rolling interface of the ball bearings in the spindle and the interface characteristics between the tool holder and the spindle nose. Besides, identification of the interface characteristics of a ball bearing and of the spindle tool holder is important for refining the spindle tooling system to achieve optimum machining performance.
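As a rough illustration of why the bearing stiffness governs the spindle modes, the sketch below reduces the shaft to a two-degree-of-freedom lumped model (two masses coupled by a shaft stiffness and each supported by a bearing stiffness) and solves the undamped eigenvalue problem. All stiffness and mass values are illustrative assumptions, not parameters identified in the study.

```python
import numpy as np

# Lumped-parameter sketch: two shaft masses at the bearing locations,
# coupled by the shaft bending stiffness, each grounded by a bearing stiffness.
m_front, m_rear = 8.0, 6.0                        # kg, lumped shaft masses
k_shaft = 2.0e8                                   # N/m, shaft coupling stiffness
k_brg_front, k_brg_rear = 5.0e8, 3.0e8            # N/m, Hertzian bearing contact stiffnesses

M = np.diag([m_front, m_rear])
K = np.array([[k_brg_front + k_shaft, -k_shaft],
              [-k_shaft, k_brg_rear + k_shaft]])

# Undamped natural frequencies from the generalized eigenvalue problem K x = w^2 M x
eigvals = np.linalg.eigvals(np.linalg.inv(M) @ K)
freqs_hz = np.sqrt(np.sort(eigvals.real)) / (2.0 * np.pi)
print("natural frequencies [Hz]:", np.round(freqs_hz, 1))
```

Raising the bearing stiffnesses in this toy model shifts the bending-dominated frequencies upward, which is the sensitivity the finite element study examines in detail.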

Keywords: contact stiffness, dynamic characteristics, spindle, tool holder interface

Procedia PDF Downloads 272
444 Optimum Turbomachine Preliminary Selection for Power Regeneration in Vapor Compression Cool Production Plants

Authors: Sayyed Benyamin Alavi, Giovanni Cerri, Leila Chennaoui, Ambra Giovannelli, Stefano Mazzoni

Abstract:

Sustainability concerns related to primary energy consumption and pollutant emissions (including CO₂) call for methodologies that lower the power absorbed per unit of a given product. Cool production plants based on vapour compression are widely used for many applications: air conditioning, food conservation, domestic refrigerators and freezers, special industrial processes, etc. In the field of cool production, the yearly consumed primary energy is enormous; thus, saving even a small percentage of it has a large worldwide impact on energy consumption and energy sustainability. Among the various techniques for reducing the power required by a Vapour Compression Cool Production Plant (VCCPP), this paper considers the technique based on power regeneration by means of an Internal Direct Cycle (IDC). The power produced by the IDC reduces the power needed per unit of cooling power delivered by the VCCPP. The paper presents the basic concepts that lead to the development of IDCs and the proposed options for using the IDC power. Among the various turbomachine options, Best Economically Available Technologies (BEATs) have been explored; based on vehicle engine turbochargers, they have been considered for this application. According to the BEAT database and similarity rules, the best turbomachine selection leads to the minimum nominal power required by the VCCPP main compressor. Results obtained by installing the prototype in an ad hoc designed test bench will be discussed and compared with the expected performance. Forecasts for upgrading VCCPPs in various applications will be given and discussed. Savings of 4-6% are expected for air conditioning cooling plants and of 15-22% for cryogenic plants.
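A minimal sketch of similarity-rule preselection is shown below: it computes the nondimensional specific speed of the required duty and picks the closest entry from a small catalogue of turbocharger wheels. The duty point and the catalogue entries are purely hypothetical placeholders, not the BEAT database used by the authors.

```python
import math

def specific_speed(omega_rad_s, q_m3_s, dh_j_kg):
    """Nondimensional specific speed Ns = omega * sqrt(Q) / dh_s^(3/4)."""
    return omega_rad_s * math.sqrt(q_m3_s) / dh_j_kg ** 0.75

def specific_diameter(d_m, q_m3_s, dh_j_kg):
    """Nondimensional specific diameter Ds = D * dh_s^(1/4) / sqrt(Q)."""
    return d_m * dh_j_kg ** 0.25 / math.sqrt(q_m3_s)

# Required duty (illustrative): 0.05 m3/s at a 25 kJ/kg isentropic enthalpy drop, 60 krpm
omega = 60_000 * 2 * math.pi / 60
ns_req = specific_speed(omega, 0.05, 25_000.0)

# Hypothetical catalogue of turbocharger wheels: (name, diameter m, design Ns)
catalogue = [("TC-A", 0.040, 0.55), ("TC-B", 0.052, 0.70), ("TC-C", 0.066, 0.90)]
best = min(catalogue, key=lambda entry: abs(entry[2] - ns_req))

print(f"required specific speed: {ns_req:.2f}")
print(f"closest catalogue wheel: {best[0]} (design Ns = {best[2]}, D = {best[1] * 1000:.0f} mm)")
print(f"its specific diameter at duty: {specific_diameter(best[1], 0.05, 25_000.0):.2f}")
```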

Keywords: refrigeration plant, vapour pressure amplifier, compressor, expander, turbine, turbomachinery selection, power saving

Procedia PDF Downloads 409
443 A Literature Review Evaluating the Use of Online Problem-Based Learning and Case-Based Learning Within Dental Education

Authors: Thomas Turner

Abstract:

Due to the Covid-19 pandemic, alternative ways of delivering dental education were required. As a result, many institutions moved teaching online. The impact of this is poorly understood: are online problem-based learning (PBL) and case-based learning (CBL) effective, and are they suitable in the post-pandemic era? PBL and CBL are both types of interactive, group-based learning that are growing in popularity within many dental schools. PBL was first introduced in the 1960s and can be defined as learning which occurs through collaborative work to resolve a problem, whereas CBL encourages learning from clinical cases, encourages the application of knowledge, and helps prepare learners for clinical practice. The aim of this review was to evaluate the use of online PBL and CBL. A literature search was conducted using the CINAHL, Embase, PubMed and Web of Science databases. Literature was also identified from reference lists. Only studies from dental education were included. Seven suitable studies were identified. One of the studies found a high learner and facilitator satisfaction rate with online CBL. Interestingly, one study found learners preferred CBL over PBL within an online format. A study also found that, within the context of distance learning, learners preferred a hybrid curriculum including PBL over a traditional approach. A further study pointed to the limitations of PBL within an online format, such as reduced interaction, potentially hindering the development of communication skills, and the increased time and technology support required. An audience response system was also developed for use within CBL and had a high satisfaction rate. Interestingly, one study found that achievement of learning outcomes was correlated with the number of student and staff inputs within an online format, whereas another study found that the quantity of learner interactions was important to group performance but the quantity of facilitator interactions was not. This review identified generally favourable evidence for the benefits of online PBL and CBL. However, there is limited high-quality evidence evaluating these teaching methods within dental education, and there appears to be limited evidence comparing online and face-to-face versions of these sessions. The importance of the quantity of learner interactions is evident; however, the importance of the quantity of facilitator interactions appears questionable. An element of this may come down to the quality of interactions rather than just the quantity. Limitations of online learning regarding technological issues and the time required for a session are also highlighted; however, as learners and facilitators become familiar with online formats, these may become less of an issue. It is also important that learners are encouraged to interact and communicate during these sessions, to allow for the development of communication skills. Interestingly, CBL appeared to be preferred to PBL in an online format. This may reflect the simpler nature of CBL; however, further research is required to explore this finding. Online CBL and PBL appear promising, but further research is required before online formats of these sessions are widely adopted in the post-pandemic era.

Keywords: case-based learning, online, problem-based learning, remote, virtual

Procedia PDF Downloads 62
442 A Personality-Based Behavioral Analysis on eSports

Authors: Halkiopoulos Constantinos, Gkintoni Evgenia, Koutsopoulou Ioanna, Antonopoulou Hera

Abstract:

E-sports and e-gaming have emerged in recent years as internet use has become universal, and e-gamers are a new reality in our homes. The excessive involvement of young adults with e-sports and its adverse consequences have been reported in research over the past few years, but the issue has not yet been fully studied. The present research was conducted in Greece; it studies the psychological profile of video game players and provides information on the personality traits, habits, and emotional status that affect online gamers' behavior, in order to help professionals and policy makers address the problem. Three standardized self-report questionnaires were administered to participants, who were young male and female adults aged 19-26 years. The Profile of Mood States (POMS) scale was used to evaluate participants' perceptions of their everyday mood; the Eysenck Personality Questionnaire (EPQ) measured the personality features underlying habits and anticipated reactions; and the Trait Emotional Intelligence Questionnaire (TEIQue) was used to measure which cognitive parameters (gamers' beliefs) and emotional parameters (gamers' emotional abilities) mainly affected and predicted gamers' behaviors, leisure-time activities, and gaming behaviors. Data mining techniques were used to analyze the data, with machine learning algorithms implemented in the R software package. The research findings attempt to characterize the effect of personality traits, emotional status, and emotional intelligence on e-sports and gamers' behaviors, and to help policy makers and stakeholders take action, shape social policy, and prevent adverse consequences among young adults. The need for further research, prevention, and treatment strategies is also addressed.
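For illustration of the kind of analysis such questionnaire data supports, the sketch below trains a classifier to relate personality and trait-EI scores to a gaming-involvement label and reports cross-validated accuracy and feature importances. It is a Python stand-in for the R-based data mining described above; the feature names, synthetic scores, and label rule are assumptions, not the study's data or results.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200
# Hypothetical questionnaire scores (illustrative scales only)
X = np.column_stack([
    rng.normal(50, 10, n),    # POMS total mood disturbance
    rng.normal(12, 4, n),     # EPQ extraversion
    rng.normal(10, 4, n),     # EPQ neuroticism
    rng.normal(4.5, 0.8, n),  # TEIQue global trait EI
])
# Hypothetical binary label: excessive vs. moderate weekly e-gaming involvement
y = (X[:, 2] - X[:, 3] + rng.normal(0, 2, n) > 5).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean().round(2))

clf.fit(X, y)
print("feature importances:", clf.feature_importances_.round(2))
```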

Keywords: e-sports, e-gamers, personality traits, POMS, emotional intelligence, data mining, R

Procedia PDF Downloads 209