Search results for: intelligent automation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1224

114 Analysis of Fuel Adulteration Consequences in Bangladesh

Authors: Mahadehe Hassan

Abstract:

In most countries, the manufacturing, trading and distribution of gasoline and diesel fuels belong to the most important sectors of the national economy. For Bangladesh, a robust, well-functioning, secure and smartly managed national fuel distribution chain is an essential precondition for achieving the Government's top priorities in the development and modernization of transportation infrastructure, the protection of the national environment and population health and, very importantly, securing due tax revenue for the State Budget. Bangladesh is a developing country with a complex fuel supply network, a high incidence of fuel taxes and, until now, limited possibilities for applying modern, automated technologies for Government control of the national fuel market. Such an environment allows dishonest natural and legal persons and organized criminals to build and profit from illegal fuel distribution schemes and fuel illicit trade. As a result, market transparency, the country's attractiveness for foreign investment, law-abiding economic operators, national consumers, the State Budget, the Government's ability to finance development projects, and the country at large suffer significantly. Research shows that over 50% of retail petrol stations in the major agglomerations of Bangladesh sell adulterated fuels and/or cheat customers on the real volume of fuel pumped into their vehicles. Other detected forms of fuel illicit trade include the misdeclaration of fuel quantitative and qualitative parameters during internal transit and the selling of non-declared and smuggled fuels. The aim of the study is to recommend the implementation of a National Fuel Distribution Integrity Program (FDIP) in Bangladesh to address and resolve fuel adulteration and illicit trade problems. The program should be customized according to the specific needs of the country and implemented in partnership with providers of advanced technologies. FDIP should enable and further enhance the capacity of the respective Bangladesh Government authorities to identify and eliminate all forms of fuel illicit trade swiftly and resolutely. FDIP high-technology, IT and automation systems and secure infrastructures should be aimed at the following areas: (1) fuel adulteration, misdeclaration and non-declaration; (2) fuel quality; and (3) fuel volume manipulation at the retail level. Furthermore, the overall concept of FDIP delivery and its interaction with the reporting and management systems used by the Government shall be aligned with and support the objectives of the Vision 2041 and Smart Bangladesh Government programs.

Keywords: fuel adulteration, octane, kerosene, diesel, petrol, pollution, carbon emissions

Procedia PDF Downloads 81
113 AI-Enabled Smart Contracts for Reliable Traceability in the Industry 4.0

Authors: Harris Niavis, Dimitra Politaki

Abstract:

The manufacturing industry has been collecting vast amounts of data for monitoring product quality, thanks to advances in the ICT sector, and dedicated IoT infrastructure is deployed to track and trace the production line. However, industries have not yet managed to unleash the full potential of these data due to defective data collection methods and untrusted data storage and sharing. Blockchain is gaining increasing ground as a key technology enabler for Industry 4.0 and the smart manufacturing domain, as it enables the secure storage and exchange of data between stakeholders. On the other hand, AI techniques are increasingly used to detect anomalies in batch and time-series data, enabling the identification of unusual behaviors. The proposed scheme is based on smart contracts to enable automation and transparency in the data exchange, coupled with anomaly detection algorithms to enable reliable data ingestion in the system. Before sensor measurements are fed to the blockchain component and the smart contracts, the anomaly detection mechanism uniquely combines artificial intelligence models to effectively detect unusual values such as outliers and extreme deviations in the incoming data. Specifically, autoregressive integrated moving average (ARIMA), long short-term memory (LSTM) and dense autoencoders, as well as generative adversarial network (GAN) models, are used to detect both point and collective anomalies. Towards the goal of preserving the privacy of industries' information, the smart contracts employ techniques to ensure that only anonymized pointers to the actual data are stored on the ledger, while sensitive information remains off-chain. In the same spirit, blockchain technology guarantees the security of the data storage through strong cryptography, as well as the integrity of the data through the decentralization of the network and the execution of the smart contracts by the majority of the blockchain network actors. The blockchain component of the Data Traceability Software is based on the Hyperledger Fabric framework, which lays the ground for the deployment of smart contracts and APIs to expose the functionality to the end-users. The results of this work demonstrate that such a system can increase the quality of the end-products and the trustworthiness of the monitoring process in the smart manufacturing domain. The proposed AI-enabled data traceability software can be employed by industries to accurately trace and verify records about quality through the entire production chain and take advantage of the multitude of monitoring records in their databases.
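To make the anomaly-gating step concrete, the sketch below trains a small dense autoencoder on normal sensor readings and forwards a new measurement towards the smart-contract layer only if its reconstruction error stays under a threshold. It is a minimal illustration in Keras rather than the system's actual implementation; the network size, the 99th-percentile threshold and the synthetic sensor data are assumptions.

```python
# Minimal sketch (not the authors' implementation) of gating sensor readings
# with an anomaly detector before they are passed to the ledger layer.
import numpy as np
from tensorflow import keras

def build_autoencoder(n_features: int) -> keras.Model:
    # Dense autoencoder: the reconstruction error is used as the anomaly score
    inputs = keras.Input(shape=(n_features,))
    z = keras.layers.Dense(8, activation="relu")(inputs)
    z = keras.layers.Dense(3, activation="relu")(z)          # bottleneck
    z = keras.layers.Dense(8, activation="relu")(z)
    outputs = keras.layers.Dense(n_features, activation="linear")(z)
    model = keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="mse")
    return model

# Synthetic "normal" measurements (e.g. temperature, vibration, pressure)
rng = np.random.default_rng(0)
train = rng.normal(loc=[20.0, 0.5, 1.2], scale=[0.5, 0.05, 0.1], size=(2000, 3))
ae = build_autoencoder(n_features=3)
ae.fit(train, train, epochs=20, batch_size=64, verbose=0)

errors = np.mean((ae.predict(train, verbose=0) - train) ** 2, axis=1)
threshold = np.percentile(errors, 99)          # assumed anomaly cut-off

def ingest(measurement: np.ndarray) -> bool:
    """True if the reading passes the gate and may be forwarded on-chain."""
    err = float(np.mean((ae.predict(measurement[None, :], verbose=0) - measurement) ** 2))
    return err <= threshold

print(ingest(np.array([20.1, 0.52, 1.18])))    # typical reading -> likely True
print(ingest(np.array([35.0, 0.90, 0.30])))    # outlier -> likely False
```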

Keywords: blockchain, data quality, industry 4.0, product quality

Procedia PDF Downloads 193
112 Shark Detection and Classification with Deep Learning

Authors: Jeremy Jenrette, Z. Y. C. Liu, Pranav Chimote, Edward Fox, Trevor Hastie, Francesco Ferretti

Abstract:

Suitable shark conservation depends on well-informed population assessments. Direct methods such as scientific surveys and fisheries monitoring are adequate for defining population statuses, but species-specific indices of abundance and distribution coming from these sources are rare for most shark species. We can rapidly fill these information gaps by boosting media-based remote monitoring efforts with machine learning and automation. We created a database of shark images by sourcing 24,546 images covering 219 species of sharks from the web application sharkPulse and the social network Instagram. We used object detection to extract shark features and inflate this database to 53,345 images. We packaged object-detection and image classification models into a Shark Detector bundle. We developed the Shark Detector to recognize and classify sharks from videos and images using transfer learning and convolutional neural networks (CNNs). We applied these models to common data-generation approaches for sharks: boosting training datasets, processing baited remote camera footage and online videos, and data-mining Instagram. We examined the accuracy of each model and tested genus and species prediction correctness as a function of training data quantity. The Shark Detector located sharks in baited remote footage and YouTube videos with an average accuracy of 89%, and classified located subjects to the species level with 69% accuracy (n = 8 species). The Shark Detector sorted heterogeneous datasets of images sourced from Instagram with 91% accuracy and classified species with 70% accuracy (n = 17 species). Data-mining Instagram can inflate training datasets and increase the Shark Detector's accuracy, as well as facilitate archiving of historical and novel shark observations. Base accuracy of genus prediction was 68% across 25 genera. The average base accuracy of species prediction within each genus class was 85%. The Shark Detector can classify 45 species. All data-generation methods were processed without manual interaction. As media-based remote monitoring increasingly dominates methods for observing sharks in nature, we developed an open-source Shark Detector to facilitate common identification applications. Prediction accuracy of the software pipeline increases as more images are added to the training dataset. We provide public access to the software on our GitHub page.
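The following sketch illustrates the kind of transfer-learning classifier described above: a frozen pretrained CNN backbone with a new softmax head trained on shark images. It is not the published Shark Detector code; the backbone choice (ResNet50), class count and directory layout are illustrative assumptions.

```python
# Transfer-learning sketch: pretrained backbone + new classification head.
import tensorflow as tf
from tensorflow import keras

NUM_CLASSES = 25          # e.g. number of genus classes (assumption)
IMG_SIZE = (224, 224)

base = keras.applications.ResNet50(include_top=False, weights="imagenet",
                                   input_shape=IMG_SIZE + (3,), pooling="avg")
base.trainable = False    # freeze the backbone; train only the new head

inputs = keras.Input(shape=IMG_SIZE + (3,))
x = keras.applications.resnet50.preprocess_input(inputs)
x = base(x, training=False)
x = keras.layers.Dropout(0.3)(x)
outputs = keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)
model = keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Hypothetical folder layout: shark_images/<class_name>/<image>.jpg
train_ds = keras.utils.image_dataset_from_directory(
    "shark_images", image_size=IMG_SIZE, batch_size=32)
model.fit(train_ds, epochs=5)
```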

Keywords: classification, data mining, Instagram, remote monitoring, sharks

Procedia PDF Downloads 124
111 The Exploitation of the MOSES Project Outcomes on Supply Chain Optimisation

Authors: Reza Karimpour

Abstract:

Ports play a decisive role in the EU's external and internal trade, as about 74% of imports and exports and 37% of exchanges go through ports. Although ports, especially Deep Sea Shipping (DSS) ports, are integral nodes within multimodal logistic flows, Short Sea Shipping (SSS) and inland waterways are not so well integrated. The automated vessels and supply chain optimisations for sustainable shortsea shipping (MOSES) project aims to enhance the short sea shipping component of the European supply chain by addressing the vulnerabilities and strains related to the operation of large containerships. The MOSES concept can be briefly described as follows: a large containership (mother vessel) approaches a DSS port (or a large container terminal). Upon her arrival, a combined intelligent mega-system, consisting of the MOSES autonomous tugboat swarm for manoeuvring and the MOSES adapted AutoMoor system, handles her berthing. Then, container handling processes are ready to start moving containers to their destination via hinterland connections (trucks and/or rail) or to be shipped to destinations near small ports (on the mainland or islands). In the first case, containers are stored in a dedicated port area (storage area), waiting to be moved via trucks and/or rail. In the second case, containers are stacked by existing port equipment near dedicated berths of the DSS port. They are then loaded on the MOSES Innovative Feeder Vessel, equipped with the MOSES Robotic Container-Handling System, which provides (semi-)autonomous loading and unloading of the feeder. The Robotic Container-Handling System is remotely monitored through a Shore Control Centre. When the MOSES innovative feeder vessel approaches the small port, where her docking is achieved without tugboats, she automatically unloads the containers using the Robotic Container-Handling System onto the quay or directly onto trucks. As a result, ports with minimal or no available infrastructure may be effectively integrated with the container supply chain. Then, the MOSES innovative feeder vessel continues her voyage to the next small port, or she returns to the DSS port. The MOSES exploitation activity mainly aims to exploit research outcomes beyond the project, facilitate utilisation of the pilot results by others, and continue the pilot service after the project ends. At the mid-point of the project's lifetime, the exploitation plan introduces the reader to the MOSES project and its key exploitable results. It provides a plan for delivering the MOSES innovations to the market as part of the overall exploitation plan.

Keywords: automated vessels, exploitation, shortsea shipping, supply chain

Procedia PDF Downloads 115
110 MARISTEM: A COST Action Focused on Stem Cells of Aquatic Invertebrates

Authors: Arzu Karahan, Loriano Ballarin, Baruch Rinkevich

Abstract:

Marine invertebrates, comprising highly diverse phyla of multicellular organisms, display phenomena that are either not found or highly restricted in the vertebrates. These include budding, fission, fusion of ramets, and high regeneration power, such as the ability to create whole new organisms from even a tiny parental fragment, many of which are controlled by totipotent, pluripotent, and multipotent stem cells. Thus, there is much that can be learned from these organisms on the practical and evolutionary levels, recalling Darwin's words: "It is not the strongest of the species that survives, nor the most intelligent, but the one most responsive to change". The 'stem cell' notion highlights a cell that has the ability to continuously divide and differentiate into various progenitors and daughter cells. In vertebrates, adult stem cells are rare cells defined as lineage-restricted (multipotent at best), with tissue- or organ-specific activities, that are located in defined niches and further regulate the machinery of homeostasis, repair, and regeneration. They are usually categorized by their morphology, tissue of origin, plasticity, and potency. The above description does not always hold when comparing the vertebrates with marine invertebrates' stem cells, which display wider ranges of plasticity and diversity at the taxonomic and cellular levels. While marine/aquatic invertebrate stem cells (MISC) have recently attracted more scientific interest, the know-how still lags behind the attention they deserve. MISC are not only highly potent but, in many cases, abundant (e.g., up to one-third of the entire animal's cells), are not located in permanent niches, and participate in delayed-aging and whole-body regeneration phenomena, knowledge of which can be clinically relevant. Moreover, they have massive hidden potential for the discovery of new bioactive molecules that can be used for human health (antitumor, antimicrobial) and biotechnology. The MARISTEM COST Action (Stem Cells of Marine/Aquatic Invertebrates: From Basic Research to Innovative Applications) aims to connect the fragmented European MISC community. Under this scientific umbrella, the Action conceptualizes the idea of adult stem cells that do not share many properties with the vertebrates' stem cells, organizes meetings, summer schools, and workshops, stimulates young researchers, supplies technical and advisory support via short-term scientific studies, and builds new bridges between the MISC community and biomedical disciplines.

Keywords: aquatic/marine invertebrates, adult stem cell, regeneration, cell cultures, bioactive molecules

Procedia PDF Downloads 172
109 Student Feedback of a Major Curricular Reform Based on Course Integration and Continuous Assessment in Electrical Engineering

Authors: Heikki Valmu, Eero Kupila, Raisa Vartia

Abstract:

A major curricular reform was implemented at Metropolia UAS in 2014. The teaching was to be based on larger course entities and collaborative pedagogy. The most thorough reform was conducted in the department of electrical engineering and automation technology. It has already been shown that the reform has been extremely successful with respect to student progression and drop-out rate. The improvement of the results has been much more significant in this department compared to the other engineering departments, which made only minor pedagogical changes. At the beginning of the spring term of 2017, a thorough student feedback project was conducted in the department. The study consisted of thirty questions about the implementation of the curriculum, the student workload and other matters related to student satisfaction. The reply rate was more than 40%. The students were divided into four categories: first-year students [cat. 1] and students of the three different majors [categories 2-4]. These categories are valid since all the students have the same course structure in the first two semesters, after which they may freely select their major. All staff members are divided into four teams respectively. The curriculum consists of consecutive 15-credit (ECTS) courses, each taught by a group of teachers (3-5). There are to be no end exams, and continuous assessment is to be employed. In 2014 the different teacher groups were encouraged to employ innovatively different assessment methods within the given specifications. One of these methods has since been used in categories 1 and 2. These students have to complete a number of compulsory tasks each week to pass the course, and the actual grade is defined by a smaller number of tests throughout the course. The tasks vary from homework assignments, reports and laboratory exercises to larger projects, and the actual smaller tests are usually organized during the regular lecture hours. The teachers of the other two majors have been pedagogically more conservative. Student progression has been better in categories 1 and 2 than in categories 3 and 4. One of the main goals of this survey was to analyze the reasons for the difference and the assessment methods in detail, besides the general student satisfaction. The results show that in the categories following the specified assessment model more strictly, much more versatile assessment methods are used and the basic spirit of the new pedagogy is followed. Also, student satisfaction is significantly better in categories 1 and 2. It may be clearly stated that continuous assessment and teacher cooperation improve the learning outcomes, student progression and student satisfaction. Too much academic freedom seems to lead to worse results [cat. 3 and 4]. A standardized assessment model will be launched for all students in autumn 2017. This model is different from the one used so far in categories 1 and 2, allowing more flexibility to teacher groups, but it will force all the teacher groups to follow the general rules in order to improve the results and the student satisfaction further.

Keywords: continuous assessment, course integration, curricular reform, student feedback

Procedia PDF Downloads 207
108 Analysing Competitive Advantage of IoT and Data Analytics in Smart City Context

Authors: Petra Hofmann, Dana Koniel, Jussi Luukkanen, Walter Nieminen, Lea Hannola, Ilkka Donoghue

Abstract:

The Covid-19 pandemic forced people to isolate and become physically less connected. The pandemic has not only reshaped people’s behaviours and needs but also accelerated digital transformation (DT). DT of cities has become an imperative, with the outlook of converting them into smart cities in the future. Embedding digital infrastructure and smart city initiatives as part of the normal design, construction, and operation of cities provides a unique opportunity to improve the connection between people. The Internet of Things (IoT) is an emerging technology and one of the drivers of DT. It has disrupted many industries by introducing different services and business models, and IoT solutions are being applied in multiple fields, including smart cities. As IoT and data are fundamentally linked together, IoT solutions can only create value if the data generated by the IoT devices is analysed properly. By extracting relevant conclusions and actionable insights with established techniques, data analytics contributes significantly to the growth and success of IoT applications and investments. Companies must grasp DT and be prepared to redesign their offerings and business models to remain competitive in today’s marketplace. As there are many IoT solutions available today, the amount of data is tremendous. The challenge for companies is to understand which solutions to focus on, how to prioritise them, and which data can differentiate them from the competition. This paper explains how IoT and data analytics can impact competitive advantage and how companies should approach IoT and data analytics to translate them into concrete offerings and solutions in the smart city context. The study was carried out as qualitative, literature-based research. A case study is provided to validate the preservation of a company’s competitive advantage through smart city solutions. The results of the research provide insights into the different factors and considerations related to creating competitive advantage through IoT and data analytics deployment in the smart city context. Furthermore, this paper proposes a framework that merges the factors and considerations with examples of offerings and solutions in smart cities. The data collected through IoT devices, and the intelligent use of it, can create competitive advantage for companies operating in the smart city business. Companies should take into consideration the five forces of competition that shape industries and pay attention to the technological, organisational, and external contexts which define the factors for consideration of competitive advantage in the field of IoT and data analytics. Companies that can utilise these key assets in their businesses will most likely conquer the markets and have a strong foothold in the smart city business.

Keywords: data analytics, smart cities, competitive advantage, internet of things

Procedia PDF Downloads 138
107 Application of Improved Semantic Communication Technology in Remote Sensing Data Transmission

Authors: Tingwei Shu, Dong Zhou, Chengjun Guo

Abstract:

Semantic communication is an emerging form of communication that realizes intelligent communication by extracting the semantic information of data at the source, transmitting it, and recovering the data at the receiving end. It can effectively solve the problem of data transmission under conditions of large data volume, low SNR and restricted bandwidth. With the development of deep learning, semantic communication has further matured and is gradually being applied in the fields of the Internet of Things, Unmanned Aerial Vehicle cluster communication, remote sensing scenarios, etc. We propose an improved semantic communication system for the situation where the data volume is huge and the spectrum resources are limited during the transmission of remote sensing images. At the transmitting end, we need to extract the semantic information of remote sensing images, but there are some problems. The traditional semantic communication system based on a Convolutional Neural Network cannot take into account both the global and the local semantic information of the image, which results in less-than-ideal image recovery at the receiving end. Therefore, we adopt an improved Vision-Transformer-based structure as the semantic encoder instead of the mainstream CNN-based one to extract the image semantic features. In this paper, we first perform pre-processing operations on remote sensing images to improve their resolution in order to obtain images with more semantic information. We use the wavelet transform to decompose the image into high-frequency and low-frequency components, perform bilinear interpolation on the high-frequency components and bicubic interpolation on the low-frequency components, and finally perform the inverse wavelet transform to obtain the preprocessed image. We adopt the improved Vision-Transformer structure as the semantic coder to extract and transmit the semantic information of remote sensing images. The Vision-Transformer structure can better train on the huge data volume and extract better image semantic features, and it adopts the multi-layer self-attention mechanism to better capture the correlation between semantic features and reduce redundant features. Secondly, to improve the coding efficiency, we reduce the quadratic complexity of the self-attention mechanism itself to linear so as to improve the image data processing speed of the model. We conducted experimental simulations on the RSOD dataset and compared the designed system with a semantic communication system based on CNN and with image coding methods such as BPG and JPEG to verify that the method can effectively alleviate the problem of excessive data volume and improve the performance of image data communication.
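A rough sketch of the wavelet-based resolution-enhancement preprocessing described above is given below, assuming PyWavelets and OpenCV are available; the wavelet family ('haar'), the 2x upscaling factor and the file name are illustrative assumptions, not details taken from the paper.

```python
# Wavelet decomposition, sub-band interpolation, and inverse reconstruction.
import cv2
import numpy as np
import pywt

def wavelet_upscale(img: np.ndarray) -> np.ndarray:
    """Decompose, upscale the sub-bands (bicubic for the low-frequency band,
    bilinear for the high-frequency bands), then reconstruct with the inverse
    wavelet transform to obtain a higher-resolution image."""
    cA, (cH, cV, cD) = pywt.dwt2(img.astype(np.float32), "haar")
    h, w = cA.shape
    size = (2 * w, 2 * h)                     # cv2.resize expects (width, height)
    cA_up = cv2.resize(cA, size, interpolation=cv2.INTER_CUBIC)      # low-frequency
    highs = [cv2.resize(c, size, interpolation=cv2.INTER_LINEAR)     # high-frequency
             for c in (cH, cV, cD)]
    return pywt.idwt2((cA_up, tuple(highs)), "haar")

img = cv2.imread("remote_sensing_tile.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
enhanced = wavelet_upscale(img)     # roughly 2x the original resolution
```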

Keywords: semantic communication, transformer, wavelet transform, data processing

Procedia PDF Downloads 83
106 Analyzing Competitive Advantage of Internet of Things and Data Analytics in Smart City Context

Authors: Petra Hofmann, Dana Koniel, Jussi Luukkanen, Walter Nieminen, Lea Hannola, Ilkka Donoghue

Abstract:

The Covid-19 pandemic forced people to isolate and become physically less connected. The pandemic has not only reshaped people’s behaviours and needs but also accelerated digital transformation (DT). DT of cities has become an imperative, with the outlook of converting them into smart cities in the future. Embedding digital infrastructure and smart city initiatives as part of the normal design, construction, and operation of cities provides a unique opportunity to improve the connection between people. The Internet of Things (IoT) is an emerging technology and one of the drivers of DT. It has disrupted many industries by introducing different services and business models, and IoT solutions are being applied in multiple fields, including smart cities. As IoT and data are fundamentally linked together, IoT solutions can only create value if the data generated by the IoT devices is analysed properly. By extracting relevant conclusions and actionable insights with established techniques, data analytics contributes significantly to the growth and success of IoT applications and investments. Companies must grasp DT and be prepared to redesign their offerings and business models to remain competitive in today’s marketplace. As there are many IoT solutions available today, the amount of data is tremendous. The challenge for companies is to understand which solutions to focus on, how to prioritise them, and which data can differentiate them from the competition. This paper explains how IoT and data analytics can impact competitive advantage and how companies should approach IoT and data analytics to translate them into concrete offerings and solutions in the smart city context. The study was carried out as qualitative, literature-based research. A case study is provided to validate the preservation of a company’s competitive advantage through smart city solutions. The results of the research contribution provide insights into the different factors and considerations related to creating competitive advantage through IoT and data analytics deployment in the smart city context. Furthermore, this paper proposes a framework that merges the factors and considerations with examples of offerings and solutions in smart cities. The data collected through IoT devices, and the intelligent use of it, can create a competitive advantage for companies operating in the smart city business. Companies should take into consideration the five forces of competition that shape industries and pay attention to the technological, organisational, and external contexts which define the factors for consideration of competitive advantage in the field of IoT and data analytics. Companies that can utilise these key assets in their businesses will most likely conquer the markets and have a strong foothold in the smart city business.

Keywords: internet of things, data analytics, smart cities, competitive advantage

Procedia PDF Downloads 97
105 Efficacy of Pooled Sera in Comparison with Commercially Acquired Quality Control Sample for Internal Quality Control at the Nkwen District Hospital Laboratory

Authors: Diom Loreen Ndum, Omarine Njimanted

Abstract:

With increasing automation in clinical laboratories, the requirements for quality control materials have greatly increased in order to monitor daily performance. The constant use of commercial control material is not economically feasible for many developing countries because of non-availability or the high cost of the materials. Therefore, the preparation and use of in-house quality control serum would be a very cost-effective measure with respect to laboratory needs. The objective of this study was to determine the efficacy of in-house prepared pooled sera with respect to a commercially acquired control sample for routine internal quality control at the Nkwen District Hospital Laboratory. This was an analytical study; serum was taken from leftover serum samples of 5 healthy adult blood donors at the blood bank of Nkwen District Hospital, which had been screened negative for human immunodeficiency virus (HIV), hepatitis C virus (HCV) and hepatitis B surface antigen (HBsAg), and pooled together in a sterile container. From the pooled sera, sixty aliquots of 150 µL each were prepared. Forty aliquots of 150 µL each of the commercially acquired sample were prepared after reconstitution and stored in a deep freezer at −20 °C until required for analysis. The study ran from 9 June to 12 August 2022. Every day, alongside the commercial control sample, one aliquot of pooled sera was removed from the deep freezer and allowed to thaw before being analyzed for the following parameters: blood urea, serum creatinine, aspartate aminotransferase (AST), alanine aminotransferase (ALT), potassium and sodium. After obtaining the first 20 values for each parameter of the pooled sera, the mean, standard deviation and coefficient of variation were calculated, and a Levey-Jennings (L-J) chart was established. The mean and standard deviation for the commercially acquired control sample were provided by the manufacturer. The following results were observed: pooled sera had a smaller standard deviation for creatinine, urea and AST than the commercially acquired control sample. There was a statistically significant difference (p<0.05) between the mean values of creatinine, urea and AST for the in-house quality control when compared with the commercial control. The coefficients of variation for the parameters of both the commercial control and the in-house control samples were less than 30%, which is an acceptable difference. The L-J charts revealed shifts and trends (warning signs), so troubleshooting and corrective measures were taken. In conclusion, an in-house quality control sample prepared from pooled serum can be a good control sample for routine internal quality control.
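As an illustration of how the control statistics and Levey-Jennings limits can be derived from the first 20 runs, the sketch below computes the mean, standard deviation and coefficient of variation and applies two simple Westgard checks; the numbers are made-up placeholders, not the study's data.

```python
# Control statistics and simple Levey-Jennings / Westgard checks.
import numpy as np

# First 20 daily results for one analyte from the pooled-sera control (placeholder values)
creatinine_runs = np.array([78, 81, 80, 79, 82, 77, 80, 83, 79, 81,
                            80, 78, 82, 79, 81, 80, 77, 83, 80, 79], dtype=float)

mean = creatinine_runs.mean()
sd = creatinine_runs.std(ddof=1)           # sample standard deviation
cv = 100.0 * sd / mean                     # coefficient of variation (%)
print(f"mean={mean:.1f}, SD={sd:.2f}, CV={cv:.1f}%")

def check(value: float) -> str:
    """Classify a new control result against the L-J limits (1-2s warning, 1-3s reject)."""
    if abs(value - mean) > 3 * sd:
        return "reject (1-3s)"
    if abs(value - mean) > 2 * sd:
        return "warning (1-2s)"
    return "in control"

for new_value in (80.0, 84.5, 90.0):
    print(new_value, "->", check(new_value))
```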

Keywords: internal quality control, levey-jennings chart, pooled sera, shifts, trends, westgard rules

Procedia PDF Downloads 83
104 Diselenide-Linked Redox Stimuli-Responsive Methoxy Poly(Ethylene Glycol)-b-Poly(Lactide-Co-Glycolide) Micelles for the Delivery of Doxorubicin in Cancer Cells

Authors: Yihenew Simegniew Birhan, Hsieh Chih Tsai

Abstract:

The recent advancements in synthetic chemistry and nanotechnology have fostered the development of different nanocarriers for enhanced intracellular delivery of pharmaceutical agents to tumor cells. Polymeric micelles (PMs), characterized by small size, appreciable drug loading capacity (DLC), better accumulation in tumor tissue via the enhanced permeability and retention (EPR) effect, and the ability to avoid detection and subsequent clearance by the mononuclear phagocyte system, are convenient for improving the poor solubility, slow absorption and non-selective biodistribution of payloads embedded in their hydrophobic cores and, hence, enhance the therapeutic efficacy of chemotherapeutic agents. Recently, redox-responsive polymeric micelles have gained significant attention for the delivery and controlled release of anticancer drugs in tumor cells. In this study, we synthesized a redox-responsive, diselenide bond-containing amphiphilic polymer, Bi(mPEG-PLGA)-Se₂, from mPEG-PLGA and 3,3'-diselanediyldipropanoic acid (DSeDPA), using DCC/DMAP as coupling agents. The successful synthesis of the copolymers was verified by different spectroscopic techniques. Above the critical micelle concentration, the amphiphilic copolymer, Bi(mPEG-PLGA)-Se₂, self-assembled into stable micelles. The DLS data indicated that the hydrodynamic diameter of the micelles (123.9 ± 0.85 nm) was suitable for extravasation into tumor cells through the EPR effect. The drug loading content (DLC) and encapsulation efficiency (EE) of the DOX-loaded micelles were found to be 6.61 wt% and 54.9%, respectively. The DOX-loaded micelles showed an initial burst release followed by a sustained release trend, where 73.94% and 69.54% of the encapsulated DOX was released upon treatment with 6 mM GSH and 0.1% H₂O₂, respectively. The biocompatible nature of the Bi(mPEG-PLGA)-Se₂ copolymer was confirmed by the cell viability study. In addition, the DOX-loaded micelles exhibited significant inhibition against HeLa cells (44.46%) at a maximum dose of 7.5 µg/mL. The fluorescence microscope images of HeLa cells treated with 3 µg/mL (equivalent DOX concentration) revealed efficient internalization and accumulation of DOX-loaded Bi(mPEG-PLGA)-Se₂ micelles in the cytosol of cancer cells. In conclusion, the intelligent, biocompatible, and redox stimuli-responsive behavior of the Bi(mPEG-PLGA)-Se₂ copolymer marks the potential of diselenide-linked mPEG-PLGA micelles for the delivery and on-demand release of chemotherapeutic agents in cancer cells.

Keywords: anticancer drug delivery, diselenide bond, polymeric micelles, redox-responsive

Procedia PDF Downloads 112
103 Security Issues in Long Term Evolution-Based Vehicle-To-Everything Communication Networks

Authors: Mujahid Muhammad, Paul Kearney, Adel Aneiba

Abstract:

The ability of vehicles to communicate with other vehicles (V2V), the physical (V2I) and network (V2N) infrastructures, pedestrians (V2P), etc. – collectively known as V2X (Vehicle-to-Everything) – will enable a broad and growing set of applications and services within the intelligent transport domain for improving road safety, alleviating traffic congestion and supporting autonomous driving. The telecommunication research and industry communities and standardization bodies (notably 3GPP) have finally approved, in Release 14, cellular connectivity to support V2X communication (known as LTE-V2X). The LTE-V2X system will combine simultaneous connectivity across existing LTE network infrastructures via the LTE-Uu interface with direct device-to-device (D2D) communications. In order for V2X services to function effectively, a robust security mechanism is needed to ensure legal and safe interaction among authenticated V2X entities in the LTE-based V2X architecture. The characteristics of vehicular networks, and the nature of most V2X applications, which involve human safety, make it important to protect V2X messages from attacks that can result in catastrophically wrong decisions or actions, including ones affecting road safety. Attack vectors include impersonation, modification, masquerading, replay, man-in-the-middle (MitM), and Sybil attacks. In this paper, we focus our attention on LTE-based V2X security and access control mechanisms. The current LTE-A security framework provides its own access authentication scheme, the AKA protocol, for mutual authentication and other essential cryptographic operations between UEs and the network. V2N systems can leverage this protocol to achieve mutual authentication between vehicles and the mobile core network. However, this protocol faces technical challenges, such as high signaling overhead, lack of synchronization, handover delay and potential control plane signaling overloads, as well as privacy preservation issues, which cannot satisfy the security requirements of the majority of LTE-based V2X services. This paper examines these challenges and points to possible ways by which they can be addressed. One possible solution is the implementation of a distributed peer-to-peer LTE security mechanism based on the Bitcoin/Namecoin framework, to allow for security operations with minimal overhead cost, which is desirable for V2X services. The proposed architecture can ensure fast, secure and robust V2X services under the LTE network while meeting V2X security requirements.

Keywords: authentication, long term evolution, security, vehicle-to-everything

Procedia PDF Downloads 168
102 Team Teaching versus Traditional Pedagogical Method

Authors: L. M. H. Mustonen, S. A. Heikkilä

Abstract:

The focus of this paper is to describe team teaching as HAMK's pedagogical method and its impacts on teachers' work. Background: Traditionally, teaching is thought of as a job where one mostly works alone, and more and more teachers feel that their work is getting more stressful. Solutions to these problems have been sought at Häme University of Applied Sciences (hereafter HAMK). HAMK has made a strategic change towards group-oriented working among teachers. Instead of isolated study courses, there are now larger 15-credit study modules. Implementation: As examples of the method, two cases are presented: a technical project module and a summer studies module, which was integrated into the EU development project Energy Efficiency with Precise Control. In autumn 2017, the technical project will be implemented for the third time. At least three teachers are involved in it, and it is the first module of the new students' studies. Its main focus is learning the basic skills of project work. From a communication viewpoint, the students learn the basics of written and oral reporting and the basics of video reporting. According to our quality control system, the need for development is evaluated at the end of the module. There are always some differences between implementations, but the basics are the same. The other case, the summer studies of 2017, is new and part of a larger EU project. For the first time, we took a larger group of first- to third-year students from different study programmes into the summer studies. The students learned professional skills and also skills from different fields of study, international cooperation, and communication skills. Benefits and challenges: After three years, it is possible to consider what the changes mean in the everyday work of the teachers - and of course - what they mean for students and the learning process. The perspective is that of HAMK's electrical and automation study programme: At first, the change always means more work. The routines formed over many years and the course material used for years may no longer be valid. Teachers teach in modules simultaneously, often with some subjects overlapping. Finding the time to plan the modules together is often difficult. The essential benefit is that the learning outcomes have improved. This can be seen in the feedback given by both the teachers and the students. Conclusions: A new type of working environment is being born. A team of teachers designs a module that matches the objectives and ponders the answers to such questions as: What are the knowledge-based targets of the module? Which pedagogical solutions will achieve the desired results? At what point do multiple teachers instruct the class together? How is the module evaluated? How can the module be developed further for the next execution? The team discusses openly and finds the solutions. Collegiate responsibility and support are always present. These are strengthening factors of the new communal university teaching culture. They are also strong sources of pleasure in work.

Keywords: pedagogical development, summer studies, team teaching, well-being at work

Procedia PDF Downloads 111
101 Development and Application of an Intelligent Masonry Modulation in BIM Tools: Literature Review

Authors: Sara A. Ben Lashihar

Abstract:

Heritage building information modelling (HBIM) of historical masonry buildings has expanded lately to meet urgent needs for conservation and structural analysis. Masonry structures are unique features of ancient building architectures worldwide that have special cultural, spiritual, and historical significance. However, there is a research gap regarding the reliability of the HBIM modeling process for these structures. The HBIM modeling process of masonry structures faces significant challenges due to the inherent complexity and uniqueness of their structural systems. Most of these processes are based on tracing point clouds and rarely follow documents, archival records, or direct observation. The results of these techniques are highly abstracted models whose accuracy does not exceed LOD 200. The masonry assemblages, especially curved elements such as arches, vaults, and domes, are generally modeled with standard BIM components or in-place models, and the brick textures are input graphically. Hence, future investigation is necessary to establish a methodology for generating parametric masonry components automatically. These components are developed algorithmically according to mathematical and geometric accuracy and the validity of the survey data. The main aim of this paper is to provide a comprehensive review of the state of the art of the existing research on HBIM modeling of masonry structural elements and of the latest approaches to achieving parametric models that have both visual fidelity and high geometric accuracy. The paper reviewed more than 800 articles, proceedings papers, and book chapters focused on the "HBIM and Masonry" keywords from 2017 to 2021. The studies were downloaded from well-known, trusted bibliographic databases such as Web of Science, Scopus, Dimensions, and Lens. As a starting point, a scientometric analysis was carried out using the VOSviewer software. This software extracts the main keywords in these studies to retrieve the relevant works. It also calculates the strength of the relationships between these keywords. Subsequently, an in-depth qualitative review followed for the studies with the highest frequency of occurrence and the strongest links with the topic, according to the VOSviewer results. The qualitative review focused on the latest approaches and the future suggestions proposed in this research. The findings of this paper can serve as a valuable reference for researchers and BIM specialists to make more accurate and reliable HBIM models for historic masonry buildings.

Keywords: HBIM, masonry, structure, modeling, automatic, approach, parametric

Procedia PDF Downloads 170
100 Proactive SoC Balancing of Li-ion Batteries for Automotive Application

Authors: Ali Mashayekh, Mahdiye Khorasani, Thomas Weyh

Abstract:

The demand for battery electric vehicles (BEVs) is steadily increasing, and it can be assumed that electric mobility will dominate the market for individual transportation in the future. Regarding BEVs, the focus of state-of-the-art research and development is on vehicle batteries, since their properties primarily determine vehicles' characteristic parameters, such as price, driving range, charging time, and lifetime. State-of-the-art battery packs consist of invariable configurations of battery cells, connected in series and parallel. A promising alternative is battery systems based on multilevel inverters, which can alter the configuration of the battery cells during operation via semiconductor switches. The main benefit of such topologies is that a three-phase AC voltage can be generated directly from the battery pack, and no separate power inverters are required. Therefore, modular battery systems based on different multilevel inverter topologies and reconfigurable battery systems are currently under investigation. Another advantage of the multilevel concept is that the possibility of reconfiguring the battery pack allows battery cells with different states of charge (SoC) to be connected in parallel, so that low-loss balancing can take place between such cells. In contrast, in conventional battery systems, parallel-connected (hard-wired) battery cells are discharged via bleeder resistors to keep the individual SoCs of the parallel battery strands balanced, ultimately reducing the vehicle range. Different multilevel inverter topologies and reconfigurable batteries that make the aforementioned advantages possible have been described in the available literature. However, what has not yet been described is what an intelligent operating algorithm needs to look like in order to keep the SoCs of the individual battery strands of a modular battery system with integrated power electronics balanced. Therefore, this paper suggests an SoC balancing approach for Battery Modular Multilevel Management (BM3) converter systems, which can be used similarly for reconfigurable battery systems or other multilevel inverter topologies with parallel connectivity. The suggested approach attempts to utilize all converter modules simultaneously (bypassing individual modules should be avoided), because the parallel connection of adjacent modules reduces the phase strand's battery impedance. Furthermore, the presented approach tries to reduce the number of switching events when changing the switching state combination. Thereby, the ohmic battery losses and switching losses are kept as low as possible. Since no power is dissipated in any designated bleeder resistors and no designated active balancing circuitry is required, the suggested approach can be categorized as a proactive balancing approach. To verify the algorithm's validity, simulations are used.
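A much-simplified sketch of one possible selection step for such a proactive balancing algorithm is shown below: among the modules of one phase strand, it picks the combination that favours the highest-SoC modules while penalising switching events relative to the previous state. The cost weights, module count and SoC values are illustrative assumptions, not values from the paper.

```python
# Toy selection step for proactive SoC balancing in a modular battery strand.
from itertools import combinations
from typing import Sequence, Tuple

def choose_modules(soc: Sequence[float], prev_state: Sequence[int],
                   k: int, switch_weight: float = 0.01) -> Tuple[int, ...]:
    """Return the indices of the k modules to connect for the next interval."""
    best, best_cost = None, float("inf")
    for combo in combinations(range(len(soc)), k):
        state = [1 if i in combo else 0 for i in range(len(soc))]
        # favour high-SoC modules (discharging them balances the strand) ...
        soc_cost = -sum(soc[i] for i in combo)
        # ... and penalise every switch that has to change state
        switch_cost = switch_weight * sum(a != b for a, b in zip(state, prev_state))
        cost = soc_cost + switch_cost
        if cost < best_cost:
            best, best_cost = combo, cost
    return best

soc = [0.81, 0.78, 0.84, 0.80, 0.79, 0.83]     # per-module state of charge
prev = [1, 1, 0, 1, 0, 0]                      # previous switching state
print(choose_modules(soc, prev, k=3))          # e.g. (0, 2, 5)
```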

Keywords: battery management system, BEV, battery modular multilevel management (BM3), SoC balancing

Procedia PDF Downloads 122
99 Evolution of Web Development Progress in Modern Information Technology

Authors: Abdul Basit Kiani

Abstract:

Web development, the art of creating and maintaining websites, has witnessed remarkable advancements. The aim is to provide an overview of some of the cutting-edge developments in the field. Firstly, the rise of responsive web design has revolutionized user experiences across devices. With the increasing prevalence of smartphones and tablets, web developers have adapted to ensure seamless browsing experiences, regardless of screen size. This progress has greatly enhanced accessibility and usability, catering to the diverse needs of users worldwide. Additionally, the evolution of web frameworks and libraries has significantly streamlined the development process. Tools such as React, Angular, and Vue.js have empowered developers to build dynamic and interactive web applications with ease. These frameworks not only enhance efficiency but also bolster scalability, allowing for the creation of complex and feature-rich web solutions. Furthermore, the emergence of progressive web applications (PWAs) has bridged the gap between native mobile apps and web development. PWAs leverage modern web technologies to deliver app-like experiences, including offline functionality, push notifications, and seamless installation. This innovation has transformed the way users interact with websites, blurring the boundaries between traditional web and mobile applications. Moreover, the integration of artificial intelligence (AI) and machine learning (ML) has opened new horizons in web development. Chatbots, intelligent recommendation systems, and personalization algorithms have become integral components of modern websites. These AI-powered features enhance user engagement, provide personalized experiences, and streamline customer support processes, revolutionizing the way businesses interact with their audiences. Lastly, the emphasis on web security and privacy has been a pivotal area of progress. With the increasing incidents of cyber threats, web developers have implemented robust security measures to safeguard user data and ensure secure transactions. Innovations such as HTTPS protocol, two-factor authentication, and advanced encryption techniques have bolstered the overall security of web applications, fostering trust and confidence among users. Hence, recent progress in web development has propelled the industry forward, enabling developers to craft innovative and immersive digital experiences. From responsive design to AI integration and enhanced security, the landscape of web development continues to evolve, promising a future filled with endless possibilities.

Keywords: progressive web applications (PWAs), web security, machine learning (ML), web frameworks, advancement responsive web design

Procedia PDF Downloads 58
98 Enabling Self-Care and Shared Decision Making for People Living with Dementia

Authors: Jonathan Turner, Julie Doyle, Laura O’Philbin, Dympna O’Sullivan

Abstract:

People living with dementia should be at the centre of decision-making regarding goals for daily living. These goals include basic activities (dressing, hygiene, and mobility), advanced activities (finances, transportation, and shopping), and meaningful activities that promote well-being (pastimes and intellectual pursuits). However, there is limited involvement of people living with dementia in the design of technology to support their goals. A project is described that is co-designing intelligent computer-based support for, and with, people affected by dementia and their carers. The technology will support self-management, empower participation in shared decision-making with carers and help people living with dementia remain healthy and independent in their homes for longer. It includes information from the patient’s care plan, which documents medications, contacts, and the patient's wishes on end-of-life care. Importantly for this work, the plan can outline activities that should be maintained or worked towards, such as exercise or social contact. The authors discuss how to integrate care goal information from such a care plan with data collected from passive sensors in the patient’s home in order to deliver individualized planning and interventions for persons with dementia. A number of scientific challenges are addressed: First, to co-design with dementia patients and their carers computerized support for shared decision-making about their care while allowing the patient to share the care plan. Second, to develop a new and open monitoring framework with which to configure sensor technologies to collect data about whether goals and actions specified for a person in their care plan are being achieved. This is developed top-down by associating care quality types and metrics elicited from the co-design activities with types of data that can be collected within the home, from passive and active sensors, and from the patient’s feedback collected through a simple co-designed interface. These activities and data will be mapped to appropriate sensors and technological infrastructure with which to collect the data. Third, the application of machine learning models to analyze data collected via the sensing devices in order to investigate whether and to what extent activities outlined via the care plan are being achieved. The models will capture longitudinal data to track disease progression over time; as the disease progresses and captured data show that activities outlined in the care plan are not being achieved, the care plan may recommend alternative activities. Disease progression may also require care changes, and a data-driven approach can capture changes in a condition more quickly and allow care plans to evolve and be updated.
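A minimal sketch of how care-plan goals might be linked to sensor-derived metrics and checked for achievement is given below; the goal names, metrics and thresholds are illustrative assumptions about the design, not the project's actual code.

```python
# Linking care-plan goals to weekly sensor-derived metrics (illustrative only).
from dataclasses import dataclass

@dataclass
class CareGoal:
    name: str
    sensor_metric: str      # metric derived from home sensors or patient feedback
    weekly_target: float

goals = [
    CareGoal("daily walk", "front_door_exits_per_week", 7),
    CareGoal("social contact", "visitor_detections_per_week", 2),
    CareGoal("hydration", "kettle_activations_per_week", 21),
]

# One week of aggregated sensor data (would come from the monitoring framework)
week = {"front_door_exits_per_week": 5,
        "visitor_detections_per_week": 3,
        "kettle_activations_per_week": 14}

for goal in goals:
    achieved = week.get(goal.sensor_metric, 0) >= goal.weekly_target
    status = "achieved" if achieved else "not achieved - consider adapting the plan"
    print(f"{goal.name}: {status}")
```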

Keywords: care goals, decision-making, dementia, self-care, sensors

Procedia PDF Downloads 174
97 The French Ekang Ethnographic Dictionary. The Quantum Approach

Authors: Henda Gnakate Biba, Ndassa Mouafon Issa

Abstract:

Dictionaries modeled on the Western model [for languages with a tonic accent] are not suitable for tonal languages and do not account for them phonologically, which is why the [prosodic and phonological] ethnographic dictionary was designed. It is a glossary that expresses the tones and the rhythm of words. It recreates exactly the speaking or singing of a tonal language and allows a non-speaker of the language to pronounce the words as if they were a native. It is a dictionary adapted to tonal languages. It was built from ethnomusicological theorems and phonological processes, following Jean-Jacques Rousseau's 1776 hypothesis that "to say and to sing were once the same thing". Each word in the French dictionary finds its corresponding word in the ekaη language, and each ekaη word is written on a musical staff. This ethnographic dictionary is also an inventive, original and innovative research thesis: a contribution to the theoretical, musicological, ethnomusicological and linguistic conceptualization of languages, giving rise to the practice of interlocution between the social and cognitive sciences, the activities of artistic creation and the question of modeling in the human sciences: mathematics, computer science, translation automation and artificial intelligence. When you apply this theory to any folk-song text in a tonal language, you not only piece together the exact melody, rhythm, and harmonies of that song as if you knew it in advance but also the exact speech of that language. The author believes that the issue of the disappearance of tonal languages and their preservation has been structurally resolved, as well as one of the greatest cultural equations related to the composition and creation of tonal, polytonal and random music. The experimentation confirming the theory resulted in a semi-digital, semi-analog application which translates the tonal languages of Africa (about 2,100 languages) into blues, jazz, world music, polyphonic music, tonal and atonal music, and deterministic and random music. To test this application, the author uses music reading and writing software to collect data extracted from his mother tongue, already modeled in the musical staves saved in the ethnographic (semiotic) dictionary for automatic translation (volume 2 of the book). Translation is done from writing to writing, from writing to speech and from writing to music. Mode of operation: you type a text on your computer, a structured song (chorus-verse), and you ask the machine for a melody in blues, jazz, world music, variety, etc. The software runs, giving you the option to choose harmonies, and then you select your melody.

Keywords: music, language, entanglement, science, research

Procedia PDF Downloads 72
96 Smart Irrigation System for Applied Irrigation Management in Tomato Seedling Production

Authors: Catariny C. Aleman, Flavio B. Campos, Matheus A. Caliman, Everardo C. Mantovani

Abstract:

The seedling production stage is a critical point in the vegetable production system. Obtaining high-quality seedlings is a prerequisite for subsequent cropping to go well, and productivity optimization is required. Water management is an important step in agricultural production, and adequately meeting the water requirement of horticultural seedlings can provide higher quality and increase field production. The practice of irrigation is indispensable and requires a duly adjusted, high-quality irrigation system, together with a specific water management plan to meet the water demand of the crop. Irrigation management in seedling production requires a great deal of specific information, especially when it involves the use of inputs such as water-retaining polymers (hydrogels) and automation technologies for the data acquisition and irrigation system. The experiment was conducted in a greenhouse at the Federal University of Viçosa, Viçosa - MG. Tomato seedlings (Lycopersicon esculentum Mill) were produced in plastic trays of 128 cells, suspended 1.25 m above the ground. The seedlings were irrigated by 4 fixed-jet 360º micro sprinklers per tray, duly isolated by sideboards, following the methodology developed for this work. During Phase 1, in January/February 2017 (duration of 24 days), the cultivation coefficient (Kc) of seedlings grown in the presence and absence of hydrogel was evaluated with a weighing lysimeter. In Phase 2, in September 2017 (duration of 25 days), the seedlings were submitted to 4 irrigation managements (Kc, timer, 0.50 ETo, and 1.00 ETo), in the presence and absence of hydrogel, and then evaluated with respect to quality parameters. The microclimate inside the greenhouse was monitored with air temperature, relative humidity and global radiation sensors connected to a microcontroller that performed hourly calculations of reference evapotranspiration by the FAO56 Penman-Monteith standard method, modified for the long-wave radiation balance according to Walker, Aldrich, and Short (1983), and carried out the water balance and irrigation decision-making for each experimental treatment. The Kc of seedlings grown on a substrate with hydrogel (1.55) was higher than the Kc on a pure substrate (1.39). The use of the hydrogel was a differential for the production of earlier tomato seedlings, with greater final height, larger stem diameter, greater accumulation of shoot dry mass, a larger crown projection area and a greater relative growth rate. The 1.00 ETo management promoted the highest relative growth rate.
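For reference, the sketch below shows a standard daily FAO-56 Penman-Monteith calculation of ETo and the resulting crop water demand using the Kc reported for the hydrogel treatment; the hourly, long-wave-modified variant used in the experiment differs in its coefficients, and the input values here are illustrative.

```python
# Daily FAO-56 Penman-Monteith reference evapotranspiration (mm/day).
import math

def fao56_penman_monteith(t_mean, rh_mean, u2, rn, g=0.0, altitude=650.0):
    """t_mean: mean air temperature (deg C); rh_mean: relative humidity (%);
    u2: wind speed at 2 m (m/s); rn: net radiation (MJ m-2 day-1);
    g: soil heat flux (MJ m-2 day-1); altitude: site elevation (m)."""
    # Saturation vapour pressure (kPa) and the slope of its curve (kPa/degC)
    es = 0.6108 * math.exp(17.27 * t_mean / (t_mean + 237.3))
    ea = es * rh_mean / 100.0                      # actual vapour pressure
    delta = 4098.0 * es / (t_mean + 237.3) ** 2
    # Psychrometric constant from atmospheric pressure at the site altitude
    p = 101.3 * ((293.0 - 0.0065 * altitude) / 293.0) ** 5.26
    gamma = 0.000665 * p
    num = 0.408 * delta * (rn - g) + gamma * (900.0 / (t_mean + 273.0)) * u2 * (es - ea)
    den = delta + gamma * (1.0 + 0.34 * u2)
    return num / den

# Crop water demand of the seedlings: ETc = Kc * ETo (Kc = 1.55 for the hydrogel substrate)
eto = fao56_penman_monteith(t_mean=24.0, rh_mean=70.0, u2=0.5, rn=14.0)
etc = 1.55 * eto
print(f"ETo = {eto:.2f} mm/day, ETc = {etc:.2f} mm/day")
```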

Keywords: automatic system, efficiency of water use, precision irrigation, micro sprinkler

Procedia PDF Downloads 121
95 Applying Big Data Analysis to Efficiently Exploit the Vast Unconventional Tight Oil Reserves

Authors: Shengnan Chen, Shuhua Wang

Abstract:

The successful production of hydrocarbons from unconventional tight oil reserves has changed the energy landscape in North America. The oil contained within these reservoirs typically will not flow to the wellbore at economic rates without assistance from advanced horizontal wells and multi-stage hydraulic fracturing. Efficient and economic development of these reserves is a priority of society, government, and industry, especially under the current low oil prices. Meanwhile, society needs technological and process innovations to enhance oil recovery while concurrently reducing environmental impacts. Recently, big data analysis and artificial intelligence have become very popular, developing data-driven insights for better designs and decisions in various engineering disciplines. However, the application of data mining in petroleum engineering is still in its infancy. The objective of this research is to apply intelligent data analysis and data-driven models to exploit unconventional oil reserves both efficiently and economically. More specifically, a comprehensive database including the reservoir geological data, reservoir geophysical data, well completion data and production data for thousands of wells is first established to discover the valuable insights and knowledge related to tight oil reserves development. Several data analysis methods are introduced to analyze such a huge dataset. For example, K-means clustering is used to partition all observations into clusters; principal component analysis is applied to emphasize the variation and bring out strong patterns in the dataset, making the big data easy to explore and visualize; and exploratory factor analysis (EFA) is used to identify the complex interrelationships between well completion data and well production data. Different data mining techniques, such as artificial neural networks, fuzzy logic, and machine learning techniques, are then summarized, and appropriate ones are selected to analyze the database based on prediction accuracy, model robustness, and reproducibility. Advanced knowledge and patterns are finally recognized and integrated into a modified self-adaptive differential evolution optimization workflow to enhance oil recovery and maximize the net present value (NPV) of the unconventional oil resources. This research will advance the knowledge in the development of unconventional oil reserves and bridge the gap between big data and performance optimization in these formations. The newly developed data-driven optimization workflow is a powerful approach to guide field operations, leading to better designs, higher oil recovery and greater economic return for future wells in unconventional oil reserves.
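The sketch below illustrates two of the analysis steps named above, PCA to expose the dominant variation in the well database and K-means to partition the wells into groups, using scikit-learn; the feature names and data are hypothetical, not from the actual well database.

```python
# PCA + K-means on a synthetic stand-in for a tight-oil well database.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
# columns: porosity, permeability, frac stages, proppant per stage, lateral length
wells = rng.normal(size=(500, 5)) * [0.02, 0.1, 5, 50, 300] + [0.08, 0.5, 30, 250, 2000]

X = StandardScaler().fit_transform(wells)          # put features on one scale
pca = PCA(n_components=2).fit(X)                   # emphasise the main variation
scores = pca.transform(X)
print("explained variance ratio:", pca.explained_variance_ratio_)

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(scores)
labels = kmeans.labels_                            # well groups for further study
print("wells per cluster:", np.bincount(labels))
```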

Keywords: big data, artificial intelligence, enhance oil recovery, unconventional oil reserves

Procedia PDF Downloads 286
94 Analyzing the Untenable Corruption Intricate Patterns in Africa and Combating Strategies for the Efficiency of Public Sector Supply Chains

Authors: Charles Mazhazhate

Abstract:

This study interrogates and analyses the intricate kin-and-kith network patterns of corruption and mismanagement of resources prevalent in the public sector supply chains bedeviling the developing economies of Sub-Saharan Africa, with particular reference to Zimbabwe. These practices force governments to resort to harsh fiscal policies that see their citizens paying high taxes against a backdrop of incomes below the poverty datum line, which negatively affects their quality of life. The corporate world is also affected by the various tax regimes instituted. Mismanagement of resources and corrupt practices are rampant in state-owned enterprises to the extent that institutional policies, procedures, and practices are often flouted for the benefit of a clique of individuals. These practices are interwoven with kith-and-kin blood relations in organizations where appointments to critical positions are based on ascribed status. People no longer place value in making their systems work, thereby violating corporate governance principles. Greediness and 'unholy friendship connections' are instrumental in fueling the employment of people who know each other from their discrete backgrounds. Such appointments, or socio-metric unions, are meant to protect those at the top by giving them intelligence information through spying on what other subordinates are doing inside and outside the organization. This practice has led to the underperformance of organizations, as those employees with connections and their favorites in the upper echelons connive to abuse resources for their own benefit. Even when culprits are known, no draconian measures are employed as a deterrent. Public value along public sector supply chains is lost. The study used a descriptive case study research design on fifty organizations in Zimbabwe, mainly state-owned enterprises. Both qualitative and quantitative instruments were used, and both snowball and random sampling techniques were applied. The study found that in all fifty SOEs there were employees in key positions related to top management, with tentacles feeding into the law enforcement agencies, the judiciary, the security systems, and the executive. In public, such employees appear not to know each other, yet they are involved in dirty scams and then share the proceeds with top people behind the scenes. The study also established that the same employees do not have the necessary competencies, qualifications, abilities, and capabilities to be in those positions. This culture is now so entrenched that it is difficult to break. The study recommends the recruitment of all employees through an independent employment bureau to ensure strategic fit.

Keywords: corruption, state owned enterprises, strategic fit, public sector supply chains, efficiency

Procedia PDF Downloads 165
93 Machine Learning in Patent Law: How Genetic Breeding Algorithms Challenge Modern Patent Law Regimes

Authors: Stefan Papastefanou

Abstract:

Artificial intelligence (AI) is an interdisciplinary field of computer science with the aim of creating intelligent machine behavior. Early approaches to AI were configured to operate in very constrained environments where the behavior of the AI system was previously determined by formal rules. Knowledge was presented as a set of rules that allowed the AI system to determine the results for specific problems: a structure of if-else rules that could be traversed to find a solution to a particular problem or question. However, such rule-based systems typically have not been able to generalize beyond the knowledge provided. All over the world, and especially in IT-heavy jurisdictions such as the United States, the European Union, Singapore, and China, machine learning has developed into an immense asset, and its applications are becoming more and more significant. It has to be examined how such products of machine learning models can and should be protected by IP law, and for the purpose of this paper by patent law specifically, since it is the IP law regime closest to technical inventions and computing methods in technical applications. Genetic breeding models are currently less popular than recursive neural networks and deep learning, but this approach can be more easily described by reference to the evolution of natural organisms, and with increasing computational power the genetic breeding method, as a subset of the evolutionary algorithm models, is expected to regain popularity. The research method focuses on the patentability (according to the world's most significant patent law regimes such as China, Singapore, the European Union, and the United States) of AI inventions and machine learning. Questions of the technical nature of the problem to be solved, the inventive step as such, and the question of the state of the art and the associated obviousness of the solution arise in current patenting processes. Most importantly, and the key focus of this paper, is the problem of patenting inventions that are themselves developed through machine learning. The inventor of a patent application must be a natural person or a group of persons according to the current legal situation in most patent law regimes. In order to be considered an 'inventor', a person must actually have developed part of the inventive concept. The mere application of machine learning or an AI algorithm to a particular problem should not be construed as the algorithm contributing to a part of the inventive concept. However, when machine learning or the AI algorithm has contributed to a part of the inventive concept, there is currently a lack of clarity regarding the ownership of artificially created inventions. Since not only all European patent law regimes but also the Chinese and Singaporean patent law approaches use identical terms, this paper ultimately offers a comparative analysis of the most relevant patent law regimes.
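For readers unfamiliar with the technique, the sketch below shows a minimal genetic-breeding (evolutionary) loop of the kind discussed: selection of the fittest individuals, crossover, and mutation over successive generations. The toy fitness function, population size, and mutation rate are illustrative assumptions only, not any model from the paper.

```python
import random

def fitness(genome):
    # toy objective: prefer bit strings whose bits sum to the target value 12
    return -abs(sum(genome) - 12)

def mutate(genome, rate=0.05):
    # flip each bit with a small probability
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    # single-point crossover of two parent genomes
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

random.seed(0)
population = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]

for generation in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                        # selection: keep the fittest individuals
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(len(population) - len(parents))]
    population = parents + children                  # the "bred" generation replaces the old one

best = max(population, key=fitness)
print("best fitness:", fitness(best), "genome:", best)
```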

Keywords: algorithms, inventor, genetic breeding models, machine learning, patentability

Procedia PDF Downloads 111
92 How Consumers Perceive Health and Nutritional Information and How It Affects Their Purchasing Behavior: Comparative Study between Colombia and the Dominican Republic

Authors: Daniel Herrera Gonzalez, Maria Luisa Montas

Abstract:

Several factors affect consumer decision-making regarding the use of front-of-package labels to identify benefits for personal well-being. Currently, several labels help influence or change the purchase decision for food products. These labels communicate the impact that food has on human health; therefore, consumers are more critical and intelligent when buying and consuming food products. The research explores the association between front-of-pack labeling and food choice; the association between label content and purchasing decisions is complex and influenced by different factors, including the packaging itself. The main objective of this study was to examine the perception of health labels and nutritional declarations and their influence on buying decisions in the non-alcoholic beverages sector. This comparative study of two developing countries shows how consumers take nutritional labels into account when deciding to buy certain foods. The research applied a quantitative methodology with a correlational scope in order to analyze the degree of association between variables. Likewise, confirmatory factor analysis (CFA) and structural equation modeling (SEM), a powerful multivariate technique, were used as statistical methods to find the relationships between observable and unobservable variables. The main finding of this research was the identification of three large consumer groups and their perceptions of, and responses to, nutritional and wellness labels. The first group is characterized by high interest in the mandatory nutritional information label and agrees that all products should carry it, given its importance in preventing illness in the consumer. Likewise, this group almost always considers the brand, the size, the list of ingredients, and the nutritional information of the food, as well as the effect of these on health. The second group stands out for showing some interest in product labels as a factor in the purchase decision and for almost always taking into account characteristics such as size, price, and ingredients when deciding on consumption, but it is almost never interested in the effect of these products on health or nutrition. The third group differs from the others by being more neutral regarding nutritional information labels and less interested in product characteristics in the purchase decision, as well as in their influence on health and nutrition. This new knowledge is essential for companies that manufacture and market food products, because it provides them with information to adapt to, or anticipate, new laws in developing countries as well as the new needs of health-conscious consumers when they buy food products.

Keywords: healthy labels, consumer behavior, nutritional information, healthy products

Procedia PDF Downloads 110
91 Advancing Sustainable Seawater Desalination Technologies: Exploring the Sub-Atmospheric Vapor Pipeline (SAVP) and Energy-Efficient Solution for Urban and Industrial Water Management in Smart, Eco-Friendly, and Green Building Infrastructure

Authors: Mona Shojaei

Abstract:

The Sub-Atmospheric Vapor Pipeline (SAVP) introduces a distinct approach to seawater desalination with promising applications in both land-based and industrial sectors. SAVP systems exploit the temperature difference between a hot source and a cold environment to facilitate efficient vapor transfer, offering substantial benefits in diverse industrial and field applications. This approach incorporates dynamic boundary conditions, where the temperatures of the hot and cold sources vary over time, particularly in natural and industrial environments. Such variations critically influence the convection and diffusion processes, introducing challenges that require refinement of the convection-diffusion equation and derivation of the temperature profile along the pipeline through advanced engineering mathematics. This study formulates vapor temperature as a function of time and pipe length using two mathematical approaches: eigenfunction expansion and the Green's function method. Combining detailed theoretical modeling, mathematical simulations, and extensive field and industrial tests, this research underscores the SAVP system's scalability for real-world applications. Results reveal a high degree of accuracy, highlighting SAVP's significant potential for energy conservation and environmental sustainability. Furthermore, the integration of SAVP technology within smart and green building systems creates new opportunities for sustainable urban water management. By capturing and repurposing vapor for non-potable uses such as irrigation, greywater recycling, and ecosystem support in green spaces, SAVP aligns with the principles of smart and green buildings. Smart buildings emphasize efficient resource management, enhanced system control, and automation for optimal energy and water use, while green buildings prioritize environmental impact reduction and resource conservation. SAVP technology bridges both paradigms, enhancing water self-sufficiency and reducing reliance on external water supplies. The sustainable and energy-efficient properties of SAVP make it a vital component in resilient infrastructure development, addressing urban water scarcity while promoting eco-friendly living. This dual alignment with smart and green building goals positions SAVP as a transformative solution in the pursuit of sustainable urban resource management.
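A numerical sketch of the temperature profile T(x, t) governed by a one-dimensional convection-diffusion equation with time-varying boundary temperatures, of the kind refined in the study. The paper derives analytical profiles via eigenfunctions and Green's functions; the explicit finite-difference scheme below and all parameter values (pipe length, diffusivity, vapor velocity, boundary profiles) are illustrative assumptions.

```python
import numpy as np

L, nx = 10.0, 101                  # pipeline length (m) and number of grid points
alpha, u = 1e-3, 0.05              # assumed thermal diffusivity (m^2/s) and vapour velocity (m/s)
dx = L / (nx - 1)
dt = min(0.4 * dx**2 / alpha, 0.4 * dx / u)   # keep the explicit scheme stable
x = np.linspace(0.0, L, nx)

T = np.full(nx, 25.0)              # initial temperature everywhere (degC)

def t_hot(t):                      # hot-source temperature varying in time (assumed profile)
    return 70.0 + 10.0 * np.sin(2 * np.pi * t / 3600.0)

def t_cold(t):                     # cold-environment temperature (assumed constant here)
    return 20.0

t = 0.0
for _ in range(20000):
    T[0], T[-1] = t_hot(t), t_cold(t)                     # dynamic boundary conditions
    diff = alpha * (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
    conv = -u * (T[1:-1] - T[:-2]) / dx                   # upwind convection term (u > 0)
    T[1:-1] += dt * (diff + conv)
    t += dt

print("temperature along the pipe (degC):", np.round(T[::20], 1))
```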

Keywords: sub-atmospheric vapor pipeline, seawater desalination, energy efficiency, vapor transfer dynamics, mathematical modeling, sustainable water solutions, smart buildings

Procedia PDF Downloads 20
90 Artificial Law: Legal AI Systems and the Need to Satisfy Principles of Justice, Equality and the Protection of Human Rights

Authors: Begum Koru, Isik Aybay, Demet Celik Ulusoy

Abstract:

The discipline of law is quite complex and has its own terminology. Apart from written legal rules, there is also living law, which refers to legal practice. Basic legal rules aim at the happiness of individuals in social life and have different characteristics in different branches, such as public or private law. On the other hand, law is a national phenomenon. The law of one nation and the legal system applied on the territory of another nation may be completely different. People who are experts in a particular field of law in one country may have insufficient expertise in the law of another country. Today, in addition to the local nature of law, international and even supranational legal rules are applied in order to protect basic human values and ensure the protection of human rights around the world. Systems that offer algorithmic solutions to legal problems using artificial intelligence (AI) tools may well produce very meaningful results in terms of human rights. However, the algorithms to be used should not be developed by computer experts alone; they also need the contribution of people who are familiar with the law, values, judicial decisions, and even the social and political culture of the society to which they will provide solutions. Otherwise, even if an algorithm works perfectly, it may not be compatible with the values of the society in which it is applied. The latest developments involving the use of AI techniques in legal systems indicate that artificial law will emerge as a new field in the discipline of law. More AI systems are already being applied in the field of law, with examples such as prediction of judicial decisions, text summarization, decision support systems, and classification of documents. Algorithms for legal systems employing AI tools, especially in the field of prediction of judicial decisions and decision support, have the capacity to create automatic decisions instead of judges. When the judge is removed from this equation, artificial-intelligence-made law, created by an intelligent algorithm on its own, emerges, whether the domain is national or international law. In this work, the aim is to make a general analysis of this new topic. Such an analysis needs both a literature survey and a perspective from computer experts' and lawyers' points of view. In some societies, the use of prediction or decision support systems may be useful for integrating international human rights safeguards. In this case, artificial law can serve to produce more comprehensive and more human-rights-protective results than written or living law. In non-democratic countries, it may even be thought that direct decisions and artificial-intelligence-made law would be more protective than a mere decision 'support' system. Since the values of law are directed towards 'human happiness or well-being', AI algorithms should always be capable of serving this purpose and be based on the rule of law, the principles of justice and equality, and the protection of human rights.
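As a small illustration of one of the applications listed above (classification of documents), the sketch below trains a bag-of-words text classifier on a handful of invented example sentences; the documents, labels, and pipeline choices are placeholders, not real case law or any system discussed in the paper.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented placeholder documents and labels for two areas of law
documents = [
    "the tenant failed to pay rent and the landlord seeks eviction",
    "the employee was dismissed without notice and claims compensation",
    "the landlord did not return the deposit after the lease ended",
    "the employer breached the employment contract by withholding wages",
]
labels = ["housing", "employment", "housing", "employment"]

# TF-IDF features followed by a linear classifier
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
classifier.fit(documents, labels)

query = ["the employee claims compensation after dismissal without notice"]
print(classifier.predict(query))   # most likely 'employment' given the training examples
```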

Keywords: AI and law, artificial law, protection of human rights, AI tools for legal systems

Procedia PDF Downloads 78
89 Conflict Resolution in Fuzzy Rule Base Systems Using Temporal Modalities Inference

Authors: Nasser S. Shebka

Abstract:

Fuzzy logic is used in complex adaptive systems where classical tools for representing knowledge are unproductive. Nevertheless, the incorporation of fuzzy logic, as is the case with all artificial intelligence tools, raises inconsistencies and limitations when dealing with increasingly complex systems and rules that apply to real-life situations; it hinders the inference process of such systems and also faces inconsistencies between the inferences generated by fuzzy rules in complex or imprecise knowledge-based systems. The use of fuzzy logic enhanced the capability of knowledge representation in applications that require fuzzy representation of truth values or similar multi-valued constant parameters derived from multi-valued logic, which sets the basis for the three basic t-norms and their derived connectives; these are continuous functions, and any other continuous t-norm can be described as an ordinal sum of these three basic ones. Some of the attempts to solve this dilemma altered fuzzy logic by means of non-monotonic logic, which is used to deal with the defeasible inference of expert system reasoning, for example to allow for inference retraction upon additional data. However, even the introduction of non-monotonic fuzzy reasoning faces a major issue of conflict resolution, for which many principles have been introduced, such as the specificity principle and the weakest-link principle. The aim of our work is to improve the logical representation and functional modelling of AI systems by presenting a method for resolving existing and potential rule conflicts through the representation of temporal modalities within defeasible-inference rule-based systems. Our paper investigates the possibility of resolving fuzzy rule conflicts in a non-monotonic fuzzy-reasoning-based system by introducing temporal modalities and Kripke's general weak modal logic operators in order to expand its knowledge representation capabilities by means of flexibility in classifying newly generated rules and, hence, resolving potential conflicts between these fuzzy rules. We were able to address this problem by restructuring the inference process of the fuzzy rule-based system. This is achieved by using time-branching temporal logic in combination with restricted first-order logic quantifiers, as well as propositional logic, to represent classical temporal modality operators. The resulting findings not only enhance the flexibility of the inference process of complex rule-based systems but also contribute to the fundamental methods of building rule bases in a manner that allows for a wider range of applicable real-life situations from a quantitative and qualitative knowledge representation perspective.
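For reference, the three basic continuous t-norms mentioned above (minimum, product, and Łukasiewicz) can be written down directly; any other continuous t-norm can be represented as an ordinal sum of these. The truth degrees in the example are arbitrary.

```python
def t_minimum(a, b):
    return min(a, b)                    # Goedel / minimum t-norm

def t_product(a, b):
    return a * b                        # product t-norm

def t_lukasiewicz(a, b):
    return max(0.0, a + b - 1.0)        # Lukasiewicz t-norm

# Conjunction of two fuzzy rule antecedents with truth degrees 0.7 and 0.6 under each t-norm
for name, t in [("minimum", t_minimum), ("product", t_product), ("Lukasiewicz", t_lukasiewicz)]:
    print(f"{name:12s} 0.7 AND 0.6 = {t(0.7, 0.6):.2f}")
```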

Keywords: fuzzy rule-based systems, fuzzy tense inference, intelligent systems, temporal modalities

Procedia PDF Downloads 95
88 A Multifactorial Algorithm to Automate Screening of Drug-Induced Liver Injury Cases in Clinical and Post-Marketing Settings

Authors: Osman Turkoglu, Alvin Estilo, Ritu Gupta, Liliam Pineda-Salgado, Rajesh Pandey

Abstract:

Background: Hepatotoxicity can be linked to a variety of clinical symptoms and histopathological signs, posing a great challenge in the surveillance of suspected drug-induced liver injury (DILI) cases in the safety database. Additionally, the majority of such cases are rare, idiosyncratic, highly unpredictable, and tend to demonstrate unique individual susceptibility; these qualities, in turn, make the pharmacovigilance monitoring process tedious and time-consuming. Objective: Develop a multifactorial algorithm to assist pharmacovigilance physicians in identifying high-risk hepatotoxicity cases associated with DILI from the sponsor's safety database (Argus). Methods: Multifactorial selection criteria were established using Structured Query Language (SQL) and the TIBCO Spotfire® visualization tool, via a combination of word fragments, wildcard strings, and mathematical constructs, based on Hy's law criteria and the pattern of injury (R-value). These criteria excluded non-eligible cases from monthly line listings mined from the Argus safety database. The capabilities and limitations of these criteria were verified by comparing a manual review of all monthly cases with system-generated monthly listings over six months. Results: On average, over a period of six months, the algorithm accurately identified 92% of DILI cases meeting the established criteria. The automated process easily compared liver enzyme elevations with baseline values, reducing the screening time to under 15 minutes as opposed to the multiple hours required by a cognitively laborious manual process. Limitations of the algorithm include its inability to identify cases associated with non-standard laboratory tests, naming conventions, and/or incomplete or incorrectly entered laboratory values. Conclusions: The newly developed multifactorial algorithm proved to be extremely useful in detecting potential DILI cases while heightening the vigilance of the drug safety department. Additionally, the application of this algorithm may be useful in identifying a potential signal for DILI in drugs not yet known to cause liver injury (e.g., drugs in the initial phases of development). This algorithm also carries the potential for universal application, due to its product-agnostic data and keyword mining features. Plans for the tool include developing it into a fully automated application, thereby completely eliminating the manual screening process.
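A simplified sketch of the kind of rule the selection criteria encode: computing the pattern-of-injury R-value and flagging potential Hy's law cases from laboratory results. The field names and upper-limit-of-normal (ULN) values are illustrative assumptions, not the sponsor's actual Argus, SQL, or Spotfire configuration.

```python
def r_value(alt, alt_uln, alp, alp_uln):
    """Pattern-of-injury ratio: R >= 5 hepatocellular, R <= 2 cholestatic, otherwise mixed."""
    return (alt / alt_uln) / (alp / alp_uln)

def flag_case(labs):
    alt_x = labs["alt"] / labs["alt_uln"]
    ast_x = labs["ast"] / labs["ast_uln"]
    bili_x = labs["bilirubin"] / labs["bilirubin_uln"]
    alp_x = labs["alp"] / labs["alp_uln"]
    # Hy's law screen: marked transaminase elevation with bilirubin rise and no dominant cholestasis
    hys_law = (alt_x >= 3 or ast_x >= 3) and bili_x >= 2 and alp_x < 2
    r = r_value(labs["alt"], labs["alt_uln"], labs["alp"], labs["alp_uln"])
    pattern = "hepatocellular" if r >= 5 else "cholestatic" if r <= 2 else "mixed"
    return {"hys_law": hys_law, "r_value": round(r, 1), "pattern": pattern}

# Illustrative laboratory values for a single case (ULNs are assumed reference limits)
case = {"alt": 420, "alt_uln": 40, "ast": 310, "ast_uln": 40,
        "bilirubin": 3.4, "bilirubin_uln": 1.2, "alp": 150, "alp_uln": 120}
print(flag_case(case))   # potential Hy's law case with a hepatocellular pattern
```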

Keywords: automation, drug-induced liver injury, pharmacovigilance, post-marketing

Procedia PDF Downloads 157
87 Dimensionality Reduction in Modal Analysis for Structural Health Monitoring

Authors: Elia Favarelli, Enrico Testi, Andrea Giorgetti

Abstract:

Autonomous structural health monitoring (SHM) of structures and bridges has become a topic of paramount importance for maintenance purposes and safety reasons. This paper proposes a set of machine learning (ML) tools to perform automatic feature selection and detection of anomalies in a bridge from vibrational data, and compares different feature extraction schemes to increase accuracy and reduce the amount of data collected. As a case study, the Z-24 bridge is considered because of its extensive database of accelerometric data in both standard and damaged conditions. The proposed framework starts from the first four fundamental frequencies extracted through operational modal analysis (OMA) and clustering, followed by density-based time-domain filtering (tracking). The extracted fundamental frequencies are then fed to a dimensionality reduction block implemented through two different approaches: feature selection (an intelligent multiplexer) that tries to estimate the most reliable frequencies based on the evaluation of some statistical features (i.e., mean value, variance, kurtosis), and feature extraction (an auto-associative neural network, ANN) that combines the fundamental frequencies to extract new damage-sensitive features in a low-dimensional feature space. Finally, one-class classifier (OCC) algorithms perform anomaly detection, trained with standard-condition points and tested with normal and anomalous ones. In particular, a new anomaly detector strategy is proposed, namely one-class classifier neural network two (OCCNN2), which exploits the classification capability of standard classifiers in an anomaly detection problem by finding the standard class (the boundary of the feature space in normal operating conditions) through a two-step approach: coarse and fine boundary estimation. The coarse estimation uses classic OCC techniques, while the fine estimation is performed through a feedforward neural network (NN) that exploits the boundaries estimated in the coarse step. The detection algorithms are then compared with known methods based on principal component analysis (PCA), kernel principal component analysis (KPCA), and the auto-associative neural network (ANN). In many cases, the proposed solution increases the performance with respect to the standard OCC algorithms in terms of F1 score and accuracy. In particular, by evaluating the correct features, the anomaly can be detected with an accuracy and an F1 score greater than 96% with the proposed method.
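A loose sketch of the coarse-then-fine boundary idea behind OCCNN2: a classical one-class model provides coarse labels for a cloud of points around the normal-condition data, and a small feedforward network is then fitted to that labelled boundary. The four synthetic frequencies, the perturbation scale, and the network size are illustrative assumptions, not the Z-24 data or the authors' exact architecture.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
freqs = rng.normal(loc=[3.9, 5.0, 9.8, 10.3], scale=0.05, size=(500, 4))   # four tracked modal frequencies

scaler = StandardScaler().fit(freqs)
Z = scaler.transform(freqs)

# Coarse step: a classical one-class model learns a rough boundary of the normal-condition class
coarse = OneClassSVM(nu=0.05, gamma="scale").fit(Z)

# Fine step: label a cloud of points (normal + perturbed) with the coarse model, then fit a small
# feedforward network to that labelled boundary
cloud = np.vstack([Z, Z + rng.normal(scale=3.0, size=Z.shape)])
labels = (coarse.predict(cloud) == 1).astype(int)               # 1 = inside the coarse boundary
fine = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0).fit(cloud, labels)

# A healthy-looking observation and one with a shifted first frequency (possible damage)
test = scaler.transform(np.array([[3.9, 5.0, 9.8, 10.3], [3.6, 5.0, 9.8, 10.3]]))
print("anomaly flags:", fine.predict(test) == 0)                # True means outside the normal class
```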

Keywords: anomaly detection, frequencies selection, modal analysis, neural network, sensor network, structural health monitoring, vibration measurement

Procedia PDF Downloads 128
86 Understanding Evidence Dispersal Caused by the Effects of Using Unmanned Aerial Vehicles in Active Indoor Crime Scenes

Authors: Elizabeth Parrott, Harry Pointon, Frederic Bezombes, Heather Panter

Abstract:

Unmanned aerial vehicles (UAVs) are having a profound effect on policing, forensic, and fire service procedures worldwide. These intelligent devices have already proven useful in photographing and recording large-scale outdoor and indoor sites using orthomosaic and three-dimensional (3D) modelling techniques, for the purpose of capturing and recording sites during and after an incident. UAVs are becoming an established tool, as they extend the reach of the photographer and offer new perspectives without the expense and restrictions of deploying full-scale aircraft. 3D reconstruction quality is directly linked to the resolution of the captured images; therefore, close-proximity flights are required for more detailed models. As technology advances, deployment of UAVs in confined spaces is becoming more common. With this in mind, this study investigates the effects of UAV operation within active crime scenes with regard to the dispersal of particulate evidence. To date, little consideration has been given to the potential effects of using UAVs within active crime scenes aside from a legislative point of view. Although the technology can potentially reduce the likelihood of contamination by replacing some of the roles of investigating practitioners, there is a risk of evidence dispersal caused by the strong airflow beneath the UAV from the downwash of the propellers. The initial results of this study are therefore presented to determine, from the dataset tested, the flight height of least effect and the commercial propeller type that generates the smallest amount of disturbance. In this study, a range of commercially available 4-inch propellers was chosen as a starting point due to their common availability and small size, which make them well suited to operation within confined spaces. To perform the testing, a rig was configured to support a single motor and propeller, powered by a standalone mains power supply and controlled via a microcontroller. This was done to mimic a complete throttle cycle and to control the device to ensure repeatability, removing the variance introduced by battery packs and complex UAV structures and allowing a more robust setup. Therefore, the only changing factors were the propeller and the operating height. The results were calculated via computer vision analysis of the recorded dispersal of the sample particles placed below the arm-mounted propeller. The aim of this initial study is to give practitioners an insight into the technology to use when operating within confined spaces, as well as to highlight some of the issues caused by UAVs within active crime scenes.
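A sketch of the kind of computer-vision measurement described: particle centroids are extracted from frames captured before and after a throttle cycle, and a simple spread metric quantifies dispersal. The frames here are synthesized rather than recorded, and the threshold, particle rendering, and pixel-to-millimetre calibration are illustrative assumptions, not the authors' actual pipeline.

```python
import cv2
import numpy as np

def particle_centroids(frame, thresh=60):
    """Centroids of bright particles in a greyscale frame (background assumed dark)."""
    _, mask = cv2.threshold(frame, thresh, 255, cv2.THRESH_BINARY)
    _, _, _, centroids = cv2.connectedComponentsWithStats(mask)
    return centroids[1:]                             # drop the background component

def spread(cents):
    """Mean distance of particle centroids from the cloud centre (pixels)."""
    return float(np.mean(np.linalg.norm(cents - cents.mean(axis=0), axis=1)))

def render(points, size=512):
    frame = np.zeros((size, size), dtype=np.uint8)
    for x, y in points:
        cv2.circle(frame, (int(x), int(y)), 3, 255, -1)   # draw each particle as a bright dot
    return frame

# Synthetic stand-ins for the before/after frames: the same particles pushed outward from the
# image centre to mimic downwash-driven dispersal.
rng = np.random.default_rng(2)
before_pts = rng.normal(loc=256, scale=30, size=(40, 2))
after_pts = 256 + (before_pts - 256) * 1.8

px_per_mm = 4.0                                      # assumed calibration from a scale bar in the scene
b = spread(particle_centroids(render(before_pts))) / px_per_mm
a = spread(particle_centroids(render(after_pts))) / px_per_mm
print(f"mean particle spread: {b:.1f} mm before, {a:.1f} mm after the throttle cycle")
```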

Keywords: dispersal, evidence, propeller, UAV

Procedia PDF Downloads 165
85 The Effect of Improvement Programs in the Mean Time to Repair and in the Mean Time between Failures on Overall Lead Time: A Simulation Using the System Dynamics-Factory Physics Model

Authors: Marcel Heimar Ribeiro Utiyama, Fernanda Caveiro Correia, Dario Henrique Alliprandini

Abstract:

The importance of the correct allocation of improvement programs has attracted growing interest in recent years. Because their resources are limited, companies must ensure that their financial resources are directed to the correct workstations in order to be most effective and to survive strong competition. However, to the best of our knowledge, the literature on the allocation of improvement programs does not analyze this problem in depth when the flow shop process has two capacity constrained resources. This is a research gap that is studied in depth in this work. The purpose of this work is to identify the best strategy for allocating improvement programs in a flow shop with two capacity constrained resources. Data were collected from a flow shop process with seven workstations in an industrial control and automation company, which processes 13,690 units per month on average. The data were used to conduct a simulation with the System Dynamics-Factory Physics model. The main variables considered, due to their importance for lead time reduction, were the mean time between failures and the mean time to repair. Lead time reduction was the output measure of the simulations. Ten different strategies were created: (i) focused time-to-repair improvement, (ii) focused time-between-failures improvement, (iii) distributed time-to-repair improvement, (iv) distributed time-between-failures improvement, (v) focused time-to-repair and time-between-failures improvement, (vi) distributed time-to-repair and time-between-failures improvement, (vii) hybrid time-to-repair improvement, (viii) hybrid time-between-failures improvement, (ix) time-to-repair improvement directed towards the two capacity constrained resources, and (x) time-between-failures improvement directed towards the two capacity constrained resources. The ten strategies tested are variations of the three main strategies for improvement programs, named focused, distributed, and hybrid. Several comparisons of the effects of the ten strategies on lead time reduction were performed. The results indicated that, for the flow shop analyzed, the focused strategies delivered the best results. When it is not possible to make a large investment in the capacity constrained resources, companies should use hybrid approaches. An important contribution to academia is the hybrid approach, which proposes a new way to direct improvement efforts. In addition, the study of a flow shop with two heavily loaded capacity constrained resources (more than 95% utilization) is an important contribution to the literature, as are the allocation problem with two CCRs and the possibility of floating capacity constrained resources. The results provided the best improvement strategies considering the different allocation strategies and the different positions of the capacity constrained resources. Finally, both the hybrid time-to-repair improvement and the hybrid time-between-failures improvement strategies delivered better results than the respective distributed strategies. The main limitations of this study concern the flow shop analyzed. Future work can investigate different flow shop configurations, such as a varying number of workstations, different numbers of products, or different positions of the two capacity constrained resources.
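A minimal sketch of the capacity effect that MTBF/MTTR improvements have on a flow shop, in the spirit of the availability relations underlying the System Dynamics-Factory Physics model: the focused strategy concentrates the improvement budget on the two capacity constrained resources, while the distributed strategy spreads the same budget over all seven workstations. The process times, failure parameters, and the 20% budget are illustrative assumptions, not the company's data, and the sketch captures throughput only, not the full lead time dynamics of the simulation.

```python
def effective_rate(t0_min, mtbf_h, mttr_h):
    """Availability-adjusted output rate of a workstation (units per hour)."""
    availability = mtbf_h / (mtbf_h + mttr_h)
    return 60.0 / t0_min * availability

# Hypothetical seven-station flow shop; ws3 and ws6 play the role of the two CCRs.
# Each tuple holds (raw process time in minutes, MTBF in hours, MTTR in hours).
stations = {
    "ws1": (2.0, 40.0, 1.0), "ws2": (2.2, 35.0, 1.5), "ws3": (2.9, 20.0, 3.0),
    "ws4": (2.1, 45.0, 1.0), "ws5": (2.3, 38.0, 1.2), "ws6": (2.8, 22.0, 2.5),
    "ws7": (2.0, 50.0, 0.8),
}

def line_capacity(params):
    # the slowest effective workstation bounds the line's throughput (and drives queueing and lead time)
    return min(effective_rate(*p) for p in params.values())

def improve_mttr(params, targets, reduction):
    # apply a fractional MTTR reduction to the targeted workstations only
    return {name: (t0, mtbf, mttr * (1 - reduction) if name in targets else mttr)
            for name, (t0, mtbf, mttr) in params.items()}

budget = 0.20 * 2   # total "budget": a 20% MTTR reduction applied at each of two stations
focused = improve_mttr(stations, {"ws3", "ws6"}, reduction=budget / 2)
distributed = improve_mttr(stations, set(stations), reduction=budget / len(stations))

print(f"baseline capacity:             {line_capacity(stations):.2f} units/h")
print(f"focused on the two CCRs:       {line_capacity(focused):.2f} units/h")
print(f"distributed over all stations: {line_capacity(distributed):.2f} units/h")
```

With these assumed numbers the focused allocation lifts the constraining station more than the distributed one does, which is consistent with the direction of the paper's findings, though the actual magnitudes depend entirely on the real data.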

Keywords: allocation of improvement programs, capacity constrained resource, hybrid strategy, lead time, mean time to repair, mean time between failures

Procedia PDF Downloads 126