Search results for: fuel cell technology
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 11925


645 Learning Instructional Management between the Problem-Based Learning and STEM Education Methods for Enhancing Students' Learning Achievements and Their Science Attitudes toward Physics at the 12th Grade Level

Authors: Achirawatt Tungsombatsanti, Toansakul Santiboon, Kamon Ponkham

Abstract:

The STEM education approach aims to provide an interdisciplinary and applied approach to the instruction of science, technology, engineering, and mathematics in an integrated way, enhancing students' engagement and science skills. This study compared STEM instruction with the Problem-Based Learning (PBL) method at Borabu School, with a sample of 80 students in two classes at the 12th grade level, focusing on their learning achievements on the topic of electromagnetism. The students were divided into two instructional groups: an experimental group of 40 students taught with the STEM instructional design, and a control group taught with PBL in the other class, in which students identify what they already know, what they need to know, and how and where to access new information that may lead to the resolution of the problem. Students' perceptions of the learning environment were obtained with the 35-item Physics Laboratory Environment Inventory (PLEI), and their attitudes toward physics were assessed with the Test of Physics-Related Attitudes (TOPRA), a scale developed to measure attitude objectively. Pretest and posttest comparisons were used to assess students' learning achievements under each instructional model. The results revealed that the efficiency of both the PBL and STEM approaches exceeded the 80/80 criterion. Students' learning achievements in the control (PBL) and experimental (STEM) physics classes differed significantly between groups at the .05 level. The average mean scores of students' responses to the instructional activities were higher for the STEM education method than for the PBL model. Regarding associations between students' perceptions of their physics classes and their attitudes toward physics, the predictive efficiency (R2) values indicate that 77% and 83% of the variance in students' attitudes (TOPRA) were attributable to their perceptions (PLEI) of the PBL and STEM instructional design classes, respectively. Overall, pretest-posttest differences in learning achievements were statistically significant under both instructional models, and the findings suggest that STEM instruction contributed more to students' understanding of scientific concepts, attitudes, and skills than PBL teaching.

Keywords: learning instructional managements, problem-based learning, STEM education, method, enhancement, students learning achievements, science attitude, physics classes

Procedia PDF Downloads 209
644 Demonstrating the Efficacy of a Low-Cost Carbon Dioxide-Based Cryoablation Device in Veterinary Medicine for Translation to Third World Medical Applications

Authors: Grace C. Kuroki, Yixin Hu, Bailey Surtees, Rebecca Krimins, Nicholas J. Durr, Dara L. Kraitchman

Abstract:

The purpose of this study was to perform a Phase I veterinary clinical trial with a low-cost, carbon-dioxide-based, passive-thaw cryoablation device as proof of principle for application in pets and translation to third-world treatment of breast cancer. This study was approved by the institutional animal care and use committee. Client-owned dogs with subcutaneous masses, primarily lipomas or mammary cancers, were recruited for the study. Inclusion was based on clinical history, lesion location, preanesthetic blood work, and fine-needle aspirate or biopsy confirmation of the mass. Informed consent was obtained from the owners of dogs that met the inclusion criteria. Ultrasound assessment of mass extent was performed immediately prior to mass cryoablation. Dogs were placed under general anesthesia and sterilely prepared. A stab incision was created to insert a custom 4.19 x 55.9 mm (OD x length) cryoablation probe (Kubanda Cryotherapy) into the mass. Originally designed for treating breast cancer in low-resource settings, this device has demonstrated potential in effectively necrosing subcutaneous masses. A dose escalation study of increasing freeze-thaw cycles (5/4/5, 7/5/7, and 10/7/10 min) was performed to assess the size of the iceball and the necrotic extent of cryoablation. Each dog was allowed to recover for ~1-2 weeks before surgical removal of the mass. A single mass was treated in seven dogs (2 mammary masses, a sarcoma, 4 lipomas, and 1 adnexal mass), with most masses exceeding 2 cm in at least one dimension. Mass involution was most evident in the malignant mammary and adnexal masses. Lipomas showed minimal shrinkage prior to surgical removal, but an area of necrosis was evident along the cryoablation probe path. Gross assessment indicated a clear margin of cryoablation along the cryoprobe independent of tumor type. Detailed histopathology is pending, but complete involution of large lipomas appeared to be unlikely with a 10/7/10 protocol. The low-cost, carbon-dioxide-based cryotherapy device permits a minimally invasive technique that may be useful for veterinary applications and is also informative of the unlikely resolution of benign adipose breast masses that may be encountered in third-world countries.

Keywords: cryoablation, cryotherapy, interventional oncology, veterinary technology

Procedia PDF Downloads 107
643 Expanding the Atelier: Design-Led Academic Project Using Immersive User-Generated Mobile Images and Augmented Reality

Authors: David Sinfield, Thomas Cochrane, Marcos Steagall

Abstract:

While there is much hype around the potential and development of mobile virtual reality (VR), the two key critical success factors are the ease of the user experience and the development of a simple user-generated content ecosystem. Educational technology history is littered with the debris of over-hyped revolutionary new technologies that failed to gain mainstream adoption or were quickly superseded. Examples include 3D television, interactive CD-ROMs, Second Life, and Google Glass. However, we argue that this is the result of curriculum design that substitutes new technologies into pre-existing pedagogical strategies focused upon teacher-delivered content, rather than exploring new pedagogical strategies that enable student-determined learning, or heutagogy. Visual communication design-based learning, such as graphic design, illustration, photography, and the design process, is heavily based on the traditional classroom environment, in which student interaction takes place both at peer level and through teacher-based feedback. This makes for a healthy creative learning environment, but it raises other issues in terms of student-to-teacher ratios and reduced contact time. Such issues arise when students are away from the classroom and cannot interact with their peers and teachers, and a decline in the students' creative work often follows. Using AR and VR as a means of stimulating students to think beyond the limitations of the studio-based classroom, this paper discusses the outcomes of a student project that considered the virtual classroom and the techniques involved. The Atelier learning environment is especially suited to the visual communication model as it deals with the creative processing of ideas that need to be shared in a collaborative manner. This has proven to be a successful model over the years in the traditional form of design education, but it has more recently seen a shift in thinking as we move into a more digital model of learning and away from the classical classroom structure. This study focuses on the outcomes of a student design project that employed augmented reality and virtual reality technologies in order to expand the dimensions of the classroom beyond its physical limits. Augmented reality, when integrated into the learning experience, can improve the learning motivation and engagement of students. This paper outlines some of the processes used and the findings from the semester-long project that took place.

Keywords: augmented reality, blogging, design in community, enhanced learning and teaching, graphic design, new technologies, virtual reality, visual communications

Procedia PDF Downloads 222
642 Bauhaus Exhibition 1922: New Weapon of Anti-Colonial Resistance in India

Authors: Suneet Jagdev

Abstract:

The development of the original Bauhaus occurred at the beginning of the 20th century, when the industrialization of Germany had reached a climax. The cities were a reflection of the new living conditions of an industrialized society. The Bauhaus can be interpreted as an ambitious attempt to find appropriate answers to these challenges through architecture, urban development, and design. The core elements of the conviction of the day were the belief in the necessary crossing of boundaries between the various disciplines and the courage to experiment in search of better solutions. Even after 100 years, the situation in our cities is shaped by similar complexity; the urban consequences of developments are difficult to estimate and to predict. The paper critically reflects on the central aspects of the history of the Bauhaus and its role in bringing modernism to India, through comparative studies of the methodologies adopted by artists and designers in both countries. The paper discusses in detail how the Bauhaus Exhibition of 1922 offered Indian artists a new weapon of anti-colonial resistance. The original Bauhaus fought its aesthetic and political battles in the context of economic instability and the rise of German fascism. Indians had access to dominant global languages, in particular English, and the availability of print media and a vibrant indigenous intellectual culture provided them with a tool to accept technology while denying both its dominant role in culture and the inevitability of only one form of modernism. The indigenous was thus less an engagement with their own culture, as in the West, than a tool of anti-colonial struggle. We show how Indians used the Bauhaus as a critique of colonialism itself, through an undermining of its typical modes of representation, as a means of incorporating the Indian desire for spirituality into art, and as the cultural basis for a non-materialistic and anti-industrial form of what we might now term development. The paper reflects on how, through painting, the Bauhaus entered the artistic consciousness of the subcontinent, not only for its stylistic and technical innovations but as a tool for a critical and even utopian modernism that could challenge the hegemony of academic and orientalist art, and as the bearer of a transnational avant-garde, as much political as artistic, and as such the basis of a non-Eurocentric but genuinely cosmopolitan alternative to the hierarchies of oppression and domination that had long bound India and were at that moment rising once again to a tragic crescendo in Europe. We also discuss how the Bauhaus of today can offer an innovative orientation for discourse around architecture and design.

Keywords: anti-colonial struggle, art over architecture, Bauhaus exhibition of 1922, industrialization

Procedia PDF Downloads 228
641 On-Chip Ku-Band Bandpass Filter with Compact Size and Wide Stopband

Authors: Jyh Sheen, Yang-Hung Cheng

Abstract:

This paper presents the design of a microstrip bandpass filter with a compact size and wide stopband using a 0.15-μm GaAs pHEMT process. The wide stopband is achieved by suppressing the first and second harmonic resonance frequencies. A slow-wave coupled stepped-impedance resonator with a cross-coupled structure is adopted to design the bandpass filter. A two-resonator filter was fabricated with a 13.5 GHz center frequency, and an 11% bandwidth was achieved. The devices are simulated using the ADS design software. The device shows a compact size and a very low insertion loss of 2.6 dB. Microstrip planar bandpass filters have been widely adopted in various communication applications due to the attractive features of compact size and ease of fabrication. Various planar resonator structures have been suggested. In order to achieve a wide stopband and reduce interference outside the passband, various planar resonator designs have also been proposed to suppress the higher-order harmonics of the designed center frequency. Various modifications of the traditional hairpin structure have been introduced to reduce the large design area of hairpin designs, and the stepped-impedance, slow-wave open-loop, and cross-coupled resonator structures have been studied to miniaturize hairpin resonators. In this study, to suppress the spurious harmonic bands and further reduce the filter size, a modified hairpin-line bandpass filter with a cross-coupled structure is proposed by introducing the stepped-impedance resonator design as well as the slow-wave open-loop resonator structure. In this way, a very compact circuit size as well as a very wide upper stopband can be achieved, as realized on a Rogers 4003C substrate. On the other hand, filters constructed with integrated circuit technology become more attractive because they enable the integration of the microwave system on a single chip (SoC). To examine the performance of this design structure in an integrated circuit, the filter is fabricated with the 0.15-μm GaAs pHEMT integrated circuit process, which can also provide much better circuit performance for high-frequency designs than those made on a PCB. The design example was implemented in GaAs with a center frequency of 13.5 GHz to examine the performance at higher frequencies in detail. The occupied area is only about 1.09×0.97 mm². The ADS software is used to design the modified filters to suppress the first and second harmonics.
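
The abstract does not state the resonator design equations; as a point of reference only, the fundamental resonance of a uniform half-wavelength microstrip resonator follows the standard first-order textbook relation below (not taken from the paper), and a stepped-impedance resonator shifts the spurious harmonics away from integer multiples of the fundamental by the choice of the impedance ratio of its sections.

```latex
% First-order resonance of a uniform half-wavelength microstrip resonator
% (standard textbook relation; c is the speed of light in vacuum, L the
%  resonator length, \varepsilon_{eff} the effective permittivity)
f_{0} \approx \frac{c}{2L\sqrt{\varepsilon_{\mathrm{eff}}}}
```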

Keywords: microstrip resonator, bandpass filter, harmonic suppression, GaAs

Procedia PDF Downloads 310
640 Gas-Phase Nondestructive and Environmentally Friendly Covalent Functionalization of Graphene Oxide Paper with Amines

Authors: Natalia Alzate-Carvajal, Diego A. Acevedo-Guzman, Victor Meza-Laguna, Mario H. Farias, Luis A. Perez-Rey, Edgar Abarca-Morales, Victor A. Garcia-Ramirez, Vladimir A. Basiuk, Elena V. Basiuk

Abstract:

Direct covalent functionalization of prefabricated free-standing graphene oxide paper (GOP) is considered as the only approach suitable for systematic tuning of thermal, mechanical and electronic characteristics of this important class of carbon nanomaterials. At the same time, the traditional liquid-phase functionalization protocols can compromise physical integrity of the paper-like material up to its total disintegration. To avoid such undesirable effects, we explored the possibility of employing an alternative, solvent-free strategy for facile and nondestructive functionalization of GOP with two representative aliphatic amines, 1-octadecylamine (ODA) and 1,12-diaminododecane (DAD), as well as with two aromatic amines, 1-aminopyrene (AP) and 1,5-diaminonaphthalene (DAN). The functionalization was performed under moderate heating at 150-180 °C in vacuum. Under such conditions, it proceeds through both amidation and epoxy ring opening reactions. Comparative characterization of pristine and amine-functionalized GOP mats was carried out by using Fourier-transform infrared, Raman, and X-ray photoelectron spectroscopy (XPS), thermogravimetric (TGA) and differential thermal analysis, scanning electron and atomic force microscopy (SEM and AFM, respectively). Besides that, we compared the stability in water, wettability, electrical conductivity and elastic (Young's) modulus of GOP mats before and after amine functionalization. The highest content of organic species was obtained in the case of GOP-ODA, followed by GOP-DAD, GOP-AP and GOP-DAN samples. The covalent functionalization increased mechanical and thermal stability of GOP, as well as its electrical conductivity. The magnitude of each effect depends on the particular chemical structure of amine employed, which allows for tuning a given GOP property. Morphological characterization by using SEM showed that, compared to pristine graphene oxide paper, amine-modified GOP mats become relatively ordered layered assemblies, in which individual GO sheets are organized in a near-parallel pattern. Financial support from the National Autonomous University of Mexico (grants DGAPA-IN101118 and IN200516) and from the National Council of Science and Technology of Mexico (CONACYT, grant 250655) is greatly appreciated. The authors also thank David A. Domínguez (CNyN of UNAM) for XPS measurements and Dr. Edgar Alvarez-Zauco (Faculty of Science of UNAM) for the opportunity to use TGA equipment.

Keywords: amines, covalent functionalization, gas-phase, graphene oxide paper

Procedia PDF Downloads 152
639 Criteria to Access Justice in Remote Criminal Trial Implementation

Authors: Inga Žukovaitė

Abstract:

This work aims to present postdoc research on remote criminal proceedings in court in order to streamline the proceedings and, at the same time, ensure the effective participation of the parties in criminal proceedings and the court's obligation to administer substantive and procedural justice. This study tests the hypothesis that remote criminal proceedings do not in themselves violate the fundamental principles of criminal procedure; however, their implementation must ensure the right of the parties to effective legal remedies and a fair trial, and only then address the issues of procedural economy, speed, and flexibility/functionality of the application of technologies. In order to ensure that changes in the regulation of criminal proceedings are in line with fair trial standards, this research will answer the questions of what conditions (first of all legal, and only then organisational) are required for remote criminal proceedings to ensure respect for the parties and enable their effective participation in public proceedings, to create conditions for quality legal defence and its accessibility, and to give a correct impression to the party that they are heard and that the court is impartial and fair. It also seeks to present the results of empirical research in the courts of Lithuania that was carried out using the interview method. The research will serve as a basis for developing a theoretical model for remote criminal proceedings in the EU to ensure a balance between the intention to have innovative, cost-effective, and flexible criminal proceedings and the positive obligation of the State to ensure the rights of participants in proceedings to just and fair criminal proceedings. Moreover, developments in criminal proceedings also keep changing the image of the court itself; therefore, the paper will also create preconditions for future research on the impact of remote criminal proceedings on trust in courts. The study aims at laying down the fundamentals for theoretical models of a remote hearing in criminal proceedings and at making recommendations for the safeguarding of human rights, in particular the rights of the accused, in such proceedings. The following criteria are relevant for the remote form of criminal proceedings: the purpose of the judicial instance, the legal position of participants in proceedings, their vulnerability, and the nature of the required legal protection. The content of the study consists of: (1) identification of the factual and legal prerequisites for a decision to organise the entire criminal proceedings by remote means or to carry out one or several procedural actions by remote means; (2) after analysing the legal regulation and practice concerning the application of the elements of remote criminal proceedings, distinguishing the main legal safeguards for the protection of the rights of the accused to ensure: (a) the right of effective participation in a court hearing; (b) the right of confidential consultation with the defence counsel; (c) the right of participation in the examination of evidence, in particular material evidence, as well as the right to question witnesses; and (d) the right to a public trial.

Keywords: remote criminal proceedings, fair trial, right to defence, technology progress

Procedia PDF Downloads 56
638 Analyzing Transit Network Design versus Urban Dispersion

Authors: Hugo Badia

Abstract:

This research addresses which transit network structure is most suitable to serve specific demand requirements under an increasing urban dispersion process. Two main approaches to network design are found in the literature. On the one hand, there is the traditional answer, widespread in our cities, which develops a high number of lines to connect most origin-destination pairs by direct trips; this approach is based on the idea that users are averse to transfers. On the other hand, some authors advocate an alternative design characterized by simple networks in which transferring is essential to complete most trips. To answer which of them is the best option, we use a two-step methodology. First, by means of an analytical model, three basic network structures are compared: a radial scheme, which is the starting point for the other two structures, a direct-trip-based network, and a transfer-based one; the latter two represent the two alternative transit network designs. The model optimizes the network configuration with regard to the total cost for each structure. For a given dispersion scenario, the best alternative is the structure with the minimum cost. The dispersion degree is defined in a simple way by considering that only a central area attracts all trips: if this area is small, the mobility pattern is highly concentrated; if this area is very large, the city is highly decentralized. In this first step, we can determine the area of applicability of each structure as a function of the urban dispersion degree. The analytical results show that a radial structure is suitable when demand is highly centralized; however, when demand starts to scatter, new transit lines should be implemented to avoid transfers. If urban dispersion advances further, introducing more lines is no longer a good alternative; in this case, the best solution is a change of structure, from direct trips to a network based on transfers. The area of applicability of each network strategy is not constant; it depends on the characteristics of demand, the city, and the transport technology. In the second step, we translate the analytical results to a real case study by relating the dispersion parameters of the model to direct measures of dispersion in a real city. Two dimensions of the urban sprawl process are considered: concentration, measured by the Gini coefficient, and centralization, measured by an area-based centralization index. Once the real dispersion degree is estimated, we are able to identify in which area of applicability the city is located. In summary, from a strategic point of view, this methodology identifies the best network design approach for a city by comparing the theoretical results with the real dispersion degree.
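
As an illustration of the concentration measure mentioned above, here is a minimal Python sketch of the Gini coefficient computed over trip attractions per zone; the zone data and variable names are hypothetical and not taken from the study.

```python
import numpy as np

def gini(values):
    """Gini coefficient of a non-negative 1-D array
    (0 = trips spread evenly over zones, 1 = fully concentrated in one zone)."""
    x = np.sort(np.asarray(values, dtype=float))
    n = x.size
    if n == 0 or x.sum() == 0:
        return 0.0
    shares = np.cumsum(x) / x.sum()          # cumulative share of trip attractions
    return (n + 1 - 2 * shares.sum()) / n

# Hypothetical trip attractions per zone of a city
trip_attractions = [120, 80, 40, 30, 10, 5, 5]
print(f"concentration (Gini): {gini(trip_attractions):.2f}")
```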

Keywords: analytical network design model, network structure, public transport, urban dispersion

Procedia PDF Downloads 214
637 Automatic Aggregation and Embedding of Microservices for Optimized Deployments

Authors: Pablo Chico De Guzman, Cesar Sanchez

Abstract:

Microservices are a software development methodology in which applications are built by composing a set of independently deployable, small, modular services. Each service runs a unique process, and it gets instantiated and deployed on one or more machines (we assume that different microservices are deployed onto different machines). Microservices are becoming the de facto standard for developing distributed cloud applications due to their reduced release cycles. In principle, the responsibility of a microservice can be as simple as implementing a single function, which can lead to the following issues: resource fragmentation due to the virtual machine boundary, and poor communication performance between microservices. Two composition techniques can be used to optimize resource fragmentation and communication performance: aggregation and embedding of microservices. Aggregation allows the deployment of a set of microservices on the same machine using a proxy server. Aggregation helps to reduce resource fragmentation and is particularly useful when the aggregated services have a similar scalability behavior. Embedding deals with communication performance by deploying on the same virtual machine those microservices that require a communication channel (localhost bandwidth is reported to be about 40 times faster than cloud vendor local networks, and it offers better reliability). Embedding can also reduce dependencies on load balancer services since the communication takes place on a single virtual machine. For example, assume that microservice A has two instances, a1 and a2, and it communicates with microservice B, which also has two instances, b1 and b2. One embedding deploys a1 and b1 on machine m1, and a2 and b2 on a different machine m2. This deployment configuration allows each pair (a1-b1), (a2-b2) to communicate using the localhost interface without the need for a load balancer between microservices A and B. Aggregation and embedding techniques are complex since different microservices might have incompatible runtime dependencies that prevent them from being installed on the same machine. There is also a security concern, since the attack surface between microservices can be larger. Luckily, container technology allows several processes to run on the same machine in an isolated manner, solving the incompatibility of runtime dependencies and the previous security concern, thus greatly simplifying aggregation/embedding implementations by just deploying a microservice container on the same machine as the aggregated/embedded microservice container. Therefore, a wide variety of deployment configurations can be described by combining aggregation and embedding to create an efficient and robust microservice architecture. This paper presents a formal method that receives a declarative definition of a microservice architecture and proposes different optimized deployment configurations by aggregating/embedding microservices. The first prototype is based on i2kit, a deployment tool also submitted to ICWS 2018. The proposed prototype optimizes the following parameters: network/system performance, resource usage, resource costs, and failure tolerance.
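
To make the a1-b1 / a2-b2 example concrete, the Python sketch below builds an embedding plan that co-locates matching replicas of two communicating services on the same machine. It is only an illustration of the idea; the function and machine names are hypothetical and do not correspond to the i2kit tool described in the paper.

```python
def embed(service_a, service_b, replicas):
    """Co-locate the i-th replica of two communicating microservices on the
    same machine so they can talk over localhost instead of going through a
    load balancer. Returns a machine -> [containers] map."""
    plan = {}
    for i in range(replicas):
        machine = f"m{i + 1}"
        plan[machine] = [f"{service_a}{i + 1}", f"{service_b}{i + 1}"]
    return plan

if __name__ == "__main__":
    # Reproduces the example from the abstract: a1+b1 on m1, a2+b2 on m2.
    print(embed("a", "b", replicas=2))
    # {'m1': ['a1', 'b1'], 'm2': ['a2', 'b2']}
```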

Keywords: aggregation, deployment, embedding, resource allocation

Procedia PDF Downloads 179
636 The Relationships among Self-Efficacy, Critical Thinking and Communication Skills Ability in Oncology Nurses for Cancer Immunotherapy in Taiwan

Authors: Yun-Hsiang Lee

Abstract:

Cancer is the main cause of death worldwide. With advances in medical technology, immunotherapy, which is a newly developed advanced treatment, is currently a crucial cancer treatment option. For better quality cancer care, the ability to communicate and critical thinking plays a central role in clinical oncology settings. However, few studies have explored the impact of communication skills on immunotherapy-related issues and their related factors. This study was to (i) explore the current status of communication skill ability for immunotherapy-related issues, self-efficacy for immunotherapy-related care, and critical thinking ability; and (ii) identify factors related to communication skill ability. This is a cross-sectional study. Oncology nurses were recruited from the Taiwan Oncology Nursing Society, in which nurses came from different hospitals distributed across four major geographic regions (North, Center, South, East) of Taiwan. A total of 123 oncology nurses participated in this study. A set of questionnaires were used for collecting data. Communication skill ability for immunotherapy issues, self-efficacy for immunotherapy-related care, critical thinking ability, and background information were assessed in this survey. Independent T-test and one-way ANOVA were used to examine different levels of communication skill ability based on nurses having done oncology courses (yes vs. no) and education years (< 1 year, 1-3 years, and > 3 years), respectively. Spearman correlation was conducted to understand the relationships between communication skill ability and other variables. Among the 123 oncology nurses in the current study, the majority of them were female (98.4%), and most of them were employed at a hospital in the North (46.8%) of Taiwan. Most of them possessed a university degree (78.9%) and had at least 3 years of prior work experience (71.7%). Forty-three of the oncology nurses indicated in the survey that they had not received oncology nurses-related training. Those oncology nurses reported moderate to high levels of communication skill ability for immunotherapy issues (mean=4.24, SD=0.7, range 1-5). Nurses reported moderate levels of self-efficacy for immunotherapy-related care (mean=5.20, SD=1.98, range 0-10) and also had high levels of critical thinking ability (mean=4.76, SD=0.60, range 1-6). Oncology nurses who had received oncology training courses had significantly better communication skill ability than those who had not received oncology training. Oncology nurses who had higher work experience (1-3 years, or > 3 years) had significantly higher levels of communication skill ability for immunotherapy-related issues than those with lower work experience (<1 year). When those nurses reported better communication skill ability, they also had significantly better self-efficacy (r=.42, p<.01) and better critical thinking ability (r=.47, p<.01). Taken altogether, courses designed to improve communication skill ability for immunotherapy-related issues can make a significant impact in clinical settings. Communication skill ability for oncology nurses is the major factor associated with self-efficacy and critical thinking, especially for those with lower work experience (< 1 year).

Keywords: communication skills, critical thinking, immunotherapy, oncology nurses, self-efficacy

Procedia PDF Downloads 74
635 The Use of Information and Communication Technology within and between Emergency Medical Teams during a Disaster: A Qualitative Study

Authors: Badryah Alshehri, Kevin Gormley, Gillian Prue, Karen McCutcheon

Abstract:

In a disaster event, sharing patient information between the pre-hospital Emergency Medical Services (EMS) and Emergency Department (ED) hospitals is a complex process during which important information may be altered or lost due to poor communication. The aim of this study was to critically discuss the current evidence base in relation to communication between pre-hospital EMS and ED professionals through the use of Information and Communication Technology (ICT). This study followed a systematic approach; six electronic databases (CINAHL, Medline, Embase, PubMed, Web of Science, and IEEE Xplore Digital Library) were comprehensively searched in January 2018, and a second search was completed in April 2020 to capture more recent publications. The study selection process was undertaken independently by the study authors. Both qualitative and quantitative studies were chosen that focused on factors positively or negatively associated with coordinated communication between pre-hospital EMS and ED teams in a disaster event. These studies were assessed for quality, and the data were analyzed according to the key screening themes which emerged from the literature search. Twenty-two studies were included. Eleven studies employed quantitative methods, seven studies used qualitative methods, and four studies used mixed methods. Four themes emerged on communication between EMTs (pre-hospital EMS and ED staff) in a disaster event using ICT. (1) Disaster preparedness plans and coordination. This theme reported that disaster plans are in place in hospitals, and in some cases, there are interagency agreements with pre-hospital and relevant stakeholders. However, the findings showed that the disaster plans highlighted in these studies lacked information regarding coordinated communications within and between the pre-hospital and hospital settings. (2) Communication systems used in the disaster. This theme highlighted that although various communication systems are used between and within hospitals and pre-hospital services, technical issues have influenced communication between teams during disasters. (3) Integrated information management systems. This theme suggested the need for an integrated health information system that can help pre-hospital and hospital staff to record patient data and ensure the data is shared. (4) Disaster training and drills. While some studies analyzed disaster drills and training, the majority of these studies focused on hospital departments other than EMTs. These studies suggest the need for simulation-based disaster training and drills that include EMTs. This review demonstrates that considerable gaps remain in the understanding of communication between EMS and ED hospital staff in relation to disaster response. The review shows that although different types of ICTs are used, various issues remain which affect coordinated communication among the relevant professionals.

Keywords: emergency medical teams, communication, information and communication technologies, disaster

Procedia PDF Downloads 108
634 Healthy Architecture Applied to Inclusive Design for People with Cognitive Disabilities

Authors: Santiago Quesada-García, María Lozano-Gómez, Pablo Valero-Flores

Abstract:

The recent digital revolution, together with modern technologies, is changing the environment and the way people interact with inhabited space. However, the elderly are a very broad and varied group within society that often has serious difficulties in understanding these modern technologies. Outpatients with cognitive disabilities, such as those suffering from Alzheimer's disease (AD), are distinguished within this cluster. This population group is in constant growth, and it has specific requirements for its inhabited space. According to architecture, which is one of the health humanities, environments are designed to promote well-being and improve the quality of life for all. Buildings, as well as the tools and technologies integrated into them, must be accessible, inclusive, and foster health. In this new digital paradigm, artificial intelligence (AI) appears as an innovative resource to help this population group improve their autonomy and quality of life. Some experiences and solutions, such as those that interact with users through chatbots and voicebots, show the potential of AI in its practical application. In the design of healthy spaces, the integration of AI in architecture will allow the living environment to become a kind of 'exo-brain' that can compensate for certain cognitive deficiencies in this population. The objective of this paper is to address, from the discipline of neuroarchitecture, how modern technologies can be integrated into everyday environments and be an accessible resource for people with cognitive disabilities. For this, the methodology has a mixed structure. On the one hand, from an empirical point of view, the research carries out a review of the existing literature on the applications of AI to built space, following the foundations of the critical review. On the other hand, as an unconventional architectural research approach, an experimental analysis is proposed that uses people with AD as a data source to study how the environment in which they live influences their regular activities. The results presented in this communication are part of the progress achieved in the competitive R&D&I project ALZARQ (PID2020-115790RB-I00). These outcomes are aimed at the specific needs of people with cognitive disabilities, especially those with AD; owing to the comfort and wellness that the solutions entail, they can also be extrapolated to society as a whole. As a provisional conclusion, it can be stated that, in the immediate future, AI will be an essential element in the design and construction of healthy new environments. The discipline of architecture has the compositional resources to build, through this emerging technology, an 'exo-brain' capable of becoming a personal assistant for the inhabitants, with whom it can interact proactively and contribute to their general well-being. The main objective of this work is to show how this is possible.

Keywords: Alzheimer’s disease, artificial intelligence, healthy architecture, neuroarchitecture, architectural design

Procedia PDF Downloads 40
633 Mikrophonie I (1964) by Karlheinz Stockhausen - Between Idea and Auditory Image

Authors: Justyna Humięcka-Jakubowska

Abstract:

1. Background in music analysis. Traditionally, when we think about a composer's sketches, the chances are that we are thinking in terms of the working out of detail rather than the evolution of an overall concept. Since music is a 'time art', it follows that questions of form cannot be entirely detached from considerations of time. One could say that composers tend to regard time either as a place gradually and partly intuitively filled, or they look for a specific strategy to occupy it. In my opinion, one thing that sheds light on Stockhausen's compositional thinking is his frequent use of 'form schemas', that is, often a single-page representation of the entire structure of a piece. 2. Background in music technology. Sonic Visualiser is a program used to study a musical recording. It is an open source application for viewing, analysing, and annotating music audio files. It contains a number of visualisation tools, which are designed with useful default parameters for musical analysis. Additionally, the Vamp plugin format supported by SV provides analyses such as, for example, structural segmentation. 3. Aims. The aim of my paper is to show how SV may be used to obtain a better understanding of a specific musical work, and how the compositional strategy impacts musical structures and musical surfaces. I want to show that 'traditional' music-analytic methods do not allow one to indicate the interrelationships between the musical surface (which is perceived) and the underlying musical/acoustical structure. 4. Main contribution. Stockhausen dealt with the most diverse musical problems by the most varied methods. A characteristic that he never ceased to place at the center of his thought and works was the quest for a new balance founded upon an acute connection between speculation and intuition. In the case of Mikrophonie I (1964) for tam-tam and 6 players, Stockhausen makes a distinction between the 'connection scheme', which indicates the ground rules underlying all versions, and the form scheme, which is associated with a particular version. The preface to the published score includes both the connection scheme and a single instance of a 'form scheme', which is what one can hear on the CD recording. In the current study, insight into the compositional strategy chosen by Stockhausen was compared with the auditory image, that is, with the perceived musical surface. Stockhausen's musical work is analyzed in terms of both melodic/voice and timbre evolution. 5. Implications. The current study shows how musical structures determine the musical surface. My general assumption is that while listening to music, we can extract basic kinds of musical information from musical surfaces. It is shown that interactive strategies of musical structure analysis can offer a very fruitful way of looking directly into certain structural features of music.

Keywords: automated analysis, composer's strategy, mikrophonie I, musical surface, stockhausen

Procedia PDF Downloads 280
632 A Blueprint for Responsible Launch of Small Satellites from a Debris Perspective

Authors: Jeroen Rotteveel, Zeger De Groot

Abstract:

The small satellite community is more and more aware of the need to start operating responsibly and sustainably in order to secure the use of outer space in the long run. On the technical side, many debris mitigation techniques have been investigated and demonstrated on board small satellites, showing that, technically, a lot can be done to curb the growth of space debris and operate more responsibly. However, in the absence of strict laws and constraints, one cannot help but wonder what the incentive is to incur significant costs (paying for debris mitigation systems and the launch mass of these systems) and to lose performance onboard resource-limited small satellites (mass, volume, power). Many small satellite developers are operating under tight budgets, either from their sponsors (in the case of academic and research projects) or from their investors (in the case of startups). As long as it is not mandatory to act more responsibly, we might need to consider the implementation of incentives to stimulate developers to accommodate deorbiting modules, etc. ISISPACE joined the NetZeroSpace initiative in 2021 with the aim of playing its role in securing the use of low Earth orbit for the coming decades by facilitating more sustainable use of space. The company is in a good position as a satellite builder, a rideshare launch provider, and a technology development company. ISISPACE operates under one of the stricter space laws in the world in terms of maximum orbital lifetime and has been active in various debris mitigation and debris removal in-orbit demonstration missions in the past 10 years. ISISPACE proposes to introduce, together with launch partners and regulators, an incentive scheme for CubeSat developers to baseline debris mitigation systems on board their CubeSats in such a way that it does not impose too many additional costs on the project. Much like incentives to switch to electric cars or install solar panels on your house, such an incentive can help to increase market uptake of behavior or solutions prior to legislation or bans of certain practices. This can be achieved by: (1) introducing an extended launch volume in CubeSat deployers to accommodate debris mitigation systems without compromising the available payload space of the main mission; (2) not charging the launch-mass fee for the additional debris mitigation module; and (3) whenever possible, finding ways to further co-fund the purchase price or otherwise reduce the cost of flying debris mitigation modules onboard the CubeSats. The paper outlines the framework of such an incentive scheme and provides ISISPACE's way forward to make this happen in the near future.

Keywords: netZerospace, cubesats, debris mitigation, small satellite community

Procedia PDF Downloads 129
631 Enhancing Athlete Training Using Real-Time Pose Estimation with Neural Networks

Authors: Jeh Patel, Chandrahas Paidi, Ahmed Hambaba

Abstract:

Traditional methods for analyzing athlete movement often lack the detail and immediacy required for optimal training. This project aims to address this limitation by developing a real-time human pose estimation system specifically designed to enhance athlete training across various sports. The system leverages the power of convolutional neural networks (CNNs) to provide a comprehensive and immediate analysis of an athlete's movement patterns during training sessions. The core architecture utilizes dilated convolutions to capture crucial long-range dependencies within video frames and combines them with a robust encoder-decoder architecture to further refine pose estimation accuracy. This capability is essential for precise joint localization across the diverse range of athletic poses encountered in different sports. Furthermore, by quantifying movement efficiency, power output, and range of motion, the system provides data-driven insights that can be used to optimize training programs. Pose estimation data analysis can also be used to develop personalized training plans that target specific weaknesses identified in an athlete's movement patterns. To overcome the limitations posed by outdoor environments, the project employs strategies such as multi-camera configurations or depth-sensing techniques, which can enhance pose estimation accuracy in challenging lighting and occlusion scenarios. A dataset was collected from the labs of Martin Luther King at San Jose State University. The system is evaluated through a series of tests that measure its efficiency and accuracy in real-world scenarios. Results indicate a high level of precision in recognizing different poses, substantiating the potential of this technology in practical applications. Challenges such as enhancing the system's ability to operate in varied environmental conditions and further expanding the dataset for training were identified and discussed. Future work will refine the model's adaptability and incorporate haptic feedback to enhance the interactivity and richness of the user experience. This project demonstrates the feasibility of an advanced pose detection model and lays the groundwork for future innovations in assistive enhancement technologies.
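
The abstract does not publish the network itself; the PyTorch sketch below is only a minimal illustration of the two ingredients it names, dilated convolutions for long-range context and an encoder-decoder that outputs one heatmap per joint. Layer sizes and the joint count are assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

class PoseNet(nn.Module):
    """Minimal encoder-decoder with dilated convolutions that predicts one
    heatmap per joint; a sketch of the idea, not the authors' exact model."""
    def __init__(self, num_joints=17):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            # Dilated convolutions enlarge the receptive field to capture
            # long-range dependencies without further downsampling.
            nn.Conv2d(128, 128, 3, padding=2, dilation=2), nn.ReLU(inplace=True),
            nn.Conv2d(128, 128, 3, padding=4, dilation=4), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, num_joints, 1),  # one heatmap per joint
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Joint locations are read off as the arg-max of each predicted heatmap.
heatmaps = PoseNet()(torch.randn(1, 3, 256, 256))   # -> shape (1, 17, 256, 256)
```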

Keywords: computer vision, deep learning, human pose estimation, U-NET, CNN

Procedia PDF Downloads 12
630 Utilizing Temporal and Frequency Features in Fault Detection of Electric Motor Bearings with Advanced Methods

Authors: Mohammad Arabi

Abstract:

The development of advanced technologies in the field of signal processing and vibration analysis has enabled more accurate analysis and fault detection in electrical systems. This research investigates the application of temporal and frequency features in detecting faults in electric motor bearings, aiming to enhance fault detection accuracy and prevent unexpected failures. The use of methods such as deep learning algorithms and neural networks in this process can yield better results. The main objective of this research is to evaluate the efficiency and accuracy of methods based on temporal and frequency features in identifying faults in electric motor bearings to prevent sudden breakdowns and operational issues. Additionally, the feasibility of using techniques such as machine learning and optimization algorithms to improve the fault detection process is also considered. This research employed an experimental method and random sampling. Vibration signals were collected from electric motors under normal and faulty conditions. After standardizing the data, temporal and frequency features were extracted. These features were then analyzed using statistical methods such as analysis of variance (ANOVA) and t-tests, as well as machine learning algorithms like artificial neural networks and support vector machines (SVM). The results showed that using temporal and frequency features significantly improves the accuracy of fault detection in electric motor bearings. ANOVA indicated significant differences between normal and faulty signals. Additionally, t-tests confirmed statistically significant differences between the features extracted from normal and faulty signals. Machine learning algorithms such as neural networks and SVM also significantly increased detection accuracy, demonstrating high effectiveness in timely and accurate fault detection. This study demonstrates that using temporal and frequency features combined with machine learning algorithms can serve as an effective tool for detecting faults in electric motor bearings. This approach not only enhances fault detection accuracy but also simplifies and streamlines the detection process. However, challenges such as data standardization and the cost of implementing advanced monitoring systems must also be considered. Utilizing temporal and frequency features in fault detection of electric motor bearings, along with advanced machine learning methods, offers an effective solution for preventing failures and ensuring the operational health of electric motors. Given the promising results of this research, it is recommended that this technology be more widely adopted in industrial maintenance processes.
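
As a concrete illustration of the pipeline described above (feature extraction, statistical testing, then classification), here is a minimal Python sketch; the feature set, sampling rate, and frequency band are simplified stand-ins, not the study's actual parameters.

```python
import numpy as np
from scipy import stats
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def features(signal, fs=12_000):
    """Temporal (RMS, kurtosis) and frequency (dominant frequency, band energy)
    features of one vibration segment; a simplified stand-in feature set."""
    rms = np.sqrt(np.mean(signal ** 2))
    kurt = stats.kurtosis(signal)
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(signal.size, d=1 / fs)
    dominant = freqs[np.argmax(spectrum)]
    band_energy = spectrum[(freqs > 1_000) & (freqs < 4_000)].sum()  # hypothetical band
    return [rms, kurt, dominant, band_energy]

def evaluate(segments, labels):
    """segments: list of 1-D vibration arrays; labels: 0 = healthy, 1 = faulty."""
    X = np.array([features(s) for s in segments])
    y = np.array(labels)
    # Per-feature t-test: does the feature separate healthy from faulty signals?
    for i, name in enumerate(["rms", "kurtosis", "dominant_freq", "band_energy"]):
        t, p = stats.ttest_ind(X[y == 0, i], X[y == 1, i], equal_var=False)
        print(f"{name}: t={t:.2f}, p={p:.4f}")
    # SVM classifier with 5-fold cross-validation on the extracted features
    acc = cross_val_score(SVC(kernel="rbf"), X, y, cv=5).mean()
    print(f"SVM accuracy: {acc:.2%}")
```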

Keywords: electric motor, fault detection, frequency features, temporal features

Procedia PDF Downloads 18
629 The Effects of Computer Game-Based Pedagogy on Graduate Students' Statistics Performance

Authors: Eva Laryea, Clement Yeboah

Abstract:

A pretest-posttest, within-subjects experimental design was employed to examine the effects of a computerized basic statistics learning game on the achievement and statistics-related anxiety of students enrolled in an introductory graduate statistics course. Participants (N = 34) were graduate students in a variety of programs at a state-funded research university in the southeastern United States. We analyzed pretest-posttest differences using paired-samples t-tests for achievement and for statistics anxiety. The results of the t-test for knowledge in statistics were statistically significant, indicating significant mean gains in statistical knowledge as a function of the game-based intervention. Likewise, the results of the t-test for statistics-related anxiety were also statistically significant, indicating a decrease in anxiety from pretest to posttest. The implications of the present study are significant for both teachers and students. For teachers, using computer games developed by the researchers can help to create a more dynamic and engaging classroom environment, as well as improve student learning outcomes. For students, playing these educational games can help to develop important skills such as problem solving, critical thinking, and collaboration. Students can develop interest in the subject matter and spend quality time learning the course material as they play the game, without even realizing that they are learning a supposedly hard course. The future directions of the present study are promising, as technology continues to advance and become more widely available. Some potential future developments include the integration of virtual and augmented reality into educational games, the use of machine learning and artificial intelligence to create personalized learning experiences, and the development of new and innovative game-based assessment tools. It is also important to consider the ethical implications of computer game-based pedagogy, such as the potential for games to perpetuate harmful stereotypes and biases. As the field continues to evolve, it will be crucial to address these issues and work towards creating inclusive and equitable learning experiences for all students. This study has the potential to revolutionize the way graduate students learn basic statistics and offers exciting opportunities for future development and research. It is an important area of inquiry for educators, researchers, and policymakers, and will continue to be a dynamic and rapidly evolving field for years to come.
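
For readers unfamiliar with the analysis, a minimal Python sketch of the paired-samples t-test used above; the scores shown are hypothetical placeholders, since the study's raw data are not reproduced here.

```python
from scipy import stats

# Hypothetical pre/post knowledge scores for eight participants (illustration only).
pre_knowledge  = [12, 15, 9, 14, 11, 13, 10, 16]
post_knowledge = [16, 18, 13, 17, 15, 18, 14, 19]

t, p = stats.ttest_rel(post_knowledge, pre_knowledge)
print(f"paired t = {t:.2f}, p = {p:.4f}")   # p < .05 would indicate a significant mean gain
```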

Keywords: pretest-posttest within subjects, experimental design, achievement, statistics-related anxiety

Procedia PDF Downloads 45
628 Cassava Plant Architecture: Insights from Genome-Wide Association Studies

Authors: Abiodun Olayinka, Daniel Dzidzienyo, Pangirayi Tongoona, Samuel Offei, Edwige Gaby Nkouaya Mbanjo, Chiedozie Egesi, Ismail Yusuf Rabbi

Abstract:

Cassava (Manihot esculenta Crantz) is a major source of starch for various industrial applications. However, the traditional cultivation and harvesting methods of cassava are labour-intensive and inefficient, limiting the supply of fresh cassava roots for industrial starch production. To achieve improved productivity and quality of fresh cassava roots through mechanized cultivation, cassava cultivars with compact plant architecture and moderate plant height are needed. Plant architecture-related traits, such as plant height, harvest index, stem diameter, branching angle, and lodging tolerance, are critical for crop productivity and suitability for mechanized cultivation. However, the genetics of cassava plant architecture remain poorly understood. This study aimed to identify the genetic bases of the relationships between plant architecture traits and productivity-related traits, particularly starch content. A panel of 453 clones developed at the International Institute of Tropical Agriculture, Nigeria, was genotyped and phenotyped for 18 plant architecture and productivity-related traits at four locations in Nigeria. A genome-wide association study (GWAS) was conducted using the phenotypic data from a panel of 453 clones and 61,238 high-quality Diversity Arrays Technology sequencing (DArTseq) derived Single Nucleotide Polymorphism (SNP) markers that are evenly distributed across the cassava genome. Five significant associations between ten SNPs and three plant architecture component traits were identified through GWAS. We found five SNPs on chromosomes 6 and 16 that were significantly associated with shoot weight, harvest index, and total yield through genome-wide association mapping. We also discovered an essential candidate gene that is co-located with peak SNPs linked to these traits in M. esculenta. A review of the cassava reference genome v7.1 revealed that the SNP on chromosome 6 is in proximity to Manes.06G101600.1, a gene that regulates endodermal differentiation and root development in plants. The findings of this study provide insights into the genetic basis of plant architecture and yield in cassava. Cassava breeders could leverage this knowledge to optimize plant architecture and yield in cassava through marker-assisted selection and targeted manipulation of the candidate gene.
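
To illustrate the kind of single-marker scan underlying a GWAS like the one described above, here is a minimal Python sketch that regresses a trait on each SNP's allele dosage and reports a significance score per marker. The data are simulated and the function is a simplification; production GWAS pipelines typically use mixed linear models that also correct for kinship and population structure.

```python
import numpy as np
from scipy import stats

def snp_association(genotypes, phenotype):
    """Single-marker association scan: regress the trait on each SNP's allele
    dosage (0/1/2) and return -log10(p) per SNP. Simplified sketch only."""
    log_p = []
    for snp in genotypes.T:                      # genotypes: clones x SNPs
        slope, intercept, r, p, se = stats.linregress(snp, phenotype)
        log_p.append(-np.log10(p))
    return np.array(log_p)

# Simulated example: 453 clones, 1,000 markers, marker 10 truly associated
rng = np.random.default_rng(0)
G = rng.integers(0, 3, size=(453, 1000)).astype(float)
trait = 0.8 * G[:, 10] + rng.normal(size=453)
scores = snp_association(G, trait)
print(scores.argmax())                           # should recover marker 10
```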

Keywords: Manihot esculenta Crantz, plant architecture, DArtseq, SNP markers, genome-wide association study

Procedia PDF Downloads 48
627 Feedback from a Service Evaluation of a Modified Intrauterine Device Inserter: A First Step toward a Change of the Standard IUD Insertion Procedure

Authors: Desjardin, Michaels, Martinez, Ulmann

Abstract:

The copper IUD is one of the most efficient and cost-effective contraceptive methods. However, pain at insertion hampers the use of this method. This is especially unfortunate in nulliparous women, often younger, who are excellent candidates for this contraception, including for emergency contraception. The standard insertion procedure of a copper IUD usually involves measurement of the uterine cavity with a hysterometer and the use of a tenaculum to facilitate device insertion. Both procedures cause patient pain, which often constitutes a limitation of the method. To overcome these issues, we have developed a modified inserter combined with a copper IUD. The singular design of the inserter includes a flexible inflatable membrane technology allowing easy access to the uterine cavity even in cases of abnormal uterine positions or a narrow cervical canal. Moreover, this inserter makes direct IUD insertion possible, with no hysterometry and no need for a tenaculum. To assess device effectiveness and patient-reported pain, a study was conducted at two clinics in France with 31 individuals who wanted to use a copper IUD as their contraceptive method. IUD insertions were performed by four healthcare providers. Operators completed a questionnaire and evaluated the effectiveness of the procedure (including correct fundal IUD placement and other usability questions) as well as their satisfaction. Patients also completed a questionnaire, and pain during the procedure was measured on a 10-cm Visual Analogue Scale (VAS). Analysis of the questionnaires indicates that correct IUD placement took place in more than 93% of women, which is a standard efficacy rate. It also demonstrates that IUD insertion resulted in no, light, or moderate pain, predominantly in nulliparous women; no insertion resulted in severe pain (none above 6 cm on the 10-cm VAS). This translated into a high level of satisfaction from both patients and practitioners. In addition, this modified inserter allowed a simplification of the insertion procedure: correct fundal placement was ensured with no need for hysterometry prior to insertion (100%) nor for a tenaculum to pull on the cervix (90%). Avoidance of both procedures contributed to the decrease in pain during insertion. Taken together, the results of the study demonstrate that this device constitutes a significant advance in the use of copper IUDs for any woman. It allows a simplification of the insertion procedure: there is no need for pre-insertion hysterometry and no need for traction on the cervix with a tenaculum. Increased comfort during insertion should allow a wider use of the method for nulliparous women and for emergency contraception. In addition, pain is often underestimated by practitioners, but fear of pain is obviously one of the blocking factors, as indicated by the analysis of the questionnaires. This evaluation brings interesting information on the use of this modified inserter for standard copper IUDs and promising perspectives for a change in the standard IUD insertion procedure.

Keywords: contraception, IUD, innovation, pain

Procedia PDF Downloads 58
626 Circular Economy Initiatives in Denmark for the Recycling of Household Plastic Wastes

Authors: Rikke Lybæk

Abstract:

This paper delves into the intricacies of recycling household plastic waste within Denmark, employing an exploratory case study methodology to shed light on the technical, strategic, and market dynamics of the plastic recycling value chain. Focusing on circular economy principles, the research identifies critical gaps and opportunities in recycling processes, particularly regarding plastic packaging waste derived from households, with a notable absence in food packaging reuse initiatives. The study uncovers the predominant practice of downcycling in the current value chain, underscoring a disconnect between the potential for high-quality plastic recycling and the market's readiness to embrace such materials. Through detailed examination of three leading companies in Denmark's plastic industry, the paper highlights the existing support for recycling initiatives, yet points to the necessity of assured quality in sorted plastics to foster broader adoption. The analysis further explores the importance of reuse strategies to complement recycling efforts, aiming to alleviate the pressure on virgin feedstock. The paper ventures into future perspectives, discussing different approaches such as biological degradation methods, watermark technology for plastic traceability, and the potential for bio-based and PtX plastics. These avenues promise not only to enhance recycling efficiency but also to contribute to a more sustainable circular economy by reducing reliance on virgin materials. Despite the challenges outlined, the research demonstrates a burgeoning market for recycled plastics within Denmark, propelled by both environmental considerations and customer demand. However, the study also calls for a more harmonized and effective waste collection and sorting system to elevate the quality and quantity of recyclable plastics. By casting a spotlight on successful case studies and potential technological advancements, the paper advocates for a multifaceted approach to plastic waste management, encompassing not only recycling but also innovative reuse and reduction strategies to foster a more sustainable future. In conclusion, this study underscores the urgent need for innovative, coordinated efforts in the recycling and management of plastic waste to move towards a more sustainable and circular economy in Denmark. It calls for the adoption of comprehensive strategies that include improving recycling technologies, enhancing waste collection systems, and fostering a market environment that values recycled materials, thereby contributing significantly to environmental sustainability goals.

Keywords: case study, circular economy, Denmark, plastic waste, sustainability, waste management

Procedia PDF Downloads 49
625 Radar Cross Section Modelling of Lossy Dielectrics

Authors: Ciara Pienaar, J. W. Odendaal, J. Joubert, J. C. Smit

Abstract:

The radar cross section (RCS) of dielectric objects plays an important role in many applications, such as low-observability technology development, drone detection and monitoring, and coastal surveillance. Various materials are used to construct the targets of interest, such as metal, wood, composite materials, radar absorbent materials, and other dielectrics. Since simulated datasets are increasingly being used to supplement in-field measurements, being more cost-effective and allowing a larger variety of targets to be simulated, it is important to have a high level of confidence in the predicted results. Confidence can be attained through validation. Various computational electromagnetic (CEM) methods are capable of predicting the RCS of dielectric targets. This study extends previous work by validating full-wave and asymptotic RCS simulations of dielectric targets against measured data. The paper provides measured RCS data for a number of canonical dielectric targets exhibiting different material properties; these measurements are used to validate numerous CEM methods. The dielectric properties are accurately characterized to reduce the uncertainties in the simulations. Finally, an analysis of the sensitivity of oblique- and normal-incidence scattering predictions to material characteristics is also presented. In this paper, the ability of several CEM methods, including the method of moments (MoM) and physical optics (PO), to calculate the RCS of dielectrics was validated against measured data. A few dielectrics exhibiting different material properties were selected, and several canonical targets, such as flat plates and cylinders, were manufactured. The RCS of these dielectric targets was measured in a compact range at the University of Pretoria, South Africa, over a frequency range of 2 to 18 GHz and a 360° azimuth angle sweep. This study also investigated the effect of slight variations in the material properties on the calculated RCS results, by varying the material properties within a realistic tolerance range and comparing the calculated RCS results. Interesting measured and simulated results have been obtained. Large discrepancies were observed between the different methods as well as the measured data. It was also observed that the accuracy of the RCS data of the dielectrics can be frequency and angle dependent. The simulated RCS for some of these materials also exhibits high sensitivity to variations in the material properties. Comparison graphs between the measured and simulated RCS datasets are presented, and the validation thereof is discussed. Finally, the effect that small tolerances in the material properties have on the calculated RCS results is shown, and the importance of accurate dielectric material properties for validation purposes is discussed.
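
For a flavour of how such predictions can depend on material tolerances, the sketch below evaluates a rough textbook estimate of the monostatic, normal-incidence RCS of a flat plate: the perfectly conducting flat-plate value scaled by the Fresnel reflection coefficient of the dielectric. This is an illustrative approximation only, not the full-wave or asymptotic solvers used in the study, and the plate size and permittivity values are hypothetical.

    # Physical-optics style estimate of normal-incidence RCS for a flat dielectric plate.
    # Rough approximation (|Gamma|^2 times the PEC flat-plate RCS); material values hypothetical.
    import numpy as np

    c = 3e8
    def plate_rcs_dbsm(freq_hz, width_m, height_m, eps_r_complex):
        lam = c / freq_hz
        area = width_m * height_m
        sigma_pec = 4 * np.pi * area**2 / lam**2            # PEC flat plate, normal incidence
        n = np.sqrt(eps_r_complex)                          # complex refractive index (non-magnetic)
        gamma = (1 - n) / (1 + n)                           # Fresnel reflection coefficient
        return 10 * np.log10(abs(gamma)**2 * sigma_pec)

    freqs = np.linspace(2e9, 18e9, 5)                       # measurement band from the abstract
    for eps in (2.5 - 0.01j, 2.5 - 0.05j):                  # small permittivity perturbation
        print([round(plate_rcs_dbsm(f, 0.2, 0.2, eps), 1) for f in freqs])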

Keywords: asymptotic, CEM, dielectric scattering, full-wave, measurements, radar cross section, validation

Procedia PDF Downloads 220
624 Colored Image Classification Using Quantum Convolutional Neural Networks Approach

Authors: Farina Riaz, Shahab Abdulla, Srinjoy Ganguly, Hajime Suzuki, Ravinesh C. Deo, Susan Hopkins

Abstract:

Recently, quantum machine learning has received significant attention. For various types of data, including text and images, numerous quantum machine learning (QML) models have been created and are being tested. Images are exceedingly complex data components that demand more processing power. Despite being mature, classical machine learning still has difficulties with big data applications. Furthermore, quantum technology has revolutionized how machine learning is thought of by employing quantum features to address optimization issues. Since quantum hardware is currently extremely noisy, it is not practicable to run machine learning algorithms on it without risking inaccurate results. To discover the advantages of quantum versus classical approaches, this research has concentrated on colored image data. Deep learning classification models are currently being created on quantum platforms, but they are still at a very early stage. Black-and-white benchmark image datasets such as MNIST and Fashion MNIST have been used in recent research. MNIST and CIFAR-10 have been compared for binary classification, and the comparison showed that MNIST was classified more accurately than colored CIFAR-10. This research evaluates the performance of a QML algorithm on the colored benchmark dataset CIFAR-10 to advance QML's real-time applicability. However, quantum deep learning classification models such as the Quantum Convolutional Neural Network (QCNN) have not yet been developed for colored images to determine how much better they are than classical approaches; only a few models, such as quantum variational circuits, take colored images. The methodology adopted in this research is a hybrid approach using PennyLane as a simulator. To process the 10 classes of CIFAR-10, the image data were converted to greyscale, and 28 × 28-pixel images comprising 10,000 test and 50,000 training images were used. The objective of this work is to determine how much the quantum approach can outperform a classical approach on a comprehensive dataset of color images. After pre-processing the 50,000 images on a classical computer, the QCNN model adopted a hybrid method and encoded the images into a quantum simulator for feature extraction using quantum gate rotations. The measurements were carried out on the classical computer after the rotations were applied. According to the results, the QCNN approach is ~12% more effective than traditional classical CNN approaches, and applying data augmentation may further increase the accuracy. This study has demonstrated that quantum machine and deep learning models can be superior to classical machine learning approaches in terms of processing speed and accuracy when used to perform classification on colored classes.
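
A minimal sketch of the kind of hybrid quanvolutional step described above, assuming PennyLane's default.qubit simulator and angle encoding of 2 × 2 greyscale patches; the gate choices, circuit depth, and sizes are illustrative assumptions, not the authors' exact model.

    # Hybrid quanvolution sketch: encode 2x2 greyscale patches into qubit rotations,
    # apply a fixed entangling layer, and read out Pauli-Z expectations as features.
    # Illustrative only; gates and sizes are assumptions, not the study's exact circuit.
    import numpy as np
    import pennylane as qml

    n_qubits = 4
    dev = qml.device("default.qubit", wires=n_qubits)

    @qml.qnode(dev)
    def quanv_patch(pixels):
        # Angle-encode the four pixel intensities (scaled to [0, pi]).
        for w in range(n_qubits):
            qml.RY(np.pi * pixels[w], wires=w)
        # Simple entangling layer.
        for w in range(n_qubits - 1):
            qml.CNOT(wires=[w, w + 1])
        return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]

    def quanv_layer(image):
        """Slide a 2x2 window (stride 2) over a 28x28 image, producing a 14x14x4 feature map."""
        out = np.zeros((14, 14, n_qubits))
        for i in range(0, 28, 2):
            for j in range(0, 28, 2):
                patch = image[i:i + 2, j:j + 2].flatten()
                out[i // 2, j // 2, :] = quanv_patch(patch)
        return out

    features = quanv_layer(np.random.rand(28, 28))   # stand-in for one pre-processed CIFAR-10 image
    print(features.shape)                            # (14, 14, 4) feature map fed to a classical head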

Keywords: CIFAR-10, quantum convolutional neural networks, quantum deep learning, quantum machine learning

Procedia PDF Downloads 100
623 Dual-use UAVs in Armed Conflicts: Opportunities and Risks for Cyber and Electronic Warfare

Authors: Piret Pernik

Abstract:

Based on strategic, operational, and technical analysis of the ongoing armed conflict in Ukraine, this paper will examine the opportunities and risks of using small commercial drones (dual-use unmanned aerial vehicles, UAVs) for military purposes. The paper discusses the opportunities and risks in the information domain, encompassing both cyber and electromagnetic interference and attacks. The paper will draw conclusions on the possible strategic impact of the widespread use of dual-use UAVs on battlefield outcomes in modern armed conflicts. This article will contribute to filling a gap in the literature by examining cyberattacks and electromagnetic interference on the basis of empirical data. Today, more than one hundred states and non-state actors possess UAVs, ranging from low-cost commodity models, many of which are dual-use, available, and affordable to anyone, to high-cost combat UAVs (UCAVs) with lethal kinetic strike capabilities, which can be enhanced with Artificial Intelligence (AI) and Machine Learning (ML). Dual-use UAVs have been used by various actors for intelligence, reconnaissance, surveillance, situational awareness, geolocation, and kinetic targeting. They thus function as force multipliers, enabling kinetic and electronic warfare attacks, and provide comparative and asymmetric operational and tactical advantages. Some go as far as to argue that automated (or semi-automated) systems can change the character of warfare, while others observe that the use of small drones has not changed the balance of power or battlefield outcomes. UAVs offer considerable opportunities for commanders; for example, because they can be operated without GPS navigation, they are less vulnerable to and less dependent on satellite communications. They can and have been used to conduct cyberattacks, electromagnetic interference, and kinetic attacks; however, they are highly vulnerable to those attacks themselves. So far, strategic studies, the literature, and expert commentary have overlooked the cybersecurity and electromagnetic interference dimensions of the use of dual-use UAVs, and studies that link technical analysis of opportunities and risks with strategic battlefield outcomes are missing. It is expected that the proliferation of dual-use commercial UAVs in armed and hybrid conflicts will continue and accelerate in the future. Therefore, it is important to understand the specific opportunities and risks related to the crowdsourced use of dual-use UAVs, which can have kinetic effects. Technical countermeasures to protect UAVs differ depending on the type of UAV (small, midsize, large, stealth combat), and this paper will offer a unique analysis of small UAVs from the perspective of both opportunities and risks for commanders and other actors in armed conflict.

Keywords: dual-use technology, cyber attacks, electromagnetic warfare, case studies of cyberattacks in armed conflicts

Procedia PDF Downloads 81
622 An As-Is Analysis and Approach for Updating Building Information Models and Laser Scans

Authors: Rene Hellmuth

Abstract:

Factory planning has the task of designing products, plants, processes, organization, areas, and the construction of a factory. The requirements for factory planning and the building of a factory have changed in recent years. Regular restructuring of the factory building is becoming more important in order to maintain the competitiveness of a factory. Restrictions in new areas, shorter life cycles of products and production technology, as well as a VUCA world (Volatility, Uncertainty, Complexity and Ambiguity) lead to more frequent restructuring measures within a factory. A building information model (BIM) is the planning basis for rebuilding measures and becomes an indispensable data repository for reacting quickly to changes. Its use as a planning basis for restructuring measures in factories only succeeds if the BIM model has adequate data quality. Under this aspect and given the industrial requirement, three data quality factors are particularly important for this paper regarding the BIM model: up-to-dateness, completeness, and correctness. The research question is: how can a BIM model be kept up to date with the required data quality, and which visualization techniques can be applied within a short period of time on the construction site during conversion measures? An as-is analysis is made of how BIM models and digital factory models (including laser scans) are currently being kept up to date. Industrial companies are interviewed, and expert interviews are conducted. Subsequently, the results are evaluated, and a procedure is conceived for carrying out cost-effective and time-saving updating processes. The availability of low-cost hardware and the simplicity of the process are important for enabling service personnel from facility management to keep digital factory models (BIM models and laser scans) up to date. The approach includes the detection of changes to the building, the recording of the changed area, and its insertion into the overall digital twin. Finally, an overview of the visualization options suitable for construction sites is compiled. An augmented reality application is created based on an updated BIM model of a factory and installed on a tablet. Conversion scenarios with costs and time expenditure are displayed. A user interface is designed in such a way that all relevant conversion information is available at a glance for the respective conversion scenario. A total of three essential research results are achieved: an as-is analysis of current update processes for BIM models and laser scans, the development of a time-saving and cost-effective update process, and the conception and implementation of an augmented reality solution for BIM models suitable for construction sites.
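
One way to approximate the "detect changes, record the changed area, insert into the digital twin" step is a nearest-neighbour comparison between an old and a new laser scan: points in the new scan with no close counterpart in the reference scan mark the changed region. The sketch below shows this with a k-d tree and a distance threshold; it is an assumption about the workflow for illustration, not the update procedure developed in the paper, and the point clouds are synthetic.

    # Rough change detection between two laser scans: points in the new scan that are
    # farther than a threshold from any point in the reference scan are flagged as changed.
    import numpy as np
    from scipy.spatial import cKDTree

    def changed_points(reference_scan, new_scan, threshold_m=0.05):
        """Return the subset of new_scan points with no reference neighbour within threshold_m."""
        tree = cKDTree(reference_scan)
        dist, _ = tree.query(new_scan, k=1)
        return new_scan[dist > threshold_m]

    # Hypothetical data: a flat wall, with a box added in the new scan.
    rng = np.random.default_rng(1)
    wall = np.column_stack([rng.uniform(0, 10, 5000), rng.uniform(0, 3, 5000), np.zeros(5000)])
    box = np.column_stack([rng.uniform(4, 5, 300), rng.uniform(1, 2, 300), rng.uniform(0, 0.5, 300)])
    new_scan = np.vstack([wall + rng.normal(0, 0.005, wall.shape), box])

    changes = changed_points(wall, new_scan)
    print(f"{len(changes)} changed points; bounding box of the changed region:")
    print(changes.min(axis=0), changes.max(axis=0))   # region to re-scan and patch into the BIM/twin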

Keywords: building information modeling, digital factory model, factory planning, restructuring

Procedia PDF Downloads 89
621 Virtual Metering and Prediction of Heating, Ventilation, and Air Conditioning Systems Energy Consumption by Using Artificial Intelligence

Authors: Pooria Norouzi, Nicholas Tsang, Adam van der Goes, Joseph Yu, Douglas Zheng, Sirine Maleej

Abstract:

In this study, virtual meters will be designed and used for energy balance measurements of an air handling unit (AHU). The method aims to replace traditional physical sensors in heating, ventilation, and air conditioning (HVAC) systems with simulated virtual meters. Due to the inability to manage and monitor these systems, many HVAC systems have a high level of inefficiency and energy wastage. Virtual meters are implemented and applied in an actual HVAC system, and the results confirm the practicality of mathematical sensors for alternative energy measurement. While most residential buildings and offices are commonly not equipped with advanced sensors, adding, exploiting, and monitoring sensors and measurement devices in existing systems can cost thousands of dollars. The first purpose of this study is to provide an energy consumption rate based on available sensors and without any physical energy meters, demonstrating that virtual meters can serve as reliable measurement devices in HVAC systems. To demonstrate this concept, mathematical models are created for AHU-07, located in building NE01 of the British Columbia Institute of Technology (BCIT) Burnaby campus. The models are created and integrated with the system's historical data and physical spot measurements, and the actual measurements are investigated to prove the models' accuracy. Based on preliminary analysis, the resulting mathematical models successfully capture energy consumption patterns, and it is concluded with confidence that the results of the virtual meter will be close to those that physical meters could achieve. In the second part of this study, the use of virtual meters is further assisted by artificial intelligence (AI) in the building's HVAC systems to improve energy management and efficiency. Using a data mining approach, the virtual meters' data are recorded as historical data, and HVAC system energy consumption prediction is also implemented in order to harness substantial energy savings and manage the demand and supply chain effectively. Energy prediction can lead to energy-saving strategies and opens a window for predictive control aimed at lower energy consumption. To address these challenges, energy prediction can optimize the HVAC system and automate energy consumption to capture savings. This study also investigates the possibility of AI solutions for autonomous HVAC efficiency that allow a quick and efficient response to energy consumption and cost spikes in the energy market.
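
The core of such a virtual meter is typically a sensible-heat energy balance over the AHU coil, computed from airflow and temperature sensors that already exist in the building automation system. The sketch below shows that calculation with hypothetical sensor readings and standard air properties; it is a simplified illustration, not the calibrated models built for AHU-07.

    # Virtual energy meter for an AHU coil: sensible heat rate from existing airflow
    # and temperature sensors, integrated over time. Hypothetical readings and constants.
    AIR_DENSITY = 1.2       # kg/m^3, approximate at room conditions
    AIR_CP = 1.006          # kJ/(kg*K), specific heat of dry air

    def coil_power_kw(airflow_m3_s, t_in_c, t_out_c):
        """Sensible heat rate Q = m_dot * cp * dT, in kW."""
        mass_flow = AIR_DENSITY * airflow_m3_s
        return mass_flow * AIR_CP * (t_out_c - t_in_c)

    # One hour of 15-minute trend data (airflow in m^3/s, coil inlet/outlet air temperatures in C).
    samples = [
        (4.2, 24.0, 13.5),
        (4.0, 23.5, 13.2),
        (4.5, 24.5, 13.8),
        (4.3, 24.1, 13.6),
    ]
    interval_h = 0.25
    energy_kwh = sum(abs(coil_power_kw(q, ti, to)) * interval_h for q, ti, to in samples)
    print(f"Virtual meter estimate: {energy_kwh:.1f} kWh over the hour")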

Keywords: virtual meters, HVAC, artificial intelligence, energy consumption prediction

Procedia PDF Downloads 81
620 Use of Satellite Altimetry and Moderate Resolution Imaging Technology of Flood Extent to Support Seasonal Outlooks of Nuisance Flood Risk along United States Coastlines and Managed Areas

Authors: Varis Ransibrahmanakul, Doug Pirhalla, Scott Sheridan, Cameron Lee

Abstract:

U.S. coastal areas and ecosystems are facing multiple sea level rise threats and effects: heavy rain events, cyclones, and changing wind and weather patterns all influence coastal flooding, sedimentation, and erosion along critical barrier islands and can strongly impact habitat resiliency and water quality in protected habitats. These impacts are increasing over time and have accelerated the need for new tracking techniques, models, and tools of flood risk to support enhanced preparedness for coastal management and mitigation. To address this issue, the NOAA National Ocean Service (NOS) evaluated new metrics from AVISO/Copernicus satellite altimetry and MODIS IR flood extents to isolate nodes of atmospheric variability indicative of elevated sea level and nuisance flood events. Using de-trended time series of cross-shelf sea surface heights (SSH), we identified specific Self-Organizing Map (SOM) nodes and transitions having the strongest regional association with oceanic spatial patterns (e.g., heightened downwelling-favorable wind stress and enhanced southward coastal transport) indicative of elevated coastal sea levels. Results show the impacts of the inverted barometer effect as well as the effects of surface wind forcing and Ekman-induced transport along broad expanses of the U.S. eastern coastline. Higher sea levels and corresponding localized flooding are associated with either pattern indicative of enhanced onshore flow, deepening cyclones, or local-scale winds, generally coupled with increased local to regional precipitation. These findings will support the integration of satellite products and will inform seasonal outlook model development supported through NOAA's Climate Program Office and the NOS Center for Operational Oceanographic Products and Services (CO-OPS). Overall, the results will prioritize ecological areas and coastal lab facilities at risk based on the number of nuisance floods projected and inform coastal management of flood risk around low-lying areas subject to bank erosion.
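
As a rough illustration of clustering de-trended SSH anomalies into SOM nodes, the sketch below trains a small self-organizing map with the MiniSom package on synthetic anomaly vectors and then inspects which nodes co-occur with high nearshore levels; the grid size, learning parameters, and data are assumptions, not the operational configuration described above.

    # Cluster de-trended sea-surface-height anomaly profiles into SOM nodes (MiniSom).
    # Synthetic data and a small 3x3 grid for illustration only.
    import numpy as np
    from minisom import MiniSom

    rng = np.random.default_rng(2)
    n_days, n_stations = 1000, 12                     # daily cross-shelf SSH anomalies at 12 points
    ssha = rng.normal(size=(n_days, n_stations))
    ssha[:200] += np.linspace(0.3, 0.0, n_stations)   # a few days with an onshore-piled pattern

    som = MiniSom(3, 3, n_stations, sigma=1.0, learning_rate=0.5, random_seed=0)
    som.random_weights_init(ssha)
    som.train_random(ssha, 5000)

    # Map each day to its best-matching node, then see which nodes co-occur with high coastal SSH.
    node_per_day = np.array([som.winner(day) for day in ssha])
    coastal_level = ssha[:, 0]                        # take station 0 as the nearshore gauge
    for node in {tuple(n) for n in node_per_day}:
        mask = (node_per_day == node).all(axis=1)
        print(node, f"mean nearshore anomaly: {coastal_level[mask].mean():+.2f}")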

Keywords: AVISO satellite altimetry SSHA, MODIS IR flood map, nuisance flood, remote sensing of flood

Procedia PDF Downloads 123
619 Bis-Azlactone Based Biodegradable Poly(Ester Amide)s: Design, Synthesis and Study

Authors: Kobauri Sophio, Kantaria Tengiz, Tugushi David, Puiggali Jordi, Katsarava Ramaz

Abstract:

Biodegradable biomaterials (BB) are of high interest for numerous applications in modern medicine as resorbable surgical materials and drug delivery systems. This kind of material can be cleared from the body after fulfilling its function, which excludes a surgical intervention for its removal. Among the most promising BB are amino acid based biodegradable poly(ester amide)s (PEAs), which are composed of naturally occurring and non-toxic building blocks: α-amino acids, fatty diols, and dicarboxylic acids. Key bis-nucleophilic monomers for synthesizing the PEAs are diamine-diesters, namely di-p-toluenesulfonic acid salts of bis-(α-amino acid) alkylene diesters (TAADs), which form the PEAs after step-growth polymerization (polycondensation) with bis-electrophilic counterparts, the activated diesters of dicarboxylic acids. The PEAs combine the advantages of both 'parent polymers', polyesters (PEs) and polyamides (PAs): the ability to biodegrade (PEs), and a high affinity with tissues along with a wide range of desired mechanical properties (PAs). The scope of applications of the PEAs can be substantially expanded by their functionalization, e.g., through the incorporation of hydrophobic fragments into the polymeric backbones. Hydrophobically modified PEAs can form non-covalent adducts with various compounds, which makes them attractive as drug carriers. For the hydrophobic modification of the PEAs, we selected the so-called 'azlactone method', based on the application of p-phenylene-bis-oxazolinones (bis-azlactones, BALs) as active bis-electrophilic monomers in step-growth polymerization with TAADs. Interaction of BALs with TAADs resulted in PEAs with low molecular weights (Mw 2,800-19,600 Da) and poor material properties. High-molecular-weight PEAs (Mw up to 100,000) with desirable material properties were synthesized after replacing a part of the BALs with the activated diester di-p-nitrophenyl sebacate, or a part of the TAAD with the alkylenediamine 1,6-hexamethylenediamine. The new hydrophobically modified PEAs were characterized by FTIR, NMR, GPC, and DSC. It was shown that after the hydrophobic modification the PEAs retain their biodegradability (in vitro study catalyzed by α-chymotrypsin and lipase) and are of interest for constructing resorbable surgical and pharmaceutical devices, including drug-delivering containers such as microspheres. The new PEAs are insoluble in hydrophobic organic solvents such as chloroform or dichloromethane (they only swell), which allowed a new technology for fabricating microspheres to be elaborated.

Keywords: amino acids, biodegradable polymers, bis-azlactones, microspheres

Procedia PDF Downloads 162
618 Modelling the Antecedents of Supply Chain Enablers in Online Groceries Using Interpretive Structural Modelling and MICMAC Analysis

Authors: Rose Antony, Vivekanand B. Khanapuri, Karuna Jain

Abstract:

Online groceries have transformed the way supply chains are managed. They face numerous challenges in terms of product wastage, low margins, long breakeven periods, and low market penetration, to mention a few. E-grocery chains need to overcome these challenges in order to survive the competition. The purpose of this paper is to carry out a structural analysis of the enablers in e-grocery chains by applying Interpretive Structural Modelling (ISM) and MICMAC analysis in the Indian context. The research design is descriptive-explanatory in nature. The enablers have been identified from the literature and through semi-structured interviews conducted among managers with relevant experience in e-grocery supply chains. The experts were contacted through professional/social networks using a purposive snowball sampling technique. The interviews were transcribed, and manual coding was carried out using open and axial coding methods. The key enablers were categorized into themes, and the contextual relationships between these and the performance measures were sought from industry veterans. Using ISM, a hierarchical model of the enablers is developed, and MICMAC analysis identifies their driving and dependence powers. Based on driving-dependence power, the enablers are categorized into four clusters, namely independent, autonomous, dependent, and linkage. The analysis found that information technology (IT) and manpower training act as key enablers for reducing lead time and enhancing online service quality. Many of the enablers fall into the linkage cluster, viz., frequent software updating, branding, the number of delivery boys, order processing, benchmarking, product freshness, and customized applications for different stakeholders, depicting these as critical in online food/grocery supply chains. Considering the perishable nature of the products handled, the impact of the enablers on product quality is also identified. Hence, the study serves as a tool to identify and prioritize the vital enablers in the e-grocery supply chain. The work is perhaps unique in identifying the complex relationships among the supply chain enablers in fresh food for e-groceries and linking them to performance measures. It contributes to the knowledge of supply chain management in general and e-retailing in particular. The approach focuses on fresh food supply chains in the Indian context and hence will be applicable in the context of developing economies, where supply chains are evolving.
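
The mechanics of ISM and MICMAC reduce to operations on a binary reachability matrix: take the transitive closure of the initial relation matrix, then use row sums as driving power and column sums as dependence power to place each enabler in one of the four clusters. The sketch below illustrates this with a small hypothetical relation matrix; the enabler names are borrowed loosely from the abstract and the influence values are invented, not the study's data.

    # ISM/MICMAC mechanics on a toy example: transitive closure of the initial
    # reachability matrix, then driving power (row sum) vs dependence power (column sum)
    # to assign each enabler to the independent/linkage/dependent/autonomous cluster.
    import numpy as np

    enablers = ["IT", "Training", "Branding", "Order processing", "Product freshness"]
    R = np.array([            # R[i][j] = 1 if enabler i influences enabler j (incl. self)
        [1, 1, 0, 1, 0],
        [0, 1, 0, 1, 1],
        [0, 0, 1, 0, 0],
        [0, 0, 1, 1, 1],
        [0, 0, 1, 0, 1],
    ])

    def transitive_closure(m):
        m = m.astype(bool)
        for k in range(len(m)):                 # Warshall's algorithm
            m = m | (m[:, [k]] & m[[k], :])
        return m.astype(int)

    final = transitive_closure(R)
    driving = final.sum(axis=1)
    dependence = final.sum(axis=0)
    mid = len(enablers) / 2

    for name, drv, dep in zip(enablers, driving, dependence):
        if drv > mid and dep > mid:
            cluster = "linkage"
        elif drv > mid:
            cluster = "independent (driver)"
        elif dep > mid:
            cluster = "dependent"
        else:
            cluster = "autonomous"
        print(f"{name:18s} driving={drv} dependence={dep} -> {cluster}")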

Keywords: interpretive structural modelling (ISM), India, online grocery, retail operations, supply chain management

Procedia PDF Downloads 184
617 An Analysis of LoRa Networks for Rainforest Monitoring

Authors: Rafael Castilho Carvalho, Edjair de Souza Mota

Abstract:

As the largest contributor to the biogeochemical functioning of the Earth system, the Amazon Rainforest has the greatest biodiversity on the planet, harboring about 15% of all the world's flora. Recognition and preservation are the focus of research that seeks to mitigate drastic changes, especially anthropic ones, which irreversibly affect this biome. Functional and low-cost monitoring alternatives to reduce these impacts are a priority, such as those using technologies like Low Power Wide Area Networks (LPWAN). Promising, reliable, secure, and with low energy consumption, LPWAN can connect thousands of IoT devices, and in particular, LoRa is considered one of the most successful solutions for forest monitoring applications. Despite this, the forest environment, in particular the Amazon Rainforest, is a challenge for these technologies, requiring work to identify and validate the use of the technology in a real environment. To investigate the feasibility of deploying LPWAN for remote water quality monitoring of rivers in the Amazon Region, a LoRa-based test bed consisting of a LoRa transmitter and a LoRa receiver was set up; both were implemented with Arduino and the SX1276 LoRa chip. The experiment was carried out at the Federal University of Amazonas, which contains one of the largest urban forests in Brazil. There are several springs inside the forest, and the main goal is to collect water quality parameters and transmit the data through the forest in real time to the gateway at the university. In all, there are nine water quality parameters of interest. Even with a high collection frequency, the amount of information that must be sent to the gateway is small. However, for this application, the battery of the transmitter device is a concern, since in the real application the device must run without maintenance for long periods. With these constraints in mind, parameters such as Spreading Factor (SF) and Coding Rate (CR), as well as different antenna heights and distances, were tuned to improve connectivity quality, measured by RSSI and loss rate. A handheld RF Explorer spectrum analyzer was used to obtain the RSSI values. Distances exceeding 200 m soon proved difficult for establishing communication due to the dense foliage and high humidity. The optimal SF-CR combinations were 8-5 and 9-5, showing the lowest packet loss rates, 5% and 17%, respectively, with a signal strength of approximately -120 dBm, these being the best settings for this study so far. Rain and climate changes imposed limitations on the equipment, and more tests are already being conducted. Subsequently, the range of the LoRa configuration must be extended using a mesh topology, especially because at least three different collection points in the same water body are required.
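
One practical trade-off behind the SF 8 versus SF 9 settings is time-on-air, which grows with the spreading factor and directly affects battery life and collision probability. The sketch below evaluates the standard Semtech SX1276 time-on-air formula for a small sensor payload at coding rate 4/5, assuming 125 kHz bandwidth, explicit header, CRC on, and an 8-symbol preamble; these radio settings and the payload size are assumptions, not necessarily the exact test-bed configuration.

    # LoRa time-on-air (Semtech SX1276 formula) for SF8 and SF9 at coding rate 4/5.
    # Bandwidth, header, and payload size are assumptions for illustration.
    import math

    def lora_time_on_air_ms(payload_bytes, sf, cr_code=1, bw_hz=125_000,
                            preamble_syms=8, explicit_header=True, crc_on=True):
        t_sym = (2 ** sf) / bw_hz                      # symbol duration in seconds
        de = 1 if t_sym > 0.016 else 0                 # low-data-rate optimisation (SF11/12 at 125 kHz)
        ih = 0 if explicit_header else 1
        crc = 1 if crc_on else 0
        num = 8 * payload_bytes - 4 * sf + 28 + 16 * crc - 20 * ih
        payload_syms = 8 + max(math.ceil(num / (4 * (sf - 2 * de))) * (cr_code + 4), 0)
        t_preamble = (preamble_syms + 4.25) * t_sym
        return (t_preamble + payload_syms * t_sym) * 1000

    payload = 24   # bytes: nine water-quality readings packed compactly (assumed size)
    for sf in (8, 9):
        print(f"SF{sf}, CR4/5: {lora_time_on_air_ms(payload, sf):.1f} ms on air")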

Keywords: IoT, LPWAN, LoRa, coverage, loss rate, forest

Procedia PDF Downloads 61
616 Evaluation and Preservation of Post-War Concrete Architecture: The Case of Lithuania

Authors: Aušra Černauskienė

Abstract:

The heritage of modern architecture is closely related to the materiality and technology used to implement the buildings. Concrete is one of the most ubiquitous post-war building materials, with enormous aesthetic and structural potential, which architects have used creatively for everyday buildings as well as for exceptional architectural objects that have survived. The material, structural, and architectural development of concrete over the post-war years produced a remarkably rich and diverse typology of buildings, for whose implementation both unique handicraft skills and industrialized novelties were used. Nonetheless, in the opinion of the public, concrete architecture is often treated as ugly and obsolete, and in Lithuania it also carries negative associations with the scarcity of the Soviet era. Moreover, aesthetic non-appreciation is not the only challenge that concrete architecture faces. It also no longer meets contemporary requirements: the buildings are of poor energy class, have little potential for transformation, and have an obsolete surrounding environment. Thus, as young heritage, concrete architecture is not yet sufficiently appreciated by society and heritage specialists, as little time has passed in which to rethink what these buildings mean from a historical perspective. Concrete architecture is often considered ambiguous, but it has its own character and specificity that need to be carefully studied in terms of cultural heritage to avoid the risk of poor renovation or even demolition, a risk that has risen markedly in recent decades in Lithuania. For example, several valuable pieces of post-war concrete architecture, such as the Banga restaurant and the Summer Stage in Palanga, were demolished without their cultural value being understood. Many unique concrete structures and raw concrete surfaces were painted or plastered, with little attention paid to the appearance of the authentic material. Furthermore, this raises a discussion on how to preserve buildings of different typologies: for example, public buildings innovative in their aesthetic and spatial solutions, and mass housing areas built using precast concrete panels. It is evident that the most traditional preservation strategy, conservation, is not the only option for preserving post-war concrete architecture, and more options should be considered. The first step in choosing the right strategy in each case is an appropriate assessment of cultural significance. For this reason, an evaluation matrix for post-war concrete architecture is proposed. In one direction, an analysis of different typological groups of buildings is suggested, with the designation of ownership rights; in the other direction, an analysis of traditional value aspects such as aesthetic and technological ones, together with aspects relevant to modern architecture such as social, economic, and sustainability factors. By examining these parameters together, three relevant scenarios for preserving post-war concrete architecture were distinguished: conservation, renovation, and reuse; they are illustrated using examples of concrete architecture in Lithuania.

Keywords: modern heritage, value aspects, typology, conservation, upgrade, reuse

Procedia PDF Downloads 116