Search results for: real cash
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5327

4307 A Service-Learning Experience in the Subject of Adult Nursing

Authors: Eva de Mingo-Fernández, Lourdes Rubio Rico, Carmen Ortega-Segura, Montserrat Querol-García, Raúl González-Jauregui

Abstract:

Today, one of the great challenges that the university faces is to get closer to society and transfer knowledge. The competency-based training approach favours a continuous interaction between practice and theory, which is why it is essential to establish real experiences with reflection and debate and to contrast them with personal and professional knowledge. Service-learning (SL) consists of an integration of academic learning with service in the community, which enables teachers to transfer knowledge with social value and students to be trained on the basis of experience of real needs and problems, with the aim of solving them. SL combines research, teaching, and social-value knowledge transfer with the real social needs and problems of a community. Goal: The objective of this study was to design, implement, and evaluate a service-learning program in the subject of adult nursing for second-year nursing students. Methodology: After establishing collaboration with eight associations of people with different pathologies, the students were divided into eight groups, and each group was assigned an association. The groups were made up of 10-12 students. The associations willing to participate were for the following conditions: diabetes, multiple sclerosis, cancer, inflammatory bowel disease, fibromyalgia, and heart, lung, and kidney diseases. The methodological design, consisting of five activities, was then applied. Three activities address personal and individual reflections: the student initially describes what they think it is like to live with a certain disease; they then express the reflections resulting from an interview conducted by peers, in person or online, with a person living with this particular condition; and, after sharing the results of their reflections with the rest of the group, they make an oral presentation in which they present their findings to the other students. This is followed by a service task in which the students collaborate in different activities of the association, and finally, a third individual reflection is carried out in which the students express their experience of the collaboration. The evaluation of this activity is carried out by means of a rubric for both the reflections and the presentation. It should be noted that the oral presentation is evaluated both by the rest of the classmates and by the teachers. Results: The evaluation of the activity given by the students is 7.80/10, with comments that the experience is positive and brings them closer to the reality of the people and the area.

Keywords: academic learning integration, knowledge transfer, service-learning, teaching methodology

Procedia PDF Downloads 58
4306 A Picture Is Worth a Billion Bits: Real-Time Image Reconstruction from Dense Binary Pixels

Authors: Tal Remez, Or Litany, Alex Bronstein

Abstract:

The pursuit of smaller pixel sizes at ever increasing resolution in digital image sensors is mainly driven by the stringent price and form-factor requirements of sensors and optics in the cellular phone market. Recently, Eric Fossum proposed a novel concept of an image sensor with dense sub-diffraction limit one-bit pixels (jots), which can be considered a digital emulation of silver halide photographic film. This idea has been recently embodied as the EPFL Gigavision camera. A major bottleneck in the design of such sensors is the image reconstruction process, producing a continuous high dynamic range image from oversampled binary measurements. The extreme quantization of the Poisson statistics is incompatible with the assumptions of most standard image processing and enhancement frameworks. The recently proposed maximum-likelihood (ML) approach addresses this difficulty, but suffers from image artifacts and has impractically high computational complexity. In this work, we study a variant of a sensor with binary threshold pixels and propose a reconstruction algorithm combining an ML data fitting term with a sparse synthesis prior. We also show an efficient hardware-friendly real-time approximation of this inverse operator. Promising results are shown on synthetic data as well as on HDR data emulated using multiple exposures of a regular CMOS sensor.
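A minimal sketch of the idea behind the data-fitting step, assuming one-bit jots with a unit photon threshold; it illustrates only the maximum-likelihood intensity estimate from oversampled binary measurements, not the sparse synthesis prior or the hardware-friendly approximation described above, and all parameter values are hypothetical.

```python
# Illustrative sketch (not the authors' code): maximum-likelihood estimation of
# light intensity from dense one-bit (jot) measurements with unit threshold.
# A jot fires when its Poisson photon count is >= 1, so P(jot = 1) = 1 - exp(-lambda);
# the block-wise ML estimate is therefore lambda_hat = -ln(1 - fraction_of_ones).
import numpy as np

rng = np.random.default_rng(0)

def simulate_jots(intensity, oversampling=16):
    """Emulate a block of one-bit jots observing a constant intensity."""
    photons = rng.poisson(intensity, size=oversampling * oversampling)
    return (photons >= 1).astype(np.uint8)

def ml_estimate(jots):
    """Block-wise ML intensity estimate from the fraction of fired jots."""
    p = np.clip(jots.mean(), 1e-6, 1 - 1e-6)  # avoid log(0)
    return -np.log1p(-p)

true_intensity = 0.7   # mean photons per jot (hypothetical value)
jots = simulate_jots(true_intensity)
print(f"true={true_intensity:.3f}  ML estimate={ml_estimate(jots):.3f}")
```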

Keywords: binary pixels, maximum likelihood, neural networks, sparse coding

Procedia PDF Downloads 191
4305 Distinct Patterns of Resilience Identified Using Smartphone Mobile Experience Sampling Method (M-ESM) and a Dual Model of Mental Health

Authors: Hussain-Abdulah Arjmand, Nikki S. Rickard

Abstract:

The response to stress can be highly heterogeneous and may be influenced by methodological factors. The integrity of data is optimized by measuring both positive and negative affective responses to an event, by measuring responses in real time as close to the stressful event as possible, and by utilizing data collection methods that do not interfere with naturalistic behaviours. The aim of the current study was to explore short-term prototypical responses to major stressor events on outcome measures encompassing both positive and negative indicators of psychological functioning. A novel mobile experience sampling methodology (m-ESM) was utilized to monitor affective responses to stressors in real time. A smartphone mental health app (‘Moodprism’), which prompts users daily to report both their positive and negative mood, as well as whether any significant event had occurred in the past 24 hours, was developed for this purpose. A sample of 142 participants was recruited as part of the promotion of this app. Participants’ daily reported experience of stressor events, levels of depressive symptoms, and positive affect were collected across a 30-day period as they used the app. For each participant, major stressor events were identified based on the subjective severity of the event rated by the user. Depression and positive affect ratings were extracted for the three days following the event. Responses to the event were scaled relative to the participant's general reactivity across the remainder of the 30-day period. Participants were first clustered into groups based on initial reactivity and subsequent recovery following a stressor event. This revealed distinct patterns of responding in depressive symptomatology and positive affect. Participants were then grouped based on their allocations to clusters in each outcome variable. A highly individualised pattern of responding to stressor events, in both symptoms of depression and levels of positive affect, was observed. A complete description of the novel profiles identified will be presented at the conference. These findings suggest that real-time measurement of both positive and negative functioning after stressors yields a more complex set of responses than previously observed with retrospective reporting. The use of smartphone technology to measure individualised responding also proved to shed significant insight.
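A minimal sketch of the clustering step described above: participants are grouped by initial reactivity and subsequent recovery after a stressor event. The two features, their synthetic values, and the number of clusters are illustrative assumptions, not the study's data or method details.

```python
# Illustrative sketch: cluster participants by reactivity/recovery profiles.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Columns: [initial reactivity, recovery slope over the following days] (z-scored)
profiles = np.vstack([
    rng.normal([2.0, -1.5], 0.3, size=(50, 2)),   # react strongly, recover quickly
    rng.normal([2.0, 0.0], 0.3, size=(50, 2)),    # react strongly, stay elevated
    rng.normal([0.3, 0.0], 0.3, size=(42, 2)),    # barely react (resilient pattern)
])
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(profiles)
print("participants per cluster:", np.bincount(labels))
```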

Keywords: depression, experience sampling methodology, positive functioning, resilience

Procedia PDF Downloads 232
4304 Enhancing Financial Security: Real-Time Anomaly Detection in Financial Transactions Using Machine Learning

Authors: Ali Kazemi

Abstract:

The digital evolution of financial services, while offering unprecedented convenience and accessibility, has also escalated the vulnerabilities to fraudulent activities. In this study, we introduce a distinct approach to real-time anomaly detection in financial transactions, aiming to fortify the defenses of banking and financial institutions against such threats. Utilizing unsupervised machine learning algorithms, specifically autoencoders and isolation forests, our research focuses on identifying irregular patterns indicative of fraud within transactional data, thus enabling immediate action to prevent financial loss. The data used in this study included the monetary value of each transaction, a crucial feature because fraudulent transactions may follow different amount distributions than legitimate ones; timestamps indicating when transactions occurred, since analyzing transactions' temporal patterns can reveal anomalies (e.g., unusual activity in the middle of the night); the sector or category of the merchant where the transaction occurred (retail, groceries, online services, etc.), since specific categories may be more prone to fraud; and the type of payment used (e.g., credit, debit, online payment systems), since different payment methods have varying risk levels associated with fraud. This dataset, anonymized to ensure privacy, reflects a wide array of transactions typical of a global banking institution, ranging from small-scale retail purchases to large wire transfers, embodying the diverse nature of potentially fraudulent activities. By engineering features that capture the essence of transactions, including normalized amounts and encoded categorical variables, we tailor our data to enhance model sensitivity to anomalies. The autoencoder model leverages its reconstruction error mechanism to flag transactions that deviate significantly from the learned normal pattern, while the isolation forest identifies anomalies based on their susceptibility to isolation from the dataset's majority. Our experimental results, validated through techniques such as k-fold cross-validation, are evaluated using precision, recall, and the F1 score alongside the area under the receiver operating characteristic (ROC) curve. Our models achieved an F1 score of 0.85 and a ROC AUC of 0.93, indicating high accuracy in detecting fraudulent transactions without excessive false positives. This study contributes to the academic discourse on financial fraud detection and provides a practical framework for banking institutions seeking to implement real-time anomaly detection systems. By demonstrating the effectiveness of unsupervised learning techniques in a real-world context, our research offers a pathway to significantly reduce the incidence of financial fraud, thereby enhancing the security and trustworthiness of digital financial services.
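A minimal sketch of the two unsupervised detectors named above, using scikit-learn on synthetic data; the feature set, thresholds, and the autoencoder-style network (a small MLP trained to reconstruct its input) are illustrative assumptions, not the study's actual dataset or architecture.

```python
# Illustrative sketch: flag anomalous transactions with an isolation forest and an
# autoencoder-style reconstruction error (synthetic data, hypothetical features).
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
# Hypothetical features: amount, hour of day, encoded merchant category.
normal = rng.normal([50, 14, 2], [20, 4, 1], size=(5000, 3))
fraud = rng.normal([900, 3, 7], [300, 1, 1], size=(25, 3))
X = StandardScaler().fit_transform(np.vstack([normal, fraud]))

# Isolation forest: anomalies are easier to isolate, so they receive label -1.
iso = IsolationForest(contamination=0.01, random_state=0).fit(X)
iso_flags = iso.predict(X) == -1

# Autoencoder-style detector: an MLP trained to reconstruct its own input;
# transactions with large reconstruction error deviate from the learned pattern.
ae = MLPRegressor(hidden_layer_sizes=(2,), max_iter=2000, random_state=0).fit(X, X)
errors = np.mean((X - ae.predict(X)) ** 2, axis=1)
ae_flags = errors > np.quantile(errors, 0.99)

print("isolation forest flagged:", iso_flags.sum(), "autoencoder flagged:", ae_flags.sum())
```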

Keywords: anomaly detection, financial fraud, machine learning, autoencoders, isolation forest, transactional data analysis

Procedia PDF Downloads 44
4303 Detection and Distribution Pattern of Prevalent Genotypes of Hepatitis C in a Tertiary Care Hospital of Western India

Authors: Upasana Bhumbla

Abstract:

Background: Hepatitis C virus is a major cause of chronic hepatitis, which can further lead to cirrhosis of the liver and hepatocellular carcinoma. Worldwide, the burden of hepatitis C infection has become a serious threat to the human race. Hepatitis C virus (HCV) has population-specific genotypes, which provide valuable epidemiological and therapeutic information. Genotyping and assessment of viral load in HCV patients are important for planning therapeutic strategies. The aim is to study the changing trends in the prevalence and genotypic distribution of hepatitis C virus in a tertiary care hospital in Western India. Methods: In this retrospective study, blood samples were collected and tested for anti-HCV antibodies by ELISA in the Department of Microbiology. In seropositive hepatitis C patients, quantification of HCV-RNA was done by real-time PCR, and in HCV-RNA-positive samples, genotyping was conducted. Results: A total of 114 patients who were seropositive for anti-HCV were recruited in the study, out of which 79 (69.29%) were HCV-RNA positive. Out of these positive samples, 54 were further subjected to genotype determination using real-time PCR. Genotype was not detected in 24 samples due to low viral load; 30 samples were positive for genotype. Conclusion: Knowledge of genotype is crucial for the management of HCV infection and prediction of prognosis. Patients infected with HCV genotypes 1 and 4 have to receive interferon and ribavirin for 48 weeks. Patients with these genotypes show a poor sustained viral response when tested 24 weeks after completion of therapy. On the contrary, patients infected with HCV genotypes 2 and 3 are reported to have a better response to therapy.

Keywords: hepatocellular, genotype, ribavarin, seropositive

Procedia PDF Downloads 123
4302 CyberSteer: Cyber-Human Approach for Safely Shaping Autonomous Robotic Behavior to Comply with Human Intention

Authors: Vinicius G. Goecks, Gregory M. Gremillion, William D. Nothwang

Abstract:

Modern approaches to train intelligent agents rely on prolonged training sessions, high amounts of input data, and multiple interactions with the environment. This restricts the application of these learning algorithms in robotics and real-world applications, in which there is low tolerance to inadequate actions, interactions are expensive, and real-time processing and action are required. This paper addresses this issue by introducing CyberSteer, a novel approach to efficiently design intrinsic reward functions based on human intention to guide deep reinforcement learning agents with no environment-dependent rewards. CyberSteer uses non-expert human operators for initial demonstration of a given task or desired behavior. The trajectories collected are used to train a behavior cloning deep neural network that asynchronously runs in the background and suggests actions to the deep reinforcement learning module. An intrinsic reward is computed based on the similarity between the actions suggested and those taken by the deep reinforcement learning algorithm commanding the agent. This intrinsic reward can also be reshaped through additional human demonstration or critique. This approach removes the need for environment-dependent or hand-engineered rewards while still being able to safely shape the behavior of autonomous robotic agents, in this case, based on human intention. CyberSteer is tested in a high-fidelity unmanned aerial vehicle simulation environment, Microsoft AirSim. The simulated aerial robot performs collision avoidance through a clustered forest environment using forward-looking depth sensing and roll, pitch, and yaw reference angle commands to the flight controller. This approach shows that the behavior of robotic systems can be shaped in a reduced amount of time when guided by a non-expert human who is only aware of the high-level goals of the task. Decreasing the amount of training time required and increasing safety during training maneuvers will allow for faster deployment of intelligent robotic agents in dynamic real-world applications.
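A minimal sketch of the intrinsic-reward idea described above: the reward grows with the similarity between the action suggested by the behavior-cloning network and the action actually taken by the reinforcement learning policy. The exponential-of-distance form, the scale parameter, and the example values are assumptions for illustration, not the paper's exact reward expression.

```python
# Illustrative sketch: intrinsic reward from the similarity between the action
# suggested by a behavior-cloning model and the action taken by the RL agent.
import numpy as np

def intrinsic_reward(action_taken, action_suggested, scale=1.0):
    """Reward is 1 when the actions match and decays with their distance."""
    distance = np.linalg.norm(np.asarray(action_taken) - np.asarray(action_suggested))
    return float(np.exp(-distance / scale))

# Roll, pitch, yaw reference commands (radians), hypothetical values.
suggested = [0.10, -0.05, 0.00]   # from the behavior-cloning network
taken = [0.12, -0.02, 0.01]       # chosen by the deep RL policy
print(f"intrinsic reward = {intrinsic_reward(taken, suggested):.3f}")
```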

Keywords: human-robot interaction, intelligent robots, robot learning, semisupervised learning, unmanned aerial vehicles

Procedia PDF Downloads 251
4301 Hybrid Data-Driven Drilling Rate of Penetration Optimization Scheme Guided by Geological Formation and Historical Data

Authors: Ammar Alali, Mahmoud Abughaban, William Contreras Otalvora

Abstract:

Optimizing the drilling process for cost and efficiency requires optimization of the rate of penetration (ROP). ROP is the measure of the speed at which the wellbore is created, in units of feet per hour, and is the primary indicator of drilling efficiency. Maximization of the ROP can indicate fast and cost-efficient drilling operations; however, high ROPs may induce unintended events, which may lead to nonproductive time (NPT) and higher net costs. The proposed ROP optimization solution is a hybrid, data-driven system that aims to improve the drilling process, maximize the ROP, and minimize NPT. The system consists of two phases: (1) utilizing existing geological and drilling data to train the model beforehand, and (2) real-time adjustment of the controllable dynamic drilling parameters [weight on bit (WOB), rotary speed (RPM), and pump flow rate (GPM)] that directly influence the ROP. During the first phase of the system, geological and historical drilling data are aggregated. Then, the top-rated wells, as a function of high ROP instances, are distinguished. Those wells are filtered based on NPT incidents, and a cross-plot is generated for the controllable dynamic drilling parameters per ROP value. Subsequently, the parameter values (WOB, GPM, RPM) are calculated as a conditioned mean based on physical distance, following the Inverse Distance Weighting (IDW) interpolation methodology. The first phase is concluded by producing a model of drilling best practices from the offset wells, prioritizing the optimum ROP value. This phase is performed before drilling commences. Starting with the model produced in phase one, the second phase runs an automated drill-off test, delivering live adjustments in real time. Those adjustments are made by directing the driller to deviate two of the controllable parameters (WOB and RPM) by a small percentage (0-5%), following the Constrained Random Search (CRS) methodology. These minor incremental variations reveal new drilling conditions not explored before through offset wells. The data is then consolidated into a heat map as a function of ROP. A more optimum ROP performance is identified through the heat map and amended in the model. The validation process involved the selection of a planned well in an onshore oil field with hundreds of offset wells. The first-phase model was built by utilizing the data points from the top-performing historical wells (20 wells). The model allows drillers to enhance decision-making by leveraging existing data and blending it with live data in real time. An empirical relationship between the controllable dynamic parameters and ROP was derived using Artificial Neural Networks (ANN). The adjustments resulted in improved ROP efficiency by over 20%, translating to at least 10% savings in drilling costs. The novelty of the proposed system lies in its ability to integrate historical data, calibrate based on geological formations, and run real-time global optimization through CRS. Those factors position the system to work for any newly drilled well in a developing field.
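A minimal sketch of the Inverse Distance Weighting step from phase one: the controllable parameters (WOB, RPM, GPM) at a target ROP are estimated as a distance-weighted mean of offset-well data points. All numerical values, the power exponent, and the ROP-space distance metric are hypothetical assumptions for illustration.

```python
# Illustrative sketch of the Inverse Distance Weighting (IDW) step: estimate the
# controllable parameters (WOB, RPM, GPM) at a target ROP as a distance-weighted
# conditioned mean of offset-well data points. All numbers are hypothetical.
import numpy as np

def idw(target_rop, rops, params, power=2.0, eps=1e-9):
    """Conditioned mean of parameter vectors, weighted by inverse distance in ROP."""
    rops = np.asarray(rops, dtype=float)
    params = np.asarray(params, dtype=float)
    weights = 1.0 / (np.abs(rops - target_rop) ** power + eps)
    return weights @ params / weights.sum()

# Offset-well observations: ROP (ft/hr) -> [WOB (klbf), RPM, GPM]
rops = [45.0, 60.0, 72.0, 90.0]
params = [[18, 120, 650],
          [22, 140, 700],
          [25, 150, 720],
          [30, 165, 780]]
wob, rpm, gpm = idw(target_rop=80.0, rops=rops, params=params)
print(f"WOB={wob:.1f} klbf, RPM={rpm:.0f}, GPM={gpm:.0f}")
```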

Keywords: drilling optimization, geological formations, machine learning, rate of penetration

Procedia PDF Downloads 123
4300 Evaluation of Real-Time Background Subtraction Technique for Moving Object Detection Using Fast-Independent Component Analysis

Authors: Naoum Abderrahmane, Boumehed Meriem, Alshaqaqi Belal

Abstract:

Background subtraction is a widely used technique for detecting moving objects in video surveillance by extracting the foreground objects from a reference background image. A good background subtraction algorithm must handle many challenges, such as changes in illumination, dynamic backgrounds (swinging leaves, rain, snow), and changes in the background itself, for example, vehicles that move and stop. In this paper, we propose an efficient and accurate background subtraction method for moving object detection in video surveillance. The main idea is to use a modified fast independent component analysis (fast-ICA) algorithm to separate background, noise, and foreground masks from an image sequence in practical environments. The fast-ICA algorithm is adapted and adjusted, through a matrix calculation and a search for an optimal non-quadratic function, to be faster and more robust. Moreover, in order to estimate the de-mixing matrix and the denoising de-mixing matrix parameters, we propose to convert all images to the YCrCb color space, where the luma component Y (brightness of the color) gives suitable results. The proposed technique has been verified on the publicly available datasets CDnet 2012 and CDnet 2014, and experimental results show that our algorithm can detect moving objects competently and accurately in challenging conditions compared to other methods in the literature, in terms of quantitative and qualitative evaluations, at a real-time frame rate.
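A minimal sketch of the general idea, assuming the frames are already the luma (Y) channel: an off-the-shelf FastICA decomposition separates a short, synthetic frame sequence into independent component images. The authors' adapted matrix calculation, their contrast-function search, and the de-mixing/denoising parameter estimation are not reproduced here.

```python
# Illustrative sketch (not the authors' adapted algorithm): separate a short frame
# sequence into independent component images with scikit-learn's FastICA.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(1)
h, w, n_frames = 32, 32, 12
background = rng.uniform(80, 120, size=(h, w))
frames = []
for t in range(n_frames):
    frame = background.copy()
    frame[10:14, (2 * t) % w:(2 * t) % w + 4] += 100.0       # synthetic moving object
    frames.append(frame.ravel() + rng.normal(0, 2, h * w))   # sensor noise

X = np.stack(frames)                      # shape: (frames, pixels)
ica = FastICA(n_components=3, random_state=0, fun="cube", max_iter=500)
sources = ica.fit_transform(X.T)          # pixels as samples, frames as features
print("component images shape:", sources.reshape(h, w, -1).shape)
```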

Keywords: background subtraction, moving object detection, fast-ICA, de-mixing matrix

Procedia PDF Downloads 88
4299 Evaluation of Neonicotinoids Against Sucking Insect Pests of Cotton in Laboratory and Field Conditions

Authors: Muhammad Sufyan, Muhammad D. Gogi, Muhammad Arshad, Ahmad Nawaz, Muhammad Usman

Abstract:

Cotton (Gossypium hirsutum), universally known as silver fiber, is one of the most important cash crops of Pakistan. A wide array of pests constrains cotton production, among which sucking insect pests cause serious losses. Mostly, new-chemistry insecticides are used to control a wide variety of insect pests, including sucking insect pests. In the present study, the efficacy of different neonicotinoids was evaluated against sucking insect pests of cotton in the field and, for the red and dusky cotton bugs, in the laboratory. The experiment was conducted at the Entomology Research Station, University of Agriculture Faisalabad, in a Randomized Complete Block Design (RCBD). A field trial was conducted to evaluate the efficacy of Confidence Ultra (imidacloprid) 70% SL, Confidor (imidacloprid) 20% SL, Kendo (lambda-cyhalothrin) 24.7 SC, Actara (thiamethoxam) 25% WG, Forcast (tebufenozide + emamectin benzoate) 8.8 EW, and Timer (emamectin benzoate) 1.9 EC at their recommended doses. Data were collected on a per-leaf basis for thrips, aphid, jassid, and whitefly 24 hours before spraying. The post-treatment data were recorded after 24, 48, and 72 hours. Fresh, non-infested, and untreated cotton leaves were collected from the field and brought to the laboratory to assess the efficacy of the neonicotinoids against the red and dusky cotton bugs. After data analysis, all the insecticides were found to be effective against sucking pests. Confidence Ultra was highly effective against aphid, jassid, and whitefly and gave maximum mortality, while showing non-significant results against thrips. In the case of aphid, the plot treated with Kendo 24.7 SC showed significant mortality 72 hours after pesticide application. Similar trends were found under laboratory conditions with all these treatments at different concentrations, which had a significant impact on dusky cotton bug and red cotton bug populations 24, 48, and 72 hours after application.

Keywords: cotton, laboratory and field conditions, neonicotinoids, sucking insect pests

Procedia PDF Downloads 231
4298 Research on Diesel Bus Emissions in Ulaanbaatar City, Mongolia

Authors: Tsetsegmaa A., Bayarsuren B., Altantsetseg Ts.

Abstract:

To make the best decisions on reducing harmful emissions from buses, we need to have a clear understanding of the current state of their actual emissions. The emissions from city buses running on high-sulfur fuel, particularly particulate matter (PM) and nitrogen oxides (NOx) from the exhaust gases of conventional diesel engines, have been studied and measured with and without a diesel particulate filter (DPF) in Ulaanbaatar city. The study was conducted using PEMS (Portable Emissions Measurement System) and the gravimetric method in real traffic conditions. The obtained data were used to determine the actual emission rates and to evaluate the effectiveness of the selected particulate filters. Actual road and daily PM emissions from city buses were determined during the warm and cold seasons. A bus with an average daily mileage of 242 km was found to emit 166.155 g of PM into the city's atmosphere on average per day, with 141.3 g in summer and 175.8 g in winter. The actual PM emission of a city bus is 0.6866 g/km. The concentration of NOx in the exhaust gas averages 1410.94 ppm. The use of DPF reduced the exhaust gas opacity of 24 buses by an average of 97% and filtered a total of 340.4 kg of soot from these buses over a period of six months. Retrofitting an old conventional diesel engine with a cassette-type silicon carbide (SiC) DPF, despite the laboriousness of cleaning, can significantly reduce particulate matter emissions. Innovation: the first comprehensive on-road PM and NOx emission dataset has been compiled, and the actual road emissions from public buses have been identified. PM and NOx mathematical model equations have been estimated as a function of bus technical speed and engine revolutions, with and without DPF.
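A quick consistency check of the per-kilometre figure quoted above, using the reported daily averages.

```python
# Consistency check: average daily PM emission divided by average daily mileage
# should reproduce the reported per-kilometre value.
daily_pm_g = 166.155      # g/day (reported average)
daily_km = 242.0          # km/day (reported average mileage)
print(f"PM per km = {daily_pm_g / daily_km:.4f} g/km")   # ~0.6866 g/km, as reported
```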

Keywords: conventional diesel, silicon carbide, real-time onboard measurements, particulate matter, diesel retrofit, fuel sulphur

Procedia PDF Downloads 149
4297 Design of an Improved Distributed Framework for Intrusion Detection System Based on Artificial Immune System and Neural Network

Authors: Yulin Rao, Zhixuan Li, Burra Venkata Durga Kumar

Abstract:

Intrusion detection refers to monitoring the actions of internal and external intruders on a system and detecting behaviours that violate security policies in real time. In intrusion detection, there has been much discussion about the application of neural network technology and artificial immune systems (AIS). However, many solutions use static methods (signature-based and stateful protocol analysis) or centralized intrusion detection systems (CIDS), which are unsuitable for real-time intrusion detection systems that need to process large amounts of data and detect unknown intrusions. This article proposes a framework for a distributed intrusion detection system (DIDS) with multiple agents, based on the concepts of AIS and neural network technology, to detect anomalies and intrusions. In this framework, multiple agents are assigned to each host and work together, improving the system's detection efficiency and robustness. The trainer agent in the central server of the framework uses an artificial neural network (ANN), rather than the negative selection algorithm of AIS, to generate mature detectors. Mature detectors can distinguish between self-files and non-self-files after learning. Our analyzer agents use genetic algorithms to generate memory-cell detectors. This kind of detector will effectively reduce false positive and false negative errors and act quickly on known intrusions.
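A minimal sketch of the trainer agent's core idea, a neural network learning to separate "self" (normal) from "non-self" (intrusive) activity vectors in place of negative selection. The features and data are synthetic, and the genetic-algorithm step that produces memory-cell detectors is not modelled here.

```python
# Illustrative sketch: a small neural network standing in for the trainer agent's
# detector generation, separating self (normal) from non-self (intrusive) vectors.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(7)
self_traffic = rng.normal(0.0, 1.0, size=(1000, 8))   # normal behaviour features
non_self = rng.normal(3.0, 1.5, size=(100, 8))         # anomalous behaviour features
X = np.vstack([self_traffic, non_self])
y = np.r_[np.zeros(len(self_traffic)), np.ones(len(non_self))]

detector = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
detector.fit(X, y)
print("training accuracy:", round(detector.score(X, y), 3))
```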

Keywords: artificial immune system, distributed artificial intelligence, multi-agent, intrusion detection system, neural network

Procedia PDF Downloads 101
4296 An Effective Method for Fostering Thinking Dispositions of Learners

Authors: H. Jalahi, A. Yazdanpanah Nozari

Abstract:

Background and Purpose: Assessment of learners' performance is an important factor in the teaching-learning process. When a factor is sensitive and has a high influence on life, its assessment should be done precisely. Thinking dispositions are very important factors in medical education because of its specific conditions. In this study, a model is designed for fostering thinking dispositions of learners, in which authentic assessment is an important element. Materials and Methods: In terms of its objective, this research is developmental, and such a model had not previously been designed for curricula. Data were collected by comparing approaches to assessment and analyzing current assessments, which led to applied proposals. Results: Based on the research findings, current assessments are response-based; that is, instead of the product of a response, students only offer the specific response which the teacher expects. Authentic assessment, by contrast, is a form of assessment in which students are asked to perform real-world tasks that demonstrate meaningful application of essential knowledge and skills. Conclusion: Because of the difficulties and unexpected problems in life, individuals' need for lifelong learning, and the conditions of medical courses that require decision-making at specific times, we must pay attention to fostering thinking dispositions, and they should be included in the curriculum. Authentic assessment, as an important aspect of the curriculum, can help foster the thinking dispositions of learners. Such assessments, which focus on the application of information and skills to solve real-world tasks, play an especially important role in medical courses.

Keywords: assessment, authentic, medical courses, developmental

Procedia PDF Downloads 357
4295 Effect of Needle Height on Discharge Coefficient and Cavitation Number

Authors: Mohammadreza Nezamirad, Sepideh Amirahmadian, Nasim Sabetpour, Azadeh Yazdi, Amirmasoud Hamedi

Abstract:

Cavitation inside a diesel injector nozzle is investigated using the Reynolds-averaged Navier-Stokes equations. The Schnerr-Sauer cavitation model is used for modeling cavitation inside the diesel injector nozzle. The carrying fluid utilized in the current study is diesel fuel. The flow is first verified by comparing it with previous experimental data, and it was found that the k-epsilon turbulence model leads to better accuracy than the k-omega turbulence model. Moreover, the mass flow rate obtained numerically is compared with the experimental value, and the discrepancy was found to be less than 5 percent, which shows the accuracy of the current results. Finally, a real-size four-hole nozzle is investigated, and the flow inside it is visualized based on velocity profile, discharge coefficient, and cavitation number. It was found that the mesh density could be reduced significantly by utilizing periodic boundary conditions. The velocity contour at the mid-nozzle showed that the maximum velocity occurs at the end of the needle before entering the orifice area. Last but not least, at the same boundary conditions, when different needle heights were utilized, it was found that as needle height increases, cavitation number and discharge coefficient increase, and these increases are more pronounced at smaller needle heights.
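For reference, one common convention for the two quantities named above; definitions of the cavitation number in particular vary across the nozzle-flow literature, so these formulas and the operating-point values are illustrative assumptions, not necessarily the ones used in the study.

```python
# Common textbook definitions of the two quantities discussed above (conventions
# vary; illustrative only, with a hypothetical diesel-injector operating point).
import math

def discharge_coefficient(m_dot, area, rho, delta_p):
    """Cd = actual mass flow rate / ideal (Bernoulli) mass flow rate."""
    return m_dot / (area * math.sqrt(2.0 * rho * delta_p))

def cavitation_number(p_inj, p_back, p_vap):
    """K = (P_inj - P_vap) / (P_inj - P_back); values near 1 favour cavitation."""
    return (p_inj - p_vap) / (p_inj - p_back)

print("Cd =", round(discharge_coefficient(m_dot=0.0045, area=2.0e-8, rho=830.0,
                                           delta_p=60e6), 3))
print("K  =", round(cavitation_number(p_inj=80e6, p_back=5e6, p_vap=2.0e3), 3))
```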

Keywords: cavitation, diesel fuel, CFD, real size nozzle, mass flow rate

Procedia PDF Downloads 139
4294 The Foundation of Binary-Signals Mechanics and the Actual-Information Model of the Universe

Authors: Elsadig Naseraddeen Ahmed Mohamed

Abstract:

In contrast to the uncertainty and complementarity principles, it will be shown in the present paper that the probability of the simultaneous occupation of any definite values of coordinates by any definite values of momentum and energy at any definite instant of time can be described by a binary definite function. This function is equivalent to the difference between the numbers of occupation and evacuation epochs up to that time, and also to the number of exchanges between those occupation and evacuation epochs up to that time, modulo two. These binary definite quantities can be defined at all points on the real line of time, so they form a binary signal representing a complete mechanical description of physical reality. The times of these exchanges represent the boundaries of the occupation and evacuation epochs, from which the binary signals can be calculated, using the fact that the universe's events actually extend along the positive and negative real line of time in one direction of extension as the number of exchanges increases. There therefore exists a noninvertible transformation matrix, defined as the matrix product of an invertible rotation matrix and a noninvertible scaling matrix, which change the direction and magnitude of the exchange event vector, respectively. These noninvertible transformations will be called actual transformations, in contrast to information transformations, by which we can navigate the universe's events (transformed by actual transformations) backward and forward along the real line of time; these information transformations will be derived as elements of a group that can be associated with their corresponding actual transformations. The actual and information model of the universe will be derived by assuming the existence of a time instant zero, before and at which no coordinate is occupied by any definite values of momentum and energy, after which the universe begins its expansion in spacetime. This assumption makes superfluous the need for Laplace's demon, who at one moment could measure the positions and momenta of all constituent particles of the universe and then use the laws of classical mechanics to predict all future and past events of the universe. We only need to establish analog-to-digital converters to sense the binary signals that determine the boundaries of the occupation and evacuation epochs of the definite values of coordinates, relative to their origin, by the definite values of momentum and energy as present events of the universe; from them we can predict its past and future events approximately, with high precision.

Keywords: binary-signal mechanics, actual-information model of the universe, actual-transformation, information-transformation, uncertainty principle, Laplace's demon

Procedia PDF Downloads 170
4293 Task Validity in Neuroimaging Studies: Perspectives from Applied Linguistics

Authors: L. Freeborn

Abstract:

Recent years have seen an increasing number of neuroimaging studies related to language learning as imaging techniques such as fMRI and EEG have become more widely accessible to researchers. By using a variety of structural and functional neuroimaging techniques, these studies have already made considerable progress in terms of our understanding of neural networks and processing related to first and second language acquisition. However, the methodological designs employed in neuroimaging studies to test language learning have been questioned by applied linguists working within the field of second language acquisition (SLA). One of the major criticisms is that tasks designed to measure language learning gains rarely have a communicative function, and seldom assess learners’ ability to use the language in authentic situations. This brings the validity of many neuroimaging tasks into question. The fundamental reason why people learn a language is to communicate, and it is well-known that both first and second language proficiency are developed through meaningful social interaction. With this in mind, the SLA field is in agreement that second language acquisition and proficiency should be measured through learners’ ability to communicate in authentic real-life situations. Whilst authenticity is not always possible to achieve in a classroom environment, the importance of task authenticity should be reflected in the design of language assessments, teaching materials, and curricula. Tasks that bear little relation to how language is used in real-life situations can be considered to lack construct validity. This paper first describes the typical tasks used in neuroimaging studies to measure language gains and proficiency, then analyses to what extent these tasks can validly assess these constructs.

Keywords: neuroimaging studies, research design, second language acquisition, task validity

Procedia PDF Downloads 122
4292 Augmented Reality for Children Vocabulary Learning: Case Study in a Macau Kindergarten

Authors: R. W. Chan, Kan Kan Chan

Abstract:

Augmented Reality (AR), with the affordance of bridging the real world and the virtual world, brings users an immersive experience. It has gradually been applied in education and has even come into practice in students' daily learning. However, a systematic review shows that there is limited research in the area of vocabulary acquisition in early childhood education. Since kindergarten is a key stage at which children acquire language, and AR is an emerging technology with the potential to support vocabulary acquisition, this study aims to explore its value in a real classroom from the teacher's view. Participants were a class of 5- to 6-year-old children studying in a Macau school that follows the Cambridge curriculum and emphasizes a multicultural ethos. There were 11 boys and 13 girls, 24 children in total. They learnt animal vocabulary using a mobile device and AR flashcards, with an iPad used to scan the AR flashcards and interact with pop-up virtual objects. In order to estimate the effectiveness of using Augmented Reality, the children completed a vocabulary pre-test and post-test. In addition, a teacher interview was administered after the learning activity to seek a practitioner's opinion of this technology. For data analysis, a paired-samples t-test was used to measure the instructional effect based on the pre- and post-test data. Results show that Augmented Reality could significantly enhance children's vocabulary learning, with a large effect size. Teachers indicated that the children enjoyed the AR learning activity but that clear instruction is needed. Suggestions for the future implementation of vocabulary learning using AR are offered.
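A minimal sketch of the paired-samples t-test analysis described above; the pre- and post-test scores below are hypothetical placeholders for a class of 24, not the study's data.

```python
# Illustrative sketch: paired-samples t-test plus a Cohen's d effect size on
# hypothetical pre/post vocabulary scores for 24 children.
import numpy as np
from scipy import stats

pre = [3, 5, 4, 2, 6, 5, 4, 3, 5, 4, 2, 6, 3, 4, 5, 4, 3, 5, 4, 6, 2, 5, 4, 3]
post = [7, 8, 6, 5, 9, 8, 7, 6, 8, 7, 5, 9, 6, 7, 8, 7, 6, 8, 7, 9, 5, 8, 7, 6]

result = stats.ttest_rel(post, pre)
diff = np.array(post) - np.array(pre)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")
print(f"Cohen's d = {diff.mean() / diff.std(ddof=1):.2f}")   # paired effect size
```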

Keywords: augmented reality, kindergarten children, vocabulary learning, Macau

Procedia PDF Downloads 140
4291 Preparing a Library of Abnormal Masses for Designing a Long-Lasting Anatomical Breast Phantom for Ultrasonography Training

Authors: Nasibullina A., Leonov D.

Abstract:

The ultrasonography method is actively used for the early diagnosis of various lesions in the human body, including the mammary gland. The incidence of breast cancer has increased by more than 20%, and mortality by 14%, since 2008. The correctness of the diagnosis often directly depends on the qualifications and experience of the diagnostic medical sonographer. That is why special attention should be paid to the practical training of future specialists. Anatomical phantoms are excellent teaching tools because they accurately imitate the characteristics of real human tissues and organs. The purpose of this work is to create a breast phantom for practicing ultrasound diagnostic skills in grayscale and elastography imaging, as well as ultrasound-guided biopsy sampling. We used silicone-like compounds ranging from 3 to 17 Shore hardness units to simulate soft tissue and lesions. Impurities at experimentally selected concentrations were added to give the phantom the necessary attenuation and reflection parameters. We used 3D modeling programs and 3D printing with PLA plastic to create the casting mold. We developed a breast phantom with inclusions of varying shape, elasticity, and echogenicity. After testing the created phantom in B-mode and elastography mode, we performed a survey asking 19 participants how realistic the sonograms of the phantom were. The results showed that the closest to real was the cyst model, with 9.5 on the 0-10 similarity scale. Thus, the developed breast phantom can be used for ultrasonography, elastography, and ultrasound-guided biopsy training.

Keywords: breast ultrasound, mammary gland, mammography, training phantom, tissue-mimicking materials

Procedia PDF Downloads 78
4290 Embedded System of Signal Processing on FPGA: Underwater Application Architecture

Authors: Abdelkader Elhanaoui, Mhamed Hadji, Rachid Skouri, Said Agounad

Abstract:

The purpose of this paper is to study the phenomenon of acoustic scattering by using a new method. Signal processing (the fast Fourier transform (FFT), the inverse fast Fourier transform (iFFT), and Bessel functions) is widely applied to obtain information with high precision and accuracy. Signal processing is most commonly implemented on general-purpose processors. Our interest focused on the use of FPGAs (Field-Programmable Gate Arrays) in order to reduce the computational complexity found in a single-processor architecture, accelerate the processing on the FPGA, and meet real-time and energy-efficiency requirements. General-purpose processors are not efficient for signal processing. We implemented the acoustic backscattered signal processing model on the Altera DE-SoC board and compared it to the Odroid xu4. By comparison, the computing latencies of the Odroid xu4 and the FPGA are 60 seconds and 3 seconds, respectively. The detailed SoC FPGA-based system has shown that acoustic spectra are computed up to 20 times faster than in the Odroid xu4 implementation. The FPGA-based implementation of the processing algorithms is realized with an absolute error of about 10⁻³. This study underlines the increasing importance of embedded systems in underwater acoustics, especially in non-destructive testing. It is possible to obtain information related to the detection and characterization of submerged cells. We have thus achieved good experimental results in terms of real-time performance and energy efficiency.
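A minimal sketch of only the FFT stage of the processing chain described above, applied to a synthetic backscattered echo; the Bessel-function form-function model, the iFFT stage, and the FPGA implementation itself are not reproduced, and all signal parameters are hypothetical.

```python
# Illustrative sketch: magnitude spectrum of a synthetic backscattered echo segment.
import numpy as np

fs = 1.0e6                                   # sampling rate (Hz), hypothetical
t = np.arange(0, 2e-3, 1 / fs)
echo = np.sin(2 * np.pi * 150e3 * t) * np.exp(-t / 4e-4)   # decaying echo
echo += 0.05 * np.random.default_rng(0).standard_normal(t.size)

spectrum = np.abs(np.fft.rfft(echo))
freqs = np.fft.rfftfreq(t.size, d=1 / fs)
print("dominant frequency: %.0f kHz" % (freqs[np.argmax(spectrum)] / 1e3))
```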

Keywords: DE1 FPGA, acoustic scattering, form function, signal processing, non-destructive testing

Procedia PDF Downloads 71
4289 Revolutionizing Healthcare Facility Maintenance: A Groundbreaking AI, BIM, and IoT Integration Framework

Authors: Mina Sadat Orooje, Mohammad Mehdi Latifi, Behnam Fereydooni Eftekhari

Abstract:

The integration of cutting-edge Internet of Things (IoT) technologies with advanced Artificial Intelligence (AI) systems is revolutionizing healthcare facility management. However, the current landscape of hospital building maintenance suffers from slow, repetitive, and disjointed processes, leading to significant financial, resource, and time losses. Additionally, the potential of Building Information Modeling (BIM) in facility maintenance is hindered by a lack of data within digital models of built environments, necessitating a more streamlined data collection process. This paper presents a robust framework that harmonizes AI with BIM-IoT technology to elevate healthcare Facility Maintenance Management (FMM) and address these pressing challenges. The methodology begins with a thorough literature review and requirements analysis, providing insights into existing technological landscapes and associated obstacles. Extensive data collection and analysis efforts follow to deepen understanding of hospital infrastructure and maintenance records. Critical AI algorithms are identified to address predictive maintenance, anomaly detection, and optimization needs alongside integration strategies for BIM and IoT technologies, enabling real-time data collection and analysis. The framework outlines protocols for data processing, analysis, and decision-making. A prototype implementation is executed to showcase the framework's functionality, followed by a rigorous validation process to evaluate its efficacy and gather user feedback. Refinement and optimization steps are then undertaken based on evaluation outcomes. Emphasis is placed on the scalability of the framework in real-world scenarios and its potential applications across diverse healthcare facility contexts. Finally, the findings are meticulously documented and shared within the healthcare and facility management communities. This framework aims to significantly boost maintenance efficiency, cut costs, provide decision support, enable real-time monitoring, offer data-driven insights, and ultimately enhance patient safety and satisfaction. By tackling current challenges in healthcare facility maintenance management it paves the way for the adoption of smarter and more efficient maintenance practices in healthcare facilities.

Keywords: artificial intelligence, building information modeling, healthcare facility maintenance, internet of things integration, maintenance efficiency

Procedia PDF Downloads 45
4288 Students' Experience Enhancement Through Simulation: A Process Flow in the Logistics and Transportation Field

Authors: Nizamuddin Zainuddin, Adam Mohd Saifudin, Ahmad Yusni Bahaudin, Mohd Hanizan Zalazilah, Roslan Jamaluddin

Abstract:

Students’ enhanced experience through simulation is a crucial factor that brings reality to the classroom. The enhanced experience is all about developing, enriching, and applying a generic process flow in the field of logistics and transportation. As educational technology has improved, the effective use of simulations has greatly increased, to the point where simulations should be considered a valuable, mainstream pedagogical tool. Additionally, in this era of ongoing (some say never-ending) assessment, simulations offer a rich resource for objective measurement and comparison. Simulation is not just another in the long line of passing fads (or short-term opportunities) in educational technology. It is rather a real key to helping our students understand the world. It is a way for students to acquire experience of how things and systems in the world behave and react, without actually touching them. In short, it is about interactive pretending. Simulation is all about representing the real world, which includes grasping complex issues and solving intricate problems. Therefore, it is crucial that, before simulating the real process of inbound and outbound logistics and transportation, a generic process flow be developed. The paper focuses on the validation of the process flow by looking at the inputs gained from the sample. The sampling of the study focuses on multinational and local manufacturing companies, third-party logistics companies (3PL), and a government agency, selected in Peninsular Malaysia. A simulation flow chart is proposed in the study that will be the generic flow in logistics and transportation. A mainly qualitative approach was conducted to gather data in the study. It was found that the systems used in the outbound and inbound processes are System Application Products (SAP) and Material Requirement Planning (MRP). Furthermore, some companies were using Enterprise Resource Planning (ERP) and Electronic Data Interchange (EDI) as part of the Suppliers Own Inventories (SOI) networking, as a result of globalized business between one country and another. Computerized documentation and transactions were all mandatory requirements of the Royal Customs and Excise Department. The generic process flow will be the basis for developing a simulation program to be used in the classroom with the objective of further enhancing the students' learning experience. Thus, it will contribute to the body of knowledge on enriching students' employability and will also be one of the ways to train new workers in the logistics and transportation field.

Keywords: enhancement, simulation, process flow, logistics, transportation

Procedia PDF Downloads 323
4287 Simulation of Utility Accrual Scheduling and Recovery Algorithm in Multiprocessor Environment

Authors: A. Idawaty, O. Mohamed, A. Z. Zuriati

Abstract:

This paper presents the development of an event-based Discrete Event Simulation (DES) for a recovery algorithm known as Backward Recovery Global Preemptive Utility Accrual Scheduling (BR_GPUAS). This algorithm implements the Backward Recovery (BR) mechanism as a fault recovery solution under the existing Time/Utility Function/Utility Accrual (TUF/UA) scheduling domain for multiprocessor environments. The BR mechanism attempts to take a faulty task back to its initial safe state and then re-executes the affected section of the faulty task to enable recovery. Considering that faults may occur in the components of any system, a fault-tolerance mechanism that can nullify their erroneous effect needs to be developed. Current TUF/UA scheduling algorithms use the abortion recovery mechanism and simply abort the erroneous task as their fault recovery solution. None of the existing algorithms in the TUF/UA scheduling domain for multiprocessor environments have considered transient faults or implemented the BR mechanism as a fault recovery mechanism to nullify the erroneous effect and solve the recovery problem in this domain. The developed BR_GPUAS simulator derives its set of parameters, events, and performance metrics from a detailed analysis of the base model. Simulation results revealed that the BR_GPUAS algorithm can save almost 20-30% of the accumulated utility, making it reliable and efficient for real-time applications in multiprocessor scheduling environments.
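A toy, self-contained sketch contrasting abortion recovery with backward recovery for one task under a step time/utility function; it is not the BR_GPUAS simulator, and all timing values and the utility shape are illustrative assumptions.

```python
# Toy sketch (not the BR_GPUAS simulator): on a transient fault, abortion discards
# the task (zero utility), while backward recovery restores the last safe state and
# re-executes only the affected section, accruing utility if the task still finishes
# before its termination time.
def step_tuf(completion_time, termination_time, utility=10.0):
    """Step TUF: full utility if the task finishes before its termination time."""
    return utility if completion_time <= termination_time else 0.0

def run_task(exec_time, fault_at, redo_time, termination_time, recovery):
    t = fault_at                              # progress made before the fault
    if recovery == "abort":
        return 0.0                            # task discarded, no utility accrued
    # backward recovery: roll back to the safe state, redo the affected section
    t += redo_time + (exec_time - fault_at)   # re-execute and finish remaining work
    return step_tuf(t, termination_time)

for mode in ("abort", "backward"):
    u = run_task(exec_time=8.0, fault_at=5.0, redo_time=2.0,
                 termination_time=12.0, recovery=mode)
    print(f"{mode:8s} recovery -> accrued utility {u}")
```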

Keywords: real-time system (RTS), time utility function/ utility accrual (TUF/UA) scheduling, backward recovery mechanism, multiprocessor, discrete event simulation (DES)

Procedia PDF Downloads 299
4286 Hybrid GNN-Based Machine Learning Forecasting Model for Industrial IoT Applications

Authors: Atish Bagchi, Siva Chandrasekaran

Abstract:

Background: According to World Bank national accounts data, the estimated global manufacturing value-added output in 2020 was 13.74 trillion USD. These manufacturing processes are monitored, modelled, and controlled by advanced, real-time, computer-based systems, e.g., Industrial IoT, PLC, SCADA, etc. These systems measure and manipulate a set of physical variables, e.g., temperature, pressure, etc. Despite the use of IoT, SCADA, etc., in manufacturing, studies suggest that unplanned downtime leads to economic losses of approximately 864 billion USD each year. Therefore, real-time, accurate detection, classification, and prediction of machine behaviour are needed to minimise financial losses. Although vast literature exists on time-series data processing using machine learning, the challenges faced by the industries that lead to unplanned downtimes are: the current algorithms do not efficiently handle the high-volume streaming data from industrial IoT sensors and were tested on static and simulated datasets; while the existing algorithms can detect significant 'point' outliers, most do not handle contextual outliers (e.g., values within the normal range but occurring at an unexpected time of day) or subtle changes in machine behaviour; and machines are revamped periodically as part of planned maintenance programmes, which changes the assumptions on which the original AI models were created and trained. Aim: This research study aims to deliver a Graph Neural Network (GNN)-based hybrid forecasting model that interfaces with the real-time machine control system and can detect and predict machine behaviour and behavioural changes (anomalies) in real time. This research will help manufacturing industries and utilities, e.g., water, electricity, etc., reduce unplanned downtimes and the consequential financial losses. Method: The data stored within a process control system, e.g., Industrial IoT or a Data Historian, is generally sampled during data acquisition from the sensor (source) and when persisting in the Data Historian, to optimise storage and query performance. The sampling may inadvertently discard values that contain subtle aspects of behavioural changes in machines. This research proposes a hybrid forecasting and classification model which combines the expressive and extrapolation capability of GNNs, enhanced with estimates of entropy and spectral changes in the sampled data and additional temporal contexts, to reconstruct the likely temporal trajectory of machine behavioural changes. The proposed real-time model belongs to the deep learning category of machine learning and interfaces with the sensors directly or through a 'Process Data Historian', SCADA, etc., to perform forecasting and classification tasks. Results: The model was interfaced with a Data Historian holding time-series data from 4 flow sensors within a water treatment plant for 45 days. The recorded sampling interval for a sensor varied from 10 sec to 30 min. Approximately 65% of the available data was used for training the model, 20% for validation, and the rest for testing. The model identified the anomalies within the water treatment plant and predicted the plant's performance. These results were compared with the data reported by the plant's SCADA-Historian system and the official data reported by the plant authorities. The model's accuracy was much higher (by 20%) than that reported by the SCADA-Historian system and matched the validated results declared by the plant auditors.
Conclusions: The research demonstrates that a hybrid GNN-based approach enhanced with entropy calculation and spectral information can effectively detect and predict a machine's behavioural changes. The model can interface with a plant's process control system in real time to perform forecasting and classification tasks, helping asset management engineers to operate their machines more efficiently and reduce unplanned downtimes. A series of trials is planned for this model in the future in other manufacturing industries.
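A minimal sketch of the two hand-crafted complements mentioned above, a windowed Shannon entropy and a spectral-change score for a sensor time series; the window length, bin count, and synthetic signal are assumptions for illustration, and the GNN forecasting component itself is not reproduced here.

```python
# Illustrative sketch: windowed entropy and spectral-change features for one sensor.
import numpy as np
from scipy.stats import entropy

def window_entropy(x, bins=16):
    """Shannon entropy of the value distribution inside one window."""
    hist, _ = np.histogram(x, bins=bins)
    p = hist / hist.sum()
    return entropy(p[p > 0], base=2)

def spectral_change(prev_window, curr_window):
    """Distance between normalized magnitude spectra of consecutive windows."""
    def spectrum(x):
        mag = np.abs(np.fft.rfft(x - np.mean(x)))
        return mag / (mag.sum() + 1e-12)
    return float(np.linalg.norm(spectrum(curr_window) - spectrum(prev_window)))

rng = np.random.default_rng(0)
t = np.arange(4096)
flow = np.sin(2 * np.pi * t / 200) + 0.1 * rng.standard_normal(t.size)
flow[3000:] += np.sin(2 * np.pi * t[3000:] / 23)          # subtle behavioural change

win = 256
for start in range(win, t.size - win, win):
    prev, curr = flow[start - win:start], flow[start:start + win]
    print(start, round(window_entropy(curr), 2), round(spectral_change(prev, curr), 3))
```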

Keywords: GNN, Entropy, anomaly detection, industrial time-series, AI, IoT, Industry 4.0, Machine Learning

Procedia PDF Downloads 137
4285 Timetabling for Interconnected LRT Lines: A Package Solution Based on a Real-world Case

Authors: Huazhen Lin, Ruihua Xu, Zhibin Jiang

Abstract:

In this real-world case, timetabling the LRT network as a whole is rather challenging for the operator: they are supposed to manually create a timetable that avoids various route conflicts while satisfying a given interval and number of rolling stock, but the outcome is not satisfactory. Therefore, the operator adopts a computerised timetabling tool, the Train Plan Maker (TPM), to cope with this problem. However, with the various constraints in the dual-line network, it is still difficult to find an adequate pairing of turnback time, interval, and number of rolling stock, which requires extra manual intervention. Aiming at these problems, a one-off model for timetabling is presented in this paper to simplify the timetabling procedure. Before the timetabling procedure starts, this paper presents how the dual-line system, with a ring and several branches, is turned into a simpler structure. Then, a non-linear programming model is presented in two stages. In the first stage, the model sets a series of constraints aiming to calculate a proper timing for coordinating the two lines by adjusting the turnback time at the termini. Then, based on the result of the first stage, the model introduces a series of inequality constraints to avoid various route conflicts. With this model, an analysis is conducted to reveal the relation between the ratio of trains in different directions and the possible minimum interval, observing that the more imbalanced the ratio is, the less possible it is to provide frequent service under such strict constraints.
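A toy sketch of the first-stage coordination idea: search over turnback times at the two termini so that trains from the two lines interleave as evenly as possible where they share track. The headway, run times, bounds, and the simple modulo criterion are hypothetical; the full non-linear programme with route-conflict constraints is not reproduced.

```python
# Toy sketch: pick turnback times so the two lines' arrivals at a shared junction
# are offset by half a headway (all values hypothetical).
import itertools

INTERVAL = 6.0              # planned headway on each line (min)
RUN_A, RUN_B = 14.0, 9.0    # run time from each terminus to the junction (min)

def junction_offset(turn_a, turn_b):
    """Deviation from ideal half-headway interleaving at the junction."""
    offset = ((turn_a + RUN_A) - (turn_b + RUN_B)) % INTERVAL
    return abs(offset - INTERVAL / 2)

candidates = [(a / 2, b / 2)                 # turnback times in 0.5-min steps
              for a, b in itertools.product(range(4, 17), repeat=2)]
best = min(candidates, key=lambda tb: junction_offset(*tb))
print("turnback A = %.1f min, turnback B = %.1f min, deviation = %.2f min"
      % (best[0], best[1], junction_offset(*best)))
```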

Keywords: light rail transit (LRT), non-linear programming, railway timetabling, timetable coordination

Procedia PDF Downloads 60
4284 Integrated Free Space Optical Communication and Optical Sensor Network System with Artificial Intelligence Techniques

Authors: Yibeltal Chanie Manie, Zebider Asire Munyelet

Abstract:

5G and 6G technology offers enhanced quality of service with high data transmission rates, which necessitates the implementation of the Internet of Things (IoT) in the 5G/6G architecture. In this paper, we propose the integration of free-space optical communication (FSO) with fiber sensor networks for IoT applications. Recently, free-space optical communications (FSO) have been gaining popularity as an effective alternative technology to the limited availability of radio frequency (RF) spectrum. FSO is attractive due to its flexibility, high achievable optical bandwidth, and low power consumption in several communication applications, such as disaster recovery, last-mile connectivity, drones, surveillance, backhaul, and satellite communications. Hence, high-speed FSO is an optimal choice for wireless networks to realize the full potential of 5G/6G technology, offering speeds of 100 Gbit/s or more in IoT applications. Moreover, machine learning must be integrated into the design, planning, and optimization of future optical wireless communication networks in order to actualize this vision of intelligent processing and operation. In addition, fiber sensors are important for achieving real-time, accurate, and smart monitoring in IoT applications. Moreover, we propose deep learning techniques to estimate the strain changes and peak wavelengths of multiple fiber Bragg grating (FBG) sensors using only the FBG spectrum obtained from a real experiment.
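A minimal sketch of the regression idea above: a small neural network is trained to estimate the peak wavelengths of two overlapping FBG reflections directly from a sampled spectrum. The Gaussian peak shapes, wavelength grid, and network size are assumptions for illustration, not the experimental setup or model used in the paper.

```python
# Illustrative sketch: estimate two FBG peak wavelengths from a sampled spectrum.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
wl = np.linspace(1545.0, 1555.0, 200)                  # wavelength grid (nm)

def spectrum(peaks, width=0.25):
    s = sum(np.exp(-0.5 * ((wl - p) / width) ** 2) for p in peaks)
    return s + rng.normal(0, 0.01, wl.size)             # measurement noise

# Training set: random peak positions for two FBGs (simulating strain-induced shifts).
Y = np.column_stack([rng.uniform(1547, 1549, 3000), rng.uniform(1551, 1553, 3000)])
X = np.array([spectrum(p) for p in Y])

model = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=300, random_state=0)
model.fit(X, Y)
test_peaks = [1548.2, 1551.7]
print("estimated peaks (nm):", np.round(model.predict([spectrum(test_peaks)])[0], 3))
```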

Keywords: optical sensor, artificial Intelligence, Internet of Things, free-space optics

Procedia PDF Downloads 50
4283 Vascular Targeted Photodynamic Therapy Monitored by Real-Time Laser Speckle Imaging

Authors: Ruth Goldschmidt, Vyacheslav Kalchenko, Lilah Agemy, Rachel Elmoalem, Avigdor Scherz

Abstract:

Vascular Targeted Photodynamic therapy (VTP) is a new modality for selective cancer treatment that leads to complete tumor ablation. A photosensitizer, a bacteriochlorophyll derivative in our case, is first administered to the patient, followed by illumination of the tumor area by a near-IR laser for its photoactivation. The photoactivated drug releases reactive oxygen species (ROS) in the circulation, which react with blood cells and the endothelium, leading to the occlusion of the blood vasculature. If the blood vessels are only partially closed, the tumor may recover, and cancer cells could survive. On the other hand, excessive treatment may lead to toxicity of nearby healthy tissues. Simultaneous VTP monitoring and image processing independent of the photoexcitation laser has not yet been reported, to our knowledge. Here we present a method for blood flow monitoring using real-time laser speckle imaging (RTLSI) in the tumor during VTP. We have synthesized over the years a library of bacteriochlorophyll derivatives, among them WST11 and STL-6014. Both are water-soluble derivatives that are retained in the blood vasculature through their partial binding to HSA. WST11 has been approved in Mexico for VTP treatment of prostate cancer at a certain drug dose and time/intensity of illumination. Application to other bacteriochlorophyll derivatives or other cancers may require different treatment parameters (such as light/drug administration). VTP parameters for STL-6014 are still under study. This new derivative mainly differs from WST11 by its lack of the central palladium and its conjugation to an Arg-Gly-Asp (RGD) sequence. RGD is a tumor-specific ligand that is used for targeting the necrotic tumor domains through its affinity to αVβ3 integrin receptors. This enables the study of cell-targeted VTP. We developed a special RTLSI module based on the LabVIEW software environment for data processing. The new module enables the acquisition of raw laser speckle images and the calculation of the temporal statistics of time-integrated speckles in real time, without additional off-line processing. Using RTLSI, we could monitor the tumor's blood flow following VTP in a CT26 colon carcinoma ear model. VTP with WST11 induced an immediate slowdown of the blood flow within the tumor and a complete final flow arrest, after some sporadic reperfusions. If the irradiation continued further, the blood flow also stopped in the blood vessels of the surrounding healthy tissue. This emphasizes the significance of light dose control. Using our RTLSI system, we could prevent any additional healthy-tissue damage by controlling the illumination time and restricting blood flow arrest to within the tumor only. In addition, we found that VTP with STL-6014 was most effective, in terms of in-vivo tumor ablation success and blood vessel flow arrest, when photoactivation was conducted 4 h post-injection. In conclusion, RTLSI application should allow optimization of VTP efficacy vs. toxicity in both the preclinical and clinical arenas.
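A minimal sketch of the kind of temporal statistic behind laser speckle imaging: the per-pixel temporal speckle contrast K = sigma/mean computed over a stack of frames. Flow changes the temporal statistics of time-integrated speckle, so tracking K during VTP can reveal flow arrest; how K maps to flow speed depends on exposure time and is not modelled here, and the frame data are synthetic rather than output of the LabVIEW module.

```python
# Illustrative sketch: per-pixel temporal speckle contrast over a frame stack.
import numpy as np

def temporal_speckle_contrast(stack):
    """stack: (n_frames, h, w) intensity images -> (h, w) contrast map."""
    mean = stack.mean(axis=0)
    return stack.std(axis=0) / np.clip(mean, 1e-9, None)

rng = np.random.default_rng(0)
n_frames, h, w = 25, 64, 64
stack = 100 + 5 * rng.standard_normal((n_frames, h, w))       # weakly fluctuating region
stack[:, :, 32:] = 100 + 30 * rng.standard_normal((n_frames, h, 32))  # strongly fluctuating

K = temporal_speckle_contrast(stack)
print("mean K, left half: %.3f   right half: %.3f" % (K[:, :32].mean(), K[:, 32:].mean()))
```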

Keywords: blood vessel occlusion, cancer treatment, photodynamic therapy, real-time imaging

Procedia PDF Downloads 216
4282 Mental Health Challenges, Internalizing and Externalizing Behavior Problems, and Academic Challenges among Adolescents from Broken Families

Authors: Fadzai Munyuki

Abstract:

Parental divorce is one of the most stressful life events in youth and is associated with long-lasting emotional and behavioral problems. Over the last few decades, research has consistently found strong associations between divorce and adverse health effects in adolescents. Parental divorce has been hypothesized to lead to psychosocial development problems, mental health challenges, internalizing and externalizing behavior problems, and low academic performance among adolescents. This is supported by positive youth development theory, which states that the family setup has a major role to play in adolescent development and well-being. The focus of this research will therefore be to test this hypothesized process model among adolescents in five provinces of Zimbabwe. A cross-sectional study will be conducted to test this hypothesis, and 1840 (n = 1840) adolescents aged 14 to 17 will be recruited. A stress questionnaire scale, a child behavior checklist scale, and an academic concept scale will be used. Data analysis will be done using structural equation modeling. Previous research in this area has several limitations, including the lack of 'real-time' studies, few cross-sectional studies, the lack of thorough and validated population measures, and a tendency for studies to focus on a single variable in relation to parental divorce. This study therefore seeks to bridge the gap between past research and current literature by using a validated population measure, a real-time study, and a combination of three latent variables.

Keywords: mental health, internalizing and externalizing behavior, divorce, academic achievements

Procedia PDF Downloads 64
4281 Reflective Thinking and Experiential Learning – A Quasi-Experimental Quanti-Quali Response to Greater Diversification of Activities, Greater Integration of Student Profiles

Authors: Paulo Sérgio Ribeiro de Araújo Bogas

Abstract:

Although several studies have assumed (at least implicitly) that learners' approaches to learning develop into deeper approaches in higher education, there appears to be no clear theoretical basis for this assumption and no empirical evidence. As a scientific contribution to this discussion, a pedagogical intervention of a quasi-experimental nature was developed, with a mixed methodology, evaluating the intervention within a single curricular unit of Marketing, using cases based on real brand challenges, business simulation, and customer projects. Primary and secondary experiences were incorporated in the intervention: the primary experiences are the experiential activities themselves; the secondary experiences result from the primary experiences, such as reflection and discussion in work teams. A diversified learning relationship was encouraged through the various connections between the different members of the learning community. The present study concludes that, in the same context, student responses can be described as those of students who reinforce the initial deep approach, students who maintain the initial level of the deep approach, and others who shift from an emphasis on the deep approach to one closer to the superficial approach. This typology did not always confirm studies reported in the literature, namely regarding whether the initial level of deep processing influences superficial processing and vice versa. The results of this investigation point to the inclusion of pedagogical and didactic activities that integrate different motivations and initial strategies, leading to the possible adoption of deep approaches to learning, since the intervention revealed statistically significant differences in the deep/superficial approach scores and in the experiential level. In the case of the real challenges, the categories of "attribution of meaning to what is studied" and the possibility of "contact with an aspirational context" for the students' professional future stand out. In this category, the dimensions of autonomy that will be required of them were also revealed when comparing the classroom context of real cases with the future professional context and the impact they may have on the world. Regarding the simulated practice, two categories of response stand out: on the one hand, the motivation associated with the possibility of measuring the results of the decisions taken and an awareness of oneself; on the other hand, the additional effort that this practice required of some of the students.

Keywords: experiential learning, higher education, mixed methods, reflective learning, marketing

Procedia PDF Downloads 76
4280 The Methodology of Hand-Gesture Based Form Design in Digital Modeling

Authors: Sanghoon Shim, Jaehwan Jung, Sung-Ah Kim

Abstract:

As digital technology develops, studies on tangible user interfaces (TUIs), which link the physical environment, perceived through the human senses, with the virtual environment through the computer, are actively being conducted. In addition, there has been a tremendous advance in computational design through the use of computer-aided design techniques, which enable optimized decision-making through machine learning and the parallel comparison of alternatives. A complex design that responds to user requirements or performance can emerge from the designer's intuition, but it is difficult to actualize the emergent design by the designer's ability alone. Ancillary tools, such as Gaudí's sandbag models, can be instruments to reinforce and evolve ideas that emerge from designers. With the advent of many commercial tools that support 3D objects, designers' intentions are easily reflected in their designs, but the degree to which they are reflected depends on the designer's proficiency with the design tools. This study implements an environment in which form can be shaped with the designer's own fingers in the initial design phase of complex building design. A Leap Motion controller is used as a sensor to recognize the designer's hand motions, which are converted into digital information to realize an environment that can be linked in real time in virtual reality (VR). In addition, the implemented design can be linked in real time with Rhino™, a 3D authoring tool, and its plug-in Grasshopper™. As a result, it is possible to design through the senses using the TUI, which can serve as a tool for assisting designer intuition.
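As a rough illustration of the pipeline described above, the following is a minimal Python sketch of the real-time link, under stated assumptions: fingertip positions come from a placeholder function standing in for the Leap Motion SDK frame callback, are mapped to a grid of surface control points, and are streamed as JSON over UDP to a listener (for example, a Grasshopper scripting component polling the same port). The function names, port, and message format are all hypothetical, not the authors' implementation.

```python
# Minimal sketch of a hand-to-model real-time link (illustrative, not the
# authors' implementation). Fingertip positions are read from a placeholder
# sensor function, mapped to a grid of control points, and streamed as JSON
# over UDP to a listener (e.g., a Grasshopper scripting component).
import json
import math
import socket
import time

UDP_TARGET = ("127.0.0.1", 5005)   # assumed address of the Grasshopper listener

def get_fingertip_positions(t):
    """Placeholder for the Leap Motion SDK frame callback.
    Returns five (x, y, z) fingertip positions in millimeters."""
    return [(40.0 * i - 80.0, 200.0 + 30.0 * math.sin(t + i), 0.0)
            for i in range(5)]

def fingertips_to_control_points(tips, grid=(5, 3)):
    """Map fingertip heights onto a small grid of surface control points."""
    cols, rows = grid
    points = []
    for r in range(rows):
        for c in range(cols):
            x, y, _ = tips[c % len(tips)]
            points.append({"x": x, "y": 100.0 * r, "z": y - 200.0})
    return points

def stream(duration_s=5.0, fps=30):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    start = time.time()
    while time.time() - start < duration_s:
        tips = get_fingertip_positions(time.time())
        msg = json.dumps({"control_points": fingertips_to_control_points(tips)})
        sock.sendto(msg.encode("utf-8"), UDP_TARGET)
        time.sleep(1.0 / fps)

if __name__ == "__main__":
    stream()
```

On the receiving side, a scripting component reading the same UDP port can rebuild the control-point grid and feed it to a surface component, so that hand motion updates the form on every frame.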

Keywords: design environment, digital modeling, hand gesture, TUI, virtual reality

Procedia PDF Downloads 361
4279 Integration Process and Analytic Interface of Different Environmental Open Data Sets with Java/Oracle and R

Authors: Pavel H. Llamocca, Victoria Lopez

Abstract:

The main objective of our work is the comparative analysis of environmental data from open data bases belonging to different governments, which means integrating data from various heterogeneous sources. Nowadays, many governments intend to publish thousands of data sets for people and organizations to use, and the number of applications based on open data is therefore increasing. However, each government has its own procedures for publishing its data, and because there are no international standards specifying the formats of open data sets, a wide variety of formats results. Due to this variety, we must build a data integration process that is able to bring together all kinds of formats. Some software tools have been developed to support the integration process, e.g., Data Tamer and Data Wrangler. The problem with these tools is that they require a data scientist to take part in the integration process as a final step. In our case, we do not want to depend on a data scientist, because environmental data are usually similar and these processes can be automated by programming. The main idea of our tool is to build Hadoop procedures adapted to each government's data sources in order to achieve automated integration. Our work focuses on environmental data such as temperature, energy consumption, air quality, solar radiation, wind speed, etc. For the past two years, the government of Madrid has been publishing its open data on environmental indicators in real time. In the same way, other governments (such as Andalucía or Bilbao) have published open data sets relating to the environment. All of these data sets have different formats, and our solution is able to integrate all of them; furthermore, it allows the user to run and visualize analyses over the real-time data. Once the integration task is done, all the data from any government share the same format, and the analysis process can proceed in a computationally better way. The tool presented in this work therefore has two goals: (1) the integration process and (2) a graphic and analytic interface. As a first approach, the integration process was developed using Java and Oracle, and the graphic and analytic interface with Java (JSP). However, in order to open up our software tool, as a second approach we also developed an implementation in the R language, a mature open-source technology. R is a powerful open-source programming language that allows us to process and analyze a huge amount of data with high performance. There are also R libraries, such as shiny, for building a graphic interface. A performance comparison between both implementations was made, and no significant differences were found. In addition, our work provides an official real-time integrated data set of environmental data in Spain to any developer so that they can build their own applications.
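As an illustration of the kind of format normalization such an integration process performs, the following is a minimal sketch in Python/pandas (the authors used Java/Oracle and R); the file layouts, column names, and common schema are assumptions chosen for the example, not the actual published formats.

```python
# Minimal sketch of normalizing heterogeneous open data sets into one schema
# (illustrative only; file layouts, column names, and units are assumptions).
import io
import pandas as pd

COMMON_COLUMNS = ["timestamp", "station", "pollutant", "value"]

# Two hypothetical publications of the same kind of data in different formats.
madrid_csv = io.StringIO(
    "FECHA;ESTACION;MAGNITUD;VALOR\n"
    "2024-05-01T10:00;Plaza Castilla;NO2;41.0\n"
)
bilbao_json = io.StringIO(
    '[{"date": "2024-05-01 10:00", "site": "Mazarredo", '
    '"parameter": "NO2", "measurement": 38.5}]'
)

def normalize_madrid(buf):
    df = pd.read_csv(buf, sep=";")
    df = df.rename(columns={"FECHA": "timestamp", "ESTACION": "station",
                            "MAGNITUD": "pollutant", "VALOR": "value"})
    df["timestamp"] = pd.to_datetime(df["timestamp"])
    return df[COMMON_COLUMNS]

def normalize_bilbao(buf):
    df = pd.read_json(buf)
    df = df.rename(columns={"date": "timestamp", "site": "station",
                            "parameter": "pollutant", "measurement": "value"})
    df["timestamp"] = pd.to_datetime(df["timestamp"])
    return df[COMMON_COLUMNS]

integrated = pd.concat([normalize_madrid(madrid_csv),
                        normalize_bilbao(bilbao_json)], ignore_index=True)
print(integrated)
```

Per-source adapters like these keep the messy part isolated; once everything lands in the common schema, the analytic interface only has to deal with a single format.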

Keywords: open data, R language, data integration, environmental data

Procedia PDF Downloads 305
4278 Uncovering Hidden Bugs: An Exploratory Approach

Authors: Sagar Jitendra Mahendrakar

Abstract:

Exploratory testing is a dynamic and adaptable method of software quality assurance that is frequently praised for its ability to find hidden flaws and improve the overall quality of the product. Instead of using preset test cases, exploratory testing allows testers to explore the software application dynamically, relying primarily on tester intuition, creativity, and adaptability, in contrast to scripted testing methodologies. Several tools and techniques can aid testers in the exploratory testing process, which we will discuss in this talk. Tests of this kind are able to find bugs that are harder to find during structured testing or that other testing methods may have overlooked. The purpose of this abstract is to examine the nature and importance of exploratory testing in modern software development practice. It explores the fundamental ideas of exploratory testing, highlighting the value of domain knowledge and tester experience in spotting potential problems that may escape the notice of traditional testing methodologies. Throughout the software development lifecycle, exploratory testing promotes quick feedback loops and continuous improvement by giving testers the ability to make decisions in real time based on their observations. This abstract also clarifies the distinctive features of exploratory testing, such as its non-linearity and its capacity to replicate user behavior in real-world settings. Through impromptu exploration, testers can find intricate bugs, usability problems, and edge cases that might otherwise go undetected. Exploratory testing's flexible and iterative structure fits in well with agile and DevOps processes, allowing for a quicker time to market without sacrificing the quality of the final product.

Keywords: exploratory, testing, automation, quality

Procedia PDF Downloads 38