Search results for: visual object tracking
2514 An Android Geofencing App for Autonomous Remote Switch Control
Authors: Jamie Wong, Daisy Sang, Chang-Shyh Peng
Abstract:
A geofence is a virtual fence defined by a preset physical radius around a target location. A geofencing app provides location-based services that define the actionable operations to perform upon the crossing of a geofence. Geofencing requires continual location tracking, which can consume a noticeable amount of battery power. Additionally, location updates need to be frequent and accurate in order for actions to be triggered within an expected time window after the mobile user crosses the geofence. In this paper, we build an Android mobile geofencing application to remotely and autonomously control a power switch.
Keywords: location based service, geofence, autonomous, remote switch
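A minimal sketch of the geofence-crossing check the abstract describes, in Python rather than the paper's Android stack; the haversine radius test and all function names are illustrative assumptions (a production app would use the platform's geofencing API):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    # Great-circle distance in meters between two WGS84 points.
    r = 6371000.0  # mean Earth radius, meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def crossed_geofence(prev_fix, new_fix, center, radius_m):
    # A crossing occurs when consecutive fixes fall on opposite sides of the fence.
    was_inside = haversine_m(*prev_fix, *center) <= radius_m
    is_inside = haversine_m(*new_fix, *center) <= radius_m
    return was_inside != is_inside
```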
Procedia PDF Downloads 317
2513 Influence of Irregularities in Plan and Elevation on the Dynamic Behavior of the Building
Authors: Yassine Sadji
Abstract:
Architectural conditions often require shapes that lead to an irregular distribution of masses, rigidities, and resistances. The main objective of the present study is to estimate the influence of the irregularity, both in plan and in elevation, that some structures present on their dynamic characteristics and behavior. To do this, the two dynamic methods proposed by RPA99 (the spectral modal method and the accelerogram analysis method) are applied to similar prototypes, and the parameters measuring the response of these structures are analyzed and compared.
Keywords: irregularity, seismic, response, structure, ductility
Procedia PDF Downloads 279
2512 Systems Versioning: A Features-Based Meta-Modeling Approach
Authors: Ola A. Younis, Said Ghoul
Abstract:
Systems running these days are huge, complex, and exist in many versions. Controlling these versions and tracking their changes has become a very hard process, as some versions are created using meaningless names or specifications. Many versions of a system are created with no clear difference between them. This leads to a mismatch between a user's request and the version he gets. In this paper, we present a system-versions meta-modeling approach that produces versions based on a system's features. This model reduces the number of steps needed to configure a release and gives each version its own unique specifications. The approach is applicable to systems that use features in their specifications.
Keywords: features, meta-modeling, semantic modeling, SPL, VCS, versioning
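A toy sketch of the core idea, versions identified by their feature sets rather than opaque names; the class and field names are assumptions, not the paper's meta-model:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Version:
    number: str
    features: frozenset  # the feature set *is* the version's specification

def feature_diff(a: Version, b: Version):
    # Make the difference between two versions explicit instead of
    # relying on meaningless version names.
    return {"added": b.features - a.features, "removed": a.features - b.features}

v1 = Version("1.0", frozenset({"login", "search"}))
v2 = Version("1.1", frozenset({"login", "search", "export"}))
print(feature_diff(v1, v2))  # {'added': {'export'}, 'removed': set()}
```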
Procedia PDF Downloads 446
2511 Low Cost LiDAR-GNSS-UAV Technology Development for PT Garam’s Three Dimensional Stockpile Modeling Needs
Authors: Mohkammad Nur Cahyadi, Imam Wahyu Farid, Ronny Mardianto, Agung Budi Cahyono, Eko Yuli Handoko, Daud Wahyu Imani, Arizal Bawazir, Luki Adi Triawan
Abstract:
Unmanned aerial vehicle (UAV) technology offers cost efficiency and data-retrieval time advantages. Technologies such as UAV, GNSS, and LiDAR can be combined so that each covers the others' deficiencies. This integrated system aims to increase the accuracy of calculating the volume of the land stockpile of PT Garam (a salt company). UAV imagery is used to obtain geometric data and to capture the textures that characterize the structure of objects. This study uses the Tarot 650 Iron Man drone with four propellers, which can fly for 15 minutes. Point clouds can be classified based on the number of image acquisitions processed in the software, utilizing photogrammetry and Structure-from-Motion principles. LiDAR acquisition enables the creation of point clouds, three-dimensional models, digital surface models, contours, and orthomosaics with high accuracy. LiDAR has a drawback in that its coordinate positions are in a local reference frame. Therefore, the researchers use GNSS, LiDAR, and drone multi-sensor technology to map the stockpile of salt on open land and in warehouses, a survey PT Garam carries out twice a year; the previous process used terrestrial methods and manual calculations with sacks. LiDAR needs to be combined with the UAV to overcome data-acquisition limitations, because a ground scan only passes along the right and left sides of the object, mainly when applied to a salt stockpile. The UAV is flown to widen coverage, with the 200-gram LiDAR system integrated so that the scanning angle can be kept optimal during the flight. Using LiDAR for low-cost mapping surveys will make it easier for surveyors and academics to obtain fairly accurate data at a more economical price. As a survey tool, LiDAR at around 999 USD counts as low-priced, yet the device can produce detailed data. Therefore, to minimize operational costs, surveyors can use the proposed low-cost LiDAR, GNSS, and UAV combination at a price of around 638 USD. The data generated by this sensor is a three-dimensional visualization of the object's shape. This study combines low-cost GPS measurements with low-cost LiDAR, processed using free software. The low-cost GPS generates latitude and longitude coordinates, which yield X, Y, and Z values that help georeference the detected object. The LiDAR detects objects, including the height of the entire environment at that location. The acquired data are calibrated with pitch, roll, and yaw to obtain the vertical height of the existing contours. The study conducted an experiment on the roof of a building with a radius of approximately 30 meters.
Keywords: LiDAR, unmanned aerial vehicle, low-cost GNSS, contour
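A minimal sketch of the pitch/roll/yaw calibration step mentioned at the end of the abstract: rotating the raw point cloud into a leveled frame so the Z coordinate gives true vertical height. The Z-Y-X rotation convention is an assumption; the paper does not specify one.

```python
import numpy as np

def rotation_matrix(roll, pitch, yaw):
    # Z-Y-X (yaw-pitch-roll) rotation, angles in radians.
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return rz @ ry @ rx

def level_point_cloud(points, roll, pitch, yaw):
    # points: (N, 3) array in the sensor frame; returns leveled coordinates
    # so the Z column reflects vertical height for contouring.
    return points @ rotation_matrix(roll, pitch, yaw).T
```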
Procedia PDF Downloads 95
2510 Streamlining .NET Data Access: Leveraging JSON for Data Operations in .NET
Authors: Tyler T. Procko, Steve Collins
Abstract:
New features in .NET (6 and above) permit streamlined access to information residing in JSON-capable relational databases, such as SQL Server (2016 and above). Traditional methods of data access now involve comparatively unnecessary steps which compromise system performance. This work posits that the established ORM (Object Relational Mapping) based methods of data access in applications and APIs result in common issues, e.g., object-relational impedance mismatch. Recent developments in C# and .NET Core, combined with a framework of modern SQL Server coding conventions, have allowed better technical solutions to the problem. As an amelioration, this work details the language features and coding conventions which enable this streamlined approach, resulting in an open-source .NET library implementation called Codeless Data Access (CODA). Canonical approaches rely on ad-hoc mapping code to perform type conversions between the client and the back-end database; with CODA, no mapping code is needed, as JSON is freely mapped to SQL and vice versa. CODA streamlines API data access by improving on three aspects of immediate concern to web developers, database engineers, and cybersecurity professionals: simplicity, speed, and security. Simplicity is engendered by cutting out the "middleman" steps, effectively making API data access a whitebox, whereas traditional methods are blackbox. Speed is improved because fewer translational steps are taken, and security is improved as attack surfaces are minimized. An empirical evaluation of the speed of the CODA approach in comparison to ORM approaches is provided and demonstrates that the CODA approach is significantly faster. CODA presents substantial benefits for API developer workflows by simplifying data access, resulting in better speed and security and allowing developers to focus on productive development rather than being mired in data access code. Future considerations include a generalization of the CODA method and extension outside of the .NET ecosystem to other programming languages.
Keywords: API data access, database, JSON, .NET core, SQL server
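CODA itself is a .NET library; the sketch below only illustrates the JSON-in/JSON-out pattern it builds on, using Python and pyodbc against SQL Server's OPENJSON / FOR JSON features (2016 and above). The DSN, table, and column names are hypothetical.

```python
import json
import pyodbc  # assumes a SQL Server ODBC driver is installed

conn = pyodbc.connect("DSN=SalesDb")  # hypothetical data source

def insert_orders(orders):
    # Hand the client's JSON straight to the database: OPENJSON shreds it
    # server-side, so no per-type ORM mapping code is needed.
    sql = """
        INSERT INTO dbo.Orders (CustomerId, Total)
        SELECT CustomerId, Total
        FROM OPENJSON(?) WITH (CustomerId int, Total decimal(10,2));
    """
    cur = conn.cursor()
    cur.execute(sql, json.dumps(orders))
    conn.commit()

def get_orders_json(customer_id):
    # FOR JSON returns the result set as JSON, ready for the API response.
    sql = "SELECT OrderId, Total FROM dbo.Orders WHERE CustomerId = ? FOR JSON PATH;"
    row = conn.cursor().execute(sql, customer_id).fetchone()
    return row[0] if row else "[]"
```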
Procedia PDF Downloads 66
2509 Reflections of Narrative Architecture in Transformational Representations on the Architectural Design Studio
Authors: M. Mortas, H. Asar, P. Dursun Cebi
Abstract:
The visionary works of architectural representation in the present situation of the 21st century are practiced through methodologies which try to expose the intellectual and theoretical essences of futurologist positions revealed by this era's interactions. The expansion of the conceptual and contextual inputs of an architectural design representation depends on the depth of its critical attitudes; its interactions with concepts such as experience, meaning, affection, psychology, perception, and aura; and its communication with spatial, cultural, and environmental factors. The purpose of this research is to offer methodological application areas for the design dimensions of experiential practices in architectural design studios, by focusing on the architectural representative narrations of 'transformation,' 'metamorphosis,' 'morphogenesis,' 'in-betweenness,' 'superposition,' and 'intertwining,' which affect and are affected by today's spatiotemporal hybridizations of architecture. The narrative representations and visual theory paradigms of the designers are examined under the main title of 'transformation' for the investigation of these visionary and critical representations' dismantlings and decodings. The case studies of this research are chosen from Neil Spiller, Bryan Cantley, Perry Kulper, and Dan Slavinsky's transformative, morphogenetic representations. The theoretical dismantlings and decodings obtained from these artists' contemporary architectural representations are then utilized and practiced in design studios as alternative methodologies for approaching architectural design processes, for enriching, differentiating, diversifying, and 'transforming' the design-process precedents used so far. The research aims to show architecture students how they can reproduce, rethink, and reimagine their own representative lexicons, and so the languages of their architectural imaginations, regarding the newly perceived tectonics of prosthetics, biotechnology, synchronicity, nanotechnology, or machinery, in various experiential design workshops. The methodology of this work can be thought of as revealing the technical and theoretical tools, lexicons, and meanings of the contemporary-visionary architectural representations of our decade, with the essential contents and components of the disciplines of hermeneutics, etymology, existentialism, post-humanism, phenomenology, and avant-gardism, in order to give new meaning to the transformative representations of the architectural visual theorists of our decade. The value of this study may be to bring out the superposed and overlapped atmospheres of futurologist architectural representations for students who need to rethink transcultural, deterritorialized, and post-humanist critical theories, so as to create and use their own representative visual lexicons for their architectural soft machines and beings by criticizing the now, and to be imaginative for the future of architecture.
Keywords: architectural design studio, visionary lexicon, narrative architecture, transformative representation
Procedia PDF Downloads 141
2508 Preoperative Anxiety Evaluation: Comparing the Visual Facial Anxiety Scale/Yumul Faces Anxiety Scale, Numerical Verbal Rating Scale, Categorization Scale, and the State-Trait Anxiety Inventory
Authors: Roya Yumul, Chse, Ofelia Loani Elvir Lazo, David Chernobylsky, Omar Durra
Abstract:
Background: Preoperative anxiety has been shown to be caused by the fear associated with surgical and anesthetic complications; however, the current gold standard for assessing patient anxiety, the STAI, is problematic to use in the preoperative setting given the duration and concentration required to complete the extensive 40-item questionnaire. Our primary aim in the study is to investigate the correlation of the Visual Facial Anxiety Scale (VFAS) and the Numerical Verbal Rating Scale (NVRS) with the State-Trait Anxiety Inventory (STAI) to determine the optimal anxiety scale for the perioperative setting. Methods: A clinical study of patients undergoing various surgeries was conducted utilizing each of the preoperative anxiety scales. Inclusion criteria included patients undergoing elective surgeries, while exclusion criteria included patients with anesthesia contraindications, inability to comprehend instructions, impaired judgement, a substance abuse history, and those pregnant or lactating. 293 patients were analyzed in terms of demographics, anxiety scale survey results, and anesthesia data, with Spearman coefficients, chi-squared analysis, and Fisher's exact test utilized for comparison. Results: Statistical analysis showed that VFAS had a higher correlation to STAI than NVRS (rs=0.66, p<0.0001 vs. rs=0.64, p<0.0001). The combined VFAS-Categorization Scores showed the highest correlation with the gold standard (rs=0.72, p<0.0001). Subgroup analysis showed similar results. STAI evaluation time (247.7 ± 54.81 sec) far exceeds that of VFAS (7.29 ± 1.61 sec), NVRS (7.23 ± 1.60 sec), and the categorization scale (7.29 ± 1.99 sec). Patients preferred VFAS (54.4%), Categorization (11.6%), and NVRS (8.8%). Anesthesiologists preferred VFAS (63.9%), NVRS (22.1%), and Categorization Scales (14.0%). Of note, the top five causes of preoperative anxiety were determined to be waiting (56.5%), pain (42.5%), family concerns (40.5%), no information about surgery (40.1%), and anesthesia (31.6%). Conclusions: The combined VFAS-Categorization Score (VCS) demonstrates the highest correlation to the gold standard, STAI. Both VFAS and Categorization tests also take significantly less time than STAI, which is critical in the preoperative setting. Among both patients and anesthesiologists, VFAS was the most preferred scale. This forms the basis of the Yumul FACES Anxiety Scale, designed for quick quantification and assessment in the preoperative setting while maintaining a high correlation to the gold standard. Additional studies using the formulated Yumul FACES Anxiety Scale are merited.
Keywords: numerical verbal anxiety scale, preoperative anxiety, state-trait anxiety inventory, visual facial anxiety scale
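A small sketch of the comparison statistic used above, Spearman's rank correlation between a candidate scale and the STAI; the numbers generated here are synthetic, not the study's data:

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical paired scores for 293 patients: candidate scale vs. STAI.
rng = np.random.default_rng(0)
stai = rng.integers(20, 80, size=293)                          # STAI range is 20-80
vfas = np.clip(stai / 8 + rng.normal(0, 1.2, size=293), 0, 10).round()

rs, p = spearmanr(vfas, stai)
print(f"VFAS vs. STAI: rs={rs:.2f}, p={p:.1e}")
```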
Procedia PDF Downloads 141
2507 Visual Improvement Outcome of Pars Plana Vitrectomy Combined Endofragmentation and Secondary IOL Implantation for Dropped Nucleus After Cataract Surgery : A Case Report
Authors: Saut Samuel Simamora
Abstract:
PURPOSE: Nucleus drop is one of the most feared and severe complications of modern cataract surgery, in which lens material drops through iatrogenic breaks in the posterior capsule. The incidence of nucleus drop as a complication of phacoemulsification increases concomitantly with the increased frequency of phacoemulsification. Pars plana vitrectomy (PPV) followed by endofragmentation and secondary intraocular lens (IOL) implantation is the management procedure of choice. This case report aims to present the outcome of PPV for the treatment of a dropped nucleus after cataract surgery. METHODS: A 65-year-old female patient came to the vitreoretina department with the chief complaint of blurry vision in her left eye after phacoemulsification one month before. Ophthalmological examination revealed that visual acuity in the right eye (VA RE) was 6/15 and in the left eye (VA LE) was hand movement. The intraocular pressure (IOP) was 18 mmHg in the right eye and 59 mmHg in the left eye. In the left eye, there were aphakia, a dropped lens nucleus, and secondary glaucoma. RESULTS: The patient received an antiglaucoma agent until her IOP decreased. She underwent pars plana vitrectomy to remove the dropped nucleus, with an iris-fixated IOL. Evaluation one week post-operatively revealed that VA LE was 6/7.5 and the iris-fixated IOL was in the proper position. CONCLUSIONS: Nucleus drop generally occurs with phacoemulsification cataract surgery techniques. A retained lens nucleus or fragments in the vitreous may cause severe intraocular inflammation leading to secondary glaucoma. Proper and good management of retained lens fragments after nucleus drop gives the patient an excellent outcome.
Keywords: secondary glaucoma, complication of phacoemulsification, nucleus drop, pars plana vitrectomy
Procedia PDF Downloads 79
2506 Analyzing Use of Figurativeness, Visual Elements, Allegory, Scenic Imagery as Support System in Punjabi Contemporary Theatre for Escaping Censorship
Authors: Shazia Anwer
Abstract:
This paper discusses an unusual form of resistance in theatre against the censorship board in Pakistan. The atypical approach to dramaturgy created massive space for performers and audiences to integrate and communicate. Social and religious absolutes create suffocation in Pakistani society, and strict control over all fine and performing arts has made art political; contemporary dramatics has started an amalgamated theatre to avoid censorship. Contemporary Punjabi theatre techniques are directly dependent on human cognition. The idea of indirect thought processing is not unique but dependent on spectators. The paper provides an account of these techniques and their specific use for conveying specific messages to audiences. For the dramaturge of today, theatre space is an expression representing a linguistic formulation that includes qualities of experimental and non-traditional use of classical theatrical space, in the context of fulfilling the concept of open theatre. The paper explains the transformation of the theatrical experience into an event where the actor and the audience co-exist and co-experience the dramatic experience. The denial of the existence of the fourth wall made two-way communication possible. The paper elaborates that previously marginalized genres such as naach, jugat, and miras are extensively included to counter the censorship board. Figurativeness, visual elements, allegory, and scenic imagery are the basic support system for contemporary Punjabi theatre. The body of the actor is used as a source of non-verbal communication and as an escape from traditional theatrical space, which by every means has every element that could be controlled and reprimanded by the controlling authority.
Keywords: communication, Punjabi theatre, figurativeness, censorship
Procedia PDF Downloads 134
2505 SVM-DTC Using for PMSM Speed Tracking Control
Authors: Kendouci Khedidja, Mazari Benyounes, Benhadria Mohamed Rachid, Dadi Rachida
Abstract:
In recent years, direct torque control (DTC) has become an alternative to the well-known vector control, especially for the permanent magnet synchronous motor (PMSM). However, it presents problems of flux linkage and torque ripple. In order to solve this problem, the conventional DTC is combined with space vector pulse width modulation (SVPWM). This control approach has achieved great success in the control of PMSMs and has become a research hotspot. The main objective of this paper is to introduce the DTC and SVPWM-DTC control of the PMSM; each part of the system is simulated in Matlab/Simulink based on the mathematical model. Moreover, the outcome of the simulation proves that the improved SVPWM-DTC of the PMSM has good dynamic and static performance.
Keywords: PMSM, DTC, SVM, speed control
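For reference, the quantity DTC regulates is the electromagnetic torque in the rotor (dq) frame; this is the standard machine-theory relation, not an equation quoted from the paper:

```latex
% Electromagnetic torque of a PMSM in the dq reference frame:
\begin{align}
T_e &= \tfrac{3}{2}\, p \left( \psi_d i_q - \psi_q i_d \right), \\
\psi_d &= L_d i_d + \psi_f, \qquad \psi_q = L_q i_q,
\end{align}
% where p is the pole-pair count, \psi_f the magnet flux linkage,
% and L_d, L_q the dq-axis inductances.
```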
Procedia PDF Downloads 389
2504 Kuehne + Nagel's PharmaChain: IoT-Enabled Product Monitoring Using Radio Frequency Identification
Authors: Rebecca Angeles
Abstract:
This case study features the Kuehne + Nagel PharmaChain solution for 'cold chain' pharmaceutical and biologic product shipments, with IoT-enabled features for shipment temperature and location tracking. Using the case study method and content analysis, this research project investigates the application of the structurational model of technology theory introduced by Orlikowski in order to interpret the firm's entry and participation in the IoT-impelled marketplace.
Keywords: Internet of Things (IoT), radio frequency identification (RFID), structurational model of technology (Orlikowski), supply chain management
Procedia PDF Downloads 232
2503 Virtual Science Hub: An Open Source Platform to Enrich Science Teaching
Authors: Enrique Barra, Aldo Gordillo, Juan Quemada
Abstract:
This paper presents the Virtual Science Hub platform. It is an open source platform that combines a social network, an e-learning authoring tool, a video conference service, and a learning object repository for the enrichment of science teaching. These four main functionalities fit very well together. The platform was released in April 2012 and has not stopped growing since. Finally, we present the results of the surveys conducted and the statistics gathered to validate this approach.
Keywords: e-learning, platform, authoring tool, science teaching, educational sciences
Procedia PDF Downloads 397
2502 CERD: Cost Effective Route Discovery in Mobile Ad Hoc Networks
Authors: Anuradha Banerjee
Abstract:
A mobile ad hoc network is an infrastructure-less network where nodes are free to move independently in any direction. The nodes have limited battery power; hence, we require an energy-efficient route discovery technique to enhance their lifetime and network performance. In this paper, we propose an energy-efficient route discovery technique, CERD, that greatly reduces the number of route requests flooded into the network and also gives priority to route request packets sent from routers that have communicated with the destination very recently, over single or multi-hop paths. This not only enhances the lifetime of nodes but also decreases the delay in tracking the destination.
Keywords: ad hoc network, energy efficiency, flooding, node lifetime, route discovery
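A toy sketch of the two ideas in the abstract, duplicate-flood suppression plus priority for nodes that heard from the destination recently; the timing constants and method names are assumptions, not the protocol's specification:

```python
import time

class Node:
    def __init__(self):
        self.last_contact = {}   # destination id -> time of last communication
        self.seen_rreqs = set()  # route requests already rebroadcast

    def forward_delay(self, rreq_id, dest, recent=60.0, backoff=0.5):
        # Returns None to drop a duplicate; 0 to rebroadcast immediately when
        # this node talked to the destination recently; otherwise a hold-off
        # (cancelled if a reply is overheard first), which thins the flood.
        if rreq_id in self.seen_rreqs:
            return None
        self.seen_rreqs.add(rreq_id)
        age = time.time() - self.last_contact.get(dest, 0.0)
        return 0.0 if age <= recent else backoff
```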
Procedia PDF Downloads 347
2501 MRI Quality Control Using Texture Analysis and Spatial Metrics
Authors: Kumar Kanudkuri, A. Sandhya
Abstract:
Typically, in a clinical MRI setting, several protocols are run, each indicated for a specific anatomy and disease condition. However, these protocols, or the parameters within them, can change over time due to changes in the recommendations of physician groups, updates to the software, or the availability of new technologies. Most of the time, the changes are made by the MRI technologist to account for time, coverage, physiological, or Specific Absorption Rate (SAR) reasons. Giving proper guidelines to MRI technologists is therefore important, so that they do not change parameters in ways that negatively impact image quality. Typically, a standard American College of Radiology (ACR) MRI phantom is used for quality control (QC) in order to guarantee that the primary objectives of MRI are met. The visual evaluation of quality depends on the operator/reviewer and can vary among operators, as well as for the same operator at different times. Overcoming these constraints is essential for a more impartial evaluation of quality, which makes the quantitative estimation of image quality (IQ) metrics very important for MRI quality control. To solve this problem, we propose a robust, open-source, automated MRI image quality control tool. The automatic analysis tool we designed and developed measures MRI IQ metrics (signal-to-noise ratio (SNR), signal-to-noise ratio uniformity (SNRU), visual information fidelity (VIF), feature similarity (FSIM), gray-level co-occurrence matrix (GLCM), slice thickness accuracy, slice position accuracy, and high-contrast spatial resolution) and provided a good accuracy assessment. A standardized quality report is generated that incorporates the metrics that impact diagnostic quality.
Keywords: ACR MRI phantom, MRI image quality metrics, SNRU, VIF, FSIM, GLCM, slice thickness accuracy, slice position accuracy
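A minimal sketch of two of the listed metrics computed on phantom ROIs; the 0.66 Rayleigh noise correction for magnitude images and the block-based uniformity index are common conventions, assumed rather than taken from the tool:

```python
import numpy as np

def snr_single_image(signal_roi, noise_roi):
    # Mean signal in a central phantom ROI divided by background noise,
    # with the ~0.66 correction for Rayleigh-distributed magnitude noise.
    return 0.66 * signal_roi.mean() / noise_roi.std()

def snr_uniformity(signal_roi, n_blocks=5):
    # Split the ROI into sub-blocks and compare local means: a crude
    # uniformity index in [0, 1], where 1 is perfectly uniform.
    blocks = np.array_split(signal_roi.ravel(), n_blocks)
    means = np.array([b.mean() for b in blocks])
    return 1.0 - (means.max() - means.min()) / (means.max() + means.min())
```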
Procedia PDF Downloads 170
2500 Usability Evaluation of Rice Doctor as a Diagnostic Tool for Agricultural Extension Workers in Selected Areas in the Philippines
Authors: Jerome Cayton Barradas, Rowely Parico, Lauro Atienza, Poornima Shankar
Abstract:
Effective agricultural extension is essential in facilitating improvements in various agricultural areas. One way of doing this is through information and communication technologies (ICTs) like Rice Doctor (RD), an app-based diagnostic tool that provides accurate and timely diagnosis and management recommendations for more than 80 crop problems. This study aims to evaluate the usability of RD by determining its effectiveness, efficiency, and user satisfaction in making an accurate and timely diagnosis. It also aims to identify other factors that affect RD usability. This is done by comparing RD with two other diagnostic methods: visual identification-based diagnosis and reference-guided diagnosis. The study was implemented in three rice-producing areas and involved 96 extension workers. Respondents completed a self-administered survey and participated in group discussions. The data collected were then subjected to qualitative and quantitative analysis. Most of the respondents were satisfied with RD and believed that references are needed to ensure the accuracy of a diagnosis. The majority found it efficient and easy to use. Some found it confusing and complicated, but this is because of their unfamiliarity with RD. Most users were also able to achieve an accurate diagnosis, proving effectiveness. Lastly, although users have reservations, they are satisfied and open to using RD. The study also found the importance of visual identification skills in using RD and the need for capacity development and improved access to RD devices. From these results, the following are recommended to improve RD usability: review and upgrade the diagnostic keys, further expand RD content, initiate capacity development for AEWs, and prepare and implement an RD communication plan.
Keywords: agricultural extension, crop protection, information and communication technologies, rice doctor
Procedia PDF Downloads 254
2499 Spatial Distribution of Land Use in the North Canal of Beijing Subsidiary Center and Its Impact on the Water Quality
Authors: Alisa Salimova, Jiane Zuo, Christopher Homer
Abstract:
The objective of this study is to analyse land use in the North Canal riparian zone through remote sensing analysis in ArcGIS, using 30 cloudless Landsat 8 open-source satellite images from May to August of 2013 and 2017. Land cover, urban construction, heat island effect, vegetation cover, and water system change were chosen as the main parameters and further analysed to evaluate their impact on North Canal water quality. The methodology involved the following steps: first, 30 cloudless satellite images were collected from the Landsat open-source image database. The visual interpretation method was used to determine the different land types in the catchment area; after primary and secondary classification, 28 land cover types in total were identified. The visual interpretation method, supported by ArcGIS, was used for grassland monitoring, and US Landsat TM remote sensing images with a resolution of 30 meters were processed to analyse vegetation cover. The water system was analysed using visual interpretation on the GIS software platform to decode the target area, water use, and coverage. Monthly measurements of water temperature, pH, BOD, COD, ammonia nitrogen, total nitrogen, and total phosphorus in 2013 and 2017 were taken from three locations on the North Canal in Tongzhou district. These parameters were used for the water quality index calculation and compared to land-use changes. The results of this research were promising. The vegetation coverage of the North Canal riparian zone in 2017 was higher than in 2013. The surface brightness temperature was positively correlated with vegetation coverage density and the distance from the surface of water bodies, indicating that vegetation coverage and the water system have a great effect on temperature regulation and the urban heat island effect. Surface temperature in 2017 was higher than in 2013, indicating a global warming effect. The water volume in the river area was partially reduced, indicating a potential water scarcity risk in the North Canal watershed. Between 2013 and 2017, urban residential, industrial, and mining storage land areas increased significantly compared to other land use types; however, water quality significantly improved in 2017 compared to 2013. This observation indicates that the Tongzhou Water Restoration Plan showed positive results and that the water management of Tongzhou district had improved.
Keywords: North Canal, land use, riparian vegetation, river ecology, remote sensing
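The abstract does not name the vegetation index used; a common choice for Landsat 8 vegetation-cover mapping is NDVI from bands 5 (NIR) and 4 (red), sketched below under that assumption:

```python
import numpy as np

def ndvi(nir, red):
    # Normalized Difference Vegetation Index from Landsat 8 band 5 (NIR)
    # and band 4 (red); values above ~0.3 are commonly mapped as vegetated.
    nir, red = nir.astype(float), red.astype(float)
    return (nir - red) / np.clip(nir + red, 1e-6, None)

def vegetation_fraction(nir, red, threshold=0.3):
    # Share of pixels classified as vegetation in the riparian zone.
    return float((ndvi(nir, red) > threshold).mean())
```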
Procedia PDF Downloads 113
2498 Similar Script Character Recognition on Kannada and Telugu
Authors: Gurukiran Veerapur, Nytik Birudavolu, Seetharam U. N., Chandravva Hebbi, R. Praneeth Reddy
Abstract:
This work presents a robust approach for the recognition of characters in Telugu and Kannada, two South Indian scripts with structural similarities between characters. Exhaustive datasets are required to recognize the characters, but only a few are publicly available. As a result, we decided to create a dataset for one language (the source language), train the model with it, and then test it on the target language. Telugu is the target language in this work, whereas Kannada is the source language. The suggested method makes use of Canny edge features to increase character identification accuracy on pictures with noise and varying lighting. A dataset of 45,150 images containing printed Kannada characters was created. The Nudi software was used to automatically generate printed Kannada characters with different writing styles and variations, and manual labelling was employed to ensure the accuracy of the character labels. Deep learning models, namely a Convolutional Neural Network (CNN) and a Visual Attention neural network (VAN), were used to experiment with the dataset. A Visual Attention neural network architecture incorporating additional channels for Canny edge features was adopted, as the results obtained with this approach were good. The model's accuracy on the combined Telugu and Kannada test dataset was an outstanding 97.3%. Performance was better with Canny edge features applied than with a model that solely used the original grayscale images. The accuracy of the model was found to be 80.11% for Telugu characters and 98.01% for Kannada words when tested on these languages. This model, which makes use of cutting-edge machine learning techniques, shows excellent accuracy when identifying and categorizing characters from these scripts.
Keywords: base characters, modifiers, guninthalu, aksharas, vattakshara, VAN
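A minimal sketch of the Canny-edge input channel described above, using OpenCV; the thresholds (100, 200) and the file name are assumptions:

```python
import cv2
import numpy as np

def with_edge_channel(gray_img):
    # Stack a Canny edge map onto the grayscale glyph so the network sees
    # both intensity and explicit stroke boundaries (robust to lighting).
    edges = cv2.Canny(gray_img, 100, 200)
    return np.stack([gray_img, edges], axis=-1)  # shape (H, W, 2)

img = cv2.imread("kannada_glyph.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
x = with_edge_channel(img)  # feed x to the CNN/VAN input layer
```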
Procedia PDF Downloads 53
2497 Vehicle Timing Motion Detection Based on Multi-Dimensional Dynamic Detection Network
Authors: Jia Li, Xing Wei, Yuchen Hong, Yang Lu
Abstract:
Detecting vehicle behavior has always been a focus of intelligent transportation, but with the explosive growth in the number of vehicles and the complexity of the road environment, the vehicle behavior videos captured by traditional surveillance can no longer satisfy the study of vehicle behavior. The traditional method of manually labeling vehicle behavior is too time-consuming and labor-intensive, and the existing object detection and tracking algorithms have poor practicability and low behavioral localization detection rates. This paper proposes a vehicle behavior detection algorithm based on a dual-stream convolutional network and a multi-dimensional video dynamic detection network. In the videos, the straight-line behavior of the vehicle defaults to background behavior; changing lanes, turning, and turning around are set as target behaviors. The purpose of this model is to automatically mark the target behavior of the vehicle in untrimmed videos. First, the target behavior proposals in the long video are extracted through the dual-stream convolutional network. The model uses the dual-stream convolutional network to generate a one-dimensional action score waveform, and then extracts segments with scores above a given threshold M as preliminary vehicle behavior proposals. Second, the preliminary proposals are pruned and identified using the multi-dimensional video dynamic detection network. Drawing on hierarchical reinforcement learning, the multi-dimensional network includes a Timer module and a Spacer module, where the Timer module mines temporal information in the video stream and the Spacer module extracts spatial information in the video frame. The Timer and Spacer modules are implemented with Long Short-Term Memory (LSTM) units and start from an all-zero hidden state. The Timer module uses the Transformer mechanism to extract timing information from the video stream and extracts features by linear mapping and other methods. Finally, the model fuses temporal and spatial information and obtains the location and category of the behavior through a softmax layer. This paper uses recall and precision to measure the performance of the model. Extensive experiments show that, on the dataset of this paper, the proposed model has obvious advantages compared with the existing state-of-the-art behavior detection algorithms. When the Time Intersection over Union (TIoU) threshold is 0.5, the average precision (MP) reaches 36.3% (the MP of the baselines is 21.5%). In summary, this paper proposes a vehicle behavior detection model based on a multi-dimensional dynamic detection network, introducing spatial and temporal information to extract vehicle behaviors in long videos. Experiments show that the proposed algorithm is advanced and accurate for vehicle timing behavior detection. In the future, the focus will be on simultaneously detecting the timing behavior of multiple vehicles in complex traffic scenes (such as a busy street) while ensuring accuracy.
Keywords: vehicle behavior detection, convolutional neural network, long short-term memory, deep learning
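A small sketch of the proposal-extraction step: turning the one-dimensional action-score waveform from the dual-stream network into segments whose score exceeds the threshold M (function and variable names are illustrative):

```python
import numpy as np

def score_segments(scores, m=0.5):
    # Turn a per-frame action-score waveform into [start, end) proposals:
    # keep maximal runs of frames whose score exceeds the threshold M.
    above = np.asarray(scores) > m
    edges = np.flatnonzero(np.diff(above.astype(int)))
    bounds = np.concatenate([[0], edges + 1, [len(above)]])
    return [(int(s), int(e)) for s, e in zip(bounds[:-1], bounds[1:]) if above[s]]

print(score_segments([0.1, 0.7, 0.8, 0.2, 0.9, 0.9], m=0.5))  # [(1, 3), (4, 6)]
```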
Procedia PDF Downloads 130
2496 Factors Impacting Geostatistical Modeling Accuracy and Modeling Strategy of Fluvial Facies Models
Authors: Benbiao Song, Yan Gao, Zhuo Liu
Abstract:
Geostatistical modeling is the key technique for reservoir characterization, and the quality of geological models greatly influences the prediction of reservoir performance, but few studies have been done to quantify the factors impacting geostatistical reservoir modeling accuracy. In this study, 16 fluvial prototype models were established to represent different levels of geological complexity, and six cases ranging from 16 to 361 wells were defined to reproduce all 16 prototype models using different methodologies, including SIS, object-based, and MPFS algorithms, together with different constraint parameters. A modeling accuracy ratio was defined to quantify the influence of each factor, and ten realizations were averaged to represent each accuracy ratio under the same modeling condition and parameter association. In total, 5,760 simulations were run to quantify the relative contribution of each factor to the simulation accuracy, and the results can be used as a strategy guide for facies modeling under similar conditions. It is found that data density, geological trend, and geological complexity have a great impact on modeling accuracy. Modeling accuracy may reach up to 90% when the channel sand width is at least 1.5 times the well spacing, under whatever condition, for the SIS and MPFS methods. When well density is low, the contribution of the geological trend may increase the modeling accuracy from 40% to 70%, while the use of a proper variogram may have a very limited contribution for the SIS method. This implies that when well data are dense enough to cover simple geobodies, little effort is needed to construct an acceptable model; when geobodies are complex and the data are insufficient, it is better to construct a set of robust geological trends than to rely on a variogram function. For the object-based method, the modeling accuracy does not increase with data density as obviously as for the SIS method, but it keeps a rational appearance when data density is low. MPFS methods follow a similar trend to the SIS method, but the use of a proper geological trend together with a rational variogram may give better modeling accuracy than the MPFS method alone. This implies that the geological modeling strategy for a real reservoir case needs to be optimized through evaluation of the dataset, the geological complexity, the geological constraint information, and the modeling objective.
Keywords: fluvial facies, geostatistics, geological trend, modeling strategy, modeling accuracy, variogram
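The paper does not spell out how the modeling accuracy ratio is computed; one plausible definition, the fraction of cells whose simulated facies matches the prototype, averaged over the ten realizations, is sketched below:

```python
import numpy as np

def accuracy_ratio(prototype, realizations):
    # Fraction of grid cells whose simulated facies matches the prototype,
    # averaged over realizations (ten per condition in the study).
    return float(np.mean([(r == prototype).mean() for r in realizations]))

proto = np.random.default_rng(1).integers(0, 2, size=(50, 50))
# Ten synthetic realizations with ~10% of cells flipped, for illustration.
reals = [proto ^ (np.random.default_rng(i).random((50, 50)) < 0.1) for i in range(10)]
print(accuracy_ratio(proto, reals))  # ~0.9
```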
Procedia PDF Downloads 264
2495 Disabling Barriers to Community Participation in Everyday Environments from the Perspective of People with Disabilities
Authors: Leah Samples
Abstract:
Barriers to participation persist for people with disabilities despite a long history of legislation designed to support their equal opportunity. Historically, the focus has been placed solely on structural barriers, but newer research highlights the importance of looking at social and informational barriers to participation. Collectively, these barriers prevent people with disabilities from fully engaging in community life and, consequently, from achieving full citizenship. Disability is crucial to understanding the meaning of citizenship. Drawing upon the influences of feminist, critical race, and human rights theorists, citizenship can be defined as a set of rights and responsibilities that an individual has because they are part of a community. However, when those rights are taken away or denied, one's citizenship is in question. Employing this definition of citizenship allows one to examine how barriers to citizenship present themselves in societies built on the ideal of a non-disabled person. To understand at a deeper level how this notion of citizenship manifests itself, this study seeks to unearth commonly experienced barriers to participation in the lives of visually impaired adults in everyday environments. The purpose of this qualitative study is to explore commonly experienced barriers to participation in the lives of visually impaired adults in leisure settings (e.g., restaurants, stores). Thirty adults with visual impairments participated in semi-structured interviews, as well as participant observations. The results suggest that barriers to participation are still pervasive in everyday environments and subsequently have an adverse effect on participation and belonging for people with visual impairments. This study highlights the importance of exploring and acknowledging the daily tensions that persons with disabilities face in their communities. A full exploration of these tensions is necessary in order to develop solutions and tools to create more just communities for everyone.
Keywords: barriers, citizenship, belonging, everyday environments
Procedia PDF Downloads 417
2494 Gesture-Controlled Interface Using Computer Vision and Python
Authors: Vedant Vardhan Rathour, Anant Agrawal
Abstract:
The project aims to provide a touchless, intuitive interface for human-computer interaction, enabling users to control their computer using hand gestures and voice commands. The system leverages advanced computer vision techniques, using the MediaPipe framework and OpenCV to detect and interpret real-time hand gestures and transform them into mouse actions such as clicking, dragging, and scrolling. Additionally, the integration of a voice assistant powered by the SpeechRecognition library allows for seamless execution of tasks like web searches, location navigation, and gesture control of the system through voice commands.
Keywords: gesture recognition, hand tracking, machine learning, convolutional neural networks
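A minimal sketch of the cursor-tracking core using the MediaPipe Hands solution and pyautogui; clicking, dragging, scrolling, and the voice assistant are omitted, and the confidence threshold is an assumption:

```python
import cv2
import mediapipe as mp
import pyautogui

hands = mp.solutions.hands.Hands(max_num_hands=1, min_detection_confidence=0.7)
screen_w, screen_h = pyautogui.size()
cap = cv2.VideoCapture(0)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if result.multi_hand_landmarks:
        # Landmark 8 is the index fingertip; its normalized (x, y) drives the cursor.
        tip = result.multi_hand_landmarks[0].landmark[8]
        pyautogui.moveTo(tip.x * screen_w, tip.y * screen_h)
    if cv2.waitKey(1) & 0xFF == 27:  # Esc quits
        break
cap.release()
```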
Procedia PDF Downloads 12
2493 A Comparative Study of Cognitive Functions in Relapsing-Remitting Multiple Sclerosis Patients, Secondary-Progressive Multiple Sclerosis Patients and Normal People
Authors: Alireza Pirkhaefi
Abstract:
Background: Multiple sclerosis (MS) is one of the most common diseases of the central nervous system (brain and spinal cord). Given the importance of cognitive disorders in patients with multiple sclerosis, the present study was conducted to compare cognitive functions (working memory, attention and centralization, and visual-spatial perception) in patients with relapsing-remitting multiple sclerosis (RRMS) and secondary progressive multiple sclerosis (SPMS). Method: The present study was performed as a retrospective study with the ex post facto method. The sample consisted of 60 patients with multiple sclerosis (30 relapsing-remitting and 30 secondary progressive), selected by convenience sampling from the Tehran Community of Supported MS Patients; 30 normal persons were also selected as a comparison group. The Montreal Cognitive Assessment (MoCA) was used to assess cognitive functions. Data were analyzed using multivariate analysis of variance. Results: The results showed significant differences among the cognitive functioning of patients with RRMS, patients with SPMS, and normal individuals. There were no significant differences in working memory between the RRMS and SPMS groups, while significant differences in this variable were seen between the two patient groups and normal individuals. The results also showed significant differences in attention and centralization and in visual-spatial perception among the three groups. Conclusions: The results showed that there are differences between the cognitive functions of RRMS and SPMS patients, such that the functions of RRMS patients are better than those of SPMS patients. These results have a critical role in the improvement of cognitive functions, in reducing the factors causing disability due to cognitive impairment, and especially in the overall health of society.
Keywords: multiple sclerosis, cognitive function, secondary-progressive, normal subjects
Procedia PDF Downloads 239
2492 Pion/Muon Identification in a Nuclear Emulsion Cloud Chamber Using Neural Networks
Authors: Kais Manai
Abstract:
The main part of this work focuses on the study of pion/muon separation at low energy using a nuclear Emulsion Cloud Chamber (ECC) made of lead and nuclear emulsion films. The work consists of two parts: a particle reconstruction algorithm and a neural network that assigns to each reconstructed particle the probability of being a muon or a pion. The pion/muon separation algorithm was optimized using a detailed Monte Carlo simulation of the ECC and tested on real data. The algorithm achieves a 60% muon identification efficiency with a pion misidentification rate smaller than 3%.
Keywords: nuclear emulsion, particle identification, tracking, neural network
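A toy sketch of the second stage, a classifier that outputs a muon probability per reconstructed track; the two features and all distributions are invented for illustration, not the ECC's actual variables:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(42)
# Hypothetical per-track features, e.g. track length in lead plates and
# mean scattering angle; muons travel farther and scatter less than pions.
n = 2000
muons = np.column_stack([rng.normal(30, 5, n), rng.normal(0.01, 0.003, n)])
pions = np.column_stack([rng.normal(15, 6, n), rng.normal(0.03, 0.01, n)])
X = np.vstack([muons, pions])
y = np.array([1] * n + [0] * n)  # 1 = muon, 0 = pion

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500).fit(X, y)
p_muon = clf.predict_proba(X[:5])[:, 1]  # probability of being a muon
```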
Procedia PDF Downloads 506
2491 Communicative Language between Doctors and Patients in Healthcare
Authors: Anita Puspawati
Abstract:
A failure to obtain informed consent from a patient occurs when doctors lack effective communication skills. Therefore, language is very important in communication between doctor and patient. This study uses the descriptive analysis method, a method used mainly in researching the status of a group of people, an object, a condition, a system of thought, or a class of events in the present. The result of this study indicates that communicative language between doctors and patients will increase patients' trust in their doctors, and accordingly, patients will provide informed consent voluntarily.
Keywords: communicative, language, doctor, patient
Procedia PDF Downloads 292
2490 Attention and Memory in the Music Learning Process in Individuals with Visual Impairments
Authors: Lana Burmistrova
Abstract:
Introduction: The influence of visual impairments on several cognitive processes used in the music learning process is an increasingly important area in special education and cognitive musicology. Many children have visual impairments due to refractive errors and irreversible inhibitors. However, owing to compensatory neuroplasticity and functional reorganization, congenitally blind (CB) and early blind (EB) individuals use several areas of the occipital lobe to perceive and process auditory and tactile information. CB individuals have greater memory capacity and memory reliability, and fewer false-memory mechanisms are engaged while executing several tasks; they have better working memory (WM) and short-term memory (STM). Blind individuals use several strategies while executing tactile and working-memory n-back tasks: a verbalization strategy (mental recall), a tactile strategy (tactile recall), and combined strategies. Methods and design: The aim of the pilot study was to substantiate similar tendencies in blind and sighted individuals while executing attention, memory, and combined auditory tasks constructed for this study, and to investigate the attention, memory, and combined mechanisms used in the music learning process. For this study, eight (n=8) blind and eight (n=8) sighted individuals aged 13-20 were chosen. All respondents had more than five years of music performance and music learning experience. In the attention task, all respondents had to identify pitch changes in tonal and randomized melodic pairs. The memory task was based on the mismatch negativity (MMN) proportion: 80 percent standard (unchanged) and 20 percent deviant (changed) stimuli (sequences). Every sequence was named (na-na, ra-ra, za-za) and several items (pencil, spoon, tealight) were assigned to each sequence. Respondents had to recall the sequences, associate them with the items, and detect possible changes. While executing the combined task, all respondents had to focus attention on the pitch changes and had to detect and describe these during the recall. Results and conclusion: The results support specific features in CB and EB individuals, and similarities between late blind (LB) and sighted individuals. While executing attention and memory tasks, CB and EB individuals tended to use more precise execution tactics and more advanced periodic memory while focusing on auditory and tactile stimuli. While executing memory and combined tasks, CB and EB individuals used passive working memory to recall standard sequences, active working memory to recall deviant sequences, and combined strategies. Based on the observations, the assessment of blind respondents, and the recording specifics, the following attention and memory correlations were identified: reflective attention and STM, reflective attention and periodic memory, auditory attention and WM, tactile attention and WM, auditory-tactile attention and STM. The results and the summary of findings highlight the attention and memory features used in the music learning process in the context of blindness, and the tendency of several attention and memory types to correlate based on the task, the strategy, and individual features.
Keywords: attention, blindness, memory, music learning, strategy
Procedia PDF Downloads 184
2489 Astronomical Object Classification
Authors: Alina Muradyan, Lina Babayan, Arsen Nanyan, Gohar Galstyan, Vigen Khachatryan
Abstract:
We present a photometric method for identifying stars, galaxies, and quasars in multi-color surveys, which uses a library of more than ~65,000 color templates for comparison with observed objects. The method aims at extracting the information content of object colors in a statistically correct way, and performs classification as well as redshift estimation for galaxies and quasars in a unified approach based on the same probability density functions. For the redshift estimation, we employ an advanced version of the Minimum Error Variance estimator, which determines the redshift error from the redshift-dependent probability density function itself. The method was originally developed for the Calar Alto Deep Imaging Survey (CADIS) but is now used in a wide variety of survey projects. We checked its performance by spectroscopy of CADIS objects, where the method provides high reliability (6 errors among 151 objects with R < 24), especially for quasar selection, and redshifts accurate within σz ≈ 0.03 for galaxies and σz ≈ 0.1 for quasars. For an optimization of future survey efforts, a few model surveys are compared, designed to use the same total amount of telescope time but different sets of broad-band and medium-band filters. Their performance is investigated by Monte Carlo simulations as well as by analytic evaluation in terms of classification and redshift estimation. If photon noise were the only error source, broad-band surveys and medium-band surveys would perform equally well, as long as they provide the same spectral coverage. In practice, medium-band surveys show superior performance due to their higher tolerance for calibration errors and cosmic variance. Finally, we discuss the relevance of color calibration and derive important conclusions for the issues of library design and choice of filters. The calibration accuracy poses strong constraints on an accurate classification, which are most critical for surveys with few, broad, and deeply exposed filters, but less severe for surveys with many, narrow, and less deep filters.
Keywords: VO, ArVO, DFBS, FITS, image processing, data analysis
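The paper classifies with full probability density functions; the simpler minimum-chi-square template match sketched below conveys the same compare-against-a-template-library idea (all names are illustrative):

```python
import numpy as np

def classify(colors, errors, templates):
    # Compare an object's measured colors against every library template
    # with a chi-square statistic; lower is a better match. `templates`
    # maps (class, redshift) -> model colors.
    best, best_chi2 = None, np.inf
    for key, model in templates.items():
        chi2 = np.sum(((colors - model) / errors) ** 2)
        if chi2 < best_chi2:
            best, best_chi2 = key, chi2
    return best, best_chi2

templates = {("quasar", 1.2): np.array([0.3, 0.1]), ("star", 0.0): np.array([0.9, 0.5])}
print(classify(np.array([0.32, 0.12]), np.array([0.05, 0.05]), templates))
```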
Procedia PDF Downloads 80
2488 Representational Issues in Learning Solution Chemistry at Secondary School
Authors: Lam Pham, Peter Hubber, Russell Tytler
Abstract:
Students' conceptual understanding of chemistry concepts and phenomena involves the capability to coordinate across the three levels of Johnston's triangle model. This triplet model is based on reasoning about chemical phenomena across the macro, sub-micro, and symbolic levels. In chemistry education, there is a need to further examine inquiry-based approaches that enhance students' conceptual learning and problem-solving skills. This research adopted a directed inquiry pedagogy, based on students constructing and coordinating representations, to investigate senior school students' capabilities to move flexibly across Johnston's levels when learning dilution and molar concentration concepts. The participants comprise 50 grade 11 students, 20 grade 10 students, and 4 chemistry teachers selected from 4 secondary schools located in metropolitan Melbourne, Victoria. This research into classroom practices used ethnographic methodology and involved teachers working collaboratively with the research team to develop representational activities and lesson sequences for a unit on solution chemistry. The representational activities included challenges (Representational Challenges, RCs) that used 'representational tools' to assist students to move across Johnston's three levels for dilution phenomena. In this report, a 'representational tool' called the 'cross and portion' model was developed and used in teaching and learning the molar concentration concept. Students' conceptual understanding and problem-solving skills when learning with this model are analysed through group case studies of year 10 and year 11 chemistry students. In learning dilution concepts, students in both group case studies actively conducted a practical experiment and used their own language and visualisation skills to represent dilution phenomena at the macroscopic level (RC1). At the sub-microscopic level, students generated and negotiated representations of the chemical interactions between solute and solvent underpinning the dilution process. At the symbolic level, students demonstrated their understanding of dilution concepts by drawing chemical structures and performing mathematical calculations. When learning molar concentration with the 'cross and portion' model (RC2), students coordinated across visual and symbolic representational forms and across Johnston's levels to construct representations. The analysis showed that in RC1, year 10 students needed more scaffolding when inducted into representations to make the form and function of sub-microscopic representations explicit. In RC2, year 11 students showed clarity in using visual representations (drawings) and linking them to mathematics to solve representational challenges about molar concentration. In contrast, year 10 students struggled to match up the two systems, the symbolic system of moles per litre ('cross and portion') and the visual representation (drawing). These conceptual problems lie not in the students' mathematical calculation capability but rather in their capability to align visual representations with the symbolic mathematical formulations. This research also found that students in both group case studies were able to coordinate representations when probed about the use of the 'cross and portion' model (in RC2) to demonstrate the molar concentration of the diluted solutions (in RC1). Students mostly succeeded in constructing 'cross and portion' models to represent the reduction of molar concentration along the concentration gradient.
In conclusion, this research demonstrated how the strategic introduction and coordination of chemical representations, across modes and across the macro, sub-micro, and symbolic levels, supported student reasoning and problem solving in chemistry.
Keywords: cross and portion, dilution, Johnston's triangle, molar concentration, representations
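The symbolic-level relations the students coordinate with their drawings are the standard molarity and dilution equations (general chemistry, not notation specific to the 'cross and portion' model):

```latex
% Molar concentration and the dilution relation:
\begin{align}
c &= \frac{n}{V} \quad \text{(moles of solute per litre of solution)}, \\
c_1 V_1 &= c_2 V_2 \;\Rightarrow\; c_2 = c_1 \frac{V_1}{V_2}.
\end{align}
% Worked example: diluting 50 mL of a 0.40 M solution to 200 mL gives
% c_2 = 0.40 \times 50/200 = 0.10 M.
```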
Procedia PDF Downloads 137
2487 Simulation of Elastic Bodies through Discrete Element Method, Coupled with a Nested Overlapping Grid Fluid Flow Solver
Authors: Paolo Sassi, Jorge Freiria, Gabriel Usera
Abstract:
In this work, a finite volume fluid flow solver is coupled with a discrete element method module for simulating the dynamics of free and elastic bodies in interaction with the fluid and with each other. The open source fluid flow solver, caffa3d.MBRi, can work with nested overlapping grids, making it easy to refine the grid in the region where the bodies are moving. To do so, it is necessary to implement a recognition function able to identify the specific mesh block in which each body is moving; the set of finer overlapping grids can then be displaced along with the set of bodies being simulated. The interaction between the bodies and the fluid is computed through two-way coupling. The velocity field of the fluid is first interpolated to determine the drag force on each object. After solving the objects' displacements, subject to the elastic bonding among them, the force is applied back onto the fluid through Gaussian smoothing over the cells near the position of each object. The fishnet is represented as lumped masses connected by elastic lines. The internal forces are derived from the elasticity of these lines, and the external forces are due to drag, gravity, buoyancy, and the load acting on each element of the system. When solving the system of ordinary differential equations that represents the motion of the elastic and flexible bodies, the fourth-order Runge-Kutta solver was found to be the best tool in terms of performance, but it requires a finer grid than the fluid solver for the system to converge, which demands greater computing power. The coupled solver is demonstrated by simulating the interaction between the fluid, an elastic fishnet, and a set of free bodies being captured by the net as they are dragged by the fluid. The deformation of the net, as well as the wake produced in the fluid stream, is well captured by the method, without requiring the fluid solver mesh to adapt to the evolving geometry. Application of the same strategy to the simulation of elastic structures subject to the action of wind is also possible with the method presented, and one such application is currently under development.
Keywords: computational fluid dynamics, discrete element method, fishnets, nested overlapping grids
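A minimal sketch of the body integrator the abstract settles on: classical fourth-order Runge-Kutta on lumped masses joined by an elastic line, with linear drag toward the local fluid velocity. Gravity, buoyancy, and the two-way coupling are omitted, and all constants are assumptions:

```python
import numpy as np

def accel(x, v, k, rest, drag, fluid_v, m):
    # Two lumped masses (rows of x, v) joined by one elastic line, plus
    # linear drag toward the local fluid velocity.
    d = x[1] - x[0]
    length = np.linalg.norm(d)
    f_spring = k * (length - rest) * d / length
    f = np.array([f_spring, -f_spring]) + drag * (fluid_v - v)
    return f / m

def rk4_step(x, v, dt, *args):
    # Classical RK4 on the state (x, v), the integrator the study found
    # best-performing for the body dynamics.
    k1x, k1v = v, accel(x, v, *args)
    k2x, k2v = v + 0.5*dt*k1v, accel(x + 0.5*dt*k1x, v + 0.5*dt*k1v, *args)
    k3x, k3v = v + 0.5*dt*k2v, accel(x + 0.5*dt*k2x, v + 0.5*dt*k2v, *args)
    k4x, k4v = v + dt*k3v, accel(x + dt*k3x, v + dt*k3v, *args)
    x_new = x + (dt/6)*(k1x + 2*k2x + 2*k3x + k4x)
    v_new = v + (dt/6)*(k1v + 2*k2v + 2*k3v + k4v)
    return x_new, v_new
```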
Procedia PDF Downloads 416
2486 An Exponential Field Path Planning Method for Mobile Robots Integrated with Visual Perception
Authors: Magdy Roman, Mostafa Shoeib, Mostafa Rostom
Abstract:
Global vision, whether provided by overhead fixed cameras, on-board aerial vehicle cameras, or satellite images, can always provide detailed information on the environment around mobile robots. In this paper, an intelligent vision-based method of path planning and obstacle avoidance for mobile robots is presented. The method integrates visual perception with a newly proposed field-based path-planning method to overcome common path-planning problems such as local minima, unreachable destinations, and unnecessarily lengthy paths around obstacles. The method proposes an exponential angle-deviation field around each obstacle that affects the orientation of a nearby robot. As the robot heads toward the goal point, obstacles are classified into right and left groups, and a deviation angle is exponentially added to or subtracted from the robot's orientation. The exponential field parameters are chosen based on the Lyapunov stability criterion to guarantee robot convergence to the destination. The proposed method uses the obstacles' shapes and locations, extracted from the global vision system, through a collision prediction mechanism to decide whether to activate or deactivate an obstacle's field. In addition, a search mechanism is developed for the case where the robot or the goal point is trapped among obstacles, to find a suitable exit or entrance. The proposed algorithm is validated both in simulation and through experiments. The algorithm shows effectiveness in obstacle avoidance and destination convergence, overcoming common path-planning problems found in classical methods.
Keywords: path planning, collision avoidance, convergence, computer vision, mobile robots
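A small sketch of the steering rule described above: an exponentially decaying deviation angle per obstacle, added or subtracted according to the obstacle's side of the goal line. The gains k and lam are illustrative, not the Lyapunov-derived values from the paper:

```python
import numpy as np

def steer(robot, goal, obstacles, k=1.0, lam=0.5):
    # Heading toward the goal plus an exponentially decaying deviation per
    # obstacle: subtracted for obstacles on the left of the goal line,
    # added for those on the right, so the robot deviates away from each.
    heading = np.arctan2(goal[1] - robot[1], goal[0] - robot[0])
    to_goal = goal - robot
    for obs in obstacles:
        to_obs = obs - robot
        d = np.linalg.norm(to_obs)
        cross = to_goal[0] * to_obs[1] - to_goal[1] * to_obs[0]
        side = np.sign(cross)                 # +1 left of goal line, -1 right
        heading -= side * k * np.exp(-lam * d)  # influence decays with distance
    return heading

print(steer(np.array([0.0, 0.0]), np.array([5.0, 0.0]), [np.array([2.0, 0.5])]))
```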
Procedia PDF Downloads 195
2485 Model Predictive Control of Three Phase Inverter for PV Systems
Authors: Irtaza M. Syed, Kaamran Raahemifar
Abstract:
This paper presents model predictive control (MPC) of a utility-interactive three-phase inverter (TPI) for a photovoltaic (PV) system at the commercial level. The proposed model uses a phase-locked loop (PLL) to synchronize the TPI with the power electric grid (PEG) and performs MPC in a dq reference frame. The TPI model consists of a boost converter (BC), maximum power point tracking (MPPT) control, and a three-leg voltage source inverter (VSI). The operational model of the VSI is used to synthesize sinusoidal current and track the reference. The model is validated using a 35.7 kW PV system in Matlab/Simulink. The implementation and results show the simplicity and accuracy, as well as the reliability, of the model.
Keywords: model predictive control, three phase voltage source inverter, PV system, Matlab/Simulink
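A minimal sketch of the dq projection the PLL angle enables, the amplitude-invariant Park transform; this is the standard relation, not code from the paper's Simulink model:

```python
import numpy as np

def abc_to_dq(ia, ib, ic, theta):
    # Project three-phase currents onto the rotating dq frame using the PLL
    # angle theta, so the MPC regulates two DC-like quantities instead of
    # three sinusoids (amplitude-invariant form).
    two_thirds = 2.0 / 3.0
    d = two_thirds * (ia * np.cos(theta) + ib * np.cos(theta - 2*np.pi/3)
                      + ic * np.cos(theta + 2*np.pi/3))
    q = -two_thirds * (ia * np.sin(theta) + ib * np.sin(theta - 2*np.pi/3)
                       + ic * np.sin(theta + 2*np.pi/3))
    return d, q
```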
Procedia PDF Downloads 596