Search results for: computer science unplugged
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4718

3548 Disaster Victim Identification: A Social Science Perspective

Authors: Victor Toom

Abstract:

Although it is never possible to anticipate the full range of difficulties after a catastrophe, efforts to identify victims of mass casualty events have become institutionalized and standardized with the aim of effectively and efficiently addressing the many challenges and contingencies. Such ‘disaster victim identification’ (DVI) practices are dependent on the forensic sciences, are subject to national legislation, and are reliant on technical and organizational protocols to mitigate the many complexities in the wake of catastrophe. Apart from such technological, legal and bureaucratic elements constituting a DVI operation, victims’ families and their emotions are also part and parcel of any effort to identify casualties of mass human fatality incidents. Take, for example, the fact that forensic experts require (antemortem) information from relatives to make identification possible. An identified body or body part is also repatriated to kin. Relatives are thus main stakeholders in DVI operations. Much has been achieved in years past regarding facilitating victims’ families’ issues and their emotions. Yet how families are dealt with by experts and authorities is still considered a difficult topic. Due to the sensitivities and required empathic interaction with families on the one hand, and the rationalized DVI efforts on the other hand, there is still scope for improving communication, providing information and meaningfully including relatives in the DVI effort. This paper aims to bridge the standardized world of DVI efforts and families’ experienced realities and makes suggestions to further improve DVI efforts through inclusion of victims’ families. Based on qualitative interviews, the paper narrates the involvement and experiences of, inter alia, DVI practitioners, victims’ families, advocates and clergy in the wake of the 1995 Srebrenica genocide, which killed approximately 8,000 men, and the 9/11 attacks in New York City, with 2,750 victims. The paper shows that there are several models of including victims’ families in a DVI operation, and it argues for a model in which victims’ families become a partner in DVI operations.

Keywords: disaster victim identification (DVI), victims’ families, social science (qualitative), 9/11 attacks, Srebrenica genocide

Procedia PDF Downloads 230
3547 Automatic Identification and Monitoring of Wildlife via Computer Vision and IoT

Authors: Bilal Arshad, Johan Barthelemy, Elliott Pilton, Pascal Perez

Abstract:

Getting reliable, informative, and up-to-date information about the location, mobility, and behavioural patterns of animals will enhance our ability to research and preserve biodiversity. The fusion of infra-red sensors and camera traps offers an inexpensive way to collect wildlife data in the form of images. However, extracting useful data from these images, such as the identification and counting of animals, remains a manual, time-consuming, and costly process. In this paper, we demonstrate that such information can be automatically retrieved by using state-of-the-art deep learning methods. Another major challenge that ecologists are facing is counting one single animal multiple times due to that animal reappearing in other images taken by the same or other camera traps. Nonetheless, such information can be extremely useful for tracking wildlife and understanding its behaviour. To tackle the multiple-count problem, we have designed a meshed network of camera traps, so they can share the captured images along with timestamps, cumulative counts, and dimensions of the animal. The proposed method leverages edge computing to support real-time tracking and monitoring of wildlife. This method has been validated in the field and can be easily extended to other applications focusing on wildlife monitoring and management, where the traditional way of monitoring is expensive and time-consuming.
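
The recounting problem described above can be illustrated with a minimal sketch: detections shared across a meshed camera-trap network are merged by species, timestamp, and estimated animal dimensions before the cumulative count is updated. The record fields, time window, and size tolerance below are illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical detection record shared between meshed camera traps.
@dataclass
class Detection:
    camera_id: str
    species: str
    timestamp: datetime
    body_length_m: float  # estimated animal dimension from the image

def merge_counts(detections, time_window=timedelta(minutes=10), size_tolerance=0.15):
    """Group detections of the same species that fall within a short time window
    and have a similar estimated size, treating each group as one individual
    rather than several independent sightings."""
    detections = sorted(detections, key=lambda d: d.timestamp)
    groups = []
    for det in detections:
        placed = False
        for group in groups:
            last = group[-1]
            same_species = det.species == last.species
            close_in_time = det.timestamp - last.timestamp <= time_window
            similar_size = abs(det.body_length_m - last.body_length_m) <= size_tolerance
            if same_species and close_in_time and similar_size:
                group.append(det)
                placed = True
                break
        if not placed:
            groups.append([det])
    return len(groups)  # cumulative count of distinct individuals

if __name__ == "__main__":
    now = datetime(2024, 1, 1, 6, 0)
    sightings = [
        Detection("trap-A", "fox", now, 0.62),
        Detection("trap-B", "fox", now + timedelta(minutes=4), 0.60),  # same fox seen by a neighbouring trap
        Detection("trap-A", "fox", now + timedelta(hours=2), 0.61),    # far apart in time: counted separately
    ]
    print(merge_counts(sightings))  # -> 2
```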

Keywords: computer vision, ecology, internet of things, invasive species management, wildlife management

Procedia PDF Downloads 135
3546 Preparation of Metal Containing Epoxy Polymer and Investigation of Their Properties as Fluorescent Probe

Authors: Ertuğ Yıldırım, Dile Kara, Salih Zeki Yıldız

Abstract:

Metal-containing polymers (MCPs) are macromolecules usually containing metal-ligand coordination units and form a multidisciplinary research field based mainly at the interface between coordination chemistry and polymer science. The progress of this area has also been reinforced by the growth of several other closely related disciplines, including macromolecular engineering, crystal engineering, organic synthesis, supramolecular chemistry, and colloidal and materials science. Schiff base ligands are very effective in constructing supramolecular architectures such as coordination polymers and double and triple helical complexes. In addition, Schiff base derivatives incorporating a fluorescent moiety are appealing tools for optical sensing of metal ions. MCPs are well-known systems in which combinations of local parameters are possible by means of fluorometric techniques. Generally, a polymer without incorporated fluorescent groups is unspecific, and it is not useful to analyze its fluorescent properties. Therefore, it is necessary to prepare a new type of epoxy polymer with fluorescent groups for metal sensing and other photochemical applications. In the present study, metal-containing polymers were prepared via polyfunctional monomeric Schiff base metal chelate complexes in the presence of difunctional monomers such as the diglycidyl ether of bisphenol A (DGEBA). The synthesized complexes and polymers were characterized by FTIR, UV-Vis, and mass spectroscopy. The preparation of the epoxy polymers was carried out at 185 °C. The prepared composites, having sharp and narrow excitation/emission properties, are expected to be applicable in various systems such as heat-resistant polymers and photovoltaic devices. The prepared composite is also ideal for various applications, as it is easily prepared, safe, and maintains good fluorescence properties.

Keywords: Schiff base ligands, crystal engineering, fluorescence properties, Metal Containing Polymers (MCPs)

Procedia PDF Downloads 343
3545 Using Audio-Visual Aids and Computer-Assisted Language Instruction to Overcome Learning Difficulties of Vocabulary in Students of Special Needs

Authors: Sadeq Al Yaari, Ayman Al Yaari, Adham Al Yaari, Montaha Al Yaari, Aayah Al Yaari, Sajedah Al Yaar

Abstract:

Objectives: To assess the effect of using audio-visual aids and computer-assisted/aided language instruction (CALI) on the performance of students with special needs studying a vocabulary course. Methods: The performance of forty students with special needs (males and females) who used audio-visual aids and CALI in their vocabulary course at the al-Malādh school for students with special needs was compared to that of another group (control group) of the same number and age (8-18). The subjects in the experimental group were given lessons using audio-visual aids and CALI, while those in the control group were given lessons using ordinary educational aids only, although both groups shared almost the same features (class environment, speech language therapist (SLT), etc.). A pre- and post-test was given at the beginning and end of the semester, and a qualitative and quantitative analysis followed. Results and conclusions: Results of the present experimental study's pre- and post-tests indicated that the performance of the students in the first group was higher than that of those in the second group (34.27%, 73.82% vs. 33.57%, 34.92%, respectively). Compared with females, males’ performance was higher (1515 scores vs. 1438 scores). Such findings suggest that the presence of these audio-visual aids and CALI in the classes of students with special needs, especially if they are studying a vocabulary-building course, is very important due to their usefulness in improving the performance of students with special needs.

Keywords: language components, vocabulary, audio-visual aids, CALI, special needs, students, SLTs

Procedia PDF Downloads 39
3544 Bioinformatics High Performance Computation and Big Data

Authors: Javed Mohammed

Abstract:

Right now, biomedical infrastructure lags well behind the curve. Our healthcare system is dispersed and disjointed; medical records are a bit of a mess; and we do not yet have the capacity to store and process the enormous amounts of data coming our way from widespread whole-genome sequencing. And then there are privacy issues. Despite these infrastructure challenges, some researchers are plunging into biomedical Big Data now, in hopes of extracting new and actionable knowledge. They are delving into molecular-level data to discover biomarkers that help classify patients based on their response to existing treatments, and pushing their results out to physicians in novel and creative ways. Computer scientists and biomedical researchers are able to transform data into models and simulations that will enable scientists, for the first time, to gain a profound understanding of the deepest biological functions. Solving biological problems may require high-performance computing (HPC), due either to the massive parallel computation required to solve a particular problem or to algorithmic complexity that may range from difficult to intractable. Many problems involve seemingly well-behaved polynomial time algorithms (such as all-to-all comparisons) but have massive computational requirements due to the large data sets that must be analyzed. High-throughput techniques for DNA sequencing and analysis of gene expression have led to exponential growth in the amount of publicly available genomic data. With the increased availability of genomic data, traditional database approaches are no longer sufficient for rapidly performing life science queries involving the fusion of data types. Computing systems are now so powerful that it is possible for researchers to consider modeling the folding of a protein or even the simulation of an entire human body. This research paper emphasizes computational biology's growing need for high-performance computing and Big Data. It illustrates HPC's indispensability in meeting the scientific and engineering challenges of the twenty-first century, and how protein folding (the structure and function of proteins) and phylogeny reconstruction (the evolutionary history of a group of genes) can use HPC, which provides sufficient capability for evaluating or solving more limited but meaningful instances. This article also indicates solutions to optimization problems and the benefits of Big Data for computational biology. The article illustrates the current state of the art and future generations of HPC computing with Big Data in biology.

Keywords: high performance, big data, parallel computation, molecular data, computational biology

Procedia PDF Downloads 359
3543 Cleaning of Scientific References in Large Patent Databases Using Rule-Based Scoring and Clustering

Authors: Emiel Caron

Abstract:

Patent databases contain patent-related data, organized in a relational data model, and are used to produce various patent statistics. These databases store raw data about scientific references cited by patents. For example, Patstat holds references to tens of millions of scientific journal publications and conference proceedings. These references might be used to connect patent databases with bibliographic databases, e.g. to study the relation between science, technology, and innovation in various domains. Problematic in such studies is the low data quality of the references, i.e. they are often ambiguous, unstructured, and incomplete. Moreover, a complete bibliographic reference is stored in only one attribute. Therefore, a computerized cleaning and disambiguation method for large patent databases is developed in this work. The method uses rule-based scoring and clustering. The rules are based on bibliographic metadata, retrieved from the raw data by regular expressions, and are transparent and adaptable. The rules, in combination with string similarity measures, are used to detect pairs of records that are potential duplicates. Due to the scoring, different rules can be combined to join scientific references, i.e. the rules reinforce each other. The scores are based on expert knowledge and initial method evaluation. After the scoring, pairs of scientific references that score above a certain threshold are clustered by means of a single-linkage clustering algorithm to form connected components. The method is designed to disambiguate all the scientific references in the Patstat database. The performance evaluation of the clustering method, on a large golden set with highly cited papers, shows on average a 99% precision and a 95% recall. The method is therefore accurate but careful, i.e. it weights precision over recall. Consequently, separate clusters of high precision are sometimes formed when there is not enough evidence for connecting scientific references, e.g. in the case of missing year and journal information for a reference. The clusters produced by the method can be used to directly link the Patstat database with bibliographic databases such as the Web of Science or Scopus.
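
A minimal sketch of the scoring-and-clustering idea, assuming made-up reference records, rule weights, and a threshold (the actual rules, weights, and threshold used for Patstat are not given here): pairs are scored by transparent rules plus string similarity, and pairs above the threshold are merged into connected components via single-linkage (union-find).

```python
from difflib import SequenceMatcher
from itertools import combinations

# Hypothetical reference records with metadata extracted by regular expressions.
refs = [
    {"id": 0, "title": "deep learning for protein folding", "year": "2019", "journal": "nature"},
    {"id": 1, "title": "deep learning for protein folding.", "year": "2019", "journal": "nature"},
    {"id": 2, "title": "a survey of patent analytics", "year": "2015", "journal": "scientometrics"},
]

def pair_score(a, b):
    """Combine transparent, adaptable rules into one score; rules reinforce each other."""
    score = 0.0
    title_sim = SequenceMatcher(None, a["title"], b["title"]).ratio()
    if title_sim > 0.9:
        score += 0.6          # rule 1: near-identical titles
    if a["year"] == b["year"]:
        score += 0.2          # rule 2: same publication year
    if a["journal"] == b["journal"]:
        score += 0.2          # rule 3: same journal string
    return score

THRESHOLD = 0.7  # in practice tuned from expert knowledge and method evaluation

# Union-find: single-linkage clustering of all pairs scoring above the threshold.
parent = {r["id"]: r["id"] for r in refs}
def find(x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x
def union(x, y):
    parent[find(x)] = find(y)

for a, b in combinations(refs, 2):
    if pair_score(a, b) >= THRESHOLD:
        union(a["id"], b["id"])

clusters = {}
for r in refs:
    clusters.setdefault(find(r["id"]), []).append(r["id"])
print(list(clusters.values()))  # e.g. [[0, 1], [2]] -> records 0 and 1 cite the same paper
```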

Keywords: clustering, data cleaning, data disambiguation, data mining, patent analysis, scientometrics

Procedia PDF Downloads 187
3542 A Computer-Aided System for Tooth Shade Matching

Authors: Zuhal Kurt, Meral Kurt, Bilge T. Bal, Kemal Ozkan

Abstract:

Shade matching and reproduction is the most important element of success in prosthetic dentistry. Until recently, the shade matching procedure was implemented through dentists’ visual perception with the help of shade guides. Since many factors influence visual perception, tooth shade matching using visual devices (shade guides) is highly subjective and inconsistent. The subjective nature of this process has led to the development of instrumental devices. Nowadays, colorimeters, spectrophotometers, spectroradiometers and digital image analysing systems are used for instrumental shade selection. Instrumental devices have the advantage that readings are quantifiable and can be obtained more rapidly, simply, objectively and precisely. However, these devices have noticeable drawbacks. For example, the translucent structure and irregular surfaces of teeth lead to defects in measurements with these devices. Also, results acquired by devices with different measurement principles may be inconsistent. So, it is necessary to search for new methods for the dental shade matching process. The digital camera, a computer-aided system device, has developed rapidly up to today. Currently, advances in image processing and computing have resulted in the extensive use of digital cameras for color imaging. This procedure is much cheaper than the use of traditional contact-type color measurement devices. Digital cameras can take the place of contact-type instruments for shade selection and overcome their disadvantages. Images taken of teeth show the morphology and color texture of the teeth. In recent decades, a new method was recommended to compare the color of shade tabs captured by a digital camera using color features. This method showed that visual and computer-aided shade matching systems should be used in combination. Recent feature extraction techniques are based on shape description and do not use color information. However, color is mostly experienced as an essential property in depicting and extracting features from objects in the world around us. When local feature descriptors are extended with color information by concatenating a color descriptor with the shape descriptor, the resulting descriptor is effective for visual object recognition and classification tasks. Since the color descriptor used in combination with a shape descriptor does not need to contain any spatial information, local histograms are used. This local color histogram method remains reliable under photometric changes, geometrical changes and variations in image quality. So, color-based local feature extraction methods are used to extract features, and the Scale Invariant Feature Transform (SIFT) descriptor is used for shape description in the proposed method. After the combination of these descriptors, the state-of-the-art descriptor known as Color-SIFT is used in this study. Finally, the image feature vectors obtained from the quantization algorithm are fed to classifiers such as k-Nearest Neighbor (KNN), Naive Bayes or Support Vector Machines (SVM) to determine the label(s) of the visual object category or matching. In this study, SVMs are used as classifiers for color determination and shade matching. Finally, the experimental results of this method will be compared with other recent studies. It is concluded from the study that the proposed method is a remarkable development in computer-aided tooth shade determination systems.
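
A rough sketch of the pipeline described above, assuming OpenCV and scikit-learn with placeholder parameters (vocabulary size, histogram bins, SVM settings): a local color histogram is concatenated with a bag-of-visual-words SIFT descriptor to form a Color-SIFT vector, and an SVM is trained on expert-labelled shade tabs. This is an illustration of the technique, not the authors' implementation.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

sift = cv2.SIFT_create()
N_VISUAL_WORDS = 50  # assumed vocabulary size

def sift_bow_histogram(gray, kmeans):
    """Quantize SIFT descriptors of one image into a bag-of-visual-words histogram."""
    _, desc = sift.detectAndCompute(gray, None)
    hist = np.zeros(N_VISUAL_WORDS, dtype=np.float32)
    if desc is not None:
        for word in kmeans.predict(desc):
            hist[word] += 1
    return hist / max(hist.sum(), 1.0)

def color_histogram(bgr):
    """Local color information as a normalized 3D histogram (no spatial layout needed)."""
    h = cv2.calcHist([bgr], [0, 1, 2], None, [8, 8, 8], [0, 256, 0, 256, 0, 256])
    return cv2.normalize(h, h).flatten()

def color_sift_vector(bgr, kmeans):
    """Concatenate the color descriptor with the SIFT shape descriptor ('Color-SIFT')."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    return np.concatenate([color_histogram(bgr), sift_bow_histogram(gray, kmeans)])

def train_shade_classifier(images, shade_labels):
    """images: list of BGR tooth photos; shade_labels: expert-assigned shade tab names."""
    all_desc = []
    for bgr in images:
        _, d = sift.detectAndCompute(cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY), None)
        if d is not None:
            all_desc.append(d)
    kmeans = KMeans(n_clusters=N_VISUAL_WORDS, n_init=10).fit(np.vstack(all_desc))
    X = np.array([color_sift_vector(img, kmeans) for img in images])
    clf = SVC(kernel="rbf", C=10.0).fit(X, shade_labels)
    return clf, kmeans

# Matching a new photo: predicted = clf.predict([color_sift_vector(new_img, kmeans)])
```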

Keywords: classifiers, color determination, computer-aided system, tooth shade matching, feature extraction

Procedia PDF Downloads 429
3541 A Reading Attempt of the Urban Memory of Jordan University of Science and Technology Campus by Cognitive Mapping

Authors: Bsma Adel Bany Mohammad

Abstract:

University campuses are small cities containing basic city functions such as educational spaces, accommodation, services and transportation. They are spaces of functional and social life with different activities and different occupants. Campuses are designed and transformed like cities, so they are experienced and memorized in the same way. Campus memory is the ability of individuals to maintain and reveal the spatial components of designed physical spaces, which form the understandings, experiences and sensations of the environment as a whole. ‘Cognitive mapping’ is used to decode the physical interaction and emotional relationship between individuals and the city; cognitive maps are created graphically using geometric and verbal elements on paper by remembering images of the urban environment. In this study, to determine the emotional urban identity belonging to the Jordan University of Science and Technology campus, architecture students were asked to identify the areas they interact with in the campus by drawing a cognitive map. ‘Campus memory items’ are identified by analyzing the cognitive maps of the campus, and the spatial identity then results from such data. The analysis is based on the five basic elements of Lynch: paths, districts, edges, nodes, and landmarks. As a result of this analysis, it was found that spatial identity is constructed by the shared elements of the maps. The memory of most students listed the gate structures, which are large, desirable structures located at the main entrances to the campus, as major landmarks, then the square spaces as nodes, in addition to both stairs and corridors as paths. Finally, the districts and edges of educational buildings and service spaces are listed correspondingly in the cognitive maps. Findings suggest that the spatial identity of the campus design is related mainly to the gate structures, squares and stairs.

Keywords: cognitive maps, university campus, urban memory, identity

Procedia PDF Downloads 146
3540 Buckling of Plates on Foundation with Different Types of Sides Support

Authors: Ali N. Suri, Ahmad A. Al-Makhlufi

Abstract:

In this paper, the problem of buckling of plates of finite length on a foundation and with different side supports is studied. The finite strip method is used as the tool for the analysis. This method uses finite strip elastic, foundation, and geometric matrices to build the assembly matrices for the whole structure; then, after introducing boundary conditions at the supports, the resulting reduced matrix is transformed into a standard eigenvalue-eigenvector problem. The solution of this problem enables the determination of the buckling load, the associated buckling modes and the buckling wavelength. To carry out the buckling analysis, starting from the elastic, foundation, and geometric stiffness matrices for each strip, a FORTRAN computer program is developed. Since the stiffness matrices are functions of the buckling wavelength, the program uses an iteration procedure to find the critical buckling stress for each value of the foundation modulus and for each boundary condition. The results showed that the use of an elastic medium to support plates subject to axial load increases the buckling load a great deal; the results found are very close to those obtained by other analytical methods and experimental work. The results also showed that the foundation compensates for the weakness of some types of side support constraint, with the maximum benefit found for a plate with one side simply supported and the other free.
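
The reduction to an eigenvalue problem can be sketched as follows. The 2x2 strip matrices are purely illustrative placeholders (the real finite strip matrices depend on the strip formulation); the sketch only shows how the critical load factor is extracted for each candidate wavelength and minimized over wavelengths, mirroring the iteration the FORTRAN program performs.

```python
import numpy as np
from scipy.linalg import eig

def critical_load_factor(K_elastic, K_foundation, K_geometric):
    """Solve (K_e + K_f) phi = lambda * K_g phi; the smallest positive real
    eigenvalue lambda is the buckling load factor for this wavelength."""
    K = K_elastic + K_foundation
    vals, _ = eig(K, K_geometric)
    vals = vals[np.abs(vals.imag) < 1e-9].real
    vals = vals[vals > 0]
    return vals.min() if vals.size else np.inf

# Placeholder 2x2 strip matrices (illustrative numbers only, not a real strip model).
def strip_matrices(wavelength, foundation_modulus):
    K_e = np.array([[4.0, 1.0], [1.0, 4.0]]) / wavelength
    K_f = foundation_modulus * wavelength * np.array([[2.0, 0.5], [0.5, 2.0]])
    K_g = wavelength * np.array([[1.0, 0.2], [0.2, 1.0]])
    return K_e, K_f, K_g

# Iterate over candidate buckling wavelengths and keep the minimum critical load.
best = min(
    critical_load_factor(*strip_matrices(L, foundation_modulus=0.5))
    for L in np.linspace(0.5, 5.0, 50)
)
print(f"critical buckling load factor ~ {best:.3f}")
```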

Keywords: buckling, finite strip, different sides support, plates on foundation

Procedia PDF Downloads 237
3539 Remote Sensing of Aerated Flows at Large Dams: Proof of Concept

Authors: Ahmed El Naggar, Homyan Saleh

Abstract:

Dams are crucial for flood control, water supply, and the creation of hydroelectric power. Every dam has a water conveyance system, such as a spillway, providing the safe discharge of catastrophic floods when necessary. Spillway design has historically been investigated in laboratory research owing to the absence of suitable full-scale flow monitoring equipment and to safety problems. Prototype measurements of aerated flows are urgently needed to quantify projected scale effects and provide missing validation data for design guidelines and numerical simulations. In this work, an image-based investigation of free-surface flows on a tiered spillway was undertaken at the laboratory scale (fixed camera installation) and at the prototype scale (drone footage). The drone videos were generated using data from citizen science. The analyses permitted the measurement of the free-surface aeration inception point, air-water surface velocities, fluctuations, and residual energy at the chute's downstream end from a remote site. The prototype observations offered full-scale proof of concept, while laboratory results were efficiently confirmed against invasive phase-detection probe data. This paper stresses the efficacy of image-based analyses at prototype spillways. It highlights how citizen science data may enable academics to better understand real-world air-water flow dynamics and offers a framework for a small collection of long-missing prototype data.
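
A minimal sketch of one possible image-based velocimetry step, assuming a hypothetical clip name, frame rate, and ground sampling distance: dense optical flow between consecutive drone frames gives pixel displacements, which are converted to air-water surface velocities. This only illustrates the general approach, not the processing chain used in the study.

```python
import cv2
import numpy as np

VIDEO = "spillway_drone.mp4"      # hypothetical citizen-science clip
FPS = 30.0                        # assumed frame rate of the footage
GSD = 0.02                        # assumed metres per pixel at the chute surface

cap = cv2.VideoCapture(VIDEO)
ok, prev = cap.read()             # assumes the clip opens and has at least one frame
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

speeds = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Dense Farneback optical flow: pixel displacement between consecutive frames.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude = np.linalg.norm(flow, axis=2)          # pixels per frame
    speeds.append(np.median(magnitude) * FPS * GSD)   # metres per second
    prev_gray = gray
cap.release()

print(f"median surface velocity ~ {np.median(speeds):.2f} m/s")
```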

Keywords: remote sensing, aerated flows, large dams, proof of concept, dam spillways, air-water flows, prototype operation, inception point, optical flow, turbulence, residual energy

Procedia PDF Downloads 83
3538 Using Derivative Free Method to Improve the Error Estimation of Numerical Quadrature

Authors: Chin-Yun Chen

Abstract:

Numerical integration is an essential tool for deriving different physical quantities in engineering and science. The effectiveness of a numerical integrator depends on different factors, of which the crucial one is the error estimation. This work presents an error estimator that incorporates a derivative-free method to improve the performance of verified numerical quadrature.

Keywords: numerical quadrature, error estimation, derivative free method, interval computation

Procedia PDF Downloads 458
3537 Optimal Sliding Mode Controller for Knee Flexion during Walking

Authors: Gabriel Sitler, Yousef Sardahi, Asad Salem

Abstract:

This paper presents an optimal and robust sliding mode controller (SMC) to regulate the position of the knee joint angle for patients suffering from knee injuries. The controller imitates the role of active orthoses that produce the joint torques required to overcome gravity and loading forces and regain natural human movements. To this end, a mathematical model of the shank, the lower part of the leg, is derived first and then used for the control system design and computer simulations. The design of the controller is carried out in optimal and multi-objective settings. Four objectives are considered: minimization of the control effort and tracking error; and maximization of the control signal smoothness and closed-loop system’s speed of response. Optimal solutions in terms of the Pareto set and its image, the Pareto front, are obtained. The results show that there are trade-offs among the design objectives and many optimal solutions from which the decision-maker can choose to implement. Also, computer simulations conducted at different points from the Pareto set and assuming knee squat movement demonstrate competing relationships among the design goals. In addition, the proposed control algorithm shows robustness in tracking a standard gait signal when accounting for uncertainty in the shank’s parameters.
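
A minimal sketch of a sliding mode controller tracking a knee-flexion reference with a single-pendulum shank model; the inertia, damping, gains, and reference signal below are illustrative assumptions, not the identified model or the Pareto-optimal gains reported in the paper.

```python
import numpy as np

J, m, g, lc, b = 0.35, 4.0, 9.81, 0.25, 0.05   # inertia, mass, gravity, COM distance, damping (assumed)
LAMBDA, K, PHI = 8.0, 20.0, 0.05               # sliding-surface slope, switching gain, boundary layer (assumed)

def smc_torque(theta, dtheta, theta_d, dtheta_d, ddtheta_d):
    e, de = theta - theta_d, dtheta - dtheta_d
    s = de + LAMBDA * e                          # sliding surface
    u_eq = J * (ddtheta_d - LAMBDA * de) + m * g * lc * np.sin(theta) + b * dtheta
    u_sw = -K * np.clip(s / PHI, -1.0, 1.0)      # saturated switching term (reduces chattering)
    return u_eq + u_sw

dt, T = 0.001, 3.0
t = np.arange(0.0, T, dt)
theta_d = 0.5 * np.sin(2.0 * np.pi * 0.5 * t)    # a smooth gait-like flexion reference (rad)
dtheta_d = np.gradient(theta_d, dt)
ddtheta_d = np.gradient(dtheta_d, dt)

theta, dtheta = 0.2, 0.0                          # initial shank state
for k in range(len(t) - 1):
    u = smc_torque(theta, dtheta, theta_d[k], dtheta_d[k], ddtheta_d[k])
    ddtheta = (u - m * g * lc * np.sin(theta) - b * dtheta) / J   # shank dynamics
    dtheta += ddtheta * dt
    theta += dtheta * dt

print(f"final tracking error ~ {abs(theta - theta_d[-1]):.4f} rad")
```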

Keywords: optimal control, multi-objective optimization, sliding mode control, wearable knee exoskeletons

Procedia PDF Downloads 78
3536 Third Eye: A Hybrid Portrayal of Visuospatial Attention through Eye Tracking Research and Modular Arithmetic

Authors: Shareefa Abdullah Al-Maqtari, Ruzaika Omar Basaree, Rafeah Legino

Abstract:

A pictorial representation of hybrid forms in science-art collaboration has become a crucial issue in the course of exploring a new painting technique development. This is directly related to the reception of an invisible-recognition phenomenology. In the hybrid pictorial representation of invisible-recognition phenomenology, the challenging issue is how to depict the pictorial features of indescribable objects from their mental source, modality and transparency. This paper proposes the hybrid painting technique Demonstrate, Resemble, and Synthesize (DRS) through a combination of the hybrid aspect-recognition representation of understanding pictures, the demonstrative mode, number theory, patterns in the modular arithmetic system, and the coherence theory of visual attention in dynamic scene representation. Multi-method digital gaze data analyses, pattern-modular table operation design, and a rotation parameter were used for the visualization. In the scientific process, an eye-tracking, video-section-based study was conducted using Tobii T60 remote eye tracking hardware and Tobii Studio analysis software to collect and analyze the eye movements of ten participants while watching the video clip of Alexander Paulikevitch’s performance ‘Tajwal’. Results: we found that the fixation count in section one was positively and moderately correlated with section two, Pearson’s (r = .10, p < .05, 2-tailed), as was the fixation duration, Pearson’s (r = .10, p < .05, 2-tailed). However, a paired-samples t-test indicates that scores were significantly higher for section one (M = 2.2, SD = .6) than for section two (M = 1.93, SD = .6), t(9) = 2.44, p < .05, d = 0.87. In the visual process, the exported gaze-count data N were used to resemble the hybrid forms of visuospatial attention using the table-mod-analysis operation. The explored hybrid guideline was simple to apply, and it could be an alternative approach to the sustainability of contemporary visual arts.
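
The reported statistics can be reproduced in form (not in value) with a short script; the fixation counts below are made-up placeholders, and only the tests themselves (Pearson correlation, paired-samples t-test, Cohen's d) correspond to those named in the abstract.

```python
import numpy as np
from scipy import stats

# Placeholder per-participant fixation counts for the two video sections (n = 10).
section_one = np.array([2.9, 2.1, 1.8, 2.6, 2.4, 1.9, 2.8, 2.2, 1.7, 1.6])
section_two = np.array([2.4, 1.9, 1.5, 2.1, 2.2, 1.6, 2.3, 1.8, 1.5, 2.0])

r, p_corr = stats.pearsonr(section_one, section_two)      # 2-tailed by default
t, p_ttest = stats.ttest_rel(section_one, section_two)    # paired-samples t-test
diff = section_one - section_two
d = diff.mean() / diff.std(ddof=1)                         # Cohen's d for paired data

print(f"Pearson r = {r:.2f} (p = {p_corr:.3f})")
print(f"paired t({len(section_one) - 1}) = {t:.2f} (p = {p_ttest:.3f}), d = {d:.2f}")
```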

Keywords: science-art collaboration, hybrid forms, pictorial representation, visuospatial attention, modular arithmetic

Procedia PDF Downloads 359
3535 Navigating Rough Seas: A Qualitative Exploration of National Sociotechnical Imaginaries of Myanmar’s Future Marine Fisheries

Authors: Hannes Groeneweg

Abstract:

Myanmar is considered one of the largest fishing nations in the world. The country’s rapid economic and political reform process since 2011 entails both challenges and opportunities for its marine fishing sector. The development pathway of the sector remains unclear. Which future will eventually materialize is shaped and determined by the various visions and actions of the stakeholders engaging in political debates and decision-making. These visions can be conceptualized through the Science and Technology Studies (STS) concept of sociotechnical imaginaries. The research in this article is guided by the questions of which imaginaries are currently relevant, who is propagating these imaginaries, and how these imaginaries are produced and contested. Using qualitative documentary analysis of policy documents, reports, and media articles, as well as in-depth interviews with key stakeholders, three archetypical national sociotechnical imaginaries of Myanmar’s future marine fisheries were identified. The industrial-scale extractivism imaginary views the marine fishing sector as a driver of national economic growth and focuses on the industrial and technological development of the production chain, increasing yield and exports. The sustainable fishing management imaginary acknowledges the vulnerability of marine ecosystems and envisions incorporating efficient sustainability governance, planning, and management into existing fishing practices. In the traditional sufficiency fishing imaginary, small-scale fishing practices are viewed as an important livelihood practice for millions of coastal dwellers, and the need to conserve them through strengthening the self-reliance, autonomy, and resilience of these communities is stressed. In national debates, the first two imaginaries are currently dominant. The imaginaries, as well as their contestations, are also linked to other critical political issues. The paper suggests that participatory decision-making processes are needed to create an inclusive imaginary of the future marine fishing sector.

Keywords: science and technology studies, sociotechnical imaginaries, marine fishing, knowledge coproduction, Myanmar

Procedia PDF Downloads 176
3534 Promoting Students' Worldview Through Integrative Education in the Process of Teaching Biology in Grades 11 and 12 of High School

Authors: Saule Shazhanbayeva, Denise van der Merwe

Abstract:

Study hypothesis: Nazarbayev Intellectual School of Kyzylorda’s Biology teachers can use STEM-integrated learning to improve students' problem-solving ability and responsibility as global citizens. The significance of this study is to indicate how the use of STEM integrative learning during Biology lessons could contribute to forming globally minded students who are responsible community members. For the purposes of this study, worldview is defined as a view that is broader than the country of Kazakhstan, allowing students to see the significance of their scientific contributions to the world as global citizens. The context of worldview specifically indicates that most students have never traveled outside of their city or region within Kazakhstan. In order to broaden student understanding, it is imperative that students are exposed to different world views and contrasting ideas within the educational setting of Biology, the science being used for the research. This exposure promotes students' understanding of the significance they have as global citizens, alongside the obligations which would rest on them as scientifically minded global citizens. Integrative learning should combine biological science with technology and engineering in the form of problem-solving, and with mathematics, to allow improved problem-solving skills to develop within the students of Nazarbayev Intellectual School (NIS) of Kyzylorda. The school's vision is to allow students to realise their role as global citizens and become responsible community members. STEM allows integration by combining four subject skills to solve topical problems designed by educators. The methods used are based on qualitative analysis of students’ performance during a problem-solution scenario and on Biology teacher interviews to ascertain their understanding of STEM implementation and willingness to integrate it into current lessons. The research indicated that NIS is ready for a shift to STEM lessons to promote globally responsible students. The only additional need is proper training for teachers in STEM integrative lesson methods.

Keywords: global citizen, STEM, Biology, high-school

Procedia PDF Downloads 65
3533 Textile-Based Sensing System for Sleep Apnea Detection

Authors: Mary S. Ruppert-Stroescu, Minh Pham, Bruce Benjamin

Abstract:

Sleep apnea is a condition where a person stops breathing during sleep; it can lead to cardiovascular disease, hypertension, and stroke. In the United States, approximately forty percent of overnight sleep apnea detection tests are cancelled. The purpose of this study was to develop a textile-based sensing system that acquires biometric signals relevant to cardiovascular health, transmits them wirelessly to a computer, and quantitatively assesses the signals for sleep apnea detection. Patient interviews, literature review and market analysis defined a need for a device that integrates ubiquitously into the patient’s lifestyle. A multi-disciplinary research team of biomedical scientists, apparel designers, and computer engineers collaborated to design a textile-based sensing system that gathers ECG, SpO2, and respiration, then wirelessly transmits the signals to a computer in real time. The electronic components were assembled from existing hardware, the Health Kit, which came pre-set with ECG and SpO2 sensors. The respiration belt was purchased separately, and its electronics were built and integrated into the Health Kit motherboard. Analog ECG signals were amplified and transmitted to the Arduino™ board, where the signal was converted from analog into digital. By using textile electrodes, ECG lead II was collected, reflecting the electrical activity of the heart. Signals were collected while the subject was in a sitting position and at a sampling rate of 250 Hz. Because sleep apnea most often occurs in people with obese body types, prototypes were developed for a man’s size medium, XL, and XXL. To test user acceptance and comfort, wear tests were performed on 12 subjects. Results of the wear tests indicate that the knit fabric and t-shirt-like design were acceptable from both lifestyle and comfort perspectives. The airflow and respiration sensors return good signals regardless of movement intensity. Future work includes reconfiguring the hardware to a smaller size, developing the same type of garment for the female body, and further enhancing the signal quality.

Keywords: sleep apnea, sensors, electronic textiles, wearables

Procedia PDF Downloads 268
3532 Artificially Intelligent Context Aware Personal Computer Assistant (ACPCA)

Authors: Abdul Mannan Akhtar

Abstract:

In this paper, a novel concept of a self-learning, smart, personalized computer assistant (ACPCA) is established, which is a context-aware system. Based on user habits, moods, and other routine/situational reactions, the system will manage various services and suggestions at appropriate times, including what schedule to follow, what to watch, what software to use, what should be deleted, etc. This system will utilize a hybrid fuzzy-neural model to predict what the user will do next and support his actions. This will be done by establishing fuzzy sets of user activities, choices, preferences, etc., and utilizing their combinations to predict his moods and immediate preferences. Various applications of context-aware systems exist separately, e.g. on certain websites for music or multimedia suggestions, but a personalized autonomous system that could adapt to a user’s personality does not exist at present. Due to the novelty and breadth of this concept, this paper will primarily focus on the problem establishment, product features and functionality; however, a small mini case is also implemented in MATLAB to demonstrate some of the aspects of ACPCA. The mini case involves prediction of user moods, activity, routine and food preference using a hybrid fuzzy-neural soft computing technique.

Keywords: context aware systems, ACPCA, soft computing techniques, artificial intelligence, fuzzy logic, neural network, mood detection, face detection, activity detection

Procedia PDF Downloads 462
3531 Comparison of Computer Software for Swept Path Analysis on Example of Special Paved Areas

Authors: Ivana Cestar, Ivica Stančerić, Saša Ahac, Vesna Dragčević, Tamara Džambas

Abstract:

On special paved areas, such as road intersections, vehicles usually move through horizontal curves with smaller radii and occupy a considerably greater area compared to open road segments. The planning procedure for these areas is mainly an iterative process that consists of designing project elements, assembling those elements into a design project, and analyzing swept paths for the design vehicle. If the applied elements do not fulfill the swept path requirements for the design vehicle, the process must be carried out again. The application of specialized computer software for swept path analysis significantly facilitates the planning procedure for special paved areas. Various software packages of this kind are available on the global market, and each of them has different specifications. In this paper, a comparison of two software packages commonly used in Croatia (AutoTURN and Vehicle Tracking) is presented, their advantages and disadvantages are described, and their applicability to a particular paved area is discussed. In order to reveal which of the analyzed software packages is more favorable in terms of swept path widths, which one includes input parameters that are more relevant for this kind of analysis, and which one is more suitable for application on a certain special paved area, the analysis shown in this paper was conducted on a number of different intersection types.

Keywords: software comparison, special paved areas, swept path analysis, swept path input parameters

Procedia PDF Downloads 317
3530 Nanoparticles in Diagnosis and Treatment of Cancer, and Medical Imaging Techniques Using Nano-Technology

Authors: Rao Muhammad Afzal Khan

Abstract:

Nanotechnology is emerging as a useful technology in nearly all areas of science and technology. Its role in medical imaging is attracting researchers towards existing and new imaging modalities and techniques. This presentation gives an overview of the development of the work done throughout the world. Furthermore, it gives an idea of the scope of the future use of this technology for diagnosing different diseases. A comparative analysis is also discussed, with an emphasis on detecting diseases in general, and cancer in particular.

Keywords: medical imaging, cancer detection, diagnosis, nano-imaging, nanotechnology

Procedia PDF Downloads 470
3529 A Transformer-Based Approach for Multi-Human 3D Pose Estimation Using Color and Depth Images

Authors: Qiang Wang, Hongyang Yu

Abstract:

Multi-human 3D pose estimation is a challenging task in computer vision, which aims to recover the 3D joint locations of multiple people from multi-view images. In contrast to traditional methods, which typically only use color (RGB) images as input, our approach utilizes both color and depth (D) information contained in RGB-D images. We also employ a transformer-based model as the backbone of our approach, which is able to capture long-range dependencies and has been shown to perform well on various sequence modeling tasks. Our method is trained and tested on the Carnegie Mellon University (CMU) Panoptic dataset, which contains a diverse set of indoor and outdoor scenes with multiple people in varying poses and clothing. We evaluate the performance of our model on the standard 3D pose estimation metrics of mean per-joint position error (MPJPE). Our results show that the transformer-based approach outperforms traditional methods and achieves competitive results on the CMU Panoptic dataset. We also perform an ablation study to understand the impact of different design choices on the overall performance of the model. In summary, our work demonstrates the effectiveness of using a transformer-based approach with RGB-D images for multi-human 3D pose estimation and has potential applications in real-world scenarios such as human-computer interaction, robotics, and augmented reality.
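
A small sketch of the MPJPE metric named above, with random placeholder arrays standing in for predicted and ground-truth joint locations.

```python
import numpy as np

def mpjpe(pred, gt):
    """pred, gt: arrays of shape (num_people, num_joints, 3) in millimetres.
    MPJPE is the Euclidean distance between each predicted and ground-truth
    joint, averaged over joints and people."""
    return np.linalg.norm(pred - gt, axis=-1).mean()

rng = np.random.default_rng(0)
gt = rng.uniform(-1000, 1000, size=(3, 15, 3))      # 3 people, 15 joints (placeholder layout)
pred = gt + rng.normal(scale=25.0, size=gt.shape)   # predictions with ~25 mm noise

print(f"MPJPE = {mpjpe(pred, gt):.1f} mm")
```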

Keywords: multi-human 3D pose estimation, RGB-D images, transformer, 3D joint locations

Procedia PDF Downloads 74
3528 A Combined Approach Based on Artificial Intelligence and Computer Vision for Qualitative Grading of Rice Grains

Authors: Hemad Zareiforoush, Saeed Minaei, Ahmad Banakar, Mohammad Reza Alizadeh

Abstract:

The quality inspection of rice (Oryza sativa L.) during its various processing stages is very important. In this research, an artificial intelligence-based model coupled with computer vision techniques was developed as a decision support system for the qualitative grading of rice grains. For conducting the experiments, first, 25 samples of rice grains with different levels of percentage of broken kernels (PBK) and degree of milling (DOM) were prepared, and their qualitative grade was assessed by experienced experts. Then, the quality parameters of the same samples examined by the experts were determined using a machine vision system. A grading model was developed based on fuzzy logic theory in MATLAB software to relate the qualitative characteristics of the product to its quality. In total, 25 rules were used for qualitative grading, based on the AND operator and the Mamdani inference system. The fuzzy inference system consisted of two input linguistic variables, namely DOM and PBK, which were obtained by the machine vision system, and one output variable (quality of the product). The model output was finally defuzzified using the Center of Maximum (COM) method. In order to evaluate the developed model, the output of the fuzzy system was compared with the experts’ assessments. It was revealed that the developed model can estimate the qualitative grade of the product with an accuracy of 95.74%.
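
A reduced illustration of a Mamdani system with the same inputs and output, written in Python with scikit-fuzzy rather than MATLAB; the membership functions, the three rules, and the 'mean of maximum' defuzzifier are assumptions made for brevity (the paper uses 25 rules and Center of Maximum defuzzification).

```python
import numpy as np
import skfuzzy as fuzz
from skfuzzy import control as ctrl

dom = ctrl.Antecedent(np.arange(0, 101, 1), "DOM")   # degree of milling (%)
pbk = ctrl.Antecedent(np.arange(0, 51, 1), "PBK")    # percentage of broken kernels (%)
grade = ctrl.Consequent(np.arange(0, 11, 1), "grade")
grade.defuzzify_method = "mom"                        # closest built-in to Center of Maximum

# Illustrative membership functions (not those identified by the authors).
dom["low"] = fuzz.trimf(dom.universe, [0, 0, 60])
dom["high"] = fuzz.trimf(dom.universe, [40, 100, 100])
pbk["low"] = fuzz.trimf(pbk.universe, [0, 0, 15])
pbk["high"] = fuzz.trimf(pbk.universe, [10, 50, 50])
grade["poor"] = fuzz.trimf(grade.universe, [0, 0, 5])
grade["good"] = fuzz.trimf(grade.universe, [5, 10, 10])

rules = [
    ctrl.Rule(dom["high"] & pbk["low"], grade["good"]),   # AND operator, Mamdani inference
    ctrl.Rule(dom["low"] & pbk["low"], grade["poor"]),
    ctrl.Rule(pbk["high"], grade["poor"]),
]

system = ctrl.ControlSystemSimulation(ctrl.ControlSystem(rules))
system.input["DOM"] = 85.0   # values that would come from the machine vision system
system.input["PBK"] = 6.0
system.compute()
print(f"fuzzy quality grade ~ {system.output['grade']:.1f} / 10")
```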

Keywords: machine vision, fuzzy logic, rice, quality

Procedia PDF Downloads 412
3527 Classroom Interaction Patterns as Correlates of Senior Secondary School Achievement in Chemistry in Awka Education Zone

Authors: Emmanuel Nkemakolam Okwuduba, Fransica Chinelo Offiah

Abstract:

The technique of teaching chemistry to students is one of the determining factors in their achievement. Thus, the study investigated the relationship between classroom interaction patterns and students’ achievement in Chemistry. The purpose of this study was to identify patterns of interaction in observed chemistry classrooms, determine the amount of teacher talk, student talk and periods of silence, and find out the relationship between them and the mean achievement scores of students. Five research questions and three hypotheses guided the study. The study was a correlational survey. The sample consisted of 450 (212 males and 238 females) senior secondary one students and 12 (5 males and 7 females) chemistry teachers drawn from 12 selected secondary schools in the Awka Education Zone of Anambra State. In each of the 12 selected schools, an intact class was used. The Science Interaction Category (SIC) and Chemistry Achievement Test (CAT) were developed, validated and used for data collection. Each teacher was observed three times and the interaction patterns coded using a coding sheet containing the Science Interaction Category. At the end of the observational period, the Chemistry Achievement Test (for collection of data on students’ achievement in chemistry) was administered to the students. Frequencies, percentages, means, standard deviations and the Pearson product moment correlation were used for data analysis. The results showed that the percentages of teacher talk, student talk and silence were 59.6%, 37.6% and 2.8%, respectively. The Pearson correlation coefficients (r) for teacher talk, student talk and silence were -0.61, 0.76 and -0.18, respectively. The results showed a negative and significant relationship between teacher talk and the mean achievement scores of students, a positive and significant relationship between student talk and the mean achievement scores of students, and no significant relationship between the period of silence and the mean achievement scores of students at the 0.05 significance level. The following recommendation was made based on the findings: teachers should establish a high level of student talk through initiation and response, as it promotes involvement and enhances achievement.

Keywords: academic achievement, chemistry, classroom, interactions patterns

Procedia PDF Downloads 303
3526 Pragmatic Development of Chinese Sentence Final Particles via Computer-Mediated Communication

Authors: Qiong Li

Abstract:

This study investigated in which condition computer-mediated communication (CMC) could promote pragmatic development. The focal feature included four Chinese sentence final particles (SFPs), a, ya, ba, and ne. They occur frequently in Chinese, and function as mitigators to soften the tone of speech. However, L2 acquisition of SFPs is difficult, suggesting the necessity of additional exposure to or explicit instruction on Chinese SFPs. This study follows this line and aims to explore two research questions: (1) Is CMC combined with data-driven instruction more effective than CMC alone in promoting L2 Chinese learners’ SFP use? (2) How does L2 Chinese learners’ SFP use change over time, as compared to the production of native Chinese speakers? The study involved 19 intermediate-level learners of Chinese enrolled at a private American university. They were randomly assigned to two groups: (1) the control group (N = 10), which was exposed to SFPs through CMC alone, (2) the treatment group (N = 9), which was exposed to SFPs via CMC and data-driven instruction. Learners interacted with native speakers on given topics through text-based CMC over Skype. Both groups went through six 30-minute CMC sessions on a weekly basis, with a one-week interval after the first two CMC sessions and a two-week interval after the second two CMC sessions (nine weeks in total). The treatment group additionally received a data-driven instruction after the first two sessions. Data analysis focused on three indices: token frequency, type frequency, and acceptability of SFP use. Token frequency was operationalized as the raw occurrence of SFPs per clause. Type frequency was the range of SFPs. Acceptability was rated by two native speakers using a rating rubric. The results showed that the treatment group made noticeable progress over time on the three indices. The production of SFPs approximated the native-like level. In contrast, the control group only slightly improved on token frequency. Only certain SFPs (a and ya) reached the native-like use. Potential explanations for the group differences were discussed in two aspects: the property of Chinese SFPs and the role of CMC and data-driven instruction. Though CMC provided the learners with opportunities to notice and observe SFP use, as a feature with low saliency, SFPs were not easily noticed in input. Data-driven instruction in the treatment group directed the learners’ attention to these particles, which facilitated the development.

Keywords: computer-mediated communication, data-driven instruction, pragmatic development, second language Chinese, sentence final particles

Procedia PDF Downloads 413
3525 FloodNet: Classification for Post-Flood Scenes with a High-Resolution Aerial Imagery Dataset

Authors: Molakala Mourya Vardhan Reddy, Kandimala Revanth, Koduru Sumanth, Beena B. M.

Abstract:

Emergency response and recovery operations are severely hampered by natural catastrophes, especially floods. Understanding post-flood scenes is essential to disaster management because it facilitates quick evaluation and decision-making. To this end, we introduce FloodNet, a brand-new high-resolution aerial imagery collection created especially for comprehending post-flood scenes. A varied collection of high-quality aerial photos taken during and after flood events makes up FloodNet, which offers comprehensive representations of flooded landscapes, damaged infrastructure, and changed topographies. The dataset provides a thorough resource for training and assessing computer vision models designed to handle the complexity of post-flood scenarios, covering a variety of environmental conditions and geographic regions. Pixel-level semantic segmentation masks are used to label the pictures in FloodNet, allowing for a detailed examination of flood-related characteristics, including debris, water bodies, and damaged structures. Furthermore, temporal and positional metadata improve the dataset's usefulness for longitudinal research and spatiotemporal analysis. For activities like flood extent mapping, damage assessment, and infrastructure recovery projection, we provide baseline standards and evaluation metrics to promote research and development in the field of post-flood scene comprehension. By integrating FloodNet into machine learning pipelines, it will be easier to create reliable algorithms that help politicians, urban planners, and first responders make choices both before and after floods. The FloodNet dataset aims to support advances in computer vision, remote sensing, and disaster response technologies by providing a useful resource for researchers. By tackling the particular problems presented by post-flood situations, FloodNet helps to create creative solutions for boosting communities' resilience in the face of natural catastrophes.
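
A small sketch of how pixel-level masks such as FloodNet's can be scored with per-class intersection-over-union (IoU); the class ids and the tiny masks below are placeholders, not the dataset's actual label map.

```python
import numpy as np

CLASSES = {0: "background", 1: "flooded building", 2: "water", 3: "debris"}  # hypothetical labels

def mean_iou(pred, gt, num_classes):
    """pred, gt: integer mask arrays of the same shape (H, W)."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:                      # ignore classes absent from both masks
            ious.append(inter / union)
    return float(np.mean(ious))

gt = np.array([[2, 2, 0], [1, 1, 0], [3, 0, 0]])    # tiny ground-truth mask
pred = np.array([[2, 2, 0], [1, 0, 0], [3, 3, 0]])  # tiny predicted mask
print(f"mean IoU = {mean_iou(pred, gt, num_classes=4):.2f}")
```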

Keywords: image classification, segmentation, computer vision, natural disaster, unmanned aerial vehicle (UAV), machine learning

Procedia PDF Downloads 72
3524 The Use of Artificial Intelligence in Digital Forensics and Incident Response in a Constrained Environment

Authors: Dipo Dunsin, Mohamed C. Ghanem, Karim Ouazzane

Abstract:

Digital investigators often have a hard time spotting evidence in digital information. It has become hard to determine which source of proof relates to a specific investigation. A growing concern is that the various processes, technologies, and specific procedures used in digital investigations are not keeping up with criminal developments; therefore, criminals are taking advantage of these weaknesses to commit further crimes. In digital forensics investigations, artificial intelligence is invaluable in identifying crime. It has been observed that algorithms based on artificial intelligence (AI) are highly effective in detecting risks, preventing criminal activity, and forecasting illegal activity. Providing objective data and conducting an assessment is the goal of digital forensics and digital investigation, which will assist in developing a plausible theory that can be presented as evidence in court. Researchers and other authorities have used the available data as evidence in court to convict a person. This research paper aims at developing a multi-agent framework for digital investigations using specific intelligent software agents (ISA). The agents communicate to address particular tasks jointly and keep the same objectives in mind during each task. The rules and knowledge contained within each agent are dependent on the investigation type. A criminal investigation is classified quickly and efficiently using the case-based reasoning (CBR) technique. The MADIK framework is built using the Java Agent Development Framework and implemented using Eclipse, a Postgres repository, and a rule engine for agent reasoning. The proposed framework was tested using the Lone Wolf image files and datasets. Experiments were conducted using various sets of ISAs and VMs. There was a significant reduction in the time taken for the Hash Set Agent to execute. As a result of loading the agents, 5 percent of the time was lost, as the File Path Agent prescribed deleting 1,510 files, while the Timeline Agent found multiple executable files. In comparison, the integrity check carried out on the Lone Wolf image file using a digital forensic toolkit took approximately 48 minutes (2,880 s), whereas the MADIK framework accomplished this in 16 minutes (960 s). The framework is integrated with Python, allowing for further integration of other digital forensic tools, such as AccessData Forensic Toolkit (FTK), Wireshark, Volatility, and Scapy.
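
A minimal sketch of what one such intelligent software agent (a hash-set agent) might do, assuming hypothetical paths and hash values; it is not the MADIK implementation. The agent walks an evidence directory, hashes each file, and flags matches against a known-hash set.

```python
import hashlib
import os

KNOWN_BAD_MD5 = {
    "5d41402abc4b2a76b9719d911017c592",  # placeholder hash values
    "098f6bcd4621d373cade4e832627b4f6",
}

def md5_of_file(path, chunk_size=1 << 20):
    """Hash a file in chunks so large evidence files do not exhaust memory."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def hash_set_agent(evidence_dir):
    """Yield (path, digest) for every file whose MD5 is in the known-hash set."""
    for root, _, files in os.walk(evidence_dir):
        for name in files:
            path = os.path.join(root, name)
            try:
                digest = md5_of_file(path)
            except OSError:
                continue                      # unreadable file: skip and move on
            if digest in KNOWN_BAD_MD5:
                yield path, digest

if __name__ == "__main__":
    for path, digest in hash_set_agent("./evidence"):   # hypothetical mount point
        print(f"match: {path} ({digest})")
```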

Keywords: artificial intelligence, computer science, criminal investigation, digital forensics

Procedia PDF Downloads 207
3523 Screening Deformed Red Blood Cells Irradiated by Ionizing Radiations Using Windowed Fourier Transform

Authors: Dahi Ghareab Abdelsalam Ibrahim, R. H. Bakr

Abstract:

Ionizing radiation, such as gamma radiation and X-rays, has many applications in medical diagnoses and cancer treatment. In this paper, we used the windowed Fourier transform to extract the complex image of the deformed red blood cells. The real values of the complex image are used to extract the best fitting of the deformed cell boundary. Male albino rats are irradiated by γ-rays from ⁶⁰Co. The male albino rats are anesthetized with ether, and then blood samples are collected from the eye vein by heparinized capillary tubes for studying the radiation-damaging effect in-vivo by the proposed windowed Fourier transform. The peripheral blood films are prepared according to the Brown method. The peripheral blood film is photographed by using an Automatic Image Contour Analysis system (SAMICA) from ELBEK-Bildanalyse GmbH, Siegen, Germany. The SAMICA system is provided with an electronic camera connected to a computer through a built-in interface card, and the image can be magnified up to 1200 times and displayed by the computer. The images of the peripheral blood films are then analyzed by the windowed Fourier transform method to extract the precise deformation from the best fitting. Based on accurate deformation evaluation of the red blood cells, diseases can be diagnosed in their primary stages.
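
A schematic sketch of windowed Fourier filtering, assuming an illustrative window size, frequency band, and threshold (not the parameters used with the SAMICA images): the image is convolved with windowed complex exponentials over a band of spatial frequencies, weak coefficients are suppressed, and the accumulated result is the complex image whose real part is used for boundary fitting.

```python
import numpy as np
from scipy.signal import fftconvolve

def windowed_fourier_filter(img, sigma=6.0, freqs=np.linspace(-0.6, 0.6, 13), thr=8.0):
    """Windowed Fourier filtering: sweep a Gaussian-windowed complex exponential
    (a Gabor-like kernel) over a band of spatial frequencies, keep only strong
    responses, and accumulate the reconstruction into a complex image."""
    y, x = np.mgrid[-3 * sigma:3 * sigma + 1, -3 * sigma:3 * sigma + 1]
    window = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    out = np.zeros(img.shape, dtype=complex)
    for wx in freqs:
        for wy in freqs:
            kernel = window * np.exp(1j * (wx * x + wy * y))   # windowed complex exponential
            coeff = fftconvolve(img, np.conj(kernel), mode="same")
            coeff[np.abs(coeff) < thr] = 0.0                   # suppress weak coefficients
            out += fftconvolve(coeff, kernel, mode="same")     # accumulate the reconstruction
    return out  # complex image; its real part is used to fit the cell boundary

rng = np.random.default_rng(1)
micrograph = rng.normal(size=(64, 64))          # placeholder for a blood-film image
complex_image = windowed_fourier_filter(micrograph)
boundary_signal = np.real(complex_image)        # real values used for boundary fitting
print(boundary_signal.shape)
```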

Keywords: windowed Fourier transform, red blood cells, phase wrapping, image processing

Procedia PDF Downloads 81
3522 Political Antinomy and Its Resolution in Islam

Authors: Abdul Nasir Zamir

Abstract:

After the downfall of the Ottoman Caliphate, it scattered into different small Muslim states. Muslim leaders, intellectuals, revivalists as well as modernists started trying to uplift their nations. Some Muslims are also trying to establish the caliphate. Every Muslim country has its own political system, e.g., kingship, dictatorship or democracy, but these are not in their original forms as historians or political scientists discuss them in their studies. The laws and their practice are mixed, i.e., other laws mixed with Islamic laws, as, for example, in Saudi Arabia (K.S.A.) and the Islamic Republic of Pakistan. There is great conflict among the revivalist Muslim parties (groups) and governments about political systems. The question is whether the subject matter is Sharia or the political system. Leaders of modern Muslim states are alleged to be disbelievers for neglecting revelation in their laws and decisions. There are two types of laws: Islamic laws and management laws. The conflict is that non-Islamic laws are in practice in Muslim states. Non-Islamic laws can be gradually replaced with Islamic laws through a legal and peaceful process, according to the practice of former Muslim leaders and scholars. The bloodshed of Muslims is not allowed in any case. A weak Muslim state is a blessing compared to nothing. The political system after Muhammad and the rightly guided caliphs is considered kingship, but during this period Muslims not only developed in science and technology but also conquered many territories. If the original aim is in practice, then modern Muslim states can be stabilized with different political systems. Modern Muslim states are the hope of the survival, stability, and development of the Muslim Ummah. Islam does not allow armed clashes with the Muslim army or Muslim civilians. The caliphate is based on belief in one Allah Almighty and good deeds according to the Quran and Sunnah. As faith became weak and good deeds fell below their standard level, the caliphate automatically became weak and even ended. The last weak caliphate was the Ottoman Caliphate, which was a hope of all the Muslims of the world. There is no caliphate or caliph present in the world today, but every Muslim country or state is like an Amarat (a part of the caliphate, or a small and alternative form of the caliphate) of Muslims. It is the duty of all Muslims to stabilize these modern Muslim states with tolerance.

Keywords: caliphate, conflict resolution, modern Muslim state, political conflicts, political systems, tolerance

Procedia PDF Downloads 151
3521 Children's Literature and the Study of the Sociological Approach

Authors: Sulmaz Mozaffari, Zahra Mozaffari, Saman Mozaffari

Abstract:

Man has always tried to find the ideal place for life, and he has experienced a lot of problems in doing so. Many internal and external limits have been in his way. Today, man is threatened by many crises because of his specific outlook on the world. Literature, as a universal science, has not ignored this problem either. Children's literature has tried to present social, cultural, religious and economic problems in tales and novels. This research tries to analyse social and cultural problems related to 10th-century children from the point of view of social criticism.

Keywords: social criticism, crisis, children's literature, tale

Procedia PDF Downloads 471
3520 Comics as an Intermediary for Media Literacy Education

Authors: Ryan C. Zlomek

Abstract:

The value of using comics in the literacy classroom has been explored since the 1930s. At that point in time researchers had begun to implement comics into daily lesson plans and, in some instances, had started the development process for comics-supported curriculum. In the mid-1950s, this type of research was cut short due to the work of psychiatrist Frederic Wertham whose research seemingly discovered a correlation between comic readership and juvenile delinquency. Since Wertham’s allegations the comics medium has had a hard time finding its way back to education. Now, over fifty years later, the definition of literacy is in mid-transition as the world has become more visually-oriented and students require the ability to interpret images as often as words. Through this transition, comics has found a place in the field of literacy education research as the shift focuses from traditional print to multimodal and media literacies. Comics are now believed to be an effective resource in bridging the gap between these different types of literacies. This paper seeks to better understand what students learn from the process of reading comics and how those skills line up with the core principles of media literacy education in the United States. In the first section, comics are defined to determine the exact medium that is being examined. The different conventions that the medium utilizes are also discussed. In the second section, the comics reading process is explored through a dissection of the ways a reader interacts with the page, panel, gutter, and different comic conventions found within a traditional graphic narrative. The concepts of intersubjective acts and visualization are attributed to the comics reading process as readers draw in real world knowledge to decode meaning. In the next section, the learning processes that comics encourage are explored parallel to the core principles of media literacy education. Each principle is explained and the extent to which comics can act as an intermediary for this type of education is theorized. In the final section, the author examines comics use in his computer science and technology classroom. He lays out different theories he utilizes from Scott McCloud’s text Understanding Comics and how he uses them to break down media literacy strategies with his students. The article concludes with examples of how comics has positively impacted classrooms around the United States. It is stated that integrating comics into the classroom will not solve all issues related to literacy education but, rather, that comics can be a powerful multimodal resource for educators looking for new mediums to explore with their students.

Keywords: comics, graphics novels, mass communication, media literacy, metacognition

Procedia PDF Downloads 295
3519 Evidence of Half-Metallicity in Cubic PrMnO3 Perovskite

Authors: B. Bouadjemi, S. Bentata, W. Benstaali, A. Abbad

Abstract:

The electronic and magnetic properties of the cubic praseodymium oxide perovskite PrMnO3 were calculated using density functional theory (DFT) with both the generalized gradient approximation (GGA) and GGA+U approaches, where U is the on-site Coulomb interaction correction. The results show a half-metallic ferromagnetic ground state for PrMnO3 in the GGA+U approach, while a semi-metallic ferromagnetic character is observed in GGA. The results obtained make cubic PrMnO3 a promising candidate for application in spintronics.

Keywords: first-principles, electronic properties, transition metal, materials science

Procedia PDF Downloads 455