Search results for: feature noise
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2594

434 Methods for Enhancing Ensemble Learning or Improving Classifiers of This Technique in the Analysis and Classification of Brain Signals

Authors: Seyed Mehdi Ghezi, Hesam Hasanpoor

Abstract:

This article explores enhancement methods for ensemble learning with the aim of improving the performance of classifiers in the analysis and classification of brain signals. Research in this field follows two main approaches, each with its own strengths and weaknesses, and the choice between them depends on the specific research question and the available resources. By combining the approaches and leveraging their respective strengths, researchers can improve the accuracy and reliability of classification results and thereby advance our understanding of the brain and its functions. The first approach uses machine learning methods to identify the best features among the vast array of features present in brain signals. The selection of features varies with the research objective, and different techniques have been employed for this purpose: genetic algorithms have been used in some studies to identify the best features, optimization methods have been used in others to identify the most influential features, and machine learning techniques have also been applied to determine the most influential electrodes for classification. Ensemble learning plays a crucial role in identifying the features that contribute most to learning, thereby improving the overall results. The second approach concentrates on designing and implementing methods for selecting the best classifier, or on using meta-classifiers, to enhance the final results of ensemble learning. In a separate part of the research, a single classifier is used instead of multiple classifiers, with different sets of features, to improve the results. The article examines each technique in depth, highlighting its advantages and limitations. By integrating these techniques, researchers can enhance the performance of classifiers in brain signal analysis and classification, ultimately leading to improved accuracy and reliability and a better understanding of the brain and its functions.
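
As a rough illustration of the ensemble-plus-feature-selection idea described in this abstract, the sketch below (not the authors' code) combines several classifiers by voting and precedes them with a simple univariate feature-selection step standing in for the genetic-algorithm or optimization-based selection the abstract mentions; the data, classifiers, and parameters are placeholders.

```python
# Minimal sketch: voting ensemble preceded by feature selection (stand-in for GA selection).
# Synthetic data is used as a placeholder for real brain-signal features.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=100, n_informative=10, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("svm", SVC(probability=True)),
        ("tree", DecisionTreeClassifier(max_depth=5)),
    ],
    voting="soft",
)
# Keep only the 10 highest-scoring features before the ensemble sees the data.
model = make_pipeline(SelectKBest(f_classif, k=10), ensemble)
print("CV accuracy:", cross_val_score(model, X, y, cv=5).mean())
```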

Keywords: ensemble learning, brain signals, classification, feature selection, machine learning, genetic algorithm, optimization methods, influential features, influential electrodes, meta-classifiers

Procedia PDF Downloads 69
433 First Rank Symptoms in Mania: An Indistinct Diagnostic Strand

Authors: Afshan Channa, Sameeha Aleem, Harim Mohsin

Abstract:

First rank symptoms (FRS) are considered pathognomonic for schizophrenia. However, FRS are not a distinctive feature of schizophrenia; they have also been observed in affective disorders, although they are not included in the diagnostic criteria. The presence of FRS in mania can lead to misdiagnosis as a primary psychotic illness, complicating management and delaying appropriate treatment. FRS in mania is associated with poor clinical and functional outcomes, and its presence in the first episode of bipolar disorder may predict a poor short-term outcome and a decompensating course of illness. FRS in mania has mainly been studied in the West; cultural divergence makes it pertinent to study the frequency of FRS in affective disorders independently in Pakistan. Objective: To determine the frequency of first rank symptoms in manic patients under treatment at the psychiatric services of a tertiary care hospital. Method: A cross-sectional study was conducted at the psychiatric services of Aga Khan University Hospital, Karachi, Pakistan. One hundred and twenty manic patients were recruited from November 2014 to May 2015. Patients who were unable to comprehend Urdu or who had a comorbid psychiatric or organic disorder were excluded. FRS were assessed by administering the validated Urdu version of the Present State Examination (PSE). Result: The mean age of the patients was 37.62 ± 12.51 years, and the mean number of previous manic episodes was 2.17 ± 2.23. FRS were present in 11.2% of males and 30.6% of females; this association of first rank symptoms with gender in patients with mania was significant (p = 0.008). Overall, 19.2% exhibited FRS during the course of their illness: 43.5% had thought broadcasting, made feelings, impulses, actions, and somatic passivity; 39.1% had thought insertion; 30.4% had auditory perceptual distortion; and 17.4% had thought withdrawal. None displayed delusional perception. Conclusion: The study confirms the presence of FRS in mania in both males and females, irrespective of the duration of the current manic episode or the number of previous manic episodes. A substantial difference was established between the genders. Being married had no protective effect on the presence of FRS.

Keywords: first rank symptoms, Mania, psychosis, present state examination

Procedia PDF Downloads 369
432 Enhancing Robustness in Federated Learning through Decentralized Oracle Consensus and Adaptive Evaluation

Authors: Peiming Li

Abstract:

This paper presents an innovative blockchain-based approach to enhance the reliability and efficiency of federated learning systems. By integrating a decentralized oracle consensus mechanism into the federated learning framework, we address key challenges of data and model integrity. Our approach utilizes a network of redundant oracles, functioning as independent validators within an epoch-based training system in the federated learning model. In federated learning, data is decentralized, residing on various participants' devices. This scenario often leads to concerns about data integrity and model quality. Our solution employs blockchain technology to establish a transparent and tamper-proof environment, ensuring secure data sharing and aggregation. The decentralized oracles, a concept borrowed from blockchain systems, act as unbiased validators. They assess the contributions of each participant using a Hidden Markov Model (HMM), which is crucial for evaluating the consistency of participant inputs and safeguarding against model poisoning and malicious activities. Our methodology's distinct feature is its epoch-based training. An epoch here refers to a specific training phase where data is updated and assessed for quality and relevance. The redundant oracles work in concert to validate data updates during these epochs, enhancing the system's resilience to security threats and data corruption. The effectiveness of this system was tested using the MNIST dataset, a standard in machine learning for benchmarking. Results demonstrate that our blockchain-oriented federated learning approach significantly boosts system resilience, addressing the common challenges of federated environments. This paper aims to make these advanced concepts accessible, even to those with a limited background in blockchain or federated learning. We provide a foundational understanding of how blockchain technology can revolutionize data integrity in decentralized systems and explain the role of oracles in maintaining model accuracy and reliability.
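
To make the oracle-validation idea concrete, the following sketch (hypothetical, not the paper's implementation) shows where redundant validators sit in an epoch of federated averaging; the abstract's HMM consistency check is replaced by a much simpler deviation-from-median rule, named here purely as a stand-in.

```python
# Minimal sketch: epoch-based federated averaging with redundant "oracle" validators
# that score each client update before aggregation and reject outliers.
import numpy as np

def oracle_score(update, all_updates):
    """Score an update by its distance from the element-wise median of all updates."""
    median = np.median(all_updates, axis=0)
    return np.linalg.norm(update - median)

def aggregate_epoch(client_updates, n_oracles=3, threshold=2.0):
    updates = np.stack(client_updates)
    # Each redundant oracle scores every update (identical deterministic scoring here);
    # the consensus score is the mean across oracles.
    scores = np.mean([[oracle_score(u, updates) for u in updates]
                      for _ in range(n_oracles)], axis=0)
    cutoff = threshold * np.median(scores) + 1e-12
    accepted = updates[scores <= cutoff]          # reject suspected poisoned updates
    return accepted.mean(axis=0)                  # federated averaging over accepted updates

# Toy example: three honest clients and one outlier ("poisoned") update.
updates = [np.ones(5), np.ones(5) * 1.1, np.ones(5) * 0.9, np.ones(5) * 50.0]
print(aggregate_epoch(updates))
```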

Keywords: federated learning system, blockchain, decentralized oracles, hidden Markov model

Procedia PDF Downloads 55
431 Series Connected GaN Resonant Tunneling Diodes for Multiple-Valued Logic

Authors: Fang Liu, JunShuai Xue, JiaJia Yao, XueYan Yang, ZuMao Li, GuanLin Wu, HePeng Zhang, ZhiPeng Sun

Abstract:

III-Nitride resonant tunneling diode (RTD) is one of the most promising candidates for multiple-valued logic (MVL) elements. Here, we report a monolithic integration of GaN resonant tunneling diodes to realize multiple negative differential resistance (NDR) regions for MVL application. GaN RTDs, composed of a 2 nm quantum well embedded in two 1 nm quantum barriers, are grown by plasma-assisted molecular beam epitaxy on free-standing c-plane GaN substrates. Negative differential resistance characteristic with a peak current density of 178 kA/cm² in conjunction with a peak-to-valley current ratio (PVCR) of 2.07 is observed. Statistical properties exhibit high consistency showing a peak current density standard deviation of almost 1%, laying the foundation for the monolithic integration. After complete electrical isolation, two diodes of the designed same area are connected in series. By solving the Poisson equation and Schrodinger equation in one dimension, the energy band structure is calculated to explain the transport mechanism of the differential negative resistance phenomenon. Resonant tunneling events in a sequence of the series-connected RTD pair (SCRTD) form multiple NDR regions with nearly equal peak current, obtaining three stable operating states corresponding to ternary logic. A frequency multiplier circuit achieved using this integration is demonstrated, attesting to the robustness of this multiple peaks feature. This article presents a monolithic integration of SCRTD with multiple NDR regions driven by the resonant tunneling mechanism, which can be applied to a multiple-valued logic field, promising a fast operation speed and a great reduction of circuit complexity and demonstrating a new solution for nitride devices to break through the limitations of binary logic.

Keywords: GaN resonant tunneling diode, multiple-valued logic system, frequency multiplier, negative differential resistance, peak-to-valley current ratio

Procedia PDF Downloads 72
430 Application of Compressed Sensing and Different Sampling Trajectories for Data Reduction of Small Animal Magnetic Resonance Image

Authors: Matheus Madureira Matos, Alexandre Rodrigues Farias

Abstract:

Magnetic Resonance Imaging (MRI) is a vital imaging technique used in both clinical and pre-clinical areas to obtain detailed anatomical and functional information. However, MRI scans can be expensive, time-consuming, and often require the use of anesthetics to keep animals still during the imaging process. Prolonged or repeated exposure to anesthetics can have adverse effects on animals, including physiological alterations and potential toxicity, so minimizing the duration and frequency of anesthesia is crucial for the well-being of research animals. In recent years, various sampling trajectories have been investigated to reduce the number of MRI measurements, leading to shorter scanning times and minimizing the duration of animal exposure to the effects of anesthetics. Compressed sensing (CS) and sampling trajectories such as Cartesian, spiral, and radial have emerged as powerful tools to reduce MRI data while preserving diagnostic quality. This work aims to apply CS with Cartesian, spiral, and radial sampling trajectories to the reconstruction of MRI of the abdomen of mice sub-sampled at levels below that defined by the Nyquist theorem. The methodology consists of using a fully sampled reference MRI of a female C57BL/6 mouse acquired experimentally in a 4.7 Tesla MRI scanner for small animals using spin echo pulse sequences. The image is down-sampled along Cartesian, radial, and spiral sampling paths and then reconstructed by CS. The quality of the reconstructed images is objectively assessed by three quality assessment techniques: RMSE (root mean square error), PSNR (peak signal-to-noise ratio), and SSIM (structural similarity index measure). The utilization of optimized sampling trajectories and the CS technique has demonstrated the potential for a reduction of up to 70% in image data acquisition. This result translates into shorter scan times, minimizing the duration and frequency of anesthesia administration and reducing the potential risks associated with it.
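
For readers unfamiliar with the three metrics named above, the sketch below (not the authors' pipeline) computes RMSE, PSNR, and SSIM between a fully sampled reference image and a reconstruction; the images here are synthetic placeholders assumed to be scaled to [0, 1].

```python
# Minimal sketch: the three image-quality metrics used to assess CS reconstructions.
import numpy as np
from skimage.metrics import structural_similarity

def rmse(ref, rec):
    return np.sqrt(np.mean((ref - rec) ** 2))

def psnr(ref, rec, data_range=1.0):
    err = np.mean((ref - rec) ** 2)
    return 10 * np.log10(data_range ** 2 / err)

# Toy example with a synthetic "reference" and a noisy "reconstruction".
rng = np.random.default_rng(0)
reference = rng.random((128, 128))
reconstruction = np.clip(reference + 0.05 * rng.standard_normal((128, 128)), 0, 1)

print("RMSE:", rmse(reference, reconstruction))
print("PSNR (dB):", psnr(reference, reconstruction))
print("SSIM:", structural_similarity(reference, reconstruction, data_range=1.0))
```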

Keywords: compressed sensing, magnetic resonance, sampling trajectories, small animals

Procedia PDF Downloads 63
429 The Sociology of the Facebook: An Exploratory Study

Authors: Liana Melissa E. de la Rosa, Jayson P. Ada

Abstract:

This exploratory study was conducted to examine the sociology of Facebook. Specifically, it aimed to describe the socio-demographic profile of the respondents in terms of age, sex, year level, and monthly allowance; find out the common uses of Facebook among the respondents; identify the features of Facebook that are commonly used by the respondents; understand the benefits and risks of using Facebook; determine how frequently the respondents use Facebook; and find out whether there is a significant relationship between the socio-demographic profile of the respondents and their Facebook usage. The study used an exploratory and correlational research design, employing a survey questionnaire as its main data-gathering instrument. Students of the University of Eastern Philippines were selected as the respondents through quota sampling, with ten (10) students randomly selected from each college of the university. Based on the findings of this study, the following conclusions were drawn. The majority of the respondents are aged between 18 and 21, female, third-year students, with a monthly allowance of P2,000 and above. On Facebook usage, the majority use Facebook on a daily basis for one to two (1-2) hours every day, and most access it by renting a computer in an internet cafe. Most users created their profiles mainly to connect with people and gain new friends. The most commonly used features of Facebook are the photos application, like button, wall, notifications, friends, chat, networks, groups and "like" pages, status updates, messages and inbox, and events. Features that are seldom used by the respondents are games, news feed, user name, video sharing, and notes, while the least used features are questions, the poke feature, credits, and the marketplace. The respondents stated that the major benefit Facebook gives its users is the ability to keep in touch with family members or friends, while the main risk identified is that users can become addicted to the Internet. On the tests of relationships between the respondents' use of Facebook and the four (4) socio-demographic profile variables, age, year level, and monthly allowance were found to be not significantly related to the respondents' use of Facebook, while sex (gender) was found to be significantly related.

Keywords: Facebook, sociology, social networking, exploratory study

Procedia PDF Downloads 281
428 Hand Gesture Detection via EmguCV Canny Pruning

Authors: N. N. Mosola, S. J. Molete, L. S. Masoebe, M. Letsae

Abstract:

Hand gesture recognition is a technique used to locate, detect, and recognize a hand gesture. Detection and recognition are concepts of Artificial Intelligence (AI). AI concepts are applicable in Human Computer Interaction (HCI), Expert systems (ES), etc. Hand gesture recognition can be used in sign language interpretation. Sign language is a visual communication tool. This tool is used mostly by deaf societies and those with speech disorder. Communication barriers exist when societies with speech disorder interact with others. This research aims to build a hand recognition system for Lesotho’s Sesotho and English language interpretation. The system will help to bridge the communication problems encountered by the mentioned societies. The system has various processing modules. The modules consist of a hand detection engine, image processing engine, feature extraction, and sign recognition. Detection is a process of identifying an object. The proposed system uses Canny pruning Haar and Haarcascade detection algorithms. Canny pruning implements the Canny edge detection. This is an optimal image processing algorithm. It is used to detect edges of an object. The system employs a skin detection algorithm. The skin detection performs background subtraction, computes the convex hull, and the centroid to assist in the detection process. Recognition is a process of gesture classification. Template matching classifies each hand gesture in real-time. The system was tested using various experiments. The results obtained show that time, distance, and light are factors that affect the rate of detection and ultimately recognition. Detection rate is directly proportional to the distance of the hand from the camera. Different lighting conditions were considered. The more the light intensity, the faster the detection rate. Based on the results obtained from this research, the applied methodologies are efficient and provide a plausible solution towards a light-weight, inexpensive system which can be used for sign language interpretation.
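
The detection pipeline described above can be sketched as follows; this is a hypothetical illustration written with Python OpenCV rather than the EmguCV (C#) wrapper the authors used, the cascade file path is a placeholder, and the Canny-pruning step corresponds to OpenCV's CASCADE_DO_CANNY_PRUNING flag.

```python
# Minimal sketch: Haar cascade detection with Canny pruning, plus skin segmentation,
# convex hull, and centroid computation for the largest skin blob.
import cv2

cascade = cv2.CascadeClassifier("hand_cascade.xml")  # placeholder cascade file

def detect_hand(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    hands = cascade.detectMultiScale(
        gray, scaleFactor=1.1, minNeighbors=5,
        flags=cv2.CASCADE_DO_CANNY_PRUNING)           # Canny pruning speeds up the scan
    # Skin segmentation in YCrCb, then convex hull and centroid of the largest blob.
    ycrcb = cv2.cvtColor(frame, cv2.COLOR_BGR2YCrCb)
    skin = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
    contours, _ = cv2.findContours(skin, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        largest = max(contours, key=cv2.contourArea)
        hull = cv2.convexHull(largest)
        m = cv2.moments(largest)
        centroid = (int(m["m10"] / (m["m00"] + 1e-9)), int(m["m01"] / (m["m00"] + 1e-9)))
        return hands, hull, centroid
    return hands, None, None
```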

Keywords: canny pruning, hand recognition, machine learning, skin tracking

Procedia PDF Downloads 176
427 Design and Development of a Safety Equipment and Accessory for Bicycle Users

Authors: Francine Siy, Stephen Buñi

Abstract:

Safety plays a significant role in everyone’s life on a day-to-day basis. We wish ourselves and our loved ones their safety as we all venture out on our daily commute. The road is undeniably dangerous and unpredictable, with abundant traffic collisions and pedestrians experiencing various injuries. For bicycle users, the risk of accidents is even more exacerbated, and injuries may be severe. Even when cyclists try their best to be safe and protected, the possibility of encountering danger is always there. Despite being equipped with protective gear, safety is never guaranteed. Cyclists often settle for helmets and standard reflector vests to establish a presence on the road. There are different types of vests available, depending on the profession. However, traditional reflector vests, mostly seen on construction workers and traffic enforcers, were not designed for riders and their protection from injuries. With insufficient protection for riders, they need access to ergonomically designed equipment and accessories that suit the riders and cater to their needs. This research aimed to offer a protective vest with safety features for riders that is comfortable, effective, durable, and intuitive. This sheds light and addresses the safety of the biker population, which continuously grows through the years. The product was designed and developed by gathering data and using the cognitive mapping method to ensure that all qualitative and quantitative data were considered in this study to improve other existing products that do not have the proper design considerations. It is known that available equipment for cyclists is often sold separately or lacks the safety features for cyclists traversing open roads. Each safety feature like the headlights, reflectors, signal or rear lights, zipper pouch, body camera attachment, and wireless remote control all play a particular role in helping cyclists embark on their daily commute. These features aid in illumination, visibility, easy maneuvering, convenience, and security, allowing cyclists to go for a safer ride that is of use throughout the day. The product is designed and produced effectively and inexpensively without sacrificing the quality and purpose of its usage.

Keywords: bicycle accessory, protective gear, safety, transport, visibility

Procedia PDF Downloads 77
426 Factors Determining the Vulnerability to Occupational Health Risk and Safety of Call Center Agents in the Philippines

Authors: Lito M. Amit, Venecio U. Ultra, Young-Woong Song

Abstract:

The business process outsourcing (BPO) in the Philippines is expanding rapidly attracting more than 2% of total employment. Currently, the BPO industry is confronted with several issues pertaining to sustainable productivity such as meeting the staffing gap, high rate of employees’ turnover and workforce retention, and the occupational health and safety (OHS) of call center agents. We conducted a survey of OHS programs and health concerns among call center agents in the Philippines and determined the sociocultural factors that affect the vulnerability of call center agents to occupational health risks and hazards. The majority of the agents affirmed that OHS are implemented and OHS orientation and emergency procedures were conducted at employment initiations, perceived favorable and convenient working environment except for occasional noise disturbances and acoustic shock, visual, and voice fatigues. Male agents can easily adjust to the demands and changes in their work environment and flexible work schedules than female agents. Female agents have a higher tendency to be pressured and humiliated by low work performance, experience a higher incidence of emotional abuse, psychological abuse, and experience more physical stress than male agents. The majority of the call center agents had a night-shift schedule and regardless of other factors, night shift work brings higher stress to agents. While working in a call center, higher incidence of headaches and insomnia, burnout, suppressed anger, anxiety, and depressions were experienced by female, younger (21-25 years old) and those at night shift than their counterpart. Most common musculoskeletal disorders include body pain in the neck, shoulders and back; and hand and wrist disorders and these are commonly experienced by female and younger workers. About 30% experienced symptoms of cardiovascular and gastrointestinal disorders and weakened immune systems. Overall, these findings have shown the variable vulnerability by a different subpopulation of call center agents and are important in the occupational health risk prevention and management towards a sustainable human resource for BPO industry in the Philippines.

Keywords: business process outsourcing industry, health risk of call center agents, socio-cultural determinants, Philippines

Procedia PDF Downloads 490
425 Co-Design of Accessible Speech Recognition for Users with Dysarthric Speech

Authors: Elizabeth Howarth, Dawn Green, Sean Connolly, Geena Vabulas, Sara Smolley

Abstract:

Through the EU Horizon 2020 Nuvoic Project, the project team recruited 70 individuals in the UK and Ireland to test the Voiceitt speech recognition app and provide user feedback to developers. The app is designed for people with dysarthric speech, to support communication with unfamiliar people and access to speech-driven technologies such as smart home equipment and smart assistants. Participants with atypical speech, due to a range of conditions such as cerebral palsy, acquired brain injury, Down syndrome, stroke and hearing impairment, were recruited, primarily through organisations supporting disabled people. Most had physical or learning disabilities in addition to dysarthric speech. The project team worked with individuals, their families and local support teams, to provide access to the app, including through additional assistive technologies where needed. Testing was user-led, with participants asked to identify and test use cases most relevant to their daily lives over a period of three months or more. Ongoing technical support and training were provided remotely and in-person throughout the testing period. Structured interviews were used to collect feedback on users' experiences, with delivery adapted to individuals' needs and preferences. Informal feedback was collected through ongoing contact between participants, their families and support teams and the project team. Focus groups were held to collect feedback on specific design proposals. User feedback shared with developers has led to improvements to the user interface and functionality, including faster voice training, simplified navigation, the introduction of gamification elements and of switch access as an alternative to touchscreen access, with other feature requests from users still in development. This work offers a case-study in successful and inclusive co-design with the disabled community.

Keywords: co-design, assistive technology, dysarthria, inclusive speech recognition

Procedia PDF Downloads 100
424 Analysis and the Fair Distribution Modeling of Urban Facilities in Kabul City

Authors: Ansari Mohammad Reza, Hiroko Ono, Fakhrullah Sarwari

Abstract:

Our world is fast heading toward becoming a predominantly urban planet. This is a double-edged reality, as frightening as it is interesting. Current predictions suggest that about 90 percent of the coming urbanization will be absorbed by the towns and cities of the developing countries of Asia and Africa, which points toward a far more tragic ending to this story than a happy one. Given that most of these countries are still struggling to answer even the most basic questions of urbanization—how to provide essential infrastructure for their cities, how to define regulations, or how to design an appropriate pattern for urban expansion—it is reasonable to expect that most of the world's coming urbanization will happen informally. This reality not only casts doubt on the character, landscape, and overall picture of the cities of the future, but also raises further essential questions: how will facilities be distributed in these cities, and how fair will that pattern of distribution be? Kabul, the capital of Afghanistan, is a city in the developing world whose urbanization process has been under way since 2001; it currently holds the position of fifth fastest growing city in the world and has a considerable slum ratio of 0.7, meaning that about 70 percent of its population lives in informal areas. It is therefore a very good case study for investigating how the informal development of a city can lead to an unfair and unbalanced distribution of its facilities. In this study, we first propose an ideal model for the fair distribution of facilities in Kabul city—one in which all citizens have an equal chance of access to facilities—and then evaluate the situation of the city based on how fairly its facilities are currently distributed. We do this through a comparative analysis of the existing facility rates in the formal and informal areas of the city against those of the proposed fair ideal model.

Keywords: Afghanistan, facility distribution, formal settlements, informal settlements, Kabul

Procedia PDF Downloads 112
423 In-Flight Radiometric Performances Analysis of an Airborne Optical Payload

Authors: Caixia Gao, Chuanrong Li, Lingli Tang, Lingling Ma, Yaokai Liu, Xinhong Wang, Yongsheng Zhou

Abstract:

Performance analysis of a remote sensing sensor is required to pursue a range of scientific research and application objectives. Laboratory analysis of any remote sensing instrument is essential but not sufficient to establish valid in-flight performance. In this study, with the aid of in situ measurements and the corresponding image of a three-gray-scale permanent artificial target, the in-flight radiometric performance analyses (in-flight radiometric calibration, dynamic range and response linearity, signal-to-noise ratio (SNR), radiometric resolution) of a self-developed short-wave infrared (SWIR) camera are performed. To acquire the in-flight calibration coefficients of the SWIR camera, the at-sensor radiances (Li) for the artificial targets are first simulated with in situ measurements (atmospheric parameters and spectral reflectance of the target) and viewing geometries using the MODTRAN model. With these radiances and the corresponding digital numbers (DN) in the image, a straight line of the form L = G × DN + B is fitted by a minimization regression method, and the fitted coefficients G and B are the in-flight calibration coefficients. The high point (LH) and the low point (LL) of the dynamic range can then be described as LH = (G × DNH + B) and LL = B, respectively, where DNH is equal to 2^n − 1 (n is the quantization number of the payload). Meanwhile, the sensor's response linearity (δ) is described by the correlation coefficient of the regressed line. The results show that the calibration coefficients (G and B) are 0.0083 W·sr−1m−2µm−1 and −3.5 W·sr−1m−2µm−1; the low point of the dynamic range is −3.5 W·sr−1m−2µm−1 and the high point is 30.5 W·sr−1m−2µm−1; the response linearity is approximately 99%. Furthermore, an SNR normalization method is used to assess the sensor's SNR, and the normalized SNR is about 59.6 when the mean radiance is 11.0 W·sr−1m−2µm−1; the radiometric resolution is then calculated to be about 0.1845 W·sr−1m−2µm−1. Moreover, in order to validate the result, a comparison between the measured radiance and the radiative-transfer-code prediction over four portable artificial targets with reflectances of 20%, 30%, 40%, and 50%, respectively, is performed. The relative error of the calibration is within 6.6%.
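
The calibration fit described above can be reproduced in a few lines; the sketch below is not the authors' code, and the digital numbers, radiances, and quantization depth are made-up placeholders used only to show how G, B, the dynamic range, and the response linearity are obtained.

```python
# Minimal sketch: least-squares fit of L = G * DN + B, then dynamic range and linearity.
import numpy as np

dn = np.array([520.0, 1850.0, 3900.0])      # hypothetical digital numbers of the 3 gray panels
radiance = np.array([0.8, 11.0, 28.9])      # hypothetical simulated at-sensor radiances

G, B = np.polyfit(dn, radiance, 1)          # slope = gain, intercept = offset

n_bits = 12                                  # assumed quantization depth of the payload
dn_high = 2 ** n_bits - 1
L_high = G * dn_high + B                     # high point of the dynamic range
L_low = B                                    # low point (DN = 0)
linearity = np.corrcoef(dn, radiance)[0, 1]  # response linearity as correlation coefficient

print(f"G={G:.4f}, B={B:.2f}, dynamic range=[{L_low:.2f}, {L_high:.2f}], linearity={linearity:.4f}")
```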

Keywords: calibration and validation site, SWIR camera, in-flight radiometric calibration, dynamic range, response linearity

Procedia PDF Downloads 265
422 Experimental Evaluation of Foundation Settlement Mitigations in Liquefiable Soils using Press-in Sheet Piling Technique: 1-g Shake Table Tests

Authors: Md. Kausar Alam, Ramin Motamed

Abstract:

The damaging effects of liquefaction-induced ground movements have been frequently observed in past earthquakes, such as the 2010-2011 Canterbury Earthquake Sequence (CES) in New Zealand and the 2011 Tohoku earthquake in Japan. To reduce the consequences of soil liquefaction at shallow depths, various ground improvement techniques have been utilized in engineering practice, among which this research is focused on experimentally evaluating the press-in sheet piling technique. The press-in sheet pile technique eliminates the vibration, hammering, and noise pollution associated with dynamic sheet pile installation methods. Unfortunately, there are limited experimental studies on the press-in sheet piling technique for liquefaction mitigation using 1g shake table tests in which all the controlling mechanisms of liquefaction-induced foundation settlement, including sand ejecta, can be realistically reproduced. In this study, a series of moderate scale 1g shake table experiments were conducted at the University of Nevada, Reno, to evaluate the performance of this technique in liquefiable soil layers. First, a 1/5 size model was developed based on a recent UC San Diego shaking table experiment. The scaled model has a density of 50% for the top crust, 40% for the intermediate liquefiable layer, and 85% for the bottom dense layer. Second, a shallow foundation is seated atop an unsaturated sandy soil crust. Third, in a series of tests, a sheet pile with variable embedment depth is inserted into the liquefiable soil using the press-in technique surrounding the shallow foundations. The scaled models are subjected to harmonic input motions with amplitude and dominant frequency properly scaled based on the large-scale shake table test. This study assesses the performance of the press-in sheet piling technique in terms of reductions in the foundation movements (settlement and tilt) and generated excess pore water pressures. In addition, this paper discusses the cost-effectiveness and carbon footprint features of the studied mitigation measures.

Keywords: excess pore water pressure, foundation settlement, press-in sheet pile, soil liquefaction

Procedia PDF Downloads 91
421 Constructing Digital Memory for Chinese Ancient Village: A Case on Village of Gaoqian

Authors: Linqing Ma, Huiling Feng, Jihong Liang, Yi Qian

Abstract:

In China, some villages have survived the long history of changes and remain to this day with the unique styles and featured culture they developed in the past. These ancient villages, usually hundreds or thousands of years old, are a mirror of traditional Chinese culture, especially the farming-studying culture represented by Confucianism. Gaoqian, an ancient village with a population of 3,000 in Zhejiang province, is such a case. With a history dating back to the Yuan Dynasty, Gaoqian Village has 13 well-preserved traditional Chinese courtyard houses built in the Ming and Qing Dynasties, and it is a fine specimen for studying traditional rural China. A repository for the memory of the village will be built by arranging and describing multimedia resources such as texts, photos, and videos. The production of creative products with digital technologies is also possible, based on a thorough understanding of the cultural features of Gaoqian Village, using research tools from literature and history studies and a comparative method. Finally, the project will construct an exhibition platform for the village and its culture, telling its stories with complete structures and threads.

Keywords: ancient villages, digital exhibition, multimedia, traditional culture

Procedia PDF Downloads 575
420 Analysis of Citation Rate and Data Reuse for Openly Accessible Biodiversity Datasets on Global Biodiversity Information Facility

Authors: Nushrat Khan, Mike Thelwall, Kayvan Kousha

Abstract:

Making research data openly accessible has been mandated by most funders over the last 5 years, as it promotes reproducibility in science and reduces duplication of effort to collect the same data. There is evidence that articles that publicly share research data have higher citation rates in the biological and social sciences. However, how and whether shared data are being reused is not always obvious, as such information is not easily accessible from the majority of research data repositories. This study aims to understand the practice of data citation and how data are being reused over the years, focusing on biodiversity, since research data are frequently reused in this field. Metadata of 38,878 datasets, including citation counts, were collected through the Global Biodiversity Information Facility (GBIF) API for this purpose. GBIF was used as a data source since it provides citation counts for datasets, which is not a commonly available feature in most repositories. Analysis of dataset types, citation counts, and the creation and update times of datasets suggests that citation rates vary for different types of datasets: occurrence datasets, which have more granular information, have higher citation rates than checklist and metadata-only datasets. Another finding is that biodiversity datasets on GBIF are frequently updated, which is unique to this field. The majority of datasets from the earliest year, 2007, were updated after 11 years, and no dataset has remained unchanged since creation. For each year between 2007 and 2017, we compared the correlations between update time and citation rate for four different types of datasets. Recent datasets show no correlation, while 3- to 4-year-old datasets show a weak correlation in which more recently updated datasets received higher citations. The results suggest that it takes several years for research datasets to accumulate citations. However, this investigation found that when the same datasets are searched on Google Scholar or Scopus, the number of citations is often not the same as on GBIF. Hence, a future aim is to further explore the citation count system adopted by GBIF to evaluate its reliability and whether it is applicable to other fields of study as well.
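
A correlation analysis of the kind described above could be sketched as follows; this is not the study's pipeline, and the CSV file and its column names are hypothetical stand-ins for dataset metadata of the sort GBIF exposes (type, creation date, last-update date, citation count).

```python
# Minimal sketch: per-type correlation between update recency and citation count.
import pandas as pd
from scipy.stats import spearmanr

# 'datasets.csv' is a placeholder with columns: key, type, created, modified, citations
df = pd.read_csv("datasets.csv", parse_dates=["created", "modified"])
snapshot = pd.Timestamp("2018-01-01")
df["days_since_update"] = (snapshot - df["modified"]).dt.days

for dtype, group in df.groupby("type"):
    rho, p = spearmanr(group["days_since_update"], group["citations"])
    print(f"{dtype}: Spearman rho={rho:.2f} (p={p:.3f}, n={len(group)})")
```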

Keywords: data citation, data reuse, research data sharing, webometrics

Procedia PDF Downloads 170
419 Advanced Magnetic Field Mapping Utilizing Vertically Integrated Deployment Platforms

Authors: John E. Foley, Martin Miele, Raul Fonda, Jon Jacobson

Abstract:

This paper presents development and implementation of new and innovative data collection and analysis methodologies based on deployment of total field magnetometer arrays. Our research has focused on the development of a vertically-integrated suite of platforms all utilizing common data acquisition, data processing and analysis tools. These survey platforms include low-altitude helicopters and ground-based vehicles, including robots, for terrestrial mapping applications. For marine settings the sensor arrays are deployed from either a hydrodynamic bottom-following wing towed from a surface vessel or from a towed floating platform for shallow-water settings. Additionally, sensor arrays are deployed from tethered remotely operated vehicles (ROVs) for underwater settings where high maneuverability is required. While the primary application of these systems is the detection and mapping of unexploded ordnance (UXO), these system are also used for various infrastructure mapping and geologic investigations. For each application, success is driven by the integration of magnetometer arrays, accurate geo-positioning, system noise mitigation, and stable deployment of the system in appropriate proximity of expected targets or features. Each of the systems collects geo-registered data compatible with a web-enabled data management system providing immediate access of data and meta-data for remote processing, analysis and delivery of results. This approach allows highly sophisticated magnetic processing methods, including classification based on dipole modeling and remanent magnetization, to be efficiently applied to many projects. This paper also briefly describes the initial development of magnetometer-based detection systems deployed from low-altitude helicopter platforms and the subsequent successful transition of this technology to the marine environment. Additionally, we present examples from a range of terrestrial and marine settings as well as ongoing research efforts related to sensor miniaturization for unmanned aerial vehicle (UAV) magnetic field mapping applications.

Keywords: dipole modeling, magnetometer mapping systems, sub-surface infrastructure mapping, unexploded ordnance detection

Procedia PDF Downloads 460
418 Automatic Processing of Trauma-Related Visual Stimuli in Female Patients Suffering From Post-Traumatic Stress Disorder after Interpersonal Traumatization

Authors: Theresa Slump, Paula Neumeister, Katharina Feldker, Carina Y. Heitmann, Thomas Straube

Abstract:

A characteristic feature of post-traumatic stress disorder (PTSD) is the automatic processing of disorder-specific stimuli that expresses itself in intrusive symptoms such as intense physical and psychological reactions to trauma-associated stimuli. That automatic processing plays an essential role in the development and maintenance of symptoms. The aim of our study was, therefore, to investigate the behavioral and neural correlates of automatic processing of trauma-related stimuli in PTSD. Although interpersonal traumatization is a form of traumatization that often occurs, it has not yet been sufficiently studied. That is why, in our study, we focused on patients suffering from interpersonal traumatization. While previous imaging studies on PTSD mainly used faces, words, or generally negative visual stimuli, our study presented complex trauma-related and neutral visual scenes. We examined 19 female subjects suffering from PTSD and examined 19 healthy women as a control group. All subjects did a geometric comparison task while lying in a functional-magnetic-resonance-imaging (fMRI) scanner. Trauma-related scenes and neutral visual scenes that were not relevant to the task were presented while the subjects were doing the task. Regarding the behavioral level, there were not any significant differences between the task performance of the two groups. Regarding the neural level, the PTSD patients showed significant hyperactivation of the hippocampus for task-irrelevant trauma-related stimuli versus neutral stimuli when compared with healthy control subjects. Connectivity analyses revealed altered connectivity between the hippocampus and other anxiety-related areas in PTSD patients, too. Overall, those findings suggest that fear-related areas are involved in PTSD patients' processing of trauma-related stimuli even if the stimuli that were used in the study were task-irrelevant.

Keywords: post-traumatic stress disorder, automatic processing, hippocampus, functional magnetic resonance imaging

Procedia PDF Downloads 193
417 Impact of Urban Densification on Travel Behaviour: Case of Surat and Udaipur, India

Authors: Darshini Mahadevia, Kanika Gounder, Saumya Lathia

Abstract:

Cities, an outcome of natural growth and migration, are ever-expanding due to urban sprawl. In the Global South, urban areas are experiencing a switch from public transport to private vehicles, coupled with intensified urban agglomeration, leading to frequent longer commutes by automobiles. This increase in travel distance and motorized vehicle kilometres lead to unsustainable cities. To achieve the nationally pledged GHG emission mitigation goal, the government is prioritizing a modal shift to low-carbon transport modes like mass transit and paratransit. Mixed land-use and urban densification are crucial for the economic viability of these projects. Informed by desktop assessment of mobility plans and in-person primary surveys, the paper explores the challenges around urban densification and travel patterns in two Indian cities of contrasting nature- Surat, a metropolitan industrial city with a 5.9 million population and a very compact urban form, and Udaipur, a heritage city attracting large international tourists’ footfall, with limited scope for further densification. Dense, mixed-use urban areas often improve access to basic services and economic opportunities by reducing distances and enabling people who don't own personal vehicles to reach them on foot/ cycle. But residents travelling on different modes end up contributing to similar trip lengths, highlighting the non-uniform distribution of land-uses and lack of planned transport infrastructure in the city and the urban-peri urban networks. Additionally, it is imperative to manage these densities to reduce negative externalities like congestion, air/noise pollution, lack of public spaces, loss of livelihood, etc. The study presents a comparison of the relationship between transport systems with the built form in both cities. The paper concludes with recommendations for managing densities in urban areas along with promoting low-carbon transport choices like improved non-motorized transport and public transport infrastructure and minimizing personal vehicle usage in the Global South.

Keywords: India, low-carbon transport, travel behaviour, trip length, urban densification

Procedia PDF Downloads 212
416 MIMO Radar-Based System for Structural Health Monitoring and Geophysical Applications

Authors: Davide D’Aria, Paolo Falcone, Luigi Maggi, Aldo Cero, Giovanni Amoroso

Abstract:

The paper presents a methodology for real-time structural health monitoring and geophysical applications. The key elements of the system are a high performance MIMO RADAR sensor, an optical camera and a dedicated set of software algorithms encompassing interferometry, tomography and photogrammetry. The MIMO Radar sensor proposed in this work, provides an extremely high sensitivity to displacements making the system able to react to tiny deformations (up to tens of microns) with a time scale which spans from milliseconds to hours. The MIMO feature of the system makes the system capable of providing a set of two-dimensional images of the observed scene, each mapped on the azimuth-range directions with noticeably resolution in both the dimensions and with an outstanding repetition rate. The back-scattered energy, which is distributed in the 3D space, is projected on a 2D plane, where each pixel has as coordinates the Line-Of-Sight distance and the cross-range azimuthal angle. At the same time, the high performing processing unit allows to sense the observed scene with remarkable refresh periods (up to milliseconds), thus opening the way for combined static and dynamic structural health monitoring. Thanks to the smart TX/RX antenna array layout, the MIMO data can be processed through a tomographic approach to reconstruct the three-dimensional map of the observed scene. This 3D point cloud is then accurately mapped on a 2D digital optical image through photogrammetric techniques, allowing for easy and straightforward interpretations of the measurements. Once the three-dimensional image is reconstructed, a 'repeat-pass' interferometric approach is exploited to provide the user of the system with high frequency three-dimensional motion/vibration estimation of each point of the reconstructed image. At this stage, the methodology leverages consolidated atmospheric correction algorithms to provide reliable displacement and vibration measurements.

Keywords: interferometry, MIMO RADAR, SAR, tomography

Procedia PDF Downloads 186
415 Designing and Implementing a Tourist-Guide Web Service Based on Volunteer Geographic Information Using Open-Source Technologies

Authors: Javad Sadidi, Ehsan Babaei, Hani Rezayan

Abstract:

The advent of web 2.0 offers the possibility of scaling down the costs of data collection and mapping, specifically if the process is done by volunteers. Every volunteer can be thought of as a free and ubiquitous sensor collecting spatial, descriptive, and multimedia data for tourist services. The lack of large-scale information, such as real-time climate and weather conditions, population density, and other related data, is one of the important challenges tourists face in developing countries when trying to make the best decision about the time and place of travel. The current research aims to design and implement a spatiotemporal web map service using volunteer-submitted data. The service acts as a tourist-guide service in which tourists can search for places of interest based on their planned time of travel. To design the service, a three-tier architecture consisting of data, logical processing, and presentation tiers has been used. For implementation, open-source software, client- and server-side programming languages (such as OpenLayers2, AJAX, and PHP), GeoServer as a map server, and the Web Feature Service (WFS) standard have been used. The result is two distinct browser-based services: one for submitting spatial, descriptive, and multimedia volunteer data, and another for tourists and local officials. Local officials confirm the veracity of the volunteer-submitted information. In the tourist interface, a spatiotemporal search engine has been designed to enable tourists to find a tourist place based on province, city, and location at a specific time of interest. Implementing the tourist-guide service in this way means that current tourists participate in a free data collection and sharing process for future tourists, data can be shared and accessed by everyone in real time, blind selection of a travel destination is avoided, and the cost of providing such services decreases significantly.
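
As an illustration of how a client might query such a service, the sketch below sends a standard WFS GetFeature request to a GeoServer instance and filters the returned volunteer-submitted places by planned travel month; the endpoint URL, layer name, and attribute names are hypothetical placeholders, not the project's actual configuration.

```python
# Minimal sketch: querying volunteer-submitted tourist places from a GeoServer WFS.
import requests

params = {
    "service": "WFS",
    "version": "1.1.0",
    "request": "GetFeature",
    "typeName": "tourism:volunteer_places",   # hypothetical layer published in GeoServer
    "outputFormat": "application/json",
}
resp = requests.get("http://localhost:8080/geoserver/wfs", params=params, timeout=30)
features = resp.json()["features"]

# Keep only places whose volunteer-reported "best_months" attribute includes April.
april_places = [f for f in features
                if "april" in f["properties"].get("best_months", "").lower()]
print(len(april_places), "places suggested for April")
```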

Keywords: VGI, tourism, spatiotemporal, browser-based, web mapping

Procedia PDF Downloads 89
414 Using Serious Games to Integrate the Potential of Mass Customization into the Fuzzy Front-End of New Product Development

Authors: Michael N. O'Sullivan, Con Sheahan

Abstract:

Mass customization is the idea of offering custom products or services to satisfy the needs of each individual customer while maintaining the efficiency of mass production. Technologies like 3D printing and artificial intelligence have many start-ups hoping to capitalize on this dream of creating personalized products at an affordable price, and well established companies scrambling to innovate and maintain their market share. However, the majority of them are failing as they struggle to understand one key question – where does customization make sense? Customization and personalization only make sense where the value of the perceived benefit outweighs the cost to implement it. In other words, will people pay for it? Looking at the Kano Model makes it clear that it depends on the product. In products where customization is an inherent need, like prosthetics, mass customization technologies can be highly beneficial. However, for products that already sell as a standard, like headphones, offering customization is likely only an added bonus, and so the product development team must figure out if the customers’ perception of the added value of this feature will outweigh its premium price tag. This can be done through the use of a ‘serious game,’ whereby potential customers are given a limited budget to collaboratively buy and bid on potential features of the product before it is developed. If the group choose to buy customization over other features, then the product development team should implement it into their design. If not, the team should prioritize the features on which the customers have spent their budget. The level of customization purchased can also be translated to an appropriate production method, for example, the most expensive type of customization would likely be free-form design and could be achieved through digital fabrication, while a lower level could be achieved through short batch production. Twenty-five teams of final year students from design, engineering, construction and technology tested this methodology when bringing a product from concept through to production specification, and found that it allowed them to confidently decide what level of customization, if any, would be worth offering for their product, and what would be the best method of producing it. They also found that the discussion and negotiations between players during the game led to invaluable insights, and often decided to play a second game where they offered customers the option to buy the various customization ideas that had been discussed during the first game.

Keywords: Kano model, mass customization, new product development, serious game

Procedia PDF Downloads 129
413 A Study on the Effect of Design Factors of Slim Keyboard’s Tactile Feedback

Authors: Kai-Chieh Lin, Chih-Fu Wu, Hsiang Ling Hsu, Yung-Hsiang Tu, Chia-Chen Wu

Abstract:

With the rapid development of computer technology, the design of computers and keyboards moves towards a trend of slimness. The change in mobile input devices directly influences users' behavior. Although multi-touch applications allow entering text through a virtual keyboard, the performance, feedback, and comfort of the technology are inferior to a traditional keyboard, and while manufacturers have launched mobile touch keyboards and projection keyboards, their performance has not been satisfactory. Therefore, this study examined the design factors of slim pressure-sensitive keyboards. The factors were evaluated with an objective evaluation (accuracy and speed) and a subjective evaluation (operability, recognition, feedback, and difficulty) depending on the shape (circle, rectangle, and L-shaped), thickness (flat, 3 mm, and 6 mm), and actuation force (35±10 g, 60±10 g, and 85±10 g) of the keys. Moreover, MANOVA and the Taguchi method (signal-to-noise ratios) were applied to find the optimal level of each design factor. The research participants were divided into two groups according to their typing speed (threshold of 30 words/minute). Considering the multitude of variables and levels, the experiments were implemented using a fractional factorial design. A representative model of the research samples was established for input task testing. The findings of this study showed that participants with low typing speed relied primarily on vision to recognize the keys, while those with high typing speed relied on tactile feedback, which was affected by the thickness and force of the keys. In the objective and subjective evaluations, a combination of keyboard design factors (L-shaped, 3 mm, and 60±10 g) was identified as the optimal combination likely to result in higher performance and satisfaction. The learning curve was analyzed in comparison with a traditional standard keyboard to investigate the influence of user experience on keyboard operation. The results indicated that even the optimal combination provided input performance inferior to that of a standard keyboard. The results could serve as a reference for the development of related products in industry and can be applied broadly to touch devices and input interfaces with which people interact.
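
For reference, the Taguchi signal-to-noise ratio used to rank factor levels can be computed as in the sketch below; the accuracy scores are hypothetical, and the larger-the-better form is assumed since higher typing accuracy is desirable.

```python
# Minimal sketch: larger-the-better Taguchi S/N ratio, S/N = -10 * log10(mean(1 / y_i^2)),
# used to pick the best level of a design factor from repeated performance measurements.
import numpy as np

def sn_larger_is_better(y):
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y ** 2))

# Hypothetical typing-accuracy scores (%) for three key-shape levels, 4 trials each.
levels = {
    "circle":    [88, 90, 85, 87],
    "rectangle": [91, 92, 89, 90],
    "L-shaped":  [94, 95, 93, 96],
}
for name, scores in levels.items():
    print(f"{name}: S/N = {sn_larger_is_better(scores):.2f} dB")
```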

Keywords: input performance, mobile device, slim keyboard, tactile feedback

Procedia PDF Downloads 296
412 Transient Response of Elastic Structures Subjected to a Fluid Medium

Authors: Helnaz Soltani, J. N. Reddy

Abstract:

Presence of fluid medium interacting with a structure can lead to failure of the structure. Since developing efficient computational model for fluid-structure interaction (FSI) problems has broader impact to realistic problems encountered in aerospace industry, ship industry, oil and gas industry, and so on, one can find an increasing need to find a method in order to investigate the effect of fluid domain on structural response. A coupled finite element formulation of problems involving FSI issue is an accurate method to predict the response of structures in contact with a fluid medium. This study proposes a finite element approach in order to study the transient response of the structures interacting with a fluid medium. Since beam and plate are considered to be the fundamental elements of almost any structure, the developed method is applied to beams and plates benchmark problems in order to demonstrate its efficiency. The formulation is a combination of the various structure theories and the solid-fluid interface boundary condition, which is used to represent the interaction between the solid and fluid regimes. Here, three different beam theories as well as three different plate theories are considered to model the solid medium, and the Navier-Stokes equation is used as the theoretical equation governed the fluid domain. For each theory, a coupled set of equations is derived where the element matrices of both regimes are calculated by Gaussian quadrature integration. The main feature of the proposed methodology is to model the fluid domain as an added mass; the external distributed force due to the presence of the fluid. We validate the accuracy of such formulation by means of some numerical examples. Since the formulation presented in this study covers several theories in literature, the applicability of our proposed approach is independent of any structure geometry. The effect of varying parameters such as structure thickness ratio, fluid density and immersion depth, are studied using numerical simulations. The results indicate that maximum vertical deflection of the structure is affected considerably in the presence of a fluid medium.
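
The added-mass idea described above can be written in a generic semi-discrete form; the notation below is a standard textbook-style sketch rather than the authors' exact formulation, with symbols chosen here for illustration.

```latex
% Generic added-mass form of the structural equations of motion (illustrative notation):
\[
\left( \mathbf{M}_s + \mathbf{M}_a \right) \ddot{\mathbf{u}}
  + \mathbf{C}\,\dot{\mathbf{u}}
  + \mathbf{K}\,\mathbf{u}
  = \mathbf{F}_{\mathrm{ext}} + \mathbf{f}_a(t),
\]
% where M_s, C, and K are the structural mass, damping, and stiffness matrices, M_a is the
% added mass contributed by the surrounding fluid, and f_a(t) collects the remaining
% distributed fluid loading on the wetted surface.
```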

Keywords: beam and plate, finite element analysis, fluid-structure interaction, transient response

Procedia PDF Downloads 559
411 Constructing a Semi-Supervised Model for Network Intrusion Detection

Authors: Tigabu Dagne Akal

Abstract:

While advances in computer and communications technology have made the network ubiquitous, they have also rendered networked systems vulnerable to malicious attacks devised from a distance. These attacks or intrusions start with attackers infiltrating a network through a vulnerable host and then launching further attacks on the local network or Intranet. Nowadays, system administrators and network professionals can attempt to prevent such attacks by developing intrusion detection tools and systems using data mining technology. In this study, the experiments were conducted following the Knowledge Discovery in Databases (KDD) process model, which starts with the selection of datasets. The dataset used in this study was taken from the Massachusetts Institute of Technology Lincoln Laboratory. After acquisition, the data were pre-processed. The major pre-processing activities carried out for this study included filling in missing values, removing outliers, resolving inconsistencies, integrating data containing both labelled and unlabelled records, dimensionality reduction, size reduction, and data transformation activities such as discretization. A total of 21,533 intrusion records were used for training the models. For validating the performance of the selected model, a separate set of 3,397 records was used for testing. For building a predictive model for intrusion detection, the J48 decision tree and Naïve Bayes algorithms were tested as classification approaches, both with and without feature selection. The model created with the J48 decision tree algorithm using 10-fold cross-validation and default parameter values showed the best classification accuracy. The model has a prediction accuracy of 96.11% on the training dataset and 93.2% on the test dataset when classifying new instances as normal, DOS, U2R, R2L, or probe classes. The findings of this study show that data mining methods generate interesting rules that are crucial for intrusion detection and prevention in the networking industry. Future research directions are suggested for developing an applicable system in this area.
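
A comparable cross-validated comparison of the two classifiers could be sketched as follows; this is not the study's code, scikit-learn's CART decision tree is used as a stand-in for Weka's J48 (C4.5), and the CSV file and column names are placeholders for a pre-processed intrusion dataset.

```python
# Minimal sketch: 10-fold cross-validated comparison of a decision tree and Naive Bayes.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

df = pd.read_csv("intrusions.csv")                 # hypothetical pre-processed dataset
X = df.drop(columns=["label"])                     # features (already numeric/discretized)
y = df["label"]                                    # classes: normal, DOS, U2R, R2L, probe

for name, clf in [("decision tree", DecisionTreeClassifier(random_state=0)),
                  ("naive Bayes", GaussianNB())]:
    acc = cross_val_score(clf, X, y, cv=10, scoring="accuracy")
    print(f"{name}: mean 10-fold accuracy = {acc.mean():.3f}")
```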

Keywords: intrusion detection, data mining, computer science

Procedia PDF Downloads 291
410 Alternative Approach to the Machine Vision System Operating for Solving Industrial Control Issue

Authors: M. S. Nikitenko, S. A. Kizilov, D. Y. Khudonogov

Abstract:

The paper considers an approach to a machine vision operating system combined with a grid of light markers. This approach is used to solve several scientific and technical problems, such as measuring the capability of an apron feeder delivering coal from a lining return port to a conveyor when working high coal with release to a conveyor, and prototyping an obstacle detection system for an autonomous vehicle. Primary verification of a method for calculating bulk material volume using three-dimensional modelling, together with validation in laboratory conditions including calculation of relative errors, was carried out. A method for calculating the capability of an apron feeder based on a machine vision system, along with a simplified technique for three-dimensional modelling of the examined measuring area with machine vision, is offered. The proposed method allows measuring the volume of rock mass moved by an apron feeder using machine vision. This approach solves the problem of controlling the volume of coal produced by a feeder while working off high coal by longwall (lava) complexes with release to a conveyor, with accuracy sufficient for practical application. The developed mathematical apparatus for measuring feeder productivity in kg/s uses only basic mathematical functions such as addition, subtraction, multiplication, and division. This simplifies software development and expands the variety of microcontrollers and microcomputers suitable for calculating feeder capability. A feature of the obstacle detection problem is that obstacles distort the laser grid, which simplifies their detection. The paper presents algorithms for video camera image processing and for controlling an autonomous vehicle model based on an obstacle detection machine vision system. A sample fragment of obstacle detection at the moment the laser grid is distorted is demonstrated.
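
As an illustration of the basic-arithmetic calculation the abstract refers to (a sketch under stated assumptions, not the authors' formula), the snippet below integrates a height profile recovered from the light-marker grid into a volume and converts it to a mass flow in kg/s; the grid cell size, bulk density, belt speed, and measuring length are made-up values.

```python
# Sketch: feeder productivity from a machine-vision height profile,
# using only addition, subtraction, multiplication, and division.

def feeder_productivity(heights_m, cell_area_m2, bulk_density_kg_m3,
                        belt_speed_m_s, measuring_length_m):
    """Mass flow (kg/s) of material passing the measuring area."""
    # Volume above the empty-feeder reference plane: sum of column volumes.
    volume_m3 = 0.0
    for h in heights_m:                          # one height per grid cell
        volume_m3 = volume_m3 + h * cell_area_m2
    mass_kg = volume_m3 * bulk_density_kg_m3
    # Time for the measured slab to pass a fixed cross-section.
    transit_time_s = measuring_length_m / belt_speed_m_s
    return mass_kg / transit_time_s

# Example with illustrative numbers: 0.05 m x 0.05 m cells, coal bulk density ~900 kg/m3.
heights = [0.12, 0.15, 0.10, 0.14, 0.11, 0.13]
print(feeder_productivity(heights, 0.05 * 0.05, 900.0, 0.8, 0.3))
```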

Keywords: machine vision, machine vision operating system, light markers, measuring capability, obstacle detection system, autonomous transport

Procedia PDF Downloads 105
409 Identifying, Reporting and Preventing Medical Errors Among Nurses Working in Critical Care Units At Kenyatta National Hospital, Kenya: Closing the Gap Between Attitude and Practice

Authors: Jared Abuga, Wesley Too

Abstract:

Medical error is the third leading cause of death in the US, with approximately 98,000 deaths occurring every year as a result of medical errors. The global financial burden of medication errors is roughly USD 42 billion. Medication errors may lead to at least one death daily and injure roughly 1.3 million people every year. Medical error reporting is essential to creating a culture of accountability in our healthcare system. Studies of healthcare workers' attitudes and practices in reporting medical errors have shown that the major factors in under-reporting include work stress and fear of the medico-legal consequences of disclosing an error. Further, the majority believed that increased reporting of medical errors would contribute to a better system. Most hospitals depend on nurses to discover medication errors because they are considered to be the sources of these errors, whether as contributors or mere observers; consequently, nurses' perceptions of medication errors and of what needs to be done are vital to reducing the incidence of medication errors. We sought to explore nurses' knowledge of medical errors and the factors affecting or hindering the reporting of medical errors among nurses working at the emergency unit, KNH. Critical care nurses face many barriers to completing incident reports on medication errors. One of the barriers that contribute to under-reporting is a lack of education and/or knowledge regarding medication errors and the reporting process. This study therefore sought to determine the availability and use of reporting systems for medical errors in the critical care unit. It also sought to establish nurses' perceptions regarding medical errors and reporting, and to document factors facilitating timely identification and reporting of medical errors in critical care settings. Methods: The study used a cross-sectional design to collect data from 76 critical care nurses at Kenyatta Teaching & Research National Referral Hospital, Kenya. Data analysis is ongoing. By October 2022, we will have the analysis, results, discussion, and recommendations of the study ready for the conference in 2023.

Keywords: errors, medical, Kenya, nurses, safety

Procedia PDF Downloads 234
408 A Virtual Set-Up to Evaluate Augmented Reality Effect on Simulated Driving

Authors: Alicia Yanadira Nava Fuentes, Ilse Cervantes Camacho, Amadeo José Argüelles Cruz, Ana María Balboa Verduzco

Abstract:

Augmented reality promises to be part of future driving: its immersive technology can show directions and maps and highlight important places with graphic elements when the driver requires the information. On the other hand, driving is a multitasking activity and, for some people, a complex one in which situations commonly arise that demand the driver's immediate attention and decisions that help avoid accidents. The main aim of the project is therefore to instrument a platform with biometric sensors that allows evaluating driving performance under the influence of augmented reality devices and detecting the drivers' level of attention, since it is important to know the effect such devices produce. In this study, the physiological sensors EPOC X (EEG), ECG06 PRO, and EMG Myoware are integrated into the driving test platform together with a Logitech G29 steering wheel and the simulation software City Car Driving, in which the level of traffic and the number of pedestrians within the simulation can be controlled, providing driver interaction in real mode; data acquisition for storage is achieved through an MSP430 microcontroller. The sensors produce continuous analog signals that require conditioning: a signal amplifier is incorporated because the acquired signals have a sensitive range of 1.25 mm/mV, and filtering eliminates unwanted frequency bands so that the signal is interpretable and noise-free before being converted from analog to digital for analysis of the drivers' physiological signals; these values are stored in a database. Based on this compilation, we extract signal features and implement k-nearest neighbor (k-NN) and decision tree classification methods that enable the study of the data, the identification of patterns, and the determination, by classification, of the different effects of augmented reality on drivers. The expected results of this project include a test platform instrumented with biometric sensors for data acquisition during driving and a database with the variables required to determine the effect caused by augmented reality on people in simulated driving.
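
A minimal sketch of the classification step, assuming simple time-domain features over fixed windows and a binary attention label; the feature set, window length, sampling rate, and placeholder labels below are illustrative assumptions rather than the project's actual pipeline.

```python
# Sketch: windowed time-domain features from a physiological trace fed to k-NN.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

def window_features(signal, fs, win_s=2.0):
    """Mean, standard deviation, and RMS per non-overlapping window."""
    n = int(fs * win_s)
    windows = [signal[i:i + n] for i in range(0, len(signal) - n + 1, n)]
    return np.array([[w.mean(), w.std(), np.sqrt(np.mean(w ** 2))] for w in windows])

fs = 250                                         # assumed sampling rate (Hz)
rng = np.random.default_rng(0)
ecg = rng.normal(size=fs * 120)                  # placeholder for a recorded ECG trace
X = window_features(ecg, fs)
y = rng.integers(0, 2, size=len(X))              # placeholder attention labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
print("k-NN accuracy:", knn.score(X_test, y_test))
```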

Keywords: augmented reality, driving, physiological signals, test platform

Procedia PDF Downloads 133
407 Carbon Based Wearable Patch Devices for Real-Time Electrocardiography Monitoring

Authors: Hachul Jung, Ahee Kim, Sanghoon Lee, Dahye Kwon, Songwoo Yoon, Jinhee Moon

Abstract:

We fabricated a wearable patch device including a novel patch-type flexible dry electrode based on carbon nanofibers (CNFs) and a silicone-based elastomer (MED 6215) for real-time ECG monitoring. There are many methods to make a flexible conductive polymer by mixing in metal or carbon-based nanoparticles. In this study, CNFs are selected as the conductive nanoparticles because carbon nanotubes (CNTs) are more difficult to disperse uniformly in the elastomer than CNFs, and silver nanowires are relatively expensive and easily oxidized in air. The wearable patch is composed of two parts: a dry electrode part for recording biosignals and a sticky patch part for mounting on the skin. The dry electrode parts were made by mixing with a vortexer and baking in a prepared mold. To optimize electrical performance and the uniformity of dispersion, we developed a unique mixing and baking process. Secondly, the sticky patch parts were made by spin-coating a soft skin adhesive, patterning it, and detaching it from a smooth-surfaced substrate. In this process, the attachment and detachment strengths of the sticky patch are measured and optimized using a monitoring system. The assembled patch is flexible, stretchable, easily mounted on the skin, and directly connectable to the system. To evaluate the electrical characteristics and ECG (electrocardiography) recording performance, the wearable patch was tested with varying CNF concentrations and dry electrode thicknesses. The results show that CNF concentration and dry electrode thickness are important variables for obtaining high-quality ECG signals without incidental artifacts. A cytotoxicity test was conducted to demonstrate biocompatibility, and a long-term wearing test showed no skin reactions such as itching or erythema. To minimize motion artifacts and line noise, we built a customized wireless, lightweight data acquisition system. ECG signals measured with this system are stable and are successfully monitored in real time. In summary, the fabricated wearable patch devices can readily be used for real-time ECG monitoring.
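
For readers interested in the digital side of the signal conditioning mentioned above, the following sketch (not the authors' acquisition firmware) applies a band-pass filter for baseline wander and high-frequency noise plus a notch filter for mains (line) interference to an ECG trace; the sampling rate, cutoff frequencies, and 60 Hz mains frequency are assumptions.

```python
# Sketch: band-pass plus notch conditioning of an ECG stream with SciPy.
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch

fs = 500.0                                        # assumed sampling rate (Hz)

def condition_ecg(raw, fs, band=(0.5, 40.0), mains=60.0):
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    x = filtfilt(b, a, raw)                       # remove baseline wander and HF noise
    bn, an = iirnotch(mains / (fs / 2), Q=30.0)
    return filtfilt(bn, an, x)                    # suppress power-line interference

t = np.arange(0, 10, 1 / fs)
raw = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.sin(2 * np.pi * 60 * t)  # toy trace + 60 Hz hum
clean = condition_ecg(raw, fs)
```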

Keywords: carbon nanofibers, ECG monitoring, flexible dry electrode, wearable patch

Procedia PDF Downloads 181
406 Malignant Ovarian Cancer Ascites Confers Platinum Chemoresistance to Ovarian Cancer Cells: A Combination Treatment with Crizotinib and 2-Hydroxyestradiol Restores Platinum Sensitivity

Authors: Yifat Koren Carmi, Abed Agbarya, Hazem Khamaisi, Raymond Farah, Yelena Shechtman, Roman Korobochka, Jacob Gopas, Jamal Mahajna

Abstract:

Ovarian cancer (OC), the second most common form of gynecological malignancy, has a poor prognosis and is frequently identified at a late stage. The recommended treatment for OC typically includes platinum-based chemotherapy, such as carboplatin. Nonetheless, OC treatment has proven challenging due to toxicity and the development of acquired resistance to therapy. Chemoresistance is a significant obstacle to a long-lasting response in OC patients and is believed to arise from alterations within the cancer cells as well as within the tumor microenvironment (TME). Malignant ascites is a presenting feature in more than one-third of OC patients. It serves as a reservoir for a complex mixture of soluble factors, metabolites, and cellular components, providing a pro-inflammatory and tumor-promoting microenvironment for the OC cells. Malignant ascites is also associated with metastasis and chemoresistance. In an attempt to elucidate the role of the TME in OC chemoresistance, we monitored the ability of soluble factors derived from ascites fluids to affect the platinum sensitivity of OC cells. This research compared ascites fluids from non-malignant cirrhotic patients with those from OC patients in terms of their ability to alter the platinum sensitivity of OC cells. Our findings indicated that exposure to OC ascites induces platinum chemoresistance in OC cells in 11 out of 13 cases (85%). In contrast, 75% of cirrhosis ascites samples (3 out of 4) failed to confer platinum chemoresistance on OC cells. Cytokine array analysis revealed that IL-6, and to a lesser extent HGF, were enriched in OC ascites, whereas IL-22 was enriched in cirrhosis ascites. Pharmaceutical inhibitors targeting the IL-6/JAK signaling pathway were mildly effective in overcoming the platinum chemoresistance induced by malignant ascites. In contrast, Crizotinib, an HGF/c-MET inhibitor, and 2-hydroxyestradiol (2HE2) were effective in restoring platinum sensitivity to OC cells. Our findings demonstrate the importance of OC ascites in supporting platinum chemoresistance, as well as the potential of a combination therapy with Crizotinib and the estradiol metabolite 2HE2 to restore the chemosensitivity of OC cells.

Keywords: ovarian cancer, platinum chemoresistance, malignant ascites, tumor microenvironment, IL-6, 2-hydroxyestradiol, HGF, crizotinib

Procedia PDF Downloads 56
405 Risk-Sharing Financing of Islamic Banks: Better Shielded against Interest Rate Risk

Authors: Mirzet SeHo, Alaa Alaabed, Mansur Masih

Abstract:

In theory, risk-sharing-based financing (RSF) is considered a cornerstone of Islamic finance and is argued to render Islamic banks more resilient to shocks. In practice, however, this feature of Islamic financial products is almost negligible. Instead, debt-based instruments with conventional-like features have overwhelmed the nascent industry. In addition, the framework of present-day economic, regulatory, and financial reality inevitably exposes Islamic banks in dual banking systems to the problems of conventional banks, including, but not limited to, interest rate risk. Empirical evidence has, thus far, confirmed such exposures, despite Islamic banks’ interest-free operations. This study applies system GMM in modeling the determinants of RSF and finds that RSF is insensitive to changes in interest rates. Hence, our results provide support for the “stability” view of risk-sharing-based financing. This suggests RSF as the way forward for risk management at Islamic banks, in the absence of widely acceptable Shariah-compliant hedging instruments. Further support for the stability view is given by evidence of counter-cyclicality: unlike debt-based lending, which inflates artificial asset bubbles through credit expansion during the upswing of business cycles, RSF is negatively related to GDP growth. Our results also imply a significantly strong relationship between risk-sharing deposits and RSF. However, the pass-through of these deposits to RSF is economically low: only about 40% of risk-sharing deposits are channeled to risk-sharing financing. This raises questions about the validity of the industry’s claim that depositors accustomed to conventional banking shun risk sharing, and it signals potential for better balance sheet management at Islamic banks. Overall, our findings suggest that, on the one hand, Islamic banks can gain ‘independence’ from conventional banks and interest rates through risk-sharing products, the potential for which is enormous. On the other hand, RSF could enable policy makers to improve systemic stability and restrain excessive credit expansion through its countercyclical features.
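
For readers unfamiliar with the estimator, a dynamic panel specification of the kind typically estimated with system GMM might look as follows; the abstract does not list the exact regressors, so the variables shown (lagged RSF, the interest rate, GDP growth, and risk-sharing deposits) and the notation are illustrative assumptions only.

```latex
% Illustrative dynamic panel model for bank i in year t (assumed variables):
% lagged RSF, interest rate r, GDP growth, risk-sharing deposits DEP,
% bank fixed effect eta_i, idiosyncratic error epsilon_it.
\begin{equation}
  \mathrm{RSF}_{it} = \alpha\, \mathrm{RSF}_{i,t-1}
    + \beta_1\, r_{it}
    + \beta_2\, \Delta \mathrm{GDP}_{it}
    + \beta_3\, \mathrm{DEP}_{it}
    + \eta_i + \varepsilon_{it}
\end{equation}
```

System GMM then instruments the lagged dependent variable with deeper lags in levels and first differences, which is what allows a consistent estimate of the persistence parameter despite the bank fixed effect.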

Keywords: Islamic banks, risk-sharing, financing, interest rate, dynamic system GMM

Procedia PDF Downloads 313