Search results for: flexible thin structure
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9437

5387 From Shallow Semantic Representation to Deeper One: Verb Decomposition Approach

Authors: Aliaksandr Huminski

Abstract:

Semantic Role Labeling (SRL), a shallow semantic parsing approach, consists of recognizing and labeling the arguments of a verb in a sentence. Verb participants are linked with specific semantic roles (Agent, Patient, Instrument, Location, etc.). SRL can thus answer key questions such as ‘Who’, ‘When’, ‘What’, and ‘Where’ in a text, and it is widely applied in dialog systems, question answering, named entity recognition, information retrieval, and other fields of NLP. However, SRL has the following flaw: two sentences with identical (or almost identical) meaning can have different semantic role structures. Consider two sentences: (1) John put butter on the bread. (2) John buttered the bread. The SRL analyses of (1) and (2) differ significantly: for the verb put in (1) the structure is [Agent + Patient + Goal], but for the verb butter in (2) it is [Agent + Goal]. This happens because of one of the most interesting and intriguing features of a verb: its ability to capture participants, as in the case of the verb butter, or their features, as in the case of the verb drink, where the participant’s feature of being liquid is shared with the verb. This capture amounts to a total fusion of meaning and cannot be decomposed in a direct way (in contrast with compound verbs like babysit or breastfeed). From this perspective, SRL is too shallow to represent semantic structure. If the key point of a semantic representation is the opportunity to use it for making inferences and finding hidden reasons, it is assumed by default that two different but semantically identical sentences must have the same semantic structure; otherwise, we will draw different inferences from the same meaning. To overcome the above-mentioned flaw, the following approach is suggested.
Assume that: P is a participant of a relation; F is a feature of a participant; Vcp is a verb that captures a participant; Vcf is a verb that captures a feature of a participant; and Vpr is a primitive verb, i.e., a verb that does not capture any participant and represents only a relation. In other words, a primitive verb is a verb whose meaning does not include meanings from its surroundings. Then Vcp and Vcf can be decomposed as: Vcp = Vpr + P; Vcf = Vpr + F. If all Vcp and Vcf are represented this way, the primitive verbs Vpr can be considered a canonical form for SRL. As a result, there will be no hidden participants captured by a verb, since all participants will be explicitly unfolded. An obvious example of a Vpr is the verb go, which represents pure movement; the verb drink can then be represented as man-made movement of a liquid in a specific direction. Extracting and using primitive verbs for SRL creates a canonical representation that is unique for semantically identical sentences. This leads to the unification of semantic representation and resolves the critical flaw of SRL.
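As an illustration, the Vcp = Vpr + P unfolding can be sketched in code. The mini-lexicon below is an invented toy for the butter/drink examples, not an actual SRL resource:

```python
# Sketch of Vcp = Vpr + P and Vcf = Vpr + F decomposition.
# The lexicon entries below are illustrative assumptions, not a real resource.
LEXICON = {
    "butter": {"primitive": "put", "captured_participant": "butter"},
    "drink":  {"primitive": "go", "captured_feature": "liquid"},
}

def decompose(verb, roles):
    """Unfold a capturing verb into its primitive form plus explicit roles."""
    entry = LEXICON.get(verb)
    if entry is None:                      # already a primitive verb (Vpr)
        return verb, dict(roles)
    unfolded = dict(roles)
    if "captured_participant" in entry:    # Vcp = Vpr + P
        unfolded["Patient"] = entry["captured_participant"]
    if "captured_feature" in entry:        # Vcf = Vpr + F
        unfolded["PatientFeature"] = entry["captured_feature"]
    return entry["primitive"], unfolded

# (1) "John put butter on the bread"  vs  (2) "John buttered the bread"
v1, r1 = decompose("put", {"Agent": "John", "Patient": "butter", "Goal": "bread"})
v2, r2 = decompose("butter", {"Agent": "John", "Goal": "bread"})
assert (v1, r1) == (v2, r2)   # both sentences share one canonical structure
```

After unfolding, the two semantically identical sentences map to the same primitive verb and the same role set, which is the canonical form the abstract argues for.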

Keywords: decomposition, labeling, primitive verbs, semantic roles

Procedia PDF Downloads 367
5386 Basic Calibration and Normalization Techniques for Time Domain Reflectometry Measurements

Authors: Shagufta Tabassum

Abstract:

The study of the dielectric properties of a binary mixture of liquids is very useful for understanding the liquid structure, molecular interactions, dynamics, and kinematics of the mixture. Time-domain reflectometry (TDR) is a powerful tool for studying the cooperative and molecular dynamics of H-bonded systems. In this paper, we discuss the basic calibration and normalization procedures for time-domain reflectometry measurements. Our approach is to explain the different types of errors that occur during TDR measurements and how these errors can be eliminated or minimized.

Keywords: time domain reflectometry measurement technique, cable and connector loss, oscilloscope loss, normalization technique

Procedia PDF Downloads 206
5385 Matrix-Based Linear Analysis of Switched Reluctance Generator with Optimum Pole Angles Determination

Authors: Walid A. M. Ghoneim, Hamdy A. Ashour, Asmaa E. Abdo

Abstract:

In this paper, a linear analysis of a Switched Reluctance Generator (SRG) model is applied to the most common configurations (4/2, 6/4, and 8/6), for both conventional short-pitched and fully-pitched designs, in order to determine the optimum stator/rotor pole angles at which the maximum output voltage is generated per unit excitation current. The study focuses on SRG analysis and design as a proposed solution for renewable energy applications, such as wind energy conversion systems. The worldwide drive to develop renewable energy technologies through dedicated scientific research, with its positive impact on economy and environment, was the motive behind this study. In addition, supply problems with the rare-earth metals used in permanent magnets, caused by mining limitations, export bans by top producers, and environmental restrictions, limit the availability of materials for manufacturing rotating machines. This challenge gave the authors the opportunity to study, analyze, and determine the optimum design of the SRG, which has the benefit of being free of permanent magnets and rotor windings, offers a flexible control system, and is compatible with any application that requires variable-speed operation. Moreover, the SRG has proved to be very efficient and reliable in both low-speed and high-speed applications. The linear analysis was performed using MATLAB simulations based on the modified generalized matrix approach for the Switched Reluctance Machine (SRM). About 90 different pole-angle combinations and excitation patterns were simulated in this study, and the optimum output results for each case were recorded and presented in detail. The procedure is applicable to any SRG configuration, dimension, and excitation pattern. The results of this study support the 4-phase 8/6 fully-pitched SRG as the optimum configuration for the same machine dimensions at the same angular speed.
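The selection step, sweeping a grid of pole-angle combinations and keeping the pair that maximizes output voltage per unit excitation current, can be sketched as follows. The figure-of-merit function below is a hypothetical placeholder, not the authors' matrix-based SRG model:

```python
# Illustrative sweep over stator/rotor pole-angle combinations for an SRG.
# The voltage model is a stand-in placeholder peaking near (22.5, 24.0) deg;
# a real study would evaluate the machine's inductance matrix at each angle.
import math

def output_voltage_per_amp(beta_s, beta_r):
    """Hypothetical figure of merit for pole angles beta_s, beta_r (degrees)."""
    return math.exp(-((beta_s - 22.5) ** 2 + (beta_r - 24.0) ** 2) / 50.0)

# Grid of candidate angle combinations (integer degrees), as in a sweep of
# roughly a hundred cases; record the combination with the best output.
best = max(
    ((bs, br) for bs in range(15, 31) for br in range(16, 32)),
    key=lambda angles: output_voltage_per_amp(*angles),
)
print("optimum (stator, rotor) pole angles:", best)
```

The same loop structure applies whatever the underlying electromagnetic model is; only `output_voltage_per_amp` changes.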

Keywords: generalized matrix approach, linear analysis, renewable applications, switched reluctance generator

Procedia PDF Downloads 198
5384 The Road Ahead: Merging Human Cyber Security Expertise with Generative AI

Authors: Brennan Lodge

Abstract:

Amidst a complex regulatory landscape, Retrieval Augmented Generation (RAG) emerges as a transformative tool for Governance, Risk, and Compliance (GRC) officers. This paper details the application of RAG in synthesizing Large Language Models (LLMs) with external knowledge bases, offering GRC professionals an advanced means to adapt to rapid changes in compliance requirements. While the development of standalone LLMs is exciting, such models have their downsides: LLMs cannot easily expand or revise their memory, cannot straightforwardly provide insight into their predictions, and may produce “hallucinations.” Leveraging a pre-trained seq2seq transformer and a dense vector index of domain-specific data, this approach integrates real-time data retrieval into the generative process, enabling gap analysis and the dynamic generation of compliance and risk management content. We delve into the mechanics of RAG, focusing on its dual structure, which pairs the parametric knowledge contained within the transformer model with non-parametric data extracted from an updatable corpus. This hybrid model enhances decision-making through context-rich insights drawn from the most current and relevant information, enabling GRC officers to maintain a proactive compliance stance. Our methodology aligns with the latest advances in neural network fine-tuning, providing a granular, token-level application of retrieved information to inform and generate compliance narratives. By employing RAG, we exhibit a scalable solution that can adapt to novel regulatory challenges and cybersecurity threats, offering GRC officers a robust, predictive tool that augments their expertise. The granular application of RAG’s dual structure not only improves compliance and risk management protocols but also informs the development of compliance narratives with pinpoint accuracy. It underscores AI’s emerging role in strategic risk mitigation and proactive policy formation, positioning GRC officers to anticipate and navigate the complexities of regulatory evolution confidently.
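The retrieve-then-generate loop of the dual structure can be sketched minimally. The corpus passages, embedding vectors, and the `generate` step below are toy stand-ins: a real system would use a trained encoder for the dense index and a seq2seq transformer for generation:

```python
# Minimal sketch of the RAG loop: dense retrieval over a non-parametric,
# updatable corpus, then generation conditioned on the retrieved context.
import math

# Non-parametric side: (passage, toy embedding) pairs; contents illustrative.
CORPUS = [
    ("GDPR breach notification is due within 72 hours.", [0.9, 0.1, 0.0]),
    ("SOX requires retention of audit records for 7 years.", [0.1, 0.9, 0.0]),
    ("NIST CSF recommends continuous risk assessment.", [0.2, 0.2, 0.9]),
]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def retrieve(query_vec, k=1):
    """Dense retrieval: top-k passages by cosine similarity to the query."""
    ranked = sorted(CORPUS, key=lambda d: cosine(query_vec, d[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

def generate(question, passages):
    """Placeholder for the parametric seq2seq model, grounded in retrieval."""
    return f"Q: {question}\nGrounding: {passages[0]}"

# A query embedded near the GDPR passage retrieves it as grounding context.
print(generate("When must we report a breach?", retrieve([1.0, 0.0, 0.0])))
```

Because the corpus sits outside the model, updating compliance knowledge means appending to `CORPUS`, not retraining, which is the property the paper leans on for fast-moving regulation.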

Keywords: cybersecurity, gen AI, retrieval augmented generation, cybersecurity defense strategies

Procedia PDF Downloads 95
5383 Inventive Synthesis and Characterization of a Cesium Molybdate Compound: CsBi(MoO4)2

Authors: Gülşah Çelik Gül, Figen Kurtuluş

Abstract:

Cesium molybdates with the general formula CsMIII(MoO4)2, where MIII = Bi, Dy, Pr, Er, exhibit rich polymorphism and crystallize in a layered structure. These properties have motivated intensive studies of cesium molybdates. CsBi(MoO4)2 was synthesized by a microwave method using cesium sulphate, bismuth oxide, and molybdenum(VI) oxide in an appropriate molar ratio. Characterization was carried out by X-ray diffraction (XRD), Fourier transform infrared (FTIR) spectroscopy, scanning electron microscopy/energy dispersive spectroscopy (SEM/EDS), and thermogravimetric/differential thermal analysis (TG/DTA).

Keywords: cesium bismuth dimolybdate, microwave synthesis, powder x-ray diffraction, rare earth dimolybdates

Procedia PDF Downloads 518
5382 Dense and Quality Urban Living: A Comparative Study on Architectural Solutions in the European City

Authors: Flavia Magliacani

Abstract:

The urbanization of recent decades and the resulting urban growth entail problems for both environmental and economic sustainability. From this perspective, sustainable settlement development requires limiting the horizontal spread of the existing urban structure in favor of greater concentration. Hence, new stratifications of the city fabric and architectural strategies ensuring high-density settlement models are possible solutions. However, although increasing housing density is necessary, it is not sufficient: guaranteeing the quality of living is equally essential. Meeting this objective brings many other factors to light, namely the relationship between private and public spaces, proximity to services, accessibility of public transport, local lifestyle habits, and social needs. Therefore, how can both quality and density be safeguarded in human habitats? The present paper attempts to answer this main research question by addressing several sub-questions: Which architectural types meet the dual need for urban density and housing quality? Which project criteria should good design practices take into consideration? What principles are desirable for future planning? The research analyses different architectural responses adopted in four European cities: Paris, Lyon, Rotterdam, and Amsterdam. In particular, it develops a qualitative and comparative study of two architectural solutions that integrate housing density and quality living: on the one hand, the so-called 'self-contained city' model; on the other, the French 'Habitat Dense Individualisé' model. The structure of the paper is as follows: the first part develops a qualitative evaluation of some case studies, emblematic examples of the two architectural models mentioned above; the second part compares the chosen case studies; finally, some conclusions are drawn.
The methodological approach therefore combines qualitative and comparative research. Parameters are defined in order to highlight the potential and the critical aspects of each model in light of an interdisciplinary view. In conclusion, the present paper aims at shedding light on design approaches that ensure the right balance between density and quality of urban living in contemporary European cities.

Keywords: density, future design, housing quality, human habitat

Procedia PDF Downloads 106
5381 Numerical Buckling of Composite Cylindrical Shells under Axial Compression Using Asymmetric Meshing Technique (AMT)

Authors: Zia R. Tahir, P. Mandal

Abstract:

This paper presents the details of a numerical study of the buckling and post-buckling behaviour of a laminated carbon fiber reinforced plastic (CFRP) thin-walled cylindrical shell under axial compression, using an asymmetric meshing technique (AMT) in ABAQUS. AMT is considered a new perturbation method that introduces a disturbance without changing the geometry, boundary conditions, or loading conditions. Asymmetric meshing affects both the predicted buckling load and the buckling mode shapes. A cylindrical shell with lay-up orientation [0°/+45°/-45°/0°], radius-to-thickness ratio (R/t) equal to 265, and length-to-radius ratio (L/R) equal to 1.5 is analysed numerically. A series of numerical simulations is carried out with symmetric and asymmetric meshing to study the effect of asymmetric meshing on the predicted buckling behaviour. The asymmetric meshing technique is employed in the axial and circumferential directions separately, using two different methods: first, by changing the shell element size and varying the total number of elements; second, by varying the shell element size while keeping the total number of elements constant. The results of the linear analysis (eigenvalue analysis) and the non-linear analysis (Riks analysis) using symmetric meshing agree well with analytical results. The results of the numerical analysis are presented in the form of a non-dimensional load factor, the ratio of the buckling load obtained with asymmetric meshing to the buckling load obtained with symmetric meshing. Using AMT, the load factor varies by about 2% for the linear eigenvalue analysis and by about 2% for the non-linear Riks analysis. The load versus end-shortening curve is the same for symmetric and asymmetric meshing in the pre-buckling range, but with asymmetric meshing the behaviour of the curve in post-buckling becomes extraordinarily complex.
The major conclusions are: different methods of AMT have a small influence on the predicted buckling load but a significant influence on the load-displacement curve in post-buckling; and AMT in the axial direction and AMT in the circumferential direction influence the buckling load and the post-buckling load-displacement curve differently.

Keywords: CFRP composite cylindrical shell, asymmetric meshing technique, primary buckling, secondary buckling, linear eigenvalue analysis, non-linear Riks analysis

Procedia PDF Downloads 353
5380 Ductility Spectrum Method for the Design and Verification of Structures

Authors: B. Chikh, L. Moussa, H. Bechtoula, Y. Mehani, A. Zerzour

Abstract:

This study presents a new method for the evaluation and design of structures, developed and illustrated by comparison with the capacity spectrum method (CSM, ATC-40). The method uses inelastic spectra and gives peak responses consistent with those obtained from nonlinear time history analysis. Hereafter, this seismic demand assessment method is called the Ductility Spectrum Method (DSM). It is used to estimate the seismic deformation of Single-Degree-Of-Freedom (SDOF) systems based on the Ductility Demand Response Spectrum (DDRS) developed by the authors.

Keywords: seismic demand, capacity, inelastic spectra, design, structure

Procedia PDF Downloads 396
5379 Airborne Particulate Matter Passive Samplers for Indoor and Outdoor Exposure Monitoring: Development and Evaluation

Authors: Kholoud Abdulaziz, Kholoud Al-Najdi, Abdullah Kadri, Konstantinos E. Kakosimos

Abstract:

The Middle East is highly affected by air pollution from anthropogenic and natural sources. There is evidence that air pollution, especially particulates, greatly affects population health. Many studies have warned of high particulate concentrations and their effects not just around industrial and construction areas but also in the immediate working and living environment. One method of studying air quality is continuous and periodic monitoring using active or passive samplers. Active monitoring and sampling are the default procedures under the European and US standards. However, in many cases they have been insufficient to accurately capture the spatial variability of air pollution, owing to the small number of installations, which in turn is attributed to the high cost of the equipment and the limited availability of users with the necessary expertise and scientific background. Passive sampling is an alternative that accounts for the limitations of the active methods: it is inexpensive, requires no continuous power supply, and is easy to assemble, which makes it a more flexible, though less accurate, option. This study aims to investigate and evaluate the use of passive sampling for particulate matter monitoring in dry tropical climates, like that of the Middle East. More specifically, a number of field measurements have been conducted, both indoors and outdoors, in Qatar, and the results have been compared with active sampling equipment and the reference methods. The samples have been analyzed to obtain particle size distributions by applying existing laboratory techniques (optical microscopy) and by exploring new approaches such as white light interferometry. New parameters of the well-established model have then been calculated in order to estimate the atmospheric concentration of particulates. Additionally, an extended literature review will search for new and better models.
The outcome of this project is also expected to have an impact on the public, as it will raise awareness about quality of life and the importance of implementing a research culture in the community.

Keywords: air pollution, passive samplers, interferometry, indoor, outdoor

Procedia PDF Downloads 398
5378 Expressivity of Word-Formation in English and Russian Advertising Lexicon

Authors: Voronina Ekaterina Borisovna

Abstract:

The problem of the expressivity of advertising lexicon is studied in this article through a comparison of the English and Russian advertising lexicons. The objects of the analysis were English and Russian advertising texts, both printed advertising texts and texts extracted from commercials. Some conclusions concerning the expressivity of advertising lexicon were drawn: expressivity can be included in the semantic structure of words or created by word-formation means, and expressivity produced by morphological derivation involves such means as derivational affixes and the models and types of word formation.

Keywords: advertising lexicon, expressivity, word-formation means, linguistics

Procedia PDF Downloads 351
5377 An Artificial Neural Network Model Based Study of Seismic Wave

Authors: Hemant Kumar, Nilendu Das

Abstract:

This study, based on an ANN structure, provides information for predicting the magnitude of future seismic events from past events. An ANN, IMD (India Meteorological Department) data, and remote sensing were used to obtain a number of parameters for calculating the magnitude that may occur in the future. A threshold was selected above the high-frequency content recorded in the area during the selected seismic activity. For human populations and local biodiversity, the task remains to obtain the right parameters relative to the frequency of impact. Throughout the study, the assumption is that predicting seismic activity is a difficult process, not least because of the parameters involved, which can be analyzed and supported through research activity.

Keywords: ANN, Bayesian class, earthquakes, IMD

Procedia PDF Downloads 125
5376 Model Predictive Controller for Pasteurization Process

Authors: Tesfaye Alamirew Dessie

Abstract:

Our study focuses on developing a Model Predictive Controller (MPC) and evaluating it against a traditional PID controller for a pasteurization process. The dynamics of the pasteurization process were obtained through system identification from experimental data. The quality of several model structures was evaluated using best fit with data validation, residual analysis, and stability analysis. The validation data fit the auto-regressive with exogenous input (ARX322) model of the pasteurization process by roughly 80.37 percent. The ARX322 model structure was then used to design the MPC and PID control schemes. A comparison of controller performance based on settling time, overshoot percentage, and stability analysis found that the MPC outperforms the PID controller on those parameters.
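The comparison metrics, overshoot percentage and settling time, can be computed from a step response as sketched below. The ARX coefficients here are illustrative, chosen only to give a stable, underdamped response; they are not the identified ARX322 pasteurizer model:

```python
# Sketch: settling-time and overshoot metrics on the unit-step response of a
# toy second-order ARX model y[t] = a1*y[t-1] + a2*y[t-2] + b1*u[t-1].
def arx_step_response(a1, a2, b1, n=200):
    """Simulate the ARX difference equation for a unit step input."""
    y = [0.0, 0.0]
    for _ in range(2, n):
        y.append(a1 * y[-1] + a2 * y[-2] + b1 * 1.0)
    return y

def overshoot_pct(y, final):
    """Peak excursion above the final value, as a percentage."""
    return 100.0 * (max(y) - final) / final

def settling_time(y, final, tol=0.02):
    """First sample after which y stays within +/- tol of the final value."""
    for t in range(len(y)):
        if all(abs(v - final) <= tol * final for v in y[t:]):
            return t
    return None

y = arx_step_response(1.5, -0.7, 0.2)   # DC gain = 0.2 / (1 - 1.5 + 0.7) = 1.0
print("overshoot %:", round(overshoot_pct(y, 1.0), 1))
print("settling sample:", settling_time(y, 1.0))
```

Running the same metrics on closed-loop responses under the MPC and PID controllers is what supports the "MPC outperforms PID" comparison in the abstract.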

Keywords: MPC, PID, ARX, pasteurization

Procedia PDF Downloads 163
5375 Success of Trabeculectomy: May Not Always Depend on Mitomycin C

Authors: Sushma Tejwani, Shoruba Dinakaran, Rupa Rokhade, K. Bhujang Shetty

Abstract:

Introduction and aim: One of the major causes of failure of trabeculectomy is fibrosis and scarring of the subconjunctival tissue around the bleb, and hence intraoperative use of anti-fibrotic agents like Mitomycin C (MMC) has become very popular. However, the long-term effects of MMC, such as thin avascular blebs, hypotony, bleb leaks, and late-onset endophthalmitis, cannot be ignored and may preclude its use in routine trabeculectomy. This study examines the outcomes of trabeculectomy with and without MMC in uncomplicated glaucoma patients. Methods: Retrospective study of a series of patients who underwent trabeculectomy, with or without cataract surgery, performed by a single surgeon in the glaucoma department of a tertiary eye care centre for primary open angle glaucoma (POAG), primary angle closure glaucoma (PACG), or pseudoexfoliation (PXF) glaucoma. Patients with secondary glaucoma, juvenile or congenital glaucoma, and patients undergoing a second trabeculectomy were excluded. Outcomes were assessed in terms of IOP control at 1 month, 6 months, and 1 year, analyzed separately for surgery with and without MMC. Success was defined as IOP < 16 mmHg on applanation tonometry. The need for medication and for postoperative 5-fluorouracil (5FU) injections or needling was also noted. Results: Eighty-nine patients' medical records were reviewed, of which 58 patients had undergone trabeculectomy without MMC and 31 with MMC. Mean age was 62.4 years (95% CI: 61-64); 34 patients were female and 55 male. MMC group (n=31): Preoperative mean IOP was 21.1 mmHg (95% CI: 17.6-24.6), and 22 patients had IOP > 16 mmHg. Three out of 33 patients were on a single medication; the rest were on multiple drugs. At 1 month (n=27), mean IOP was 12.4 mmHg (CI: 10.7-14), and 31/33 had success.
At 6 months (n=18), mean IOP was 13 mmHg (CI: 10.3-14.6), and 16/18 had a good outcome; at 1 year only 11 patients were available for follow-up, and 91% (10/11) had success. Overall, 3 patients required medication and one patient required a postoperative injection of 5FU. No-MMC group (n=58): Preoperative mean IOP was 21.9 mmHg (CI: 19.8-24.2), and 42 had IOP > 16 mmHg. Twelve out of 58 patients were on a single medication; the rest were on multiple drugs. At 1 month (n=52), mean IOP was 14.6 mmHg (CI: 13.2-15.9), and 45/58 had IOP < 16 mmHg. At 6 months (n=31), mean IOP was 13.5 mmHg (CI: 11.9-15.2), and 26/31 had success; at 1 year only 23 patients came for follow-up, and of these, 87% (20/23) had success. Overall, 1 patient required needling, 5 required 5FU injections, and 5 patients required medication. The success rates at each follow-up visit were not significantly different between the two groups. Conclusion: Intraoperative MMC may not be required in all patients undergoing trabeculectomy; patients without MMC also have fairly good outcomes in primary glaucoma.

Keywords: glaucoma filtration surgery, mitomycin C, outcomes of trabeculectomy, wound modulation

Procedia PDF Downloads 274
5374 The Challenges of Digital Crime Nowadays

Authors: Bendes Ákos

Abstract:

Digital evidence will be the most widely used type of evidence in the future. With the development of the modern world, more and more new types of crime have evolved and transformed. For this reason, it is extremely important to examine these types of crime in order to get a comprehensive picture of them and thereby support the work of the authorities. As early as 1865, with the technologies of the time, people were able to forge a picture of a quality that would pass unrecognized even today. With the help of today's technology, authorities receive a great deal of false evidence. Officials are not able to process such a large amount of data, nor do they have the necessary technical knowledge to assess the authenticity of the given evidence. The digital world has many dangers. We live in an age where we must protect everything digitally: our phones, our computers, our cars, and all the smart devices present in our personal lives. This burden falls not only on individuals: companies, the state, and public utility institutions are also forced to do the same. The training of specialists and experts is essential so that the authorities can manage the incoming digital evidence. When analyzing evidence, it is important to be able to examine it from the moment it is created, and establishing authenticity is a very important issue during official procedures. After proper acquisition of the evidence, it is essential to store it safely and handle it professionally; otherwise, it will not have sufficient probative value, and in case of doubt, the court will always decide in favor of the defendant. Doubt is one of the most common problems in the world of digital data and evidence, which is why it is extremely important to examine the issues mentioned above.
The most effective way to avoid digital crime is to prevent it, for which proper education and knowledge are essential. The aim is to present the dangers inherent in the digital world and the new types of digital crime. After a comparison of Hungarian investigative techniques with international practice, proposals for modernization will be given. Legislation is needed that is sufficiently stable yet flexible, able to follow the rapid changes in the world, and that provides an appropriate framework rather than regulating after the fact. It is also important to be able to distinguish between digital and digitized evidence, as their probative force differs greatly. The aim of the research is to promote effective international cooperation and uniform legal regulation in the world of digital crime.

Keywords: digital crime, digital law, cyber crime, international cooperation, new crimes, skepticism

Procedia PDF Downloads 63
5373 Influence of the Test Environment on the Dynamic Response of a Composite Beam

Authors: B. Moueddene, B. Labbaci, L. Missoum, R. Abdeldjebar

Abstract:

Estimating the quality of the experimental simulation of boundary conditions is one of the problems encountered when performing an experimental program. In fact, it is not easy to estimate directly the influence of these simulations on the results of an experimental investigation. The aim of this article is to evaluate the effect of boundary condition uncertainties on the structural response, using the change in the dynamic characteristics. The experimental models used, together with correlation by the Frequency Domain Assurance Criterion (FDAC), allowed an interpretation of the change in the dynamic characteristics. The application of this strategy to laminated composite structures (glass/polyester) has given satisfactory results.

Keywords: vibration, composite, damage, correlation

Procedia PDF Downloads 366
5372 Characteristics Influencing Response of a Base Isolated Building

Authors: Ounis Hadj Mohamed, Ounis Abdelhafid

Abstract:

In order to illustrate the effect of damping on the response of a base-isolated building, a parametric study is conducted, taking into account the progressive variation of the damping ratio (10% to 30%) under different types of seismic excitation (near-field and far-field). A time history analysis is used to determine the response of the structure in terms of relative displacement and interstory drift at various levels of the building. The results show that the efficiency of the isolator increases with the assumed damping ratio, provided that the latter is less than or equal to 20%; beyond this value, the isolator becomes less effective. Furthermore, a large share of the energy is dissipated by the LRB (Lead Rubber Bearing) system.

Keywords: damping, base isolation, LRB, seismic excitation, hysteresis

Procedia PDF Downloads 416
5371 Kirigami Designs for Enhancing the Electromechanical Performance of E-Textiles

Authors: Braden M. Li, Inhwan Kim, Jesse S. Jur

Abstract:

One of the fundamental challenges in the electronic textile (e-textile) industry is the mismatch in compliance between rigid electronic components and the soft textile platforms onto which they are integrated. To address this problem, various printing technologies using conductive inks have been explored in an effort to improve electromechanical performance without sacrificing the innate properties of the printed textile. However, current printing methods deposit densely layered coatings onto textile surfaces with low through-plane wetting, resulting in poor electromechanical properties. This work presents an inkjet printing technique used in conjunction with unique Kirigami cut designs to address these issues for printed smart textiles. By utilizing particle-free reactive silver inks, our inkjet process produces conformal, micron-thick silver coatings that surround the individual fibers of the printed smart textile. This results in a highly conductive (0.63 Ω sq⁻¹) printed e-textile that also retains the innate properties of the textile material, including stretchability, flexibility, breathability, and fabric hand. Kirigami is the Japanese art of paper cutting: through periodic cut designs, Kirigami imparts enhanced flexibility and delocalizes stress concentrations. Kirigami cut design parameters (i.e., cut spacing and length) were correlated with both the mechanical and electromechanical properties of the printed textiles. We demonstrate that designs with a higher cut-out ratio exponentially soften the textile substrate. Our designs thus achieve a 30x improvement in overall stretchability, a 1000x decrease in elastic modulus, and minimal resistance change over strain regimes of 100-200% compared with uncut designs. We also show minimal resistance change in our Kirigami-inspired printed devices after being stretched to 100% strain for 1000 cycles.
Lastly, we demonstrate a Kirigami-inspired electrocardiogram (ECG) monitoring system that improves stretchability without sacrificing signal acquisition performance. Overall, this study identifies fundamental parameters affecting the performance of e-textiles and their scalability in the wearable technology industry.

Keywords: kirigami, inkjet printing, flexible electronics, reactive silver ink

Procedia PDF Downloads 143
5370 Semantic-Based Collaborative Filtering to Improve Visitor Cold Start in Recommender Systems

Authors: Baba Mbaye

Abstract:

In collaborative filtering recommendation systems, a user receives suggested items based on the opinions and evaluations of a community of users. This type of recommendation system uses only the information (ratings expressed as numerical values) contained in a usage matrix as input data. This matrix can be constructed from users' behaviour or by asking users to declare their opinions on the items they know. The cold start problem leads to very poor performance for new users: it occurs at the beginning of use, when the system lacks the data needed to make recommendations. There are three types of cold start problem: cold start for a new item, for a new system, and for a new user. In this article, we are interested in the cold start for a new user. When the system welcomes a new user, the profile exists but does not hold enough data, and its affinities with other user profiles are still unknown. This leads to recommendations that are not adapted to the profile of the new user. In this paper, we propose an approach that mitigates cold start by exploiting similarity and semantic proximity between user profiles. We use the available cold metadata (metadata extracted from the new user's data) to position the new user within a community, looking for similarities and semantic proximities with the existing user profiles of the system. Proximity is represented by close concepts considered to belong to the same group, while similarity groups together elements that appear alike; the two notions are related but not identical. This leads us to a similarity construction based on: a) concepts (properties, terms, instances) independent of the ontology structure and, b) the joint representation of two concepts (relations, the presence of terms in a document, their simultaneous occurrence). 
We propose an ontology, OIVCSRS (Ontology of Improvement Visitor Cold Start in Recommender Systems), to structure the terms and concepts representing the meaning of an information field, whether through the metadata of a namespace or the elements of a knowledge domain. This approach allows us to automatically attach the new user to a user community, partially compensate for the data not initially provided, and ultimately associate a better first profile with the cold-start user. The aim of this paper is thus to propose an approach to improving cold start using semantic technologies.
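As a minimal illustration of the community-attachment step (not the OIVCSRS ontology itself), the sketch below attaches a new user to the community whose aggregate profile is most similar to the user's cold-metadata terms. The community names, term weights, and the use of plain cosine similarity are all hypothetical stand-ins for the ontology-based similarity and proximity measures described above.

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two sparse term-weight dicts."""
    common = set(a) & set(b)
    num = sum(a[t] * b[t] for t in common)
    den = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def attach_to_community(cold_metadata, community_profiles):
    """Return the community whose aggregate profile is most similar
    to the new user's cold-metadata term weights."""
    best, best_sim = None, -1.0
    for name, profile in community_profiles.items():
        sim = cosine(cold_metadata, profile)
        if sim > best_sim:
            best, best_sim = name, sim
    return best, best_sim

# hypothetical community profiles built from existing users' data
communities = {
    "sci-fi": {"space": 3.0, "robot": 2.0, "future": 1.5},
    "romance": {"love": 3.0, "wedding": 2.0, "drama": 1.0},
}
new_user = {"robot": 1.0, "space": 2.0}   # cold metadata of the new user
print(attach_to_community(new_user, communities))  # → ('sci-fi', ...)
```

Once attached, the community's opinions can seed the new user's first recommendations.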

Keywords: visitor cold start, recommender systems, collaborative filtering, semantic filtering

Procedia PDF Downloads 218
5369 Monetary Policy and Assets Prices in Nigeria: Testing for the Direction of Relationship

Authors: Jameelah Omolara Yaqub

Abstract:

One of the main reasons for the existence of a central bank is the belief that it has some influence on private sector decisions, which enables it to achieve its objectives, especially price stability and economic growth. Under the New Keynesian assumption that prices are sticky in the short run, the central bank can temporarily influence the real interest rate and therefore have an effect on real output in addition to nominal prices. There is, therefore, a need for the central bank to monitor, respond to, and influence private sector decisions appropriately. The central bank and the private sector thus both affect and are affected by each other, implying considerable interdependence between the sectors. This interdependence may or may not be simultaneous, depending on how readily information is available and how sensitive prices are to agents' expectations about the future. The aim of this paper is, therefore, to determine whether the interdependence between asset prices and monetary policy is simultaneous and how important this relationship is. Studies on the effects of monetary policy have largely used VAR models to identify the interdependence, but most have found small interaction effects. Some earlier studies have ignored the possibility of simultaneous interdependence, while those that have allowed for it used data from developed economies only. This study, therefore, extends the literature by using data from a developing economy where information might not be readily available to shape agents' expectations. In this study, the direction of the relationship among the variables of interest will be tested by carrying out the Granger causality test. Thereafter, the interaction between asset prices and monetary policy in Nigeria will be tested. 
Asset prices will be represented by the NSE index as well as real estate prices, while monetary policy will be represented by the money supply and the monetary policy rate (MPR). The VAR model will be used to analyse the relationship between the variables in order to take account of the potential simultaneity of interdependence. The study will cover the period between 1980 and 2014 due to data availability. It is believed that the outcome of the research will guide monetary policymakers, especially the CBN, to influence private sector decisions effectively and thereby achieve the objectives of price stability and economic growth.
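A minimal sketch of the Granger causality test mentioned above, assuming simple OLS-based restricted and unrestricted autoregressions. In practice a package such as statsmodels would be used, and the data would be the Nigerian asset-price and monetary-policy series, not the synthetic series here.

```python
import numpy as np

def granger_f(y, x, lags=2):
    """F statistic for 'x Granger-causes y': compare an AR model of y
    (restricted) with one that also includes lagged x (unrestricted)."""
    y, x = np.asarray(y, float), np.asarray(x, float)
    n = len(y) - lags
    Y = y[lags:]
    lag = lambda s, i: s[lags - i:len(s) - i]        # i-th lag, aligned with Y
    Xr = np.column_stack([np.ones(n)] + [lag(y, i) for i in range(1, lags + 1)])
    Xu = np.column_stack([Xr] + [lag(x, i) for i in range(1, lags + 1)])
    rss = lambda X: np.sum((Y - X @ np.linalg.lstsq(X, Y, rcond=None)[0]) ** 2)
    rss_r, rss_u = rss(Xr), rss(Xu)
    return ((rss_r - rss_u) / lags) / (rss_u / (n - Xu.shape[1]))

# synthetic demo: y is driven by the first lag of x, so the F statistic is large
rng = np.random.default_rng(0)
x = rng.normal(size=300)
y = np.zeros(300)
y[1:] = 0.8 * x[:-1] + 0.1 * rng.normal(size=299)
print(granger_f(y, x))
```

A large F statistic (relative to the F distribution with `lags` and `n - 2*lags - 1` degrees of freedom) rejects the null that x does not Granger-cause y; running the test in both directions gives the direction of the relationship.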

Keywords: asset prices, granger causality, monetary policy rate, Nigeria

Procedia PDF Downloads 220
5368 Sensing Endocrine Disrupting Chemicals by Virus-Based Structural Colour Nanostructure

Authors: Lee Yujin, Han Jiye, Oh Jin-Woo

Abstract:

The adverse effects of endocrine disrupting chemicals (EDCs) have attracted considerable public interest. The benzene-like structure of EDCs mimics the mechanisms of hormones naturally occurring in vivo and alters the physiological function of the endocrine system. Although some of the most representative EDCs, such as polychlorinated biphenyls (PCBs) and phthalate compounds, have already been banned from production and use in many countries, PCBs and phthalates are still circulating today in plastic products as flame retardants and plasticizers. EDCs can be released from products during use and disposal, causing serious environmental and health issues. Here, we developed a virus-based structurally coloured nanostructure that can detect minute EDC concentrations sensitively and selectively. These structurally coloured nanostructures exhibit characteristic angle-independent colours due to the regular virus bundle structure formed through a simple pulling technique. A designed number of different colour bands can be formed by controlling the concentration of the virus solution and the pulling speed. The virus, M-13 bacteriophage, was genetically engineered to react with specific EDCs, typically PCBs and phthalates. The M-13 bacteriophage surface (pVIII major coat protein) was decorated with benzene-derivative-binding peptides (WHW) through the phage display method. In an initial assessment, the virus-based colour sensor was exposed to several organic chemicals including benzene, toluene, phenol, chlorobenzene, and phthalic anhydride. Along with this selectivity evaluation, the sensor was also tested for sensitivity: 10 to 300 ppm of phthalic anhydride and chlorobenzene were detected, with significant sensitivity corresponding to a dissociation constant of about 90. 
Notably, all measurements were analyzed through principal component analysis (PCA) and linear discriminant analysis (LDA) and exhibited clear discrimination upon exposure to the two categories of EDCs (PCBs and phthalates). Because of its easy fabrication, high sensitivity, and superior selectivity, the M-13 bacteriophage-based colour sensor could serve as a simple and reliable portable sensing system for environmental monitoring, healthcare, social security, and more.
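The PCA discrimination step can be sketched as follows. The sensor responses below are synthetic stand-ins for the colour-shift measurements, and only the PCA projection is shown, with class separation read off the first principal component (the LDA stage would then classify in this reduced space).

```python
import numpy as np

rng = np.random.default_rng(0)
# hypothetical colour-shift responses at three wavelength bands,
# 20 measurements per EDC class
pcb = rng.normal([5.0, 1.0, 0.5], 0.2, size=(20, 3))
phthalate = rng.normal([1.0, 4.0, 2.0], 0.2, size=(20, 3))
X = np.vstack([pcb, phthalate])

Xc = X - X.mean(axis=0)                 # centre the data
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T                  # project onto first two principal axes

# the two EDC categories separate cleanly along the first component
gap = abs(scores[:20, 0].mean() - scores[20:, 0].mean())
print(gap)
```

When the between-class difference dominates the measurement noise, as it does here, the first component alone already discriminates the two EDC families.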

Keywords: M-13 bacteriophage, colour sensor, genetic engineering, EDCs

Procedia PDF Downloads 242
5367 Synthesis and Characterization of Nanocomposite Poly (4,4' Methylenedianiline) Catalyzed by Maghnite-H+

Authors: A. Belmokhtar, A. Yahiaoui, A. Benyoucef, M. Belbachir

Abstract:

We report the synthesis and characterization of nanocomposite poly(4,4’-methylenedianiline) via chemical polymerization of the monomer 4,4’-methylenedianiline with ammonium persulfate (APS) at room temperature, catalyzed by Maghnite-H+. A facile method was demonstrated to grow the poly(4,4’-methylenedianiline) nanocomposite, carried out by mixing aqueous ammonium persulfate (APS) and a 4,4’-methylenedianiline solution in the presence of Maghnite-H+ at room temperature. The effects of the amount of catalyst and of time on the polymerization yield were studied. The structure was confirmed by elemental analysis, UV-Vis spectroscopy, 1H-NMR, and cyclic voltammetry.

Keywords: characterization, Maghnite-H+, polymerization, poly(4,4’-methylenedianiline)

Procedia PDF Downloads 289
5366 Verification Protocols for the Lightning Protection of a Large Scale Scientific Instrument in Harsh Environments: A Case Study

Authors: Clara Oliver, Oibar Martinez, Jose Miguel Miranda

Abstract:

This paper is devoted to the study of the most suitable protocols for verifying the lightning protection and ground resistance quality of a large-scale scientific facility located in a harsh environment. We illustrate this work with a case study: the largest telescopes of the Northern Hemisphere Cherenkov Telescope Array, CTA-N. This array hosts sensitive, high-speed optoelectronic instrumentation and sits on clear, obstacle-free terrain at around 2400 m above sea level. The site offers a top-quality sky but also challenging conditions for a lightning protection system: the terrain is volcanic, with resistivities well above 1 kOhm·m, and the environment often exhibits humidities well below 5%. Moreover, the high complexity of a Cherenkov telescope structure does not allow a straightforward application of lightning protection standards. CTA-N has been conceived as an array of fourteen Cherenkov telescopes of two different sizes, to be constructed on La Palma Island, Spain. Cherenkov telescopes can provide valuable information on different astrophysical sources from the gamma rays reaching the Earth’s atmosphere. The largest telescopes of CTA are called LSTs, and construction of the first one was finished in October 2018. The LST has a shape resembling a large parabolic antenna, with a 23-meter reflective surface supported by a tubular structure made of carbon fibre and steel tubes. The reflective surface covers 400 square meters and consists of an array of segmented mirrors that can be controlled individually by a subsystem of actuators. This surface collects and focuses Cherenkov photons onto the camera, where 1855 photosensors convert the light into electrical signals that can be processed by dedicated electronics. We describe here how the risk assessment of direct strike impacts was made and how the down conductors and the ground system were both tested. 
The verification protocols that should be applied during the commissioning and operation phases are then explained, with particular attention to the assessment of ground resistance quality.
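A direct-strike risk estimate of the kind mentioned above can be sketched as an IEC 62305-2 style calculation. The ground flash density, structure height, and the simplified collection area below are illustrative assumptions, not values from the CTA-N assessment.

```python
import math

def annual_direct_strikes(ng_per_km2, height_m, cd=1.0):
    """Rough expected number of direct strikes per year on an isolated
    structure: N = Ng * Ad * Cd * 1e-6, with the collection area Ad of a
    slender structure of height h approximated as pi * (3h)^2 (m^2)."""
    ad_m2 = math.pi * (3 * height_m) ** 2
    return ng_per_km2 * ad_m2 * 1e-6 * cd

# hypothetical inputs: 1 flash/km^2/yr and a ~45 m tall telescope structure
print(annual_direct_strikes(ng_per_km2=1.0, height_m=45))
```

Comparing the resulting strike frequency against the tolerable risk for the protected equipment is what drives the choice of protection level and, in turn, the verification protocol for the down conductors and ground system.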

Keywords: grounding, large scale scientific instrument, lightning risk assessment, lightning standards and safety

Procedia PDF Downloads 123
5365 Observation on the Performance of Heritage Structures in Kathmandu Valley, Nepal during the 2015 Gorkha Earthquake

Authors: K. C. Apil, Keshab Sharma, Bigul Pokharel

Abstract:

Kathmandu Valley, the capital region of Nepal, houses numerous historical monuments and religious structures, some dating back to the 4th century A.D. The city alone is home to seven of UNESCO’s world heritage sites, including various public squares and religious sanctums often regarded as living heritage by historians and archaeological explorers. On April 25, 2015, the capital city and nearby areas were struck by the Gorkha earthquake of moment magnitude (Mw) 7.8, followed by the strongest aftershock of moment magnitude (Mw) 7.3 on May 12. This study reports structural failures and collapses of heritage structures in Kathmandu Valley during the earthquake and presents preliminary findings on their causes. Field reconnaissance was carried out immediately after the main shock and the aftershock at major heritage sites: UNESCO world heritage sites and a number of temples and historic buildings in Kathmandu Durbar Square, Patan Durbar Square, and Bhaktapur Durbar Square. Despite the catastrophe, a significant number of heritage structures stood tall, performing very well during the earthquake. Preliminary reports from the archaeological department suggest that 721 such structures were severely affected, 444 of them within the valley, including 76 that completely collapsed. This study presents recorded accelerograms and the geology of Kathmandu Valley. The structural typology and architecture of the heritage structures in Kathmandu Valley are briefly described. Case histories of damaged heritage structures, the damage patterns, and the failure mechanisms are also discussed. It was observed that the performance of heritage structures was influenced by multiple factors, such as structural and architectural typology, configuration and structural deficiencies, local ground site effects and ground motion characteristics, age and maintenance level, and material quality. 
Most of these heritage structures are masonry, using bricks with earth mortar as the bonding agent. The walls' resistance is mainly compressive, making them capable of withstanding vertical static gravitational loads but not horizontal dynamic seismic loads. There was no definitive pattern of damage to heritage structures, as most behaved as composite structures. Some structures were extensively damaged in some locations, while structures with a similar configuration at a nearby location had little or no damage. Among the major heritage structures, Dome, Pagoda (2-, 3- or 5-tiered temples), and Shikhara structures were studied under similar variables. Examining the varying degrees of damage, it was found that Shikhara structures were the most vulnerable, while Dome structures were the most stable, followed by Pagoda structures. The seismic performance of the masonry-timber and stone masonry structures was slightly better than that of the masonry structures. Regular maintenance and periodic seismic retrofitting seem to have played a pivotal role in strengthening the seismic performance of these structures. The study also recommends some key measures to strengthen the seismic performance of such structures, based on structural analysis, building material behaviour, and retrofitting details. The results also underline the importance of documenting traditional knowledge and transforming it into modern technology.

Keywords: Gorkha earthquake, field observation, heritage structure, seismic performance, masonry building

Procedia PDF Downloads 151
5364 The Use of Corpora in Improving Modal Verb Treatment in English as Foreign Language Textbooks

Authors: Lexi Li, Vanessa H. K. Pang

Abstract:

This study aims to demonstrate how native and learner corpora can be used to enhance the treatment of modal verbs in EFL textbooks in mainland China. It contributes to a corpus-informed and learner-centred design of grammar presentation in EFL textbooks that enhances the authenticity and appropriateness of textbook language for target learners. The linguistic focus is on will, would, can, could, may, might, shall, should, and must. The native corpus is the spoken component of BNC2014 (hereafter BNCS2014); the spoken part is chosen because the pedagogical purpose of the textbooks is communication-oriented. Using the standard query option of CQPweb, 5% of the occurrences of each of the nine modals was sampled from BNCS2014. The learner corpus is the POS-tagged Ten-thousand English Compositions of Chinese Learners (TECCL); all essays under the 'secondary school' section were selected. A series of five secondary coursebooks comprises the textbook corpus. All data in both the learner and the textbook corpora were retrieved through the concordance functions of WordSmith Tools (version 5.0). Data analysis was divided into two parts. The first part compared the patterns of modal verbs in the textbook corpus and BNCS2014 with respect to distributional features, semantic functions, and co-occurring constructions, to examine whether the textbooks reflect the authentic use of English. Secondly, the learner corpus was analyzed in terms of the use (distributional features, semantic functions, and co-occurring constructions) and the misuse (syntactic errors, e.g., she can sings*) of the nine modal verbs, to uncover potential difficulties confronting learners. The analysis of distribution indicates several discrepancies between the textbook corpus and BNCS2014. The four most frequent modal verbs in BNCS2014 are can, would, will, and could, while can, will, should, and could are the top four in the textbooks. Most strikingly, there is an unusually high proportion of can (41.1%) in the textbooks. 
The results on the different meanings show that will, would, and must are the most problematic. For example, for will, the textbooks contain 20% more occurrences of 'volition' and 20% fewer of 'prediction' than BNCS2014. Regarding co-occurring structures, the textbooks over-represent the structure 'modal + do' across the nine modal verbs. Another major finding is that the structure 'modal + have done', which frequently co-occurs with could, would, should, and must, is underused in the textbooks. Moreover, these four modal verbs are the most difficult for learners, as the error analysis shows. This study demonstrates how the synergy of native and learner corpora can be harnessed to improve the presentation of modal verbs in EFL textbooks, so that textbooks provide not only authentic language used in natural discourse but also an appropriate design tailored to the needs of target learners.
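The distributional comparison can be sketched as below. The toy "corpora" and the 0.2 over-representation threshold are hypothetical stand-ins for the textbook corpus and BNCS2014; a real analysis would work on concordance output rather than whitespace tokens.

```python
from collections import Counter

MODALS = {"will", "would", "can", "could", "may", "might", "shall", "should", "must"}

def modal_profile(tokens):
    """Relative frequency of each modal among all modal occurrences."""
    counts = Counter(t for t in tokens if t in MODALS)
    total = sum(counts.values()) or 1
    return {m: counts[m] / total for m in MODALS}

# hypothetical toy corpora standing in for the textbook corpus and BNCS2014
textbook = "you can do it . we can try and you should go".lower().split()
reference = "i would say it will rain so we could stay".lower().split()
tb, ref = modal_profile(textbook), modal_profile(reference)

# flag modals over-represented in the textbook relative to the reference
overused = sorted(m for m in MODALS if tb[m] - ref[m] > 0.2)
print(overused)  # → ['can', 'should']
```

The same profile comparison, run on the full corpora, is what surfaces findings like the 41.1% share of can in the textbooks.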

Keywords: English as Foreign Language, EFL textbooks, learner corpus, modal verbs, native corpus

Procedia PDF Downloads 143
5363 Copper Phthalocyanine Nanostructures: A Potential Material for Field Emission Display

Authors: Uttam Kumar Ghorai, Madhupriya Samanta, Subhajit Saha, Swati Das, Nilesh Mazumder, Kalyan Kumar Chattopadhyay

Abstract:

Organic semiconductors have gained considerable interest over the last few decades for their significant contributions to fields such as solar cells, non-volatile memory devices, field effect transistors, and light emitting diodes. The most important advantages of organic materials are mechanical flexibility, light weight, and low-temperature deposition techniques. Recently, with the advancement of nanoscience and technology, one-dimensional organic and inorganic nanostructures such as nanowires, nanorods, and nanotubes have attracted tremendous interest due to their very high aspect ratio and large surface area for electron transport. Among them, self-assembled organic nanostructures such as copper and zinc phthalocyanines have shown good transport properties and thermal stability due to their π-conjugated bonds and π-π stacking, respectively. Field emission properties of inorganic and carbon-based nanostructures are widely reported in the literature, but there are few reports on the cold-cathode emission characteristics of organic semiconductor nanostructures. In this work, the authors report the field emission characteristics of chemically and physically synthesized copper phthalocyanine (CuPc) nanostructures such as nanowires, nanotubes, and nanotips. The as-prepared samples were characterized by X-ray diffraction (XRD), ultraviolet-visible spectroscopy (UV-Vis), Fourier transform infrared spectroscopy (FTIR), field emission scanning electron microscopy (FESEM), and transmission electron microscopy (TEM). The field emission characteristics were measured in our home-designed field emission setup. The registered turn-on field and local field enhancement factor are found to be less than 5 V/μm and greater than 1000, respectively, and the field emission behaviour remained stable for 200 minutes. The experimental results are further verified theoretically using a finite element method as implemented in the ANSYS Maxwell simulation package. 
The obtained results strongly indicate that CuPc nanostructures are potential candidates as electron emitters for field emission based display applications.
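As a hedged illustration of how a field enhancement factor greater than 1000 can be extracted from measurements, the sketch below inverts the slope of a Fowler-Nordheim plot. The work function and the slope value are assumed for illustration and are not taken from the abstract.

```python
# Fowler-Nordheim: J ~ E^2 * exp(-B * phi^(3/2) / (beta * E)), so a plot of
# ln(J/E^2) versus 1/E is linear with slope  -B * phi^(3/2) / beta.
B = 6.83e3        # FN constant in V * eV^(-3/2) / um
phi = 4.5         # assumed emitter work function in eV (illustrative)

def enhancement_factor(fn_slope):
    """Field enhancement factor beta recovered from an FN-plot slope (V/um)."""
    return -B * phi ** 1.5 / fn_slope

beta = enhancement_factor(-60.0)   # hypothetical measured slope
print(round(beta))                 # comfortably above 1000
```

A large beta means the local field at the nanostructure tips far exceeds the applied field, which is what enables turn-on fields below 5 V/μm.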

Keywords: organic semiconductor, phthalocyanine, nanowires, nanotubes, field emission

Procedia PDF Downloads 501
5362 Wearable Antenna for Diagnosis of Parkinson’s Disease Using a Deep Learning Pipeline on Accelerated Hardware

Authors: Subham Ghosh, Banani Basu, Marami Das

Abstract:

Background: The development of compact, low-power antenna sensors has resulted in hardware restructuring, allowing for ubiquitous wireless sensing. Antenna sensors can create wireless body-area networks (WBAN) by linking various wireless nodes across the human body. WBAN and IoT applications, such as remote health and fitness monitoring and rehabilitation, are becoming increasingly important. In particular, Parkinson’s disease (PD), a common neurodegenerative disorder, presents clinical features that can easily be misdiagnosed. As a movement disorder, its diagnosis may greatly benefit from the antenna’s near-field approach, with a variety of activities that can use WBAN and IoT technologies to increase diagnostic accuracy and improve patient monitoring. Methodology: This study investigates the feasibility of leveraging a single patch antenna mounted (using cloth) on the dorsal wrist to differentiate actual Parkinson's disease (PD) from false PD using a small hardware platform. The semi-flexible antenna operates in the 2.4 GHz ISM band and collects reflection coefficient (Γ) data from patients performing five exercises designed for the classification of PD and other disorders, such as essential tremor (ET) or physiological disorders caused by anxiety or stress. The obtained data are normalized and converted into 2-D representations using the Gabor wavelet transform (GWT). Data augmentation is then used to expand the dataset. A lightweight deep-learning (DL) model is developed to run on the GPU-enabled NVIDIA Jetson Nano platform; it processes the 2-D images for feature extraction and classification. Findings: The DL model was trained and tested on both the original and augmented datasets, doubling the dataset size. To ensure robustness, a 5-fold stratified cross-validation (5-FSCV) method was used. 
The proposed framework, utilizing a DL model with 1.356 million parameters on the NVIDIA Jetson Nano, achieved an accuracy of 88.64%, an F1-score of 88.54, and a recall of 90.46%, with a latency of 33 seconds per epoch.
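The 5-fold stratified cross-validation step can be sketched as below. This is a minimal round-robin implementation for illustration (a library routine such as scikit-learn's StratifiedKFold would normally be used), with hypothetical PD/non-PD labels.

```python
from collections import defaultdict

def stratified_kfold(labels, k=5):
    """Minimal stratified k-fold: distribute each class's indices
    round-robin across the k folds, then yield (train, test) splits
    so every fold preserves the overall class balance."""
    folds = [[] for _ in range(k)]
    by_class = defaultdict(list)
    for idx, y in enumerate(labels):
        by_class[y].append(idx)
    for idxs in by_class.values():
        for j, idx in enumerate(idxs):
            folds[j % k].append(idx)
    for i in range(k):
        test = sorted(folds[i])
        train = sorted(x for j in range(k) if j != i for x in folds[j])
        yield train, test

# hypothetical balanced label set: 10 PD and 10 non-PD recordings
labels = ["PD"] * 10 + ["non-PD"] * 10
for train, test in stratified_kfold(labels, k=5):
    print(sum(labels[i] == "PD" for i in test), "PD of", len(test))
```

Stratification matters here because an unlucky plain split could leave a fold with almost no PD examples, distorting the reported accuracy, F1-score, and recall.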

Keywords: antenna, deep-learning, GPU-hardware, Parkinson’s disease

Procedia PDF Downloads 7
5361 Efficient Residual Road Condition Segmentation Network Based on Reconstructed Images

Authors: Xiang Shijie, Zhou Dong, Tian Dan

Abstract:

This paper focuses on the application of real-time semantic segmentation to complex road condition recognition, addressing the critical issue of improving segmentation accuracy while ensuring real-time performance. Semantic segmentation has broad application prospects in fields such as autonomous vehicle navigation and remote sensing image recognition. However, current real-time semantic segmentation networks face significant technical challenges in balancing speed and accuracy. To tackle this problem, this paper proposes an innovative Guided Image Reconstruction Module. By resampling high-resolution images into a set of low-resolution images, this module effectively reduces computational complexity, allowing the network to extract features more efficiently within limited resources and thereby improving the performance of real-time segmentation tasks. In addition, a dual-branch network structure is designed to fully leverage the advantages of different feature layers, and a novel Hybrid Attention Mechanism is introduced that can dynamically capture multi-scale contextual information and effectively enhance the focus on important features, improving the segmentation accuracy of the network in complex road conditions. Compared with traditional methods, the proposed model achieves a better balance between accuracy and real-time performance and demonstrates competitive results in road condition segmentation tasks. Experimental results show that this method not only significantly improves segmentation accuracy while maintaining real-time performance but also remains stable across diverse and complex road conditions, making it highly applicable in practical scenarios. 
By incorporating the Guided Image Reconstruction Module, the dual-branch structure, and the Hybrid Attention Mechanism, this paper presents a novel approach to real-time semantic segmentation that is expected to further advance the field.
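The abstract does not give the internals of the Guided Image Reconstruction Module; as one plausible reading of "resampling high-resolution images into a set of low-resolution images", the sketch below performs a lossless strided (space-to-depth) decomposition, which is a common way to trade spatial resolution for cheap parallel low-resolution inputs.

```python
import numpy as np

def space_to_depth(img, r=2):
    """Losslessly resample one high-resolution image into r*r
    low-resolution sub-images by strided (polyphase) sampling."""
    h, w = img.shape[:2]
    assert h % r == 0 and w % r == 0
    return [img[i::r, j::r] for i in range(r) for j in range(r)]

def depth_to_space(subs, r=2):
    """Inverse: reassemble the sub-images into the original image."""
    h, w = subs[0].shape[:2]
    out = np.empty((h * r, w * r) + subs[0].shape[2:], dtype=subs[0].dtype)
    for k, sub in enumerate(subs):
        out[k // r::r, k % r::r] = sub
    return out

img = np.arange(16 * 16 * 3).reshape(16, 16, 3)     # toy 16x16 RGB image
subs = space_to_depth(img)                          # four 8x8 low-res views
assert np.array_equal(depth_to_space(subs), img)    # no information lost
```

Because the decomposition is invertible, the network can process the cheaper low-resolution views while the original detail remains recoverable for the segmentation output.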

Keywords: hybrid attention mechanism, image reconstruction, real-time, road status recognition

Procedia PDF Downloads 25
5360 Repository Blockchain for Collaborative Blockchain Ecosystem

Authors: Razwan Ahmed Tanvir, Greg Speegle

Abstract:

Collaborative blockchain ecosystems allow diverse groups to cooperate on tasks while providing properties such as decentralization and transaction security. We provide a model that uses a repository blockchain to manage hard forks within a collaborative system such that a single process (assuming that it has knowledge of the requirements of each fork) can access all of the blocks within the system. The repository blockchain replaces the need for Inter Blockchain Communication (IBC) within the ecosystem by navigating the networks. The resulting construction resembles a tree instead of a chain. A proof-of-concept implementation performs a depth-first search on the new structure.
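The tree-shaped ledger and its depth-first traversal can be sketched as follows; the block and fork names are hypothetical, and the fork-requirement checks mentioned in the abstract are omitted.

```python
from dataclasses import dataclass, field

@dataclass
class Block:
    """A block in the repository blockchain. Hard forks give a block
    several children, so the ledger forms a tree instead of a chain."""
    block_id: str
    children: list = field(default_factory=list)

def dfs(block, visit):
    """Depth-first traversal letting a single process reach every
    block in the ecosystem, across all forks."""
    visit(block.block_id)
    for child in block.children:
        dfs(child, visit)

# hypothetical ecosystem: one hard fork after the genesis block
genesis = Block("genesis")
fork_a, fork_b = Block("a1"), Block("b1")
genesis.children = [fork_a, fork_b]
fork_a.children = [Block("a2")]

order = []
dfs(genesis, order.append)
print(order)  # → ['genesis', 'a1', 'a2', 'b1']
```

Navigating the tree this way is what replaces explicit inter-blockchain communication: every forked network is reachable from the shared repository root.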

Keywords: hard fork, shared governance, inter blockchain communication, blockchain ecosystem

Procedia PDF Downloads 17
5359 Immiscible Polymer Blends with Controlled Nanoparticle Location for Excellent Microwave Absorption: A Compartmentalized Approach

Authors: Sourav Biswas, Goutam Prasanna Kar, Suryasarathi Bose

Abstract:

To obtain better materials, control over the precise location of nanoparticles is indispensable. It is shown here that an ordered arrangement of nanoparticles possessing different characteristics (electrical/magnetic dipoles) in the blend structure can result in excellent microwave absorption, manifested in a high reflection loss of ca. -67 dB for the best blend structure designed here. To attenuate electromagnetic radiation, the key parameters, i.e., high electrical conductivity and large dielectric/magnetic loss, are targeted here using a conducting inclusion [multiwall carbon nanotubes, MWNTs], a ferroelectric nanostructured material with associated relaxations in the GHz frequency range [barium titanate, BT], and lossy ferromagnetic nanoparticles [nickel ferrite, NF]. In this study, bi-continuous structures were designed using 50/50 (by wt) blends of polycarbonate (PC) and polyvinylidene fluoride (PVDF). The MWNTs were modified using an electron acceptor molecule, a derivative of perylenediimide, which facilitates π-π stacking with the nanotubes and stimulates efficient charge transport in the blends. The nanoscopic materials have a specific affinity for the PVDF phase; hence, by introducing surface-active groups, an ordered arrangement can be tailored. To accomplish this, both BT and NF were first hydroxylated, followed by the introduction of amine terminal groups on the surface. The latter facilitated a nucleophilic substitution reaction with PC and resulted in their precise localization. In this study, we show for the first time that superior EM attenuation can be achieved by a compartmentalized approach. For instance, when the nanoparticles were localized exclusively in the PVDF phase or in both phases, the minimum reflection loss was ca. -18 dB (for the MWNT/BT mixture) and -29 dB (for the MWNT/NF mixture), and the shielding was primarily through reflection. 
Interestingly, by adopting the compartmentalized approach, wherein the lossy materials were in the PC phase and the conducting inclusion (MWNT) in PVDF, an outstanding reflection loss of ca. -57 dB (for the BT and MWNT combination) and -67 dB (for the NF and MWNT combination) was noted, and the shielding was primarily through absorption. Thus, the approach demonstrates that nanoscopic structuring in the blends can be achieved under macroscopic processing conditions, and this strategy can be explored further to design microwave absorbers.

Keywords: barium titanate, EMI shielding, MWNTs, nickel ferrite

Procedia PDF Downloads 447
5358 Improving Rural Access to Specialist Emergency Mental Health Care: Using a Time and Motion Study in the Evaluation of a Telepsychiatry Program

Authors: Emily Saurman, David Lyle

Abstract:

In Australia, a well-serviced rural town might have a psychiatrist visit once a month, with more frequent visits from a psychiatric nurse, but many towns have no resident access to mental health specialists. Access to specialist care would not only reduce patient distress and improve outcomes but also facilitate the effective use of limited resources. The Mental Health Emergency Care-Rural Access Program (MHEC-RAP) was developed to improve access to specialist emergency mental health care in rural and remote communities using telehealth technologies. However, there has been no benchmark against which to gauge program efficiency or capacity, or to determine whether program activity is justifiably sufficient. The evaluation of MHEC-RAP used multiple methods and applied a modified theory of access to assess the program and its aim of improved access to emergency mental health care. This was the first evaluation of a telepsychiatry service to include a time and motion study examining program time expenditure, efficiency, and capacity. The time and motion analysis was combined with an observational study of the program's structure and function to assess the balance between responsiveness and efficiency. Previous program studies have demonstrated that MHEC-RAP has improved access and is both used and effective. The findings from the time and motion study suggest that MHEC-RAP has the capacity to manage increased activity within the current model structure without loss of responsiveness or efficiency in the provision of care. Enhancing program responsiveness and efficiency will also support a claim of the program's value for money. MHEC-RAP is a practical telehealth solution for improving access to specialist emergency mental health care. 
The findings from this evaluation have already attracted the attention of other regions in Australia interested in implementing emergency telepsychiatry programs and are now informing the progressive establishment of mental health resource centres in rural New South Wales. Like MHEC-RAP, these centres will provide rapid, safe, and contextually relevant assessments and advice to support local health professionals to manage mental health emergencies in the smaller rural emergency departments. Sharing the application of this methodology and research activity may help to improve access to and future evaluations of telehealth and telepsychiatry services for others around the globe.

Keywords: access, emergency, mental health, rural, time and motion

Procedia PDF Downloads 234