Search results for: real excess portfolio returns
1170 Kernel-Based Double Nearest Proportion Feature Extraction for Hyperspectral Image Classification
Authors: Hung-Sheng Lin, Cheng-Hsuan Li
Abstract:
Over the past few years, kernel-based algorithms have been widely used to extend some linear feature extraction methods, such as principal component analysis (PCA), linear discriminant analysis (LDA), and nonparametric weighted feature extraction (NWFE), to their nonlinear versions: kernel principal component analysis (KPCA), generalized discriminant analysis (GDA), and kernel nonparametric weighted feature extraction (KNWFE), respectively. These nonlinear feature extraction methods can detect nonlinear directions with the largest nonlinear variance or the largest class separability based on the given kernel function. Moreover, they have been applied to improve target detection and image classification for hyperspectral images. Double nearest proportion feature extraction (DNP) can effectively reduce the overlap effect and performs well in hyperspectral image classification. The DNP structure is an extension of the k-nearest neighbor technique. For each sample, there are two corresponding nearest proportions of samples: the self-class nearest proportion and the other-class nearest proportion. The term “nearest proportion” as used here considers both local information and more global information. With these settings, the effect of overlap between the sample distributions can be reduced. Usually, the maximum likelihood estimator and the related unbiased estimator are not ideal estimators in high-dimensional inference problems, particularly in small-sample situations. Hence, an improved estimator based on shrinkage estimation (regularization) is proposed. Based on the DNP structure, LDA is included as a special case. In this paper, the kernel method is applied to extend DNP to kernel-based DNP (KDNP). In addition to retaining the advantages of DNP, KDNP surpasses DNP in the experimental results.
According to the experiments on real hyperspectral image data sets, the classification performance of KDNP is better than that of PCA, LDA, NWFE, and their kernel versions, KPCA, GDA, and KNWFE.
Keywords: feature extraction, kernel method, double nearest proportion feature extraction, kernel double nearest feature extraction
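The nearest-proportion idea at the heart of DNP can be illustrated with a toy sketch. This is not the authors' implementation; the function name, data layout, and use of plain Euclidean distance are assumptions for illustration. For each sample, the means of its k nearest self-class and other-class neighbours are computed, and the feature extractor then seeks directions separating these two local means:

```python
import math

def nearest_proportion_means(x, X, y, self_label, k):
    """Return the means of x's k nearest self-class neighbours and its
    k nearest other-class neighbours (the two 'nearest proportions')."""
    self_pts = sorted(
        (p for p, lbl in zip(X, y) if lbl == self_label and p != x),
        key=lambda p: math.dist(p, x),
    )[:k]
    other_pts = sorted(
        (p for p, lbl in zip(X, y) if lbl != self_label),
        key=lambda p: math.dist(p, x),
    )[:k]
    mean = lambda pts: tuple(sum(c) / len(pts) for c in zip(*pts))
    return mean(self_pts), mean(other_pts)
```

Using a small proportion (k) rather than a single nearest neighbour is what mixes local with more global information and softens the overlap between class distributions.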
Procedia PDF Downloads 346
1169 Data Protection and Regulation Compliance on Handling Physical Child Abuse Scenarios: A Scoping Review
Authors: Ana Mafalda Silva, Rebeca Fontes, Ana Paula Vaz, Carla Carreira, Ana Corte-Real
Abstract:
Decades of research on the topic of interpersonal violence against minors highlight five main conclusions: 1) it causes harmful effects on children's development and health; 2) it is prevalent; 3) it violates children's rights; 4) it can be prevented; and 5) parents are the main aggressors. The child abuse scenario is identified through clinical observation, administrative data, and self-reports. The most used instruments are self-reports; however, there are no valid and reliable self-report instruments for minors, and these instruments consist of a retrospective interpretation of the situation by the victim, already in adulthood, and/or by her parents. Clinical observation and the collection of information, namely from the orofacial region, are essential in the early identification of these situations. The management of medical data, as personal data, must comply with the General Data Protection Regulation (GDPR) in Europe and with the General Law of Data Protection (LGPD) in Brazil. This review aims to answer the question: in a situation of medical assistance to minors with suspected interpersonal violence due to maltreatment, is it necessary for the guardians to provide consent for the registration and sharing of personal data, namely medical data? A scoping review was carried out based on searches of the Web of Science and PubMed databases. Four papers and two documents from the grey literature were selected. As found, the process of identifying and signaling child abuse by the health professional, and the necessary early intervention in defense of the minor as a victim of abuse, comply with the guidelines expressed in the GDPR and LGPD.
Thus, notification of maltreatment scenarios by health professionals should be a priority, and no fear or anxiety of legal repercussions should stand in the way of collecting and treating the data necessary for the signaling procedure that safeguards and promotes the welfare of children living with abuse.
Keywords: child abuse, disease notifications, ethics, healthcare assistance
Procedia PDF Downloads 96
1168 Incorporating Adult Learners’ Interests into Learning Styles: Enhancing Education for Lifelong Learners
Authors: Christie DeGregorio
Abstract:
In today's rapidly evolving educational landscape, adult learners are becoming an increasingly significant demographic. These individuals often possess a wealth of life experiences and diverse interests that can greatly influence their learning styles. Recognizing and incorporating these interests into educational practices can lead to enhanced engagement, motivation, and overall learning outcomes for adult learners. This essay aims to explore the significance of incorporating adult learners' interests into learning styles and provide an overview of the methodologies used in related studies. When investigating the incorporation of adult learners' interests into learning styles, researchers have employed various methodologies to gather valuable insights. These methodologies include surveys, interviews, case studies, and classroom observations. Surveys and interviews allow researchers to collect self-reported data directly from adult learners, providing valuable insights into their interests, preferences, and learning styles. Case studies offer an in-depth exploration of individual adult learners, highlighting how their interests can be integrated into personalized learning experiences. Classroom observations provide researchers with a firsthand understanding of the dynamics between adult learners' interests and their engagement within a learning environment. The major findings from studies exploring the incorporation of adult learners' interests into learning styles reveal the transformative impact of this approach. Firstly, aligning educational content with adult learners' interests increases their motivation and engagement in the learning process. By connecting new knowledge and skills to topics they are passionate about, adult learners become active participants in their own education. Secondly, integrating interests into learning styles fosters a sense of relevance and applicability. 
Adult learners can see the direct connection between the knowledge they acquire and its real-world applications, which enhances their ability to transfer learning to various contexts. Lastly, personalized learning experiences tailored to individual interests enable adult learners to take ownership of their educational journey, promoting lifelong learning habits and self-directedness.
Keywords: integration, personalization, transferability, learning style
Procedia PDF Downloads 75
1167 Motor Control Recovery Minigame
Authors: Taha Enes Kon, Vanshika Reddy
Abstract:
This project focuses on developing a gamified mobile application to aid in stroke rehabilitation by enhancing motor skills through interactive activities. The primary goal was to design a companion app for a passive haptic rehab glove, incorporating Google MediaPipe for gesture tracking and vibrotactile feedback. The app simulates farming activities, offering a fun and engaging experience while addressing the monotony of traditional rehabilitation methods. The prototype focuses on a single minigame, Flower Picking, which uses gesture recognition to interact with virtual elements, encouraging users to perform exercises that improve hand dexterity. The development process involved creating accessible and user-centered designs using Figma, integrating gesture recognition algorithms, and implementing Unity-based game mechanics. Real-time feedback and progressive difficulty levels ensured a personalized experience, motivating users to adhere to rehabilitation routines. The prototype achieved a gesture detection precision of 90%, effectively recognizing predefined gestures such as the Fist and OK symbols. Quantitative analysis highlighted a 40% increase in average session duration compared to traditional exercises, while qualitative feedback praised the app’s immersive design and ease of use. Despite its success, challenges included rigidity in gesture recognition, requiring precise hand orientations, and limited gesture support. Future improvements include expanding gesture adaptability and incorporating additional minigames to target a broader range of exercises. The project demonstrates the potential of gamification in stroke rehabilitation, offering a scalable and accessible solution that complements clinical treatments, making recovery engaging and effective for users.
Keywords: stroke rehabilitation, haptic feedback, gamification, MediaPipe, motor control
Procedia PDF Downloads 7
1166 An Artificial Intelligence Framework to Forecast Air Quality
Authors: Richard Ren
Abstract:
Air pollution is a serious danger to international well-being and economies - it kills an estimated 7 million people every year, and will cost world economies $2.6 trillion by 2060 due to sick days, healthcare costs, and reduced productivity. In the United States alone, 60,000 premature deaths are caused by poor air quality. For this reason, there is a crucial need to develop effective methods to forecast air quality, which can mitigate air pollution’s detrimental public health effects and associated costs by helping people plan ahead and avoid exposure. The goal of this study is to propose an artificial intelligence framework for predicting future air quality based on timing variables (e.g., season, weekday/weekend), future weather forecasts, and past pollutant and air quality measurements. The proposed framework utilizes multiple machine learning algorithms (logistic regression, random forest, neural network) with different specifications and averages the results of the three top-performing models to eliminate inaccuracies, weaknesses, and biases from any one individual model. Over time, the proposed framework uses new data to self-adjust model parameters and increase prediction accuracy. To demonstrate its applicability, a prototype of this framework was created to forecast air quality in Los Angeles, California using datasets from the RP4 weather data repository and EPA pollutant measurement data. The results showed good agreement between the framework’s predictions and real-life observations, with an overall 92% model accuracy. The combined model is able to predict more accurately than any of the individual models, and it is able to reliably forecast season-based variations in air quality levels. Top air quality predictor variables were identified through the measurement of mean decrease in accuracy.
This study proposed and demonstrated the efficacy of a comprehensive air quality prediction framework leveraging multiple machine learning algorithms to overcome individual algorithm shortcomings. Future enhancements should focus on expanding and testing a greater variety of modeling techniques within the proposed framework, testing the framework in different locations, and developing a platform to automatically publish future predictions in the form of a web or mobile application. Accurate predictions from this artificial intelligence framework can in turn be used to save and improve lives by allowing individuals to protect their health and allowing governments to implement effective pollution control measures.
Keywords: air quality prediction, air pollution, artificial intelligence, machine learning algorithms
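The model-averaging step described above — combining the outputs of the three top-performing models so no single model's weakness dominates — can be sketched as follows. This is a minimal illustration; the function name and per-class probability format are assumptions, not the study's code:

```python
def ensemble_average(predictions):
    """Average per-class probability vectors from several models;
    each element of `predictions` is one model's output distribution."""
    n_models = len(predictions)
    n_classes = len(predictions[0])
    return [sum(p[c] for p in predictions) / n_models
            for c in range(n_classes)]
```

The same averaging pattern works whether the three models are logistic regression, random forest, and a neural network, as in the framework, or any other set of classifiers that emit comparable probability vectors.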
Procedia PDF Downloads 130
1165 Verification and Validation of Simulated Process Models of KALBR-SIM Training Simulator
Authors: T. Jayanthi, K. Velusamy, H. Seetha, S. A. V. Satya Murty
Abstract:
Verification and validation of simulated process models is the most important phase of the simulator life cycle. Evaluation of simulated process models based on verification and validation techniques checks the closeness of each component model (in a simulated network) to the real system/process with respect to dynamic behaviour under steady-state and transient conditions. The process of verification and validation helps qualify the process simulator for its intended purpose, whether that is providing comprehensive training or design verification. In general, model verification is carried out by comparing simulated component characteristics with the original requirements to ensure that each step in the model development process completely incorporates all the design requirements. Validation testing is performed by comparing the simulated process parameters to the actual plant process parameters, either in standalone mode or in integrated mode. A full-scope replica operator training simulator for the Prototype Fast Breeder Reactor (PFBR), named KALBR-SIM (Kalpakkam Breeder Reactor Simulator), has been developed at IGCAR, Kalpakkam, India, with the main participants being engineers/experts from the modeling team and the process design and instrumentation and control design teams. This paper discusses the verification and validation process in general, the evaluation procedure adopted for the PFBR operator training simulator, the methodology followed for verifying the models, and the reference documents and standards used.
It details the importance of internal validation by design experts, subsequent validation by an external agency consisting of experts from various fields, model improvement by tuning based on the experts’ comments, final qualification of the simulator for its intended purpose, and the difficulties faced while coordinating the various activities.
Keywords: verification and validation (V&V), Prototype Fast Breeder Reactor (PFBR), Kalpakkam Breeder Reactor Simulator (KALBR-SIM), steady state, transient state
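Validation by comparing simulated process parameters against plant values, as described above, amounts in its simplest form to a tolerance-band check. A minimal sketch — the 2% relative band and the function name are illustrative assumptions, not values from the KALBR-SIM qualification procedure:

```python
import math

def within_tolerance(simulated, plant, rel_tol=0.02):
    """Check that every simulated process parameter lies within a
    relative tolerance band of the corresponding plant measurement."""
    return all(math.isclose(s, p, rel_tol=rel_tol)
               for s, p in zip(simulated, plant))
```

In practice such checks would be repeated over steady-state and transient time series rather than single values, and the acceptable band would come from the applicable standard.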
Procedia PDF Downloads 266
1164 GBKMeans: A Genetic Based K-Means Applied to the Capacitated Planning of Reading Units
Authors: Anderson S. Fonseca, Italo F. S. Da Silva, Robert D. A. Santos, Mayara G. Da Silva, Pedro H. C. Vieira, Antonio M. S. Sobrinho, Victor H. B. Lemos, Petterson S. Diniz, Anselmo C. Paiva, Eliana M. G. Monteiro
Abstract:
In Brazil, the National Electric Energy Agency (ANEEL) establishes that electrical energy companies are responsible for measuring and billing their customers. Among these regulations, it is defined that a company must bill its customers within 27-33 days. If a relocation or a change of period is required, the consumer must be notified in writing, in advance of a billing period. To make it easier to organize a workday’s measurements, these companies create a reading plan. These plans consist of grouping customers into reading groups, which are visited by an employee responsible for measuring consumption and billing. Creating such a plan efficiently and optimally is a capacitated clustering problem with constraints related to homogeneity and compactness, that is, the employee’s working load and the geographical position of the consuming units. This process is done manually by several experts who have experience in the geographic formation of the region, which takes a large number of days to complete the final planning, and, because it is a human activity, there is no guarantee that the best plan is found. In this paper, the GBKMeans method presents a technique based on K-Means and genetic algorithms for creating capacitated clusters that respect the established constraints in an efficient and balanced manner, minimizing the cost of relocating consumer units and the time required to create the final plan. The results obtained by the presented method are compared with the current planning of a real city, showing an improvement of 54.71% in the standard deviation of working load and 11.97% in the compactness of the groups.
Keywords: capacitated clustering, k-means, genetic algorithm, districting problems
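The capacity constraint — every reading group bounded by one employee's workload — can be sketched as a capacity-limited nearest-centre assignment. This is a toy illustration of the assignment step only, under the assumption of a uniform per-group capacity; GBKMeans additionally evolves the centres themselves with a genetic algorithm, which is not shown here:

```python
import math

def capacitated_assign(points, centers, capacity):
    """Greedy capacity-limited assignment: visit (point, centre) pairs
    from closest to farthest and give each point the nearest centre
    that still has room, so groups stay compact and balanced."""
    loads = [0] * len(centers)
    assignment = {}
    pairs = sorted(
        (math.dist(p, c), i, j)
        for i, p in enumerate(points)
        for j, c in enumerate(centers)
    )
    for _d, i, j in pairs:
        if i not in assignment and loads[j] < capacity:
            assignment[i] = j
            loads[j] += 1
    return assignment
```

In a genetic-algorithm wrapper, each chromosome would encode a set of candidate centres, and fitness would combine workload balance with group compactness, mirroring the two improvement metrics reported above.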
Procedia PDF Downloads 199
1163 Prioritizing Roads Safety Based on the Quasi-Induced Exposure Method and Utilization of the Analytical Hierarchy Process
Authors: Hamed Nafar, Sajad Rezaei, Hamid Behbahani
Abstract:
Safety analysis of roads through accident rates, one of the most widely used tools, is based on the direct exposure method, which relies on the ratio of vehicle-kilometers traveled to vehicle travel time. However, due to some fundamental flaws in its theory, difficulties in gaining access to the required data, such as traffic volume and the distance and duration of trips, and various problems in determining exposure for specific time, place, and individual categories, there is a need for an algorithm for prioritizing road safety so that, with a new exposure method, the problems of the previous approaches are resolved. In this way, an efficient application may lead to more realistic comparisons, and the new method would be applicable to a wider range of time, place, and individual categories. Therefore, an algorithm was introduced to prioritize the safety of roads using the quasi-induced exposure method and the analytical hierarchy process. For this research, 11 provinces of Iran were chosen as case study locations. A rural accidents database was created for these provinces, the validity of the quasi-induced exposure method for Iran’s accidents database was explored, and the involvement ratio for different characteristics of the drivers and the vehicles was measured. Results showed that the quasi-induced exposure method was valid in determining the real exposure in the provinces under study. Results also showed a significant difference between prioritizations based on the new and traditional approaches. This difference mostly stems from the way the quasi-induced exposure method determines exposure, the opinions of experts, and the quantity of accident data.
Overall, the results of this research showed that prioritization based on the new approach is more comprehensive and reliable than prioritization based on the traditional approach, which depends on various parameters, including driver-vehicle characteristics.
Keywords: road safety, prioritizing, quasi-induced exposure, analytical hierarchy process
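The involvement ratio at the core of the quasi-induced exposure method — a group's share among at-fault drivers divided by its share among not-at-fault (induced-exposure) drivers in two-vehicle crashes — can be sketched as below. The record layout and field names are hypothetical, not taken from the Iranian database:

```python
def involvement_ratio(crashes, group):
    """Quasi-induced exposure: share of `group` among at-fault drivers
    divided by its share among not-at-fault drivers. The not-at-fault
    drivers serve as a proxy for the driving population's exposure."""
    at_fault = [c["at_fault"] for c in crashes]
    not_at_fault = [c["not_at_fault"] for c in crashes]
    share = lambda drivers: sum(d == group for d in drivers) / len(drivers)
    return share(at_fault) / share(not_at_fault)
```

A ratio above 1 suggests the group crashes more often than its exposure would predict; these ratios are the quantities that the analytical hierarchy process then weighs when ranking roads.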
Procedia PDF Downloads 340
1162 A Low-Cost Foot Plantar Shoe for Gait Analysis
Authors: Zulkifli Ahmad, Mohd Razlan Azizan, Nasrul Hadi Johari
Abstract:
This paper presents the development and testing of a wearable sensor system for gait analysis measurement. For validation, plantar surface measurement by force plate was prepared. In general gait analysis, force plates are limited to barefoot studies of whole steps and do not allow analysis of repeated steps in normal walking and running. The measurements usually performed do not represent the whole of daily plantar pressure in the shoe insole and only obtain the ground reaction force. Force plate measurement is usually limited to a few steps, is done indoors, and does not easily provide coupled information from both feet during walking. Nowadays, in order to measure pressure over a large number of steps and obtain the pressure in each part of the insole, sensors can be placed within the insole. This method makes it possible to determine the plantar pressures of a shoe-wearing subject while standing, walking, or running. Inserting pressure sensors in the insole provides location-specific information, and the choice of sensor placement determines which critical regions under the insole are captured. In this wearable shoe sensor project, the device consists of left and right shoe insoles, each with ten force-sensitive resistors (FSRs). An Arduino Mega was used as the microcontroller to read the analog inputs from the FSRs. The analog inputs were transmitted via Bluetooth, providing the force data in real time on a smartphone. Blueterm, an Android application, was used as the interface to read the FSR values from the shoe-wearing subject. The subjects were two healthy men of different ages and weights, tested while standing, walking (1.5 m/s), jogging (5 m/s), and running (9 m/s) on a treadmill. The data obtained are saved on the Android device for analysis and comparison graphs.
Keywords: gait analysis, plantar pressure, force plate, wearable sensor
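On the smartphone side, each Bluetooth message carrying the ten FSR channels of one insole might be parsed as in this sketch. The comma-separated message format, the 10-bit ADC range of the Arduino Mega, and the 5 V reference are assumptions for illustration; the paper does not specify the wire format:

```python
def parse_fsr_line(line, v_ref=5.0, adc_max=1023):
    """Convert one comma-separated line of ten raw ADC readings,
    as streamed over Bluetooth, into per-sensor voltages."""
    raw = [int(v) for v in line.strip().split(",")]
    if len(raw) != 10:
        raise ValueError("expected ten FSR channels per insole")
    # Scale each 10-bit reading (0..1023) to the ADC reference voltage.
    return [r * v_ref / adc_max for r in raw]
```

Converting voltage to pressure would additionally require each FSR's resistance-force calibration curve, which is sensor-specific.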
Procedia PDF Downloads 454
1161 Survey-Based Pilot Investigation to Establish Meaningful Education Links in the Gambia
Authors: Miriam Fahmy, Shalini Fernando
Abstract:
Educational links between teaching hospitals and universities can produce visits with great impact for both sides. As a visitor, one is responsible for the content, respecting current practice while offering guidance from a completely different perspective. There is little documented guidance for establishing links with universities in developing countries and providing meaningful teaching and exchange programmes. An initial contact retrieved one response with regard to the written curriculum. The otolaryngology department of a Swansea teaching hospital visited a university in the Gambia. A consultant and a clinical fellow visited medical students to deliver lectures, clinical skills sessions, and informal teaching such as bedside and small-group teaching. Students who had participated in teaching provided by the visiting university were asked to give feedback. This information was collated and used to evaluate the impact and to guide future visits, including consideration of establishing a curriculum tailored to the West Africa region. The students felt they gained the most from informal sessions such as bedside teaching and felt that more practical experience with real patients and pathology would be most beneficial to them. Given that internet access is poor, they also suggested a video library for their reference. Many of them look forward to visiting Swansea and are interested in the differences in practice and technologies. The findings are limited by the scarcity of previous literature and by reliance on student feedback. Student feedback sparked further questions and careful contemplation. There is great scope for introducing a range of teaching resources, but it is important to avoid assumptions and the imposition of a Western curriculum and education system; a larger sample is needed, with input from lecturers and curriculum writers in leading universities.
In conclusion, more literature and guidance need to be established for future visitors contemplating an educational link.
Keywords: education, impact, West Africa, university links
Procedia PDF Downloads 156
1160 Demographic Profile, Risk Factors and In-Hospital Outcomes of Acute Coronary Syndrome (ACS) in a Young Population in Pakistan: Single-Center Real-World Experience
Authors: Asma Qudrat, Abid Ullah, Rafi Ullah, Ali Raza, Shah Zeb, Syed Ali Shan Ul-Haq, Shahkar Ahmed Shah, Attiya Hameed Khan, Saad Zaheer, Umama Qasim, Kiran Jamal, Zahoor khan
Abstract:
Objectives: Coronary artery disease (CAD) is a major public health issue associated with high mortality and morbidity rates worldwide. Young patients with ACS have unique characteristics, with different demographic profiles and risk factors. Precise diagnosis and early risk stratification are important in guiding treatment and predicting the prognosis of young patients with ACS. The aim was to evaluate the associated demographics, risk factors, and outcome profiles of ACS in young patients. Methods: The research followed a retrospective, single-centre design; patients diagnosed with a first event of ACS at a young age (>18 and <40 years) were included. Data collection included demographic profiles, risk factors, and in-hospital outcomes of young ACS patients. The patients’ data were retrieved through the Electronic Medical Records (EMR) of the Peshawar Institute of Cardiology (PIC), and all characteristics were assessed. Results: In this study, 77% of patients were male and 23% were female. The risk factors assessed showed a significant association with CAD (P < 0.01). The most common presentation among young ACS patients was STEMI (45%). The angiographic pattern showed single-vessel disease (SVD) in 49%, double-vessel disease (DVD) in 17%, and triple-vessel disease (TVD) in 10%; the left anterior descending (LAD) artery (54%) was the most commonly involved vessel. Conclusion: It is concluded that male sex was predominant among young ACS patients. SVD was the most common coronary angiographic finding. Risk factors showed significant associations with CAD and the common presentations.
Keywords: coronary artery disease, non-ST elevation myocardial infarction, ST elevation myocardial infarction, unstable angina, acute coronary syndrome
Procedia PDF Downloads 167
1159 Renovate to nZEB of an Existing Building in the Mediterranean Area: Analysis of the Use of Renewable Energy Sources for the HVAC System
Authors: M. Baratieri, M. Beccali, S. Corradino, B. Di Pietra, C. La Grassa, F. Monteleone, G. Morosinotto, G. Puglisi
Abstract:
The energy renovation of existing buildings represents an important opportunity to increase the decarbonization and the sustainability of urban environments. In this context, the work carried out has the objective of demonstrating the technical and economic feasibility of an energy renovation of a public office building located on the island of Lampedusa in the Mediterranean Sea. By applying the Italian transpositions of European Directives 2010/31/EU and 2009/28/EC, the building has been renovated from its current energy requirement of 111.7 kWh/m² to 16.4 kWh/m². The result achieved classifies the building as an nZEB (nearly zero energy building) according to the Italian national definition. The analysis was carried out using, in parallel, quasi-stationary software normally used in the professional field and a dynamic simulation model often used in the academic world. The proposed interventions cover the components of the building’s envelope, the heating-cooling system, and the supply of energy from renewable sources. On this last point, the analysis focused on assessing two aspects that affect the supply of renewable energy. The first concerns the use of advanced logic control systems for the air conditioning units in order to increase photovoltaic self-consumption. With these adjustments, a considerable increase in photovoltaic self-consumption and a decrease in the electricity exported to the island's electricity grid have been obtained. The second point concerned the evaluation of the building's energy classification considering the real efficiency of the heating-cooling plant. Energy plants normally have lower operational efficiency than designed for multiple reasons; the resulting decrease in the energy classification of the building has been quantified.
This study represents an important example of evaluating the best interventions for the energy renovation of buildings in the Mediterranean climate and a sound description of the correct methodology for evaluating the resulting improvements.
Keywords: heat pumps, HVAC systems, nZEB renovation, renewable energy sources
Procedia PDF Downloads 453
1158 The Safety Transfer in Acute Critical Patient by Telemedicine (START) Program at Udonthani General Hospital
Authors: Wisit Wichitkosoom
Abstract:
Objective: The majority of high-risk patients (ST-elevation myocardial infarction (STEMI), acute cerebrovascular accident, sepsis, acute trauma) are admitted to district or local hospitals (on average 1-1.5 hr from Udonthani General Hospital, Northeastern province, Thailand) without proper facilities. The referral system supports early care and early management at the pre-hospital stage and prepares the patient data for the higher-level hospital. This study assessed the reduction in treatment delay achieved by pre-hospital diagnosis and referral directly to Udonthani General Hospital. Methods and results: Four district or local hospitals without proper facilities for treating very high-risk patients were serving the study region. Pre-hospital diagnoses were established with simple technologies such as LINE, SMS, telephone, and fax, following a LEAN process concept; telemedicine, with ambulance monitoring (ECG, SpO2, BT, BP) in both real-time and snapshot modes, was then administered during the transfer period for the safe-transfer concept (inter-hospital stage). The standard treatments for patients with STEMI, intracranial injury, and acute cerebrovascular accident were given. From 1 October 2012 to 30 September 2013, 892 high-risk patients transported by ambulance and transferred to Udonthani General Hospital were registered. Patients with STEMI diagnosed pre-hospitally were referred directly to Udonthani General Hospital with closed telemedicine monitoring (n=248). The mortality rate decreased from 11.69% in 2011 to 6.92% in 2012. Thirty-four patients arrested during transfer; the rate of successful CPR during transfer with telemedicine consultation was 79.41%. Conclusion: The proper innovation could be applied to the health care system. Very high-risk patients must have closed monitoring with two-way communication for a "safe transfer period".
It could be adapted to other high-risk groups as well.
Keywords: safety transfer, telemedicine, critical patients, medical and health sciences
Procedia PDF Downloads 306
1157 Projects and Limits of Memory Engineering: A Case of Lithuanian Partisan War
Authors: Mingaile Jurkute, Vilnius University
Abstract:
The memory of the Lithuanian partisan war (1944-1953) underwent extremely dramatic transformations. During this war, the image of the resistance and of the partisan was one of the key elements of Lithuanian identity. Its importance is evidenced by the extremely large legacy of songs about partisans; no other topic has generated so much folklore in Lithuania. In the Soviet years, this resistance was practically forced into oblivion. Terror and Soviet laws forced people to stop talking about the events, even in the family circle. In addition, the Soviets created their own propaganda story, reinterpreting the Lithuanian partisan war and presenting partisans as bandits who brutally tortured and murdered locals. But even in the Soviet years, the memory could be neither completely suppressed nor completely transformed into the desired shape. The analysis of fiction and cinema shows that the traumatic memory of real events rushed to the surface, thus transforming the propagandistic narrative itself. After the restoration of the Republic of Lithuania in 1990, the Lithuanian partisan war was gradually returned to a central place in Lithuanian history. After 2014, the nationalist heroic narrative about Lithuanian partisans became the central narrative of modern Lithuanian history. Nevertheless, interviews I conducted in Lithuanian villages reveal that the memory of local communities and families preserves quite different experiences that fit neither the Soviet narrative nor the heroic one. Such experiences include, for example, partisan violence against local families. This paper is about the efforts of two political ideologies (the Soviet and the Lithuanian patriotic) to use the history of the Lithuanian partisans for their own needs, and the attempts of small communities (mostly families) to resist these efforts.
The research reveals that family memory, even when opposed to aggressive state memory policies, can preserve counter-narratives by exploiting unexpected objects beyond the control of the state, such as nature and wildlife. Basically, the paper analyses the limits of the instrumentalization of memory, even by extremely aggressive political regimes.
Keywords: collective memory, post-memory, violence, military conflict, family memory
Procedia PDF Downloads 92
1156 Social Enterprise Concept in Sustaining Agro-Industry Development in Indonesia: Case Study of Yourgood Social Business
Authors: Koko Iwan Agus Kurniawan, Dwi Purnomo, Anas Bunyamin, Arif Rahman Jaya
Abstract:
The Fruters model is a concept of technopreneurship based on empowerment, in which technology research results are designed to create high value-added products and implemented as a locomotive of collaborative empowerment, so that the impact spreads widely. This model still needs to be inventoried and validated with respect to the variables that influence the business growth process. Model validation accompanied by mapping was required so that the model is applicable to small and medium enterprise (SME) agro-industry based on sustainable social business and existing real cases. This research examined the empowerment model of Yourgood, an SME, which emphasizes empowering farmers/breeders in rural areas (Cipageran, Cimahi) as well as housewives in urban areas (Bandung, West Java, Indonesia). This research reviewed literature discussing agro-industrial development associated with empowerment and the social business process and obtained a unique business model picture with the social business platform as well. Through the mapped business model, there were several advantages, such as technology acquisition, independence, capital generation, good investment growth, strengthening of collaboration, and improvement of social impacts, that can be replicated in other businesses. This research used an analytical-descriptive method consisting of qualitative analysis with a design thinking approach and quantitative analysis with the AHP (Analytic Hierarchy Process). Based on the results, the development of the enterprise's process was most highly affected by supplying farmers, with a score of 0.248 out of 1, making them the most valuable stakeholder for the existence of the enterprise.
It was followed by university (0.178), supplying farmers (0.153), business actors (0.128), government (0.100), distributor (0.092), techno-preneurship laboratory (0.069), banking (0.033), and Non-Government Organization (NGO) (0.031).
Keywords: agro-industry, small medium enterprises, empowerment, design thinking, AHP, business model canvas, social business
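The stakeholder weights above are the kind of priority vector AHP derives from pairwise comparisons. A minimal sketch of that computation follows, using a hypothetical 4x4 comparison matrix (the study's actual judgement matrices are not reported in the abstract) and the row geometric-mean approximation of the principal eigenvector, plus Saaty's consistency ratio:

```python
import numpy as np

# Hypothetical pairwise comparison matrix over four stakeholders
# (e.g., supplying farmers, university, business actors, government).
A = np.array([
    [1.0, 2.0, 3.0, 4.0],
    [1/2, 1.0, 2.0, 3.0],
    [1/3, 1/2, 1.0, 2.0],
    [1/4, 1/3, 1/2, 1.0],
])

# Geometric-mean approximation of the AHP priority vector:
# row geometric means, normalised to sum to 1.
gm = A.prod(axis=1) ** (1.0 / A.shape[1])
weights = gm / gm.sum()

# Consistency check: lambda_max from A @ w, CI = (lambda_max - n)/(n - 1),
# divided by Saaty's random index RI for n = 4 (RI = 0.90).
n = A.shape[0]
lam = float(np.mean((A @ weights) / weights))
cr = ((lam - n) / (n - 1)) / 0.90
print(weights, round(cr, 3))
```

A judgement matrix is usually considered acceptably consistent when the consistency ratio is below 0.1, as it is for this example matrix.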
Procedia PDF Downloads 170
1155 Investigation of Electrochemical, Morphological, Rheological and Mechanical Properties of Nano-Layered Graphene/Zinc Nanoparticles Incorporated Cold Galvanizing Compound at Reduced Pigment Volume Concentration
Authors: Muhammad Abid
Abstract:
The ultimate goal of this research was to produce a cold galvanizing compound (CGC) at reduced pigment volume concentration (PVC) to protect metallic structures from corrosion. The influence of the partial replacement of Zn dust by nano-layered graphene (NGr) and Zn metal nanoparticles on the electrochemical, morphological, rheological, and mechanical properties of CGC was investigated. EIS was used to explore the electrochemical nature of the coatings. The EIS results revealed that the partial replacement of Zn by NGr and Zn nanoparticles enhanced the cathodic protection at reduced PVC (4:1) by improving the electrical contact between the Zn particles and the metal substrate. A Tafel scan was conducted to confirm the cathodic behaviour of the coatings. The sample formulated solely with Zn at PVC 4:1 was found to be dominated by physical barrier characteristics rather than cathodic protection. By increasing the concentration of NGr in the formulation, the corrosion potential shifted towards the more negative side. The coating with 1.5% NGr showed the highest galvanic action at reduced PVC. FE-SEM confirmed the interconnected network of conducting particles. The coating without NGr and Zn nanoparticles at PVC 4:1 showed significant gaps between the Zn dust particles. The novelty was evidenced when micrographs showed the consistent distribution of NGr and Zn nanoparticles over the whole surface, which acted as a bridge between spherical Zn particles and provided cathodic protection at a reduced PVC. The layered structure of graphene also improved the physical shielding effect of the coatings, limiting the diffusion of electrolytes and corrosion products (oxides/hydroxides) into the coatings, as reflected by the salt spray test. The coatings showed good liquid/fluid (rheological) properties. All the coatings showed excellent adhesion, but with different strength values.
A real-time scratch resistance assessment showed all the coatings had good scratch resistance.
Keywords: protective coatings, anti-corrosion, galvanization, graphene, nanomaterials, polymers
Procedia PDF Downloads 97
1154 Banking Union: A New Step towards Completing the Economic and Monetary Union
Authors: Marijana Ivanov, Roman Šubić
Abstract:
The single rulebook, together with the Single Supervisory Mechanism and the Single Resolution Mechanism as the two main pillars of the banking union, represents an important step towards completing the Economic and Monetary Union. It should provide a consistent application of common rules and administrative standards for the supervision, recovery, and resolution of banks, with the final aim that the former practice of bail-out is replaced with a bail-in system through which bank failures will be resolved with banks' own funds, i.e., with minimal costs for taxpayers and the real economy. It has to reduce the financial fragmentation recorded in the years of crisis as the result of divergent behaviours in risk premia, lending activities, and interest rates between the core and the periphery. In addition, it should strengthen the effectiveness of monetary transmission channels, in particular the credit channels and the overflow of liquidity on the single interbank money market. However, contrary to all the positive expectations related to the future functioning of the banking union, low and unbalanced economic growth rates remain a challenge for the maintenance of financial stability in the euro area, and this problem cannot be resolved by single supervision alone. In many countries, bank assets exceed GDP several times over, and large banks are still a matter of concern because of their systemic importance for individual countries and the euro zone as a whole. The creation of the SSM and the SRM should increase the transparency of the banking system in the euro area and restore the confidence that was disturbed during the crisis. It would provide a new opportunity to strengthen economic and financial systems in the peripheral countries.
On the other hand, there is a potential threat that the future focus of the ECB, the resolution mechanism, and other relevant institutions will be strongly oriented towards large, significant banks (half of which operate in the core and most important euro area countries), while it is questionable to what extent the common resolution funds will be used to rescue less significant institutions.
Keywords: banking union, financial integration, single supervision mechanism (SSM)
Procedia PDF Downloads 472
1153 Factors That Influence Willingness to Pay for Theatre Performances: The Case of Lithuanian National Drama Theatre
Authors: Rusne Kregzdaite
Abstract:
The value of the cultural sector stems from the symbolic exploration that differentiates cultural organisations from other product or service organisations. As a result, the cultural sector has a dual impact on the socio-economic system: the economic value (expressed in terms of market relations) created influences the dynamics of the country's financial indicators, while the cultural (non-market) value indirectly contributes to the welfare of the state through changes in societal values, creativity transformations, and the cultural needs of the country. Measurement of indirect (cultural value) impacts is difficult, but in the case of the cultural sector (especially when it comes to economically inefficient state-funded culture), it helps to reveal the essential characteristics of the sector. The study aims to analyze the value of cultural organisations that is invisible in market processes and to base it on quantified calculations. This was done by analyzing the utility to the consumer, incorporating not only the price paid but also the social and cultural decision-making factors that determine the spectator's choice (time dedicated to a visit, additional costs, content, previous experiences, corporate image). This may reflect the consumer's real cost of consumption: all the costs incurred may be considered the financial equivalent of the experience with the cultural establishment. The research methodology was tested by analyzing the performing arts sector and applying the methods to the case of the Lithuanian National Drama Theatre. The empirical research consisted of a survey (more than 800 participants) of Lithuanian National Drama Theatre visitors across different performances. The willingness-to-pay and travel cost methods were used. Analysis of different performances lets us identify the factors that increase willingness to pay for a performance and affect theatre attendance.
The research stresses the importance of cultural value and the social perspective of the cultural sector and relates it to the discussions of public funding of culture.
Keywords: cultural economics, performing arts, willingness to pay, travel cost analysis, performing arts management
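The "all costs incurred" idea behind the travel cost method can be sketched as a generalised cost of one visit: ticket price plus travel expenses plus the opportunity cost of the time spent. All figures below are hypothetical, not values from the survey:

```python
# A minimal sketch of the generalised-cost idea behind the travel cost
# method: the financial equivalent of a theatre visit is the ticket price
# plus travel expenses plus the opportunity cost of time.

def full_visit_cost(ticket, km_travelled, cost_per_km,
                    hours_spent, value_of_time, extras=0.0):
    """Generalised cost of one visit, in the same currency unit throughout."""
    travel = km_travelled * cost_per_km
    time_cost = hours_spent * value_of_time   # travel + performance time
    return ticket + travel + time_cost + extras

# Hypothetical spectator: 20 EUR ticket, 30 km round trip at 0.2 EUR/km,
# 4 hours total at a 5 EUR/h opportunity cost, 10 EUR on programme/drinks.
cost = full_visit_cost(20, 30, 0.2, 4, 5, extras=10)
print(cost)  # 20 + 6 + 20 + 10 = 56.0
```

In a travel cost study, such generalised costs are then related to visit frequency to trace out a demand curve for the venue.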
Procedia PDF Downloads 91
1152 The Potential of Potato and Maize Based Snacks as Fire Accelerants
Authors: E. Duffin, L. Brownlow
Abstract:
Arson is a crime that can pose exceptional problems for forensic specialists. Its destructive nature makes evidence much harder to find, especially when arson is used to cover up another crime. There is a constant potential threat of arsonists seeking new and easier ways to set fires. Existing research in this field primarily focuses on the use of accelerants such as petrol, with less attention to other more accessible and harder-to-detect materials. This includes the growing speculation that potato and maize-based snacks may be used as fire accelerants. It was hypothesized that all 'crisp-type' snacks in foil packaging had the potential to act as accelerants and would burn readily in the various experiments. To test this hypothesis, a series of small lab-based experiments was undertaken, igniting samples of the snacks. Factors such as ingredients, shape, packaging, and calorific value were all taken into consideration. The time (in seconds) each snack spent on fire was recorded. It was found that all of the snacks tested burnt for statistically similar amounts of time (p-value of 0.0157). This was followed by a large mock real-life scenario using burning packets of crisps and car seats to investigate whether these snacks are viable tools for the arsonist. Here, three full packets of crisps were selected based on variations in burning during the lab experiments. Each was lit with a lighter to initiate burning, then placed onto a car seat to be timed and observed with video cameras. In all three cases, the fire was significant and sustained by the 200-second mark. On the basis of these data, it was concluded that potato and maize-based snacks are viable fire accelerants. They remain an effective method of starting fires whilst being cheap, accessible, non-suspicious, and hard to detect.
The results supported the hypothesis that all tested 'crisp-type' snacks in foil packaging have the potential to act as accelerants and burn readily. This study serves to raise awareness and provide a basis for research into, and prevention of, arson involving maize and potato-based snacks as fire accelerants.
Keywords: arson, crisps, fires, food
Procedia PDF Downloads 122
1151 Parking Service Effectiveness at Commercial Malls
Authors: Ahmad AlAbdullah, Ali AlQallaf, Mahdi Hussain, Mohammed AlAttar, Salman Ashknani, Magdy Helal
Abstract:
We study the effectiveness of the parking service provided at Kuwaiti commercial malls and explore potential problems and feasible improvements. Commercial malls are important to Kuwaitis as entertainment and shopping centers due to the lack of alternatives. The difficulty, and the relatively long time wasted, in finding a parking spot at a mall are real annoyances. We applied queuing analysis to one of the major malls that offers paid parking (1,040 parking spots) in addition to free parking. Patrons of the mall usually complained of traffic jams and delays when entering the paid parking (the average delay to park exceeds 15 min for about 62% of patrons, while the average time spent in the mall is about 2.6 hours). However, the analysis showed acceptable service levels at the check-in gates of the parking garage. A detailed review of vehicle movement at the gateways indicated that arriving and departing cars had to share parts of the gateway to the garage, which caused the traffic jams and delays. A simple comparison indicated that the largest commercial mall in Kuwait does not suffer such parking issues, while other smaller yet important malls do, including the one we studied. It was suggested that the well-designed inlets and outlets of that gigantic mall permit smooth parking even though its parking is totally free and the mall is the first choice of most people for entertainment and shopping. A simulation model is being developed for further analysis and verification. Simulation can overcome the mathematical difficulty of using non-Poisson queuing models. The simulation model is used to explore potential changes to the parking garage entrance layout. With the inclusion of drivers' behavior inside the parking garage, effectiveness indicators can be derived to address the economic feasibility of extending the parking capacity and increasing service levels.
Outcomes of the study are planned to be generalized, as appropriate, to other commercial malls in Kuwait.
Keywords: commercial malls, parking service, queuing analysis, simulation modeling
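The check-in-gate analysis described above is the classic multi-server queuing setting. A minimal sketch of such a calculation is the Erlang C formula for an M/M/c queue; the arrival and service rates below are hypothetical, not figures from the study:

```python
import math

def erlang_c(arrival_rate, service_rate, servers):
    """Probability an arriving car must wait (M/M/c queue) and mean wait.

    Rates are in cars per minute; `servers` is the number of check-in gates.
    """
    a = arrival_rate / service_rate          # offered load (Erlangs)
    rho = a / servers                        # utilisation, must be < 1
    if rho >= 1:
        raise ValueError("queue is unstable: utilisation >= 1")
    s = sum(a**k / math.factorial(k) for k in range(servers))
    last = a**servers / (math.factorial(servers) * (1 - rho))
    p_wait = last / (s + last)               # Erlang C probability of waiting
    wq = p_wait / (servers * service_rate - arrival_rate)  # mean wait (min)
    return p_wait, wq

# Hypothetical figures: 4 gates, cars arriving at 3 per minute,
# each gate processing 1 car per minute on average.
p_wait, wq = erlang_c(3.0, 1.0, 4)
print(round(p_wait, 3), round(wq, 2))
```

When arrivals or service times are not Poisson/exponential, as the abstract notes, such closed forms break down, which is exactly why the authors turn to simulation.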
Procedia PDF Downloads 340
1150 Investigation of Oscillation Mechanism of a Large-scale Solar Photovoltaic and Wind Hybrid Power Plant
Authors: Ting Kai Chia, Ruifeng Yan, Feifei Bai, Tapan Saha
Abstract:
This research presents a real-world power system oscillation incident in 2022, originating from a hybrid solar photovoltaic (PV) and wind renewable energy farm with a rated capacity of approximately 300 MW in Australia. The voltage and reactive power outputs recorded at the point of common coupling (PCC) oscillated in the sub-synchronous frequency region, and the oscillation was sustained for approximately five hours in the network. The reactive power oscillation gradually increased over time and reached a recorded maximum of approximately 250 MVar peak-to-peak (from inductive to capacitive). The network service provider was not able to quickly identify the location of the oscillation source because the issue was widespread across the network. After the incident, the original equipment manufacturer (OEM) concluded that the oscillation problem was caused by incorrect setting recovery of the hybrid power plant controller (HPPC) in the voltage and reactive power control loop after a loss-of-communication event. The voltage controller normally outputs a reactive power (Q) reference value to the Q controller, which controls the Q dispatch setpoint of the PV and wind plants in the hybrid farm. Meanwhile, a feed-forward (FF) configuration is used to bypass the Q controller in case of a loss of communication. Further study found that the FF control mode was still engaged when communication was re-established, which ultimately resulted in the oscillation event. However, there was no detailed explanation of why the FF control mode can cause instability in the hybrid farm, and the event was not reproduced in simulation to analyze the root cause of the oscillation. Therefore, this research aims to model and replicate the oscillation event in a simulation environment and investigate the underlying behavior of the HPPC and the consequent oscillation mechanism during the incident.
The outcome of this research will provide significant benefits to the safe operation of large-scale renewable energy generators and power networks.
Keywords: PV, oscillation, modelling, wind
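As a toy illustration (not a model of the actual plant or HPPC), a stuck feed-forward bypass acting alongside the restored Q controller can be thought of as roughly doubling the effective loop gain, which can push a loop past its stability margin. A discrete loop q[k+1] = q[k] + g(ref - q[k]) is stable for 0 < g < 2, so a healthy gain of 1.2 converges while its doubled value of 2.4 oscillates with growing amplitude:

```python
# Toy discrete control loop: q[k+1] = q[k] + gain * (ref - q[k]).
# The error is multiplied by |1 - gain| each step, so the loop converges
# for 0 < gain < 2 and oscillates divergently beyond that.

def simulate(gain, ref=1.0, steps=40):
    q, errs = 0.0, []
    for _ in range(steps):
        q = q + gain * (ref - q)
        errs.append(abs(ref - q))
    return errs

normal = simulate(1.2)    # FF correctly disengaged: error decays to zero
doubled = simulate(2.4)   # FF still engaged on top of the Q controller
print(normal[-1] < 1e-3, doubled[-1] > 1e3)
```

This is only a hypothetical gain-doubling reading of the incident; the paper's point is precisely that the true mechanism needs to be established with a detailed plant model.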
Procedia PDF Downloads 40
1149 The Opinions of Nursing Students Regarding Humanized Care through Volunteer Activities at Boromrajonani College of Nursing, Chonburi
Authors: P. Phenpun, S. Wareewan
Abstract:
This qualitative study aimed to describe opinions in relation to humanized care emerging from the volunteer activities of nursing students at Boromarajonani College of Nursing, Chonburi, Thailand. One hundred and twenty-seven second-year nursing students participated in this study. The volunteer activity model was composed of preparation, implementation, and evaluation through a learning log, in which students were encouraged to write up their daily activities after completing practical training at the healthcare center. The preparation content included three main categories: service-mindedness, analytical thinking, and client participation. The preparation process took place over three days, accumulating 20 hours in total. The implementation process was held over 10 days, for a total of 70 hours, with participants taking part in volunteer work activities at a healthcare center. A learning log was used for evaluation, and data were analyzed using content analysis. The findings were as follows. For service-mindedness, two subcategories emerged from the volunteer activities: service-mindedness towards patients and within themselves. There were three categories under service-mindedness towards patients, namely rapport, compassion, and empathetic service behaviors; and there were four categories under service-mindedness within themselves, namely self-esteem, self-value, management potential, and preparedness to provide good healthcare services. In line with analytical thinking, there were two components: analytical skill for their work and analytical thinking for themselves. There were four subcategories under analytical thinking for their work: evidence-based thinking, real-situation thinking, cause-analysis thinking, and systematic thinking, respectively.
There were four subcategories under analytical thinking for themselves: comparing themselves with their clients in ways that led to changes in their service behaviors, open-minded thinking, modernized thinking, and verifying both verbal and non-verbal cues. Lastly, there were three categories under participation: building mutual rapport, reconsidering clients' service needs, and providing useful health care information.
Keywords: humanized care service, volunteer activity, nursing student, learning log
Procedia PDF Downloads 307
1148 An Approach to Autonomous Drones Using Deep Reinforcement Learning and Object Detection
Authors: K. R. Roopesh Bharatwaj, Avinash Maharana, Favour Tobi Aborisade, Roger Young
Abstract:
Presently, there are few cases of complete automation of drones and their allied intelligence capabilities; in essence, the potential of the drone has not yet been fully utilized. This paper presents feasible methods to build an intelligent drone with smart capabilities such as self-driving and obstacle avoidance. It does this through advanced reinforcement learning techniques and performs object detection using the latest algorithms, which are capable of processing lightweight models with fast training in real-time instances. For the scope of this paper, after researching the various algorithms and comparing them, we implemented the Deep Q-Network (DQN) algorithm in the AirSim simulator. In future work, we plan to implement further advanced self-driving and object detection algorithms; we also plan to implement voice-based speech recognition for the entire drone operation, which would provide an option of speech communication between users (people) and the drone in unavoidable circumstances, thus making the drone an interactive, intelligent, robotic, voice-enabled service assistant. The proposed drone has a wide scope of usability and is applicable in scenarios such as disaster management, air transport of essentials, agriculture, manufacturing, monitoring the movement of people in public areas, and defense. Also discussed is drone communication based on satellite broadband Internet technology for faster computation and seamless, uninterrupted communication service during disasters and remote-location operations. This paper explains the feasible algorithms required to achieve this goal and serves as a reference for future researchers going down this path.
Keywords: convolution neural network, natural language processing, obstacle avoidance, satellite broadband technology, self-driving
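The decision loop DQN learns can be conveyed with a tabular Q-learning sketch: DQN, as used with AirSim in the paper, replaces the Q-table below with a deep network and a replay buffer. The corridor world, rewards, and hyperparameters here are purely illustrative:

```python
import random

# Minimal tabular Q-learning on a toy 1-D corridor: reach the goal at the
# right end as fast as possible. The update rule below is exactly what DQN
# approximates with gradient descent on a neural network.

random.seed(0)
N, GOAL = 6, 5                 # states 0..5, goal at the right end
ACTIONS = (-1, +1)             # move left / move right
Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.2

def step(s, a):
    s2 = min(max(s + a, 0), N - 1)
    r = 1.0 if s2 == GOAL else -0.01   # small step cost rewards speed
    return s2, r, s2 == GOAL

for _ in range(300):                   # training episodes
    s = 0
    for _ in range(50):
        a = random.choice(ACTIONS) if random.random() < eps \
            else max(ACTIONS, key=lambda act: Q[(s, act)])
        s2, r, done = step(s, a)
        best_next = max(Q[(s2, b)] for b in ACTIONS)
        # Q-learning update: move Q(s,a) towards r + gamma * max_a' Q(s',a')
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2
        if done:
            break

# Greedy rollout: the learned policy should head straight for the goal.
s, path = 0, [0]
while s != GOAL and len(path) < 10:
    s, _, _ = step(s, max(ACTIONS, key=lambda act: Q[(s, act)]))
    path.append(s)
print(path)
```

In the DQN setting, the state would be the drone's sensor observation, the actions its control commands, and the Q-table a convolutional network trained from replayed experience.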
Procedia PDF Downloads 252
1147 Re-Stating the Origin of Tetrapod Using Measures of Phylogenetic Support for Phylogenomic Data
Authors: Yunfeng Shan, Xiaoliang Wang, Youjun Zhou
Abstract:
Whole-genome data from two lungfish species, along with other species, present a valuable opportunity to re-investigate the longstanding debate regarding the evolutionary relationships among tetrapods, lungfishes, and coelacanths. However, the use of bootstrap support has become outdated for large-scale phylogenomic data. Without robust phylogenetic support, the phylogenetic trees become meaningless. Therefore, it is necessary to re-evaluate the phylogenies of tetrapods, lungfishes, and coelacanths using novel measures of phylogenetic support specifically designed for phylogenomic data, as the previous phylogenies were based on 100% bootstrap support. Our findings consistently provide strong evidence favoring lungfish as the closest living relative of tetrapods. This conclusion is based on high internode certainty, relative gene support, and high gene concordance factor. The evidence stems from five previous datasets derived from lungfish transcriptomes. These results yield fresh insights into the three hypotheses regarding the phylogenies of tetrapods, lungfishes, and coelacanths. Importantly, these hypotheses are not mere conjectures but are substantiated by a significant number of genes. Analyzing real biological data further demonstrates that the inclusion of additional taxa leads to more diverse tree topologies. Consequently, gene trees and species trees may not be identical even when whole-genome sequencing data is utilized. However, it is worth noting that many gene trees can accurately reflect the species tree if an appropriate number of taxa, typically ranging from six to ten, are sampled. Therefore, it is crucial to carefully select the number of taxa and an appropriate outgroup, such as slow-evolving species, while excluding fast-evolving taxa as outgroups to mitigate the adverse effects of long-branch attraction and achieve an accurate reconstruction of the species tree. 
This is particularly important as more whole-genome sequencing data becomes available.
Keywords: novel measures of phylogenetic support for phylogenomic data, gene concordance factor confidence, relative gene support, internode certainty, origin of tetrapods
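One of the support measures named above, the gene concordance factor, is simply the percentage of gene trees that contain a focal clade. A toy sketch follows, with trees represented as sets of clades (each clade a frozenset of taxa); the taxa and gene trees are illustrative, not data from the study:

```python
# Toy gene concordance factor (gCF): the percentage of gene trees whose
# splits contain a focal clade. Trees are represented here as sets of
# clades, each clade a frozenset of taxon names.

def gene_concordance(gene_trees, clade):
    hits = sum(clade in tree for tree in gene_trees)
    return 100.0 * hits / len(gene_trees)

# Focal clade: lungfish as the sister group of tetrapods.
tetrapod_lungfish = frozenset({"tetrapod", "lungfish"})
gene_trees = [
    {frozenset({"tetrapod", "lungfish"})},    # supports lungfish sister
    {frozenset({"tetrapod", "lungfish"})},
    {frozenset({"tetrapod", "coelacanth"})},  # conflicting gene tree
    {frozenset({"tetrapod", "lungfish"})},
]
gcf = gene_concordance(gene_trees, tetrapod_lungfish)
print(gcf)  # 75.0
```

Real implementations extract these bipartitions from inferred gene trees rather than hand-written sets, but the counting is the same.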
Procedia PDF Downloads 60
1146 Unsupervised Learning and Similarity Comparison of Water Mass Characteristics with Gaussian Mixture Model for Visualizing Ocean Data
Authors: Jian-Heng Wu, Bor-Shen Lin
Abstract:
The temperature-salinity relationship is one of the most important characteristics used for identifying water masses in marine research. Temperature-salinity characteristics, however, may change dynamically with geographic location and are quite sensitive to depth at the same location. When depth is taken into consideration, it is not easy to compare the characteristics of different water masses efficiently over a wide range of ocean areas. In this paper, the Gaussian mixture model is proposed to analyze the temperature-salinity-depth characteristics of water masses, based on which comparisons between water masses may be conducted. A Gaussian mixture model can model the distribution of a random vector and is formulated as a weighted sum of a set of multivariate normal distributions. The temperature-salinity-depth data for different locations are first used to train a set of Gaussian mixture models individually. The distance between two Gaussian mixture models can then be defined as the weighted sum of the pairwise Bhattacharyya distances among the Gaussian distributions. Consequently, the distance between two water masses may be measured quickly, which allows the automatic and efficient comparison of water masses over a wide area. The proposed approach not only approximates the distribution of temperature, salinity, and depth directly, without prior knowledge for assuming the regression family, but may also restrict the complexity by controlling the number of mixtures when the samples are unevenly distributed. In addition, it is critical for knowledge discovery in marine research to represent, manage, and share the temperature-salinity-depth characteristics flexibly and responsively. The proposed approach has been applied to a real-time visualization system of ocean data, which may facilitate the comparison of water masses by aggregating the data without degrading the discriminating capabilities.
This system provides an interface for querying geographic locations with similar temperature-salinity-depth characteristics interactively and for tracking specific patterns of water masses, such as the Kuroshio near Taiwan or those in the South China Sea.
Keywords: water mass, Gaussian mixture model, data visualization, system framework
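The distance described in the abstract can be sketched as follows: the Bhattacharyya distance between two multivariate Gaussians, combined into a weighted sum over all component pairs of two mixtures. The pairing-by-weight-products scheme below is one plausible reading of "weighting sum of pairwise Bhattacharyya distances" (the paper may combine components differently), and the water-mass numbers are hypothetical:

```python
import numpy as np

def bhattacharyya(mu1, cov1, mu2, cov2):
    """Bhattacharyya distance between two multivariate Gaussians."""
    cov = (cov1 + cov2) / 2.0
    diff = mu1 - mu2
    term1 = diff @ np.linalg.solve(cov, diff) / 8.0
    term2 = 0.5 * np.log(np.linalg.det(cov)
                         / np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2)))
    return term1 + term2

def gmm_distance(weights1, comps1, weights2, comps2):
    """Weighted sum of pairwise component distances between two GMMs."""
    # comps*: list of (mean, covariance) per Gaussian component
    return sum(w1 * w2 * bhattacharyya(m1, c1, m2, c2)
               for w1, (m1, c1) in zip(weights1, comps1)
               for w2, (m2, c2) in zip(weights2, comps2))

# Two hypothetical 3-D (temperature, salinity, depth) water masses,
# each modelled with a single Gaussian component for brevity.
I = np.eye(3)
mass_a = ([1.0], [(np.array([15.0, 34.5, 100.0]), I)])
mass_b = ([1.0], [(np.array([10.0, 34.8, 400.0]), I)])
print(gmm_distance(*mass_a, *mass_b))
```

The distance is zero for identical mixtures and grows as the temperature-salinity-depth distributions diverge, which is what makes it usable for fast, wide-area water-mass comparison.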
Procedia PDF Downloads 146
1145 Study on Planning of Smart GRID Using Landscape Ecology
Authors: Sunglim Lee, Susumu Fujii, Koji Okamura
Abstract:
The smart grid is a new approach to the electric power grid that uses information and communications technology to control it, providing real-time control of the direction and timing of power flow. Control devices are installed on the power lines of the electric power grid to implement the smart grid, and the number of control devices should be determined in relation to the area one control device covers and the cost associated with the control devices. One approach to determining the number of control devices is to use data on the surplus power generated by home solar generators. In current implementations, the surplus power is sent all the way to the power plant, which may cause power loss. To reduce this loss, the surplus power may instead be sent to a control device and forwarded from there to where the power is needed. Under the assumption that the control devices are installed on a lattice of equal-size squares, our goal is to find the optimal spacing between the control devices, where the power-sharing area (the area covered by one control device) is kept small to avoid power loss while being big enough that no surplus power is wasted. To achieve this goal, a simulation using landscape ecology methods was conducted on a sample area. First, an aerial photograph of the land of interest is turned into a mosaic map where each area is colored according to the ratio of the amount of power production to the amount of power consumption in the area. The amount of power consumption is estimated according to the characteristics of the buildings in the area. The power production is calculated from the total area of the roofs shown in the aerial photograph, assuming that solar panels are installed on all the roofs. The mosaic map is colored in three colors, representing producer, consumer, and neither.
We started with a mosaic map with a 100 m grid size and grew the grid size until no producer (red) grid remained. One control device is installed on each grid cell, so that the cell is the area the control device covers. As a result of this simulation, we obtained 350 m as the optimal spacing between the control devices that makes effective use of the surplus power in the sample area.
Keywords: landscape ecology, IT, smart grid, aerial photograph, simulation
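The grid-growing step of the simulation can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the assumption that the coarser grids are built by summing integer-factor blocks of the base cells, and the interpretation of a "red" cell as one whose production exceeds its consumption are all assumptions made here for concreteness.

```python
import numpy as np

def coarsen(a, f):
    """Sum f x f blocks of a square 2-D array (side assumed divisible by f)."""
    n = a.shape[0] // f
    return a[:n * f, :n * f].reshape(n, f, n, f).sum(axis=(1, 3))

def optimal_spacing(prod, cons, base_m=100, factors=(1, 2, 3, 4, 5, 6, 8)):
    """Smallest grid size (in meters) at which no block has unused surplus.

    prod / cons: per-cell power production and consumption on the base grid.
    Returns the first coarsened grid size with no net-producer ("red") block.
    """
    for f in factors:
        p, c = coarsen(prod, f), coarsen(cons, f)
        if not np.any(p > c):  # no block left whose surplus would be wasted
            return base_m * f
    return None
```

At the base resolution an isolated producer cell is "red"; once the grid is coarse enough that every block mixes producers with consumers, the surplus is absorbed locally and the search stops.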
Procedia PDF Downloads 445
1144 Silicon-Photonic-Sensor System for Botulinum Toxin Detection in Water
Authors: Binh T. T. Nguyen, Zhenyu Li, Eric Yap, Yi Zhang, Ai-Qun Liu
Abstract:
The silicon photonic sensor is an emerging class of analytical technology that uses the evanescent field to sensitively measure slight changes in the surrounding environment. The wavelength shift induced by a local refractive-index change is used as the indicator in the system. These devices can serve as sensors for a wide variety of chemical or biomolecular detection tasks in clinical and environmental fields. In our study, a system comprising a silicon-based micro-ring resonator, a microfluidic channel, and optical processing is designed and fabricated for biomolecule detection. The system is demonstrated to detect Clostridium botulinum type A neurotoxin (BoNT) in different water sources. BoNT is one of the most toxic substances known and is relatively easily obtained from a cultured bacterial source. The toxin is extremely lethal, with an LD50 of about 0.1 µg/70 kg intravenously, 1 µg/70 kg by inhalation, and 70 µg/kg orally. These factors make botulinum neurotoxins primary candidates as bioterrorism or biothreat agents. A sensing system is therefore required that can detect BoNT in a short time, with high sensitivity, and automatically. For BoNT detection, the silicon-based micro-ring resonator is modified with a linker for immobilization of the anti-botulinum capture antibody. An enzymatic reaction is employed to amplify the signal and hence gain sensitivity. As a result, a detection limit of 30 pg/mL is achieved by our silicon photonic sensor within a short period of 80 min. The sensor also shows high specificity against other botulinum serotypes. In the future, by designing a multifunctional waveguide array with a fully automatic control system, it will be simple to detect multiple biomolecules simultaneously at low concentrations within a short period. The system has great potential for online, real-time, highly sensitive, label-free rapid biomolecular detection.
Keywords: biotoxin, photonic, ring resonator, sensor
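The sensing principle can be illustrated with the standard ring-resonator relations: resonance occurs when m·λ = n_eff·L (L the ring circumference), and a small effective-index change Δn_eff shifts the resonance by Δλ ≈ λ·Δn_eff/n_g, with n_g the group index. The sketch below uses these textbook formulas; the numerical values (indices, ring radius, wavelength band) are illustrative assumptions, not the device parameters of this work.

```python
import numpy as np

def resonances(n_eff, radius_um, band=(1.50, 1.60)):
    """Resonant wavelengths (um) satisfying m * lam = n_eff * L in a band."""
    L = 2 * np.pi * radius_um                    # ring circumference (um)
    m_lo = int(np.ceil(n_eff * L / band[1]))     # smallest order in band
    m_hi = int(np.floor(n_eff * L / band[0]))    # largest order in band
    return [n_eff * L / m for m in range(m_lo, m_hi + 1)]

def resonance_shift(lam, dn_eff, n_g):
    """First-order wavelength shift for a small effective-index change."""
    return lam * dn_eff / n_g
```

Binding of the toxin to the immobilized antibody raises n_eff near the waveguide surface, and the readout tracks the resulting wavelength shift of a resonance.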
Procedia PDF Downloads 117
1143 Expression of ULK-1 mRNA in Human Peripheral Blood Mononuclear Cells from Patients with Alzheimer's Disease
Authors: Ali Bayram, Remzi Yiğiter
Abstract:
Objective: Alzheimer's disease (AD), the most common cause of dementia, is a progressive neurodegenerative disease. At present, AD is diagnosed rather late in its course. Therefore, we attempted to find peripheral biomarkers for the early diagnosis of AD. Herein, we conducted a study to investigate unc-51-like autophagy activating kinase 1 (ULK1) mRNA expression levels in peripheral blood mononuclear cells from patients with AD. Method: To determine whether ULK1 gene expression is altered in AD patients, we measured its expression in peripheral blood cells from 50 patients with AD and 50 age- and gender-matched healthy controls by the quantitative real-time PCR technique. Results: We found that ULK1 gene expression in peripheral blood cells was significantly decreased in patients with AD compared with controls (p < 0.05). Lower levels of ULK1 gene expression were significantly associated with an increased risk for AD. Conclusions: ULK1 is a serine/threonine protein kinase involved in autophagy in response to starvation. It acts upstream of the phosphatidylinositol 3-kinase PIK3C3 to regulate the formation of autophagophores, the precursors of autophagosomes. It is part of regulatory feedback loops in autophagy, acting both as a downstream effector and a negative regulator of mammalian target of rapamycin complex 1 (mTORC1) via interaction with RPTOR. It is activated via phosphorylation by AMPK and in turn regulates AMPK by mediating phosphorylation of the AMPK subunits PRKAA1, PRKAB2, and PRKAG1, thereby negatively regulating AMPK activity. It may phosphorylate ATG13/KIAA0652 and RPTOR; however, these data require additional evidence. ULK1 also plays a role early in neuronal differentiation and is required for granule cell axon formation. AD is the most common neurodegenerative disease.
Our results indicate that ULK1 gene expression is decreased in patients with AD, suggesting possible systemic involvement of autophagy dysregulation in AD.
Keywords: Alzheimer's disease, ULK1, mRNA expression, RT-PCR
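Relative expression in such qRT-PCR comparisons is commonly quantified with the 2^-ΔΔCt (Livak) method: the target gene's threshold cycle is normalized to a reference (housekeeping) gene in each group, and the fold change follows from the difference of those differences. The function below is a generic sketch of that calculation with illustrative Ct values; it is not the authors' analysis pipeline, and the choice of reference gene is an assumption here.

```python
def fold_change(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Relative expression by the 2^-ddCt (Livak) method.

    ct_target / ct_ref: threshold cycles of the gene of interest and a
    housekeeping gene in the patient sample; the *_ctrl arguments are the
    corresponding values in the control group.
    """
    dd_ct = (ct_target - ct_ref) - (ct_target_ctrl - ct_ref_ctrl)
    return 2.0 ** (-dd_ct)
```

A fold change below 1 corresponds to decreased expression in patients relative to controls, the direction reported here for ULK1.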
Procedia PDF Downloads 398
1142 A Multi-Criteria Decision Method for the Recruitment of Academic Personnel Based on the Analytical Hierarchy Process and the Delphi Method in a Neutrosophic Environment
Authors: Antonios Paraskevas, Michael Madas
Abstract:
For a university to maintain its international competitiveness in education, it is essential to recruit high-quality academic staff, as they constitute its most valuable asset. This selection plays a significant role in achieving strategic objectives, particularly by supporting a firm commitment to an exceptional student experience and to innovative, high-quality teaching and learning practices. In this vein, the appropriate selection of academic staff is a very important factor in the competitiveness, efficiency, and reputation of an academic institution. Within this framework, our work presents a comprehensive methodological concept that emphasizes the multi-criteria nature of the problem and shows how decision-makers can utilize our approach to reach an appropriate judgment. The conceptual framework introduced in this paper is built upon a hybrid neutrosophic method based on the Neutrosophic Analytical Hierarchy Process (N-AHP), which uses the theory of neutrosophic sets and is considered suitable given the significant degree of ambiguity and indeterminacy observed in the decision-making process. To this end, our framework extends the N-AHP by incorporating the Neutrosophic Delphi Method (N-DM). By applying the N-DM, we can take into account the importance of each decision-maker and their preferences per evaluation criterion. To the best of our knowledge, the proposed model is the first to apply the Neutrosophic Delphi Method to the selection of academic staff. As a case study, we applied our method to a real problem of academic personnel selection, with the main goal of enhancing the algorithm proposed in previous scholars' work and thus addressing the inherent ineffectiveness that becomes apparent in traditional multi-criteria decision-making methods when dealing with such situations.
As a further result, we show that our method demonstrates greater applicability and reliability when compared to other decision models.
Keywords: multi-criteria decision-making methods, analytical hierarchy process, Delphi method, personnel recruitment, neutrosophic set theory
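Two numeric ingredients such a hybrid N-AHP typically relies on can be sketched as follows: converting a single-valued neutrosophic judgement (T, I, F) into a crisp value via a score function, and deriving an AHP priority vector from a crisp pairwise comparison matrix. The score function shown is one common deneutrosophication formula from the literature, and the row geometric-mean method stands in for the paper's exact aggregation; neither is claimed to be the authors' formulation.

```python
import numpy as np

def score(t, i, f):
    """Crisp score of a single-valued neutrosophic judgement (T, I, F).

    One common deneutrosophication formula, mapping full truth (1, 0, 0)
    to 1 and full falsity/indeterminacy (0, 1, 1) to 0.
    """
    return (2.0 + t - i - f) / 3.0

def ahp_weights(M):
    """AHP priority vector via the row geometric-mean method.

    M is a crisp positive reciprocal pairwise comparison matrix, e.g. one
    obtained by applying `score` to neutrosophic pairwise judgements.
    """
    M = np.asarray(M, dtype=float)
    g = np.prod(M, axis=1) ** (1.0 / M.shape[0])
    return g / g.sum()
```

In the hybrid scheme, Delphi-derived decision-maker weights would then be applied when averaging the crisp judgements across experts before computing the priority vector.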
Procedia PDF Downloads 118
1141 Reconciling the Fatigue of Space Property Rights
Authors: King Kumire
Abstract:
The Outer Space Treaty and the Moon Treaty have been the backbone of space law. However, scientists, engineers, and policymakers have been silent on how human settlement on celestial bodies would change the legal dimensions of space law. Indeed, these legal space regimes should prescribe how galactic courts would deal with the question of space property ownership. On Earth, one can vindicate one's own assets; in extraterrestrial environments, this is not the case, because space law is fatigued by the non-appropriation of celestial bodies, which must be upheld. However, the recent commercialization of microgravity environments requires property ownership laws to be enacted. Space activities have evolved to the extent that it is almost possible to build communities in space. The discussions on the moon village concept will also be mentioned to give the audience clarity on the subject. It should be noted that launch providers can now take space tourists into the cosmos, and the world is busy conducting feasibility studies on how to implement space mining projects. These activities indisputably show that this research is important, because it will not only expose how the cosmic domain is constrained by existing legal frameworks but also provide a remedy for how the inevitable dilemma of property rights can be resolved through the formulation of multilateral and all-inclusive policies. The discussion will model various aspects of terrestrial property rights and the associated remedies against what can be applied and customized for use in extraterrestrial environments. Transfer of ownership in space is another area of interest, as the researcher will try to distinguish between envisaged personal and real rights in the new frontier vis-à-vis mainland transfer transactions.
The writer imagines the extent to which the concepts of servitudes, accession, prescription, commixtio (commingling), and other property templates can act as a starting point when cosmic probers move forward with the revision of orbital law. The article seeks to reconcile these ownership constraints by working towards the development of a living space common law that is elastic and supported by sustainable recommendations. A balance between transplanting terrestrial laws to the galactic arena and the need to enact new ones that complement the existing space treaties will be carefully struck.
Keywords: rights, commercialisation, ownership, sovereignty
Procedia PDF Downloads 140