Search results for: open information extraction
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 14757

13467 Endoscopic Treatment of Patients with Large Bile Duct Stones

Authors: Yuri Teterin, Lomali Generdukaev, Dmitry Blagovestnov, Peter Yartcev

Abstract:

Introduction: By "large biliary stones," we refer to stones over 1.5 cm, for which standard transpapillary lithoextraction techniques were unsuccessful. Electrohydraulic and laser contact lithotripsy under SpyGlass control have been actively applied over the last decade to improve the results of endoscopic treatment. Aims and Methods: Between January 2019 and July 2022, the N.V. Sklifosovsky Research Institute of Emergency Care treated 706 patients diagnosed with choledocholithiasis who underwent removal of stones from the common bile duct. In 57 (8.1%) of them, the use of a Dormia basket or biliary stone extraction balloon was technically unsuccessful due to the size of the stones (more than 15 mm in diameter), which required their destruction. Mechanical lithotripsy was used in 35 patients, and electrohydraulic and laser lithotripsy under the SpyGlass direct visualization system in 26 patients. Results: The efficiency of mechanical lithotripsy was 72%. Complications in this group were observed in 2 patients; in both cases, acute pancreatitis developed on day one after lithotripsy and resolved on day three with conservative therapy (Clavien-Dindo grade II). Contact lithotripsy was effective in 100% of patients, with no complications observed in this group. The bilirubin level in this group normalized on the 3rd-4th day. Conclusion: Our study showed the efficacy and safety of electrohydraulic and laser lithotripsy under SpyGlass control in a well-defined group of patients with large bile duct stones.

Keywords: contact lithotripsy, choledocholithiasis, SpyGlass, cholangioscopy, laser, electrohydraulic system, ERCP

Procedia PDF Downloads 73
13466 Kannada Handwritten Character Recognition by Edge Hinge and Edge Distribution Techniques Using Manhattan and Minimum Distance Classifiers

Authors: C. V. Aravinda, H. N. Prakash

Abstract:

In this paper, we convey the fusion and state of the art pertaining to South Indian language (SIL) character recognition systems. In the first step, the text is preprocessed and normalized so that text identification can be performed correctly. The second step involves extracting relevant and informative features. The third step implements the classification decision. The three stages involved are thus data acquisition and preprocessing, feature extraction, and classification. Here we concentrated on two techniques to obtain features: feature extraction and feature selection. The edge-hinge distribution is a feature that characterizes the changes in direction of a script stroke in handwritten text. It is extracted by means of a window that is slid over an edge-detected binary handwriting image. Whenever the central pixel of the window is on, the directions of the two edge fragments (i.e., connected sequences of pixels) emerging from this pixel are measured and stored as pairs; a joint probability distribution is obtained from a large sample of such pairs. Despite continuous effort, handwriting identification remains a challenging issue, because different approaches use different varieties of features, each with different strengths. Therefore, our study focuses on handwriting recognition based on feature selection to simplify the feature extraction task, optimize classification system complexity, reduce running time, and improve classification accuracy.
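As an illustration of the edge-hinge idea described above, the following sketch accumulates the joint histogram of direction pairs over an edge image. It is a simplified reading of the method, not the authors' code; the fragment length, number of direction bins, and the ring-probing shortcut are assumptions.

```python
import numpy as np

def edge_hinge_distribution(edges, frag_len=4, n_dirs=12):
    # Joint distribution of the directions of the two edge fragments
    # emerging from each "on" pixel (simplified: fragment endpoints are
    # probed on a ring of radius frag_len instead of being traced).
    h, w = edges.shape
    hist = np.zeros((n_dirs, n_dirs))
    for y, x in zip(*np.nonzero(edges)):
        dirs = []
        for a in np.linspace(0.0, 2.0 * np.pi, 4 * n_dirs, endpoint=False):
            yy = int(round(y + frag_len * np.sin(a)))
            xx = int(round(x + frag_len * np.cos(a)))
            if 0 <= yy < h and 0 <= xx < w and edges[yy, xx]:
                dirs.append(int(a / (2.0 * np.pi) * n_dirs) % n_dirs)
        if len(dirs) >= 2:
            d1, d2 = sorted(dirs[:2])          # the two emerging directions
            hist[d1, d2] += 1
    total = hist.sum()
    return hist / total if total else hist

edges = np.zeros((32, 32), dtype=bool)
edges[16, 4:28] = True                         # a horizontal stroke
print(edge_hinge_distribution(edges).shape)    # (12, 12) feature map
```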

Keywords: word segmentation and recognition, character recognition, optical character recognition, handwritten character recognition, South Indian languages

Procedia PDF Downloads 489
13465 Spatial and Seasonal Distribution of Persistent Organic Pollutant (Polychlorinated Biphenyl) Along the Course of Buffalo River, Eastern Cape Province, South Africa

Authors: Abdulrazaq Yahaya, Omobola Okoh, Anthony Okoh

Abstract:

Polychlorinated biphenyls (PCBs) are released through emissions or leakage from capacitors and electrical transformers, industrial chemical wastewater discharge, and careless disposal of wastes. They are toxic, semi-volatile compounds which can persist in the environment and are hence classified as persistent organic pollutants. Their presence in environmental matrices has become a global concern. In this study, we assessed the concentrations and distribution patterns of 19 polychlorinated biphenyl congeners (PCB 1, 5, 18, 31, 44, 52, 66, 87, 101, 110, 138, 141, 151, 153, 170, 180, 183, 187, and 206) at six sampling points in water along the course of the Buffalo River, Eastern Cape, South Africa. Solvent extraction followed by sulphuric acid, potassium permanganate, and silica gel cleanup was used. The analysis was done with a gas chromatograph with electron capture detector (GC-ECD). The concentrations of the 19 PCB congeners ranged from not detectable to 0.52 ppb in the summer period and to 2.5 ppb in the autumn period. These values are generally higher than the World Health Organization (WHO) maximum permissible limit. Their presence in the waterbody suggests an increase in anthropogenic activities over the seasons. In view of their volatility, the compounds are transportable over long distances by air currents away from their point of origin, putting the health of communities at risk. This suggests the need for strict regulation of the use, as well as the safe disposal, of this group of compounds in the communities.

Keywords: organic pollutants, polychlorinated biphenyls, pollution, solvent extraction

Procedia PDF Downloads 312
13464 Large Eddy Simulation of Particle Clouds Using Open-Source CFD

Authors: Ruo-Qian Wang

Abstract:

Open-source CFD has become increasingly popular and promising. Recent progress in multiphase flow enables new CFD applications, providing an economic and flexible research tool for complex flow problems. We introduce a numerical study that uses four-way-coupled Euler-Lagrangian large-eddy simulations to resolve particle cloud dynamics with OpenFOAM and CFDEM: the Navier-Stokes equations are solved numerically for the fluid-phase motion, the solid-phase motion is addressed by Lagrangian tracking of every single particle, and total momentum is conserved through fluid-solid inter-phase coupling. A grid convergence test was performed, confirming that the mesh resolution is appropriate. We then validated the code by comparing numerical results with experiments in terms of particle cloud settlement and growth. Good agreement was obtained, demonstrating the reliability of the present numerical schemes. The time and height at phase separation were defined and analyzed for a variety of initial release conditions, and empirical formulas were derived to fit the results.
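To make the four-way coupling concrete, here is a deliberately minimal one-dimensional sketch of one coupling step: fluid-to-particle drag, particle-to-fluid momentum feedback, and a crude particle-particle collision pass. OpenFOAM/CFDEM handle all of this internally; the relaxation time, viscous update, and collision rule below are illustrative assumptions, not the paper's solver.

```python
import numpy as np

def step(u, x, v, dx, dt, tau_p=0.01, nu=1e-4, g=-9.81):
    # u: fluid velocity on a 1-D grid; x, v: particle positions/velocities
    n = u.size
    feedback = np.zeros(n)
    for i in range(x.size):
        cell = min(int(x[i] / dx), n - 1)
        drag = (u[cell] - v[i]) / tau_p            # fluid -> particle
        v[i] += dt * (drag + g)
        x[i] = np.clip(x[i] + dt * v[i], 0.0, (n - 1) * dx)
        feedback[cell] -= dt * drag                # particle -> fluid
    order = np.argsort(x)                          # particle -> particle:
    for a, b in zip(order[:-1], order[1:]):        # swap velocities of
        if x[b] - x[a] < 0.5 * dx:                 # near-contact pairs
            v[a], v[b] = v[b], v[a]
    lap = np.roll(u, 1) - 2.0 * u + np.roll(u, -1) # crude viscous update
    u = u + dt * nu * lap / dx**2 + feedback
    return u, x, v

u = np.ones(64); x = np.random.rand(20) * 6.0; v = np.zeros(20)
u, x, v = step(u, x, v, dx=0.1, dt=1e-3)
```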

Keywords: four-way coupling, dredging, land reclamation, multiphase flows, oil spill

Procedia PDF Downloads 426
13463 Case Study on the Effects of Early Mobilization in the Post-Surgical Recovery of Athletes with Open Triangular Fibrocartilage Complex Repair

Authors: Blair Arthur Agero Jr., Lucia Garcia Heras

Abstract:

The triangular fibrocartilage complex (TFCC) is one of the crucial stabilizing ligaments of the wrist. The TFCC is also subject to excessive stress among performance athletes and enthusiasts, and excessive loading may lead to a partial or complete rupture that requires surgery. Recovery from an open TFCC surgical repair may take several months. Immobilization of the repaired wrist for a given period is part of all current post-surgical protocols: immobilization to prevent rotation of the forearm can last from six to eight weeks, with the wrist held in a neutral position. In all protocols reviewed, pronosupination is only initiated between the 6th and 8th week, or even later, after the cast is removed. Prolonged immobilization can cause stiffness of the wrist and hand, and the entire period of post-surgical hand therapy has an economic impact, especially for performing athletes. Conventionally, however, delayed mobilization, specifically of wrist rotation, is considered necessary to allow ligament healing. This study aims to report the effects of early mobilization of the wrist in athletes who had an open surgical repair of the TFCC. The surgery was done by the co-author, and the hand therapy was implemented by the main author. The cases documented span 2014 to 2019 and were all treated in Dubai, United Arab Emirates. All selected participants in this case study were given a follow-up questionnaire to ascertain their current condition since surgery. The respondents reported high satisfaction with the results of their treatment and reported zero re-ruptures of the TFCC, despite mobilizing and rotating the wrist in the third week post-surgery during hand therapy. A negligible number of respondents reported a limitation in their range of pronosupination. This case study suggests that early mobilization of the wrist after an open TFCC surgical repair can be more beneficial to the patient than the traditional treatment of prolonged immobilization. It should be considered, however, that the patients selected in this case study are professional performance athletes and advanced fitness enthusiasts. Athletes are known to withstand vigorous physical stress in training, which may correlate with their ability to cope better with the progressive stress applied during hand therapy. Nevertheless, this approach has its merits, and its application may be adjusted for patients with a similar injury and surgical procedure.

Keywords: hand therapy, performance athlete, TFCC repair, wrist ligament

Procedia PDF Downloads 149
13462 Process of Analysis, Evaluation and Verification of the 'Real' Redevelopment of the Public Open Space at the Neighborhood’s Stairs: Case Study of Serres, Greece

Authors: Ioanna Skoufali

Abstract:

The present study addresses adaptation to climate change, closely related to the urban heat island (UHI) phenomenon. This issue is widespread and common to different urban realities, particularly in Mediterranean cities characterized by a dense urban fabric. This work on the redevelopment of open space focuses on mitigation techniques aimed at solving local problems, such as microclimatic parameters and summer thermal comfort conditions, related to urban morphology. This quantitative analysis, evaluation, and verification survey involves a methodological elaboration applied to a real case study in Serres, with the experimental support of the ENVI-met Pro V4.1 and BioMet software, developed: i) in two phases, the ante-operam (phase a1 # 2013) and the post-operam (phase a2 # 2016); ii) in scenario A (+25% of green # 2017). The first study identifies the main intervention strategies, namely the application of cool pavements, the increase of green surfaces, the creation of water surfaces, and external fans; it also obtains the minimum results achieved by the National Program 'Bioclimatic improvement project for public open space', EPPERAA (ESPA 2007-2013), for the four environmental parameters illustrated below: TAir = 1.5 °C, TSurface = 6.5 °C, CDH = 30% and PET = 20%. In addition, the second study indicates a greater potential for improvement than the post-operam intervention, by increasing the vegetation within the district towards the SW/SE. The final objective of this in-depth design is to be transferable to comparable cases of urban regeneration, with evident effects on the efficiency of microclimatic mitigation and thermal comfort.

Keywords: cool pavements, microclimate parameters (TAir, Tsurface, Tmrt, CDH), mitigation strategies, outdoor thermal comfort (PET & UTCI)

Procedia PDF Downloads 197
13461 Modeling of Water Erosion in the M'Goun Watershed Using OpenGIS Software

Authors: M. Khal, Ab. Algouti, A. Algouti

Abstract:

Water erosion is the major process shaping the earth's surface. Modeling water erosion requires the use of GIS software and programs, whether commercial or closed source. The very high prices of commercial GIS licenses motivate users and researchers to find open-source software as relevant and applicable as proprietary GIS. The objective of this study is the modeling of water erosion and the hydrogeological and morphophysical characterization of the Oued M'Goun watershed (southern flank of the Central High Atlas), carried out with free GIS programs. Pertinent results were obtained by executing tasks and algorithms in a simple and straightforward way. The various geoscientific and geostatistical analyses of a digital elevation model (SRTM, 30 m resolution), combined with the processing and interpretation of satellite imagery, allowed us to characterize the studied region and to map the areas most vulnerable to water erosion.
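A minimal sketch of the kind of DEM-based terrain analysis such a workflow rests on, using only open-source tools (here NumPy): slope derived from a 30 m elevation grid, with a simple slope threshold as a stand-in vulnerability flag. The random grid and the 15° threshold are illustrative assumptions, not the study's actual data or criteria.

```python
import numpy as np

def slope_degrees(dem, cell=30.0):
    # Gradient of elevation per 30 m cell -> slope angle in degrees
    dzdy, dzdx = np.gradient(dem, cell)
    return np.degrees(np.arctan(np.hypot(dzdx, dzdy)))

dem = np.random.rand(100, 100) * 500.0   # stand-in for a loaded SRTM tile
slope = slope_degrees(dem)
vulnerable = slope > 15.0                # flag steep, erosion-prone cells
print(f"{vulnerable.mean():.1%} of cells flagged as vulnerable")
```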

Keywords: Central High Atlas, hydrogeology, M’Goun watershed, OpenGIS, water erosion

Procedia PDF Downloads 155
13460 Genetic Diversity of Sugar Beet Pollinators

Authors: Ksenija Taški-Ajdukovic, Nevena Nagl, Živko Ćurčić, Dario Danojević

Abstract:

Information about the genetic diversity of sugar beet parental populations is of great importance for hybrid breeding programs. The aim of this research was to evaluate genetic diversity among and within populations and lines of diploid sugar beet pollinators using SSR markers. The plant material consisted of eight pollinators originating from three USDA-ARS breeding programs and four pollinators from the Institute of Field and Vegetable Crops, Novi Sad. Depending on the presence of the self-fertility gene, the pollinators were divided into three groups: autofertile (inbred lines), autosterile (open-pollinating populations), and a group with partial presence of the self-fertility gene. A total of 40 SSR primers were screened, of which 34 were selected for the analysis of genetic diversity. A total of 129 different alleles were obtained, with a mean of 3.2 alleles per SSR primer. According to the genetic variability assessment, the number and percentage of polymorphic loci were maximal in pollinator NS1 and tester cms2, while the effective number of alleles, expected heterozygosity, and Shannon's index were highest in pollinator EL0204. Analysis of molecular variance (AMOVA) showed that 77.34% of the total genetic variation was attributed to intra-varietal variance. Correspondence analysis results were very similar to the grouping produced by the neighbor-joining algorithm; the number of groups was smaller by one, because correspondence analysis merged the IFVCNS pollinators with CZ25 into one group. Pollinators FC220, FC221, and C 51 formed the next group, while the self-fertile pollinators CR10 and C930-35 from USDA-Salinas were separated. On another branch were the self-sterile pollinators EL0204 and EL53 from USDA-East Lansing. The sterile testers cms1 and cms2 formed a separate group. These results confirm that SSR analysis can be successfully used to estimate genetic diversity within and among sugar beet populations. Since the tested pollinators differed in the presence of the self-fertility gene, their heterozygosity differed as well; it was lower in genotypes with fixed self-fertility genes. Since most of the tested populations were open-pollinated and rarely self-pollinate, high variability within the populations was expected. Cluster analysis grouped the populations according to their origin.
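For readers unfamiliar with the statistics named above, the following hedged sketch computes them from allele counts at a single SSR locus; the example counts are invented, not taken from this dataset.

```python
import numpy as np

def diversity_stats(allele_counts):
    p = np.asarray(allele_counts, dtype=float)
    p /= p.sum()                       # allele frequencies at one locus
    he = 1.0 - np.sum(p ** 2)          # expected heterozygosity
    ne = 1.0 / np.sum(p ** 2)          # effective number of alleles
    shannon = -np.sum(p * np.log(p))   # Shannon's information index
    return he, ne, shannon

he, ne, i = diversity_stats([12, 7, 3])  # e.g., three alleles observed
print(f"He={he:.3f}  Ne={ne:.2f}  I={i:.3f}")
```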

Keywords: autofertility, genetic diversity, pollinator, SSR, sugar beet

Procedia PDF Downloads 457
13459 Real Time Detection of Application Layer DDoS Attack Using Log Based Collaborative Intrusion Detection System

Authors: Farheen Tabassum, Shoab Ahmed Khan

Abstract:

Attacks on networks and critical infrastructures have been on the climb in recent years and appear set to continue. The distributed denial-of-service (DDoS) attack is the most prevalent and easiest attack on the availability of a service, owing to the easy availability of large botnets at a cheap price and the general lack of protection against these attacks. An application-layer DDoS attack is one targeted at a web server, application server, or database server. These attacks are much more sophisticated and challenging, as they get around most conventional network security devices: the attack traffic often impersonates normal traffic and cannot be recognized through network-layer anomalies. Conventional single-hosted security systems are becoming gradually less effective in the face of such complicated and synchronized multi-front attacks. To protect against such attacks and intrusions, cooperation among all network devices is essential. To this end, a collaborative intrusion detection system (CIDS) is proposed, in which multiple network devices share valuable information to identify attacks, since a single device might not be capable of sensing malevolent actions on its own; decisions can then be taken after analyzing the information collected from different sources. This novel attack detection technique helps to detect seemingly benign packets that target the availability of critical infrastructure, and the proposed solution methodology enables incident response teams to detect and react to DDoS attacks at the earliest stage, ensuring that the uptime of the service remains unaffected. Experimental evaluation shows that the proposed collaborative detection approach is more effective and efficient than previous approaches.
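The collaborative idea can be illustrated with a toy correlation step: each device reports per-source request counts from its logs, and a central node flags sources that look benign at every single device but exceed a global threshold when the logs are pooled. The report format, threshold, and IPs are invented for illustration; the paper's actual CIDS exchanges richer information.

```python
from collections import Counter, defaultdict

def correlate(device_reports, global_threshold=1000):
    totals = Counter()
    seen_on = defaultdict(set)
    for device, counts in device_reports.items():
        for src_ip, n in counts.items():
            totals[src_ip] += n
            seen_on[src_ip].add(device)
    # flag sources that cross the threshold only when logs are pooled
    return {ip: (n, sorted(seen_on[ip]))
            for ip, n in totals.items()
            if n >= global_threshold and len(seen_on[ip]) > 1}

reports = {
    "web1":  {"10.0.0.9": 600, "10.0.0.7": 40},
    "web2":  {"10.0.0.9": 550},
    "dbsrv": {"10.0.0.9": 90},
}
print(correlate(reports))   # 10.0.0.9 flagged only via collaboration
```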

Keywords: Distributed Denial-of-Service (DDoS), Collaborative Intrusion Detection System (CIDS), Slowloris, OSSIM (Open Source Security Information Management tool), OSSEC HIDS

Procedia PDF Downloads 351
13458 Endoscopic Versus Open Treatment of Carpal Tunnel Syndrome: Postoperative Complications in Patients on Anticoagulation

Authors: Arman Kishan, Mark Haft, Kiyanna Thomas, Duc Nguyen, Dawn Laporte

Abstract:

Objective: Patients receiving anticoagulation therapy frequently experience increased rates of postoperative complications. Presently, limited data exist regarding the outcomes of patients undergoing carpal tunnel release surgery (CTR) while on anticoagulation. Our objective is to examine and compare the occurrence of complications in patients on anticoagulation who underwent either endoscopic CTR (ECTR) or open CTR (OCTR) for carpal tunnel syndrome (CTS). Methods: The TriNetX database was used to retrospectively identify patients who underwent OCTR or ECTR while concurrently on anticoagulation. Demographic data, medical comorbidities, and complication rates were analyzed. We used multivariable analysis to identify differences in postoperative complications, including wound infection within 90 days, wound dehiscence within 90 days, and intraoperative median nerve injury, between the two surgical methods in patients on anticoagulation. Results: A total of 10,919 carpal tunnel syndrome patients on anticoagulation were included in the study, with 9,082 and 1,837 undergoing OCTR and ECTR, respectively. Among patients on anticoagulation, those undergoing ECTR exhibited a significantly lower occurrence of 90-day wound infection (p < 0.001) and nerve injury (p < 0.001) compared to those who underwent OCTR. However, there was no statistically significant difference in the risk of 90-day wound dehiscence between the two groups (p = 0.323). Conclusion: In prior studies, ECTR demonstrated reduced rates of postoperative complications compared to OCTR in the general population. Our study demonstrates that among patients on anticoagulation, those undergoing ECTR experienced a significantly lower incidence of 90-day wound infection and nerve injury, with risk reductions of 35% and 40%, respectively. These findings support the use of ECTR as a preferred surgical method for patients with CTS who are on anticoagulation therapy.

Keywords: endoscopic treatment of carpal tunnel syndrome, open treatment of carpal tunnel syndrome, postoperative complications in patients on anticoagulation, carpal tunnel syndrome

Procedia PDF Downloads 66
13457 Health Information Needs and Utilization of Information and Communication Technologies by Medical Professionals in a Northern City of India

Authors: Sonika Raj, Amarjeet Singh, Vijay Lakshmi Sharma

Abstract:

Introduction: In the 21st century, the revolution in information and communication technologies (ICTs) has brought phenomenal development in the quality and quantity of knowledge in medical science. Access to relevant information is therefore critical for physicians delivering effective healthcare services to patients. This study was conducted to assess the information needs and attitudes of medical professionals; to determine the sources and channels of information they use; and to ascertain their current usage of ICTs and the barriers they face in using ICTs to access health information. Methodology: This descriptive cross-sectional study was carried out in 2015 on a hundred medical professionals working in the public and private sectors of Chandigarh. The study used both quantitative and qualitative methods for data collection. A semi-structured questionnaire and an interview schedule were used to collect data on information-seeking needs, access to ICTs, and barriers to healthcare information access. Quantitative data were analyzed using SPSS-16, and qualitative data were analyzed using a thematic approach. Results: The most preferred sources of healthcare information were the internet (85%), trainings (61%), and communication with colleagues (57%). Respondents wanted information on new drug therapy and the latest developments in their respective fields. All had access to a computer, but almost half assessed their computer knowledge as average, and only 3% had received training in computer use. Educational status (p=0.004), place of work (p=0.004), number of years in the job (p=0.004), and sector of the job (p=0.04) were significantly associated with doctors' active search for information. The major themes that emerged from the interviews were: needs, types, and sources of healthcare information; exchange of information among different levels of healthcare providers; usage of ICTs to obtain and share information; barriers to access of healthcare information; and the quality of health information materials and involvement in their development process. Conclusion and Recommendations: Medical professionals need information in the due course of their work; however, their information needs were not being adequately met. There should be training for professionals in internet skills, and a course on bioinformatics should be incorporated into medical curricula. A policy framework must be formulated to encourage and promote the use of ICTs as tools for health information access and dissemination.

Keywords: health information, ICTs, medical professionals, qualitative

Procedia PDF Downloads 344
13456 Variation of Manning’s Coefficient in a Meandering Channel with Emergent Vegetation Cover

Authors: Spandan Sahu, Amiya Kumar Pati, Kishanjit Kumar Khatua

Abstract:

Vegetation plays a major role in determining the flow parameters in an open channel and enhances the aesthetic appearance of revetments. The major types of vegetation in rivers typically comprise herbs, grasses, weeds, and trees. Vegetation in an open channel usually consists of aquatic plants that are completely submerged, partially submerged, or floating. The presence of vegetation has both benefits and drawbacks: the major benefit is that aquatic plants reduce soil erosion, while the obvious drawbacks are that they retard the flow of water and reduce the hydraulic capacity of the channel. The degree to which the flow parameters are affected depends upon the density of the vegetation, the degree of submergence, the vegetation pattern, and the species. Vegetation in an open channel provides resistance to flow, which in turn provides a setting in which to study the trends of flow parameters in a channel with vegetative growth. In this paper, an experiment was conducted on a meandering channel with a sinuosity of 1.33 and rigid vegetation cover to investigate the effect on flow parameters and the variation of Manning's n with the denseness of vegetation, the vegetation pattern, and the submergence condition. Measurements were carried out at four different cross-sections: two on the trough portion of the meanders and two on the crest portion. The analytical solution of Shiono and Knight (SKM) for the lateral distributions of depth-averaged velocity and bed shear stress is taken into account, and dimensionless eddy viscosity and bed friction are incorporated to modify the SKM for more accurate results. A mathematical model was formulated for comparative analysis with the results obtained from the Shiono-Knight method.
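For orientation, here is a hedged finite-difference sketch of the SKM lateral profile on a single flat-bed panel, neglecting the secondary-flow term. Writing q = Ud², the Shiono-Knight balance g·H·S₀ − (f/8)·q + ½·λ·H²·√(f/8)·d²q/dy² = 0 becomes linear and solvable as a tridiagonal system; all channel parameters below are illustrative, not the experiment's values.

```python
import numpy as np

def skm_profile(width=10.0, H=1.0, S0=1e-3, f=0.02, lam=0.07, n=101, g=9.81):
    # Solve g*H*S0 - (f/8)*q + 0.5*lam*H^2*sqrt(f/8)*q'' = 0 for q = Ud^2
    dy = width / (n - 1)
    a = 0.5 * lam * H**2 * np.sqrt(f / 8.0) / dy**2
    A = np.zeros((n, n))
    b = np.full(n, -g * H * S0)
    for i in range(1, n - 1):
        A[i, i - 1] = A[i, i + 1] = a
        A[i, i] = -2.0 * a - f / 8.0
    A[0, 0] = A[-1, -1] = 1.0          # no-slip sidewalls: Ud = 0
    b[0] = b[-1] = 0.0
    q = np.linalg.solve(A, b)
    return np.sqrt(np.clip(q, 0.0, None))   # lateral profile Ud(y)

Ud = skm_profile()
print(f"mid-channel depth-averaged velocity ~ {Ud.max():.2f} m/s")
```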

Keywords: bed friction, depth averaged velocity, eddy viscosity, SKM

Procedia PDF Downloads 134
13455 Automatic Multi-Label Image Annotation System Guided by Firefly Algorithm and Bayesian Method

Authors: Saad M. Darwish, Mohamed A. El-Iskandarani, Guitar M. Shawkat

Abstract:

Nowadays, the amount of available multimedia data is continuously on the rise, and finding a required image is a challenging task for an ordinary user. Content-based image retrieval (CBIR) computes relevance based on the visual similarity of low-level image features such as color and texture. However, there is a gap between low-level visual features and the semantic meanings required by applications. The typical way of bridging this semantic gap is automatic image annotation (AIA), which extracts semantic features using machine learning techniques. In this paper, a multi-label image annotation system guided by the firefly algorithm and a Bayesian method is proposed. First, images are segmented using the maximum intra-cluster variance criterion and the firefly algorithm, a swarm-based approach with high convergence speed and low computational cost that searches for the optimal multiple thresholds. Feature extraction techniques based on color features and region properties are applied to obtain the representative features. The images are then annotated using a translation model based on the Net Bayes system, which is efficient for multi-label learning, with high precision and low complexity. Experiments were performed using the Corel database. The results show that the proposed system outperforms traditional ones for automatic image annotation and retrieval.
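A hedged sketch of the segmentation step as described: a firefly search over two gray-level thresholds that maximizes Otsu-style between-class variance on an image histogram. Population size, attractiveness parameters, and the two-threshold choice are assumptions for illustration, not the paper's settings.

```python
import numpy as np

def between_class_variance(hist, thresholds):
    p = hist / hist.sum()
    levels = np.arange(p.size)
    cuts = [0, *sorted(int(t) for t in thresholds), p.size]
    mu_T = (p * levels).sum()
    var = 0.0
    for lo, hi in zip(cuts[:-1], cuts[1:]):
        w = p[lo:hi].sum()
        if w > 0:
            mu = (p[lo:hi] * levels[lo:hi]).sum() / w
            var += w * (mu - mu_T) ** 2
    return var

def firefly_thresholds(hist, n_ff=15, n_iter=50, beta0=1.0, gamma=0.01,
                       alpha=2.0, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(1, 254, size=(n_ff, 2))       # two thresholds per firefly
    for _ in range(n_iter):
        bright = np.array([between_class_variance(hist, x) for x in X])
        for i in range(n_ff):
            for j in range(n_ff):
                if bright[j] > bright[i]:          # move i toward brighter j
                    beta = beta0 * np.exp(-gamma * np.sum((X[i] - X[j]) ** 2))
                    X[i] += beta * (X[j] - X[i]) + alpha * rng.uniform(-1, 1, 2)
        X = np.clip(X, 1, 254)
    best = max(X, key=lambda x: between_class_variance(hist, x))
    return sorted(int(t) for t in best)

hist = np.histogram(np.random.randint(0, 256, 10000), bins=256)[0]
print(firefly_thresholds(hist))                    # e.g., [85, 170]
```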

Keywords: feature extraction, feature selection, image annotation, classification

Procedia PDF Downloads 583
13454 Optimization Based Extreme Learning Machine for Watermarking of an Image in DWT Domain

Authors: Ram Pal Singh, Vikash Chaudhary, Monika Verma

Abstract:

In this paper, we propose an optimization-based extreme learning machine (ELM) for watermarking the B-channel of a color image in the discrete wavelet transform (DWT) domain. ELM, a regularization algorithm, is based on generalized single-hidden-layer feed-forward neural networks (SLFNs); the hidden-layer parameters, generally called the feature mapping in the context of ELM, need not be tuned. This paper shows the watermark embedding and extraction processes with the help of ELM, and the results are compared with machine learning models already used for watermarking. A cover image is divided into a suitable number of non-overlapping blocks of the required size, and the DWT is applied to each block to transform it into the low-frequency sub-band domain. ELM provides a unified learning platform whose feature mapping, that is, the mapping between the hidden layer and the output layer of the SLFN, is used for watermark embedding and extraction in the cover image. ELM has widespread application, from binary and multiclass classification to regression and function estimation. Unlike SVM-based algorithms, which achieve suboptimal solutions with high computational complexity, ELM can provide better generalization performance with very low complexity. The efficacy of the optimization-based ELM algorithm is measured using quantitative and qualitative parameters on the watermarked image, even when the image is subjected to different types of geometric and conventional attacks.
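The core of an ELM is small enough to sketch: a fixed random hidden layer followed by a closed-form, ridge-regularized least-squares solve for the output weights. This is a generic, hedged illustration of the ELM family, not the paper's tuned watermarking network; the sizes and the tanh activation are assumptions.

```python
import numpy as np

class ELM:
    def __init__(self, n_hidden=200, reg=1e-3, seed=0):
        self.n_hidden, self.reg = n_hidden, reg
        self.rng = np.random.default_rng(seed)

    def fit(self, X, T):
        d = X.shape[1]
        self.W = self.rng.standard_normal((d, self.n_hidden))  # fixed, random
        self.b = self.rng.standard_normal(self.n_hidden)
        H = np.tanh(X @ self.W + self.b)            # hidden-layer feature map
        # beta = (H'H + reg*I)^-1 H'T  -- regularized closed-form solution
        self.beta = np.linalg.solve(
            H.T @ H + self.reg * np.eye(self.n_hidden), H.T @ T)
        return self

    def predict(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta

X = np.random.rand(100, 8)
T = np.sin(X.sum(axis=1, keepdims=True))            # toy regression target
print(ELM().fit(X, T).predict(X[:3]).ravel())
```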

Keywords: BER, DWT, extreme learning machine (ELM), PSNR

Procedia PDF Downloads 307
13453 The Extraction of Sage Essential Oil and the Improvement of Sleeping Quality for Female Menopause by Sage Essential Oil

Authors: Bei Shan Lin, Tzu Yu Huang, Ya Ping Chen, Chun Mel Lu

Abstract:

This research is divided into two parts. The first part adopts supercritical carbon dioxide fluid extraction to extract sage (Salvia officinalis) essential oil and examines the differences when the procedure is run under different pressure conditions, as well as the composition of the extracted oil. The second part concerns the effect of aromatherapy with the extracted sage essential oil on sleeping quality for women in menopause. The antioxidant capacity of the extracted sage substance was tested by DPPH radical inhibition, and the extracted components were analyzed by gas chromatography-mass spectrometry. The two pressure conditions gave different results: at 3000 psi, the extract showed a stronger antioxidant capacity (IC50 = 180.94 mg/L) than the extract obtained at 1800 psi (IC50 = 657.43 mg/L), and the extraction yield at 3000 psi was 1.05%, higher than the 0.68% obtained at 1800 psi. From the experimental data, we also conclude that the substance extracted at 3000 psi contains more constituents than the one extracted at 1800 psi. The main overlapping constituents are cyclic ethers, flavonoids, and terpenes. Cyclic ethers and flavonoids have soothing and calming functions and can be applied to relieve cramps and alleviate menopause disorders. The second part of the research applies the extracted sage essential oil in aromatherapy for women in menopause and discusses the improvement in sleeping quality. The study adopts the approach of Swedish upper-back massage, evaluates sleeping quality with the Pittsburgh Sleep Quality Index, and detects changes with a heart rate variability apparatus. The experimental group, in which the extracted sage essential oil was used in the aromatherapy, showed better results in SDNN, low-frequency, and high-frequency measures than the control group. According to the statistical analysis of the Pittsburgh Sleep Quality Index, the intervention improved sleep quality, showing that the extracted sage essential oil has a significant effect on increasing parasympathetic nerve activity and is able to improve sleeping quality for women in menopause.

Keywords: supercritical carbon dioxide fluid extraction, Salvia officinalis, aromatherapy, Swedish massage, Pittsburgh sleep quality index, heart rate variability, parasympathetic nerves

Procedia PDF Downloads 116
13452 Speech Detection Model Based on Deep Neural Network Classifier for Speech Emotion Recognition

Authors: A. Shoiynbek, K. Kozhakhmet, P. Menezes, D. Kuanyshbay, D. Bayazitov

Abstract:

Speech emotion recognition (SER) has received increasing research interest in recent years. Most research work has used emotional speech collected under controlled conditions, recorded by actors imitating and artificially producing emotions in front of a microphone. There are four issues with that approach: (1) the emotions are not natural, meaning that machines learn to recognize fake emotions; (2) the emotional recordings are limited in quantity and poor in their variety of speaking; (3) SER is language-dependent; and (4) consequently, each time researchers want to start work on SER, they need to find a good emotional database in their language. In this paper, we propose an approach to create an automatic tool for speech emotion extraction based on facial emotion recognition, and we describe the sequence of actions of the proposed approach. One of the first steps in this sequence is speech detection. The paper gives a detailed description of a speech detection model based on a fully connected deep neural network for the Kazakh and Russian languages. Despite the high speech detection results for Kazakh and Russian, the described process is suitable for any language. To illustrate the working capacity of the developed model, we performed an analysis of speech detection and extraction on real tasks.
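As a hedged illustration of such a front end, the sketch below frames audio into MFCC feature vectors and feeds them to a small fully connected network for frame-level speech/non-speech decisions. librosa and PyTorch are assumed dependencies, and the layer sizes, 16 kHz rate, and 13 coefficients are illustrative choices, not the paper's architecture.

```python
import numpy as np
import librosa
import torch
import torch.nn as nn

# one second of stand-in audio; in practice load a wav via librosa.load(path)
y = np.random.randn(16000).astype(np.float32)
mfcc = librosa.feature.mfcc(y=y, sr=16000, n_mfcc=13)    # (13, n_frames)
frames = torch.tensor(mfcc.T, dtype=torch.float32)       # one row per frame

model = nn.Sequential(             # frame-level speech / non-speech classifier
    nn.Linear(13, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 2),
)
logits = model(frames)             # untrained here; train with cross-entropy
speech_mask = logits.argmax(dim=1) # 1 = speech frame, 0 = non-speech
print(speech_mask.shape)
```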

Keywords: deep neural networks, speech detection, speech emotion recognition, Mel-frequency cepstrum coefficients, collecting speech emotion corpus, collecting speech emotion dataset, Kazakh speech dataset

Procedia PDF Downloads 93
13451 From Bureaucracy to Organizational Learning Model: An Organizational Change Process Study

Authors: Vania Helena Tonussi Vidal, Ester Eliane Jeunon

Abstract:

This article analyzes the process of organizational change from bureaucratic management to the learning organization model. The theoretical framework was based on the Beer and Nohria (2001) model, identified as E and O Theory. Based on this theory, the empirical research was conducted around six key dimensions: goal, leadership, focus, process, reward systems, and consulting. We used a case study of an educational institution located in Barbacena, Minas Gerais. This traditional center of technical knowledge long adopted a bureaucratic style of management. After many changes in its business model, such as the creation of graduate and undergraduate courses, it decided to make a deep change in its management model, which is the focus of our research. Data were collected through semi-structured interviews with the director, managers, and course supervisors. The analysis followed the procedures of the Collective Subject Discourse (CSD) method developed by Lefèvre & Lefèvre (2000). Results showed the incremental movement of the management model toward a learning organization. Many impacts could be seen. Negative factors included resistance from staff, poor information about the planning and implementation process, and old politics persisting inside the new model. Positive impacts included new human resources procedures, mainly related to managerial skills and empowerment; structural downsizing; open discussion channels; and an integrated information system. The process is still under construction, and strong efforts are now being made to secure manager and employee commitment to the process.

Keywords: bureaucracy, organizational learning, organizational change, E and O theory

Procedia PDF Downloads 431
13450 Information Technologies in Human Resources Management - Selected Examples

Authors: A. Karasek

Abstract:

The rapid growth of information technologies (IT) has had a huge influence on enterprises and has contributed to IT's promotion and increasingly extensive use within them. Information technologies have to a large extent determined the processes taking place in an enterprise; moreover, IT development has brought the need to adopt a brand new approach to human resources management. The use of IT in human resource management (HRM) is of high importance due to the growing role of information and information technologies. The aim of this paper is to evaluate the use of information technologies in human resources management in enterprises. These practices are presented in the following areas: recruitment and selection, development and training, employee assessment, motivation, talent management, and personnel service. The results of the conducted survey show a diversity of solutions applied in particular areas of human resource management. In the future, further development in this area should be expected, as well as integration of individual HRM areas, growing mobile-enabled HR processes, and their transfer into the cloud. The presented IT solutions applied in HRM are highly innovative, which is of great significance due to their possible implementation in other enterprises.

Keywords: e-HR, human resources management, HRM practices, HRMS, information technologies

Procedia PDF Downloads 345
13449 Comparison of the Polyphenolic Profile of a Berry from Two Different Sources, Using an Optimized Extraction Method

Authors: G. Torabian, A. Fathi, P. Valtchev, F. Dehghani

Abstract:

The superior polyphenol content of Sambucus nigra berries has high health potential for the production of nutraceutical products. Numerous factors influence the polyphenol content of the final products, including the berries' source and the subsequent processing steps. The aim of this study is to compare the polyphenol content of berries from two different sources and to optimise the polyphenol extraction process from elderberries. Berries from source B had more acceptable physical properties than those from source A; a single berry from source B was double the size and weight (both wet and dry) of a source A berry. Despite the appropriate physical characteristics of source B berries, their polyphenolic profile was inferior: source A berries had a 2.3-fold higher total anthocyanin content and nearly twice the total phenolic and total flavonoid content of source B. Moreover, the results show that almost 50 percent of the phenolic content of the berries is entrapped within the skin and pulp and potentially cannot be extracted by press juicing. To address this challenge and increase the total polyphenol yield of the extract, we used a cold-shock blade grinding method to break the cell walls. The results show that using cultivars with higher phenolic content, as well as using the whole fruit, including juice, skin, and pulp, can increase the polyphenol yield significantly and thus may boost the potential of elderberries as therapeutic products.

Keywords: different sources, elderberry, grinding, juicing, polyphenols

Procedia PDF Downloads 290
13448 Object-Scene: Deep Convolutional Representation for Scene Classification

Authors: Yanjun Chen, Chuanping Hu, Jie Shao, Lin Mei, Chongyang Zhang

Abstract:

Traditional image classification is based on encoding schemes (e.g., Fisher Vector, Vector of Locally Aggregated Descriptors) over low-level image features (e.g., SIFT, HOG). Compared to these low-level local features, the deep convolutional features obtained at the mid-level layers of convolutional neural networks (CNNs) carry richer information but lack geometric invariance. In scene classification, objects are scattered with different sizes, categories, layouts, and numbers, so it is crucial to find the distinctive objects in a scene as well as their co-occurrence relationships. In this paper, we propose a method that takes advantage of both deep convolutional features and the traditional encoding scheme while considering object-centric and scene-centric information. First, to exploit object-centric and scene-centric information, two CNNs trained separately on the ImageNet and Places datasets are used as pre-trained models to extract deep convolutional features at multiple scales, producing dense local activations. By analyzing the performance of different CNNs at multiple scales, we find that each CNN works better in a different scale range; a scale-wise CNN adaptation is reasonable, since objects in a scene occur at their own specific scales. Second, a Fisher kernel is applied to aggregate a global representation at each scale, and these are merged into a single vector by a post-processing method called scale-wise normalization. The essence of the Fisher Vector lies in the accumulation of first- and second-order differences; hence, scale-wise normalization followed by average pooling balances the influence of each scale, since a different number of features is extracted at each scale. Third, the Fisher Vector representation based on the deep convolutional features is followed by a linear Support Vector Machine, a simple yet efficient way to classify the scene categories. Experimental results show that scale-specific feature extraction and normalization with CNNs trained on object-centric and scene-centric datasets boost the results from 74.03% up to 79.43% on MIT Indoor67 when only two scales are used (compared to results at a single scale). The result is comparable to state-of-the-art performance, which suggests that the representation can be applied to other visual recognition tasks.
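The multi-scale extraction and scale-wise normalization steps can be sketched as follows: a pre-trained CNN is run on several rescaled copies of the image, the convolutional maps at each scale are pooled and L2-normalized, and the per-scale vectors are averaged. This is a hedged simplification: the paper aggregates with Fisher Vectors over dense activations and uses both ImageNet- and Places-trained CNNs, while this sketch substitutes average pooling and the single ImageNet ResNet-18 that ships with torchvision.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

backbone = models.resnet18(weights="IMAGENET1K_V1")
features = torch.nn.Sequential(*list(backbone.children())[:-2]).eval()

def multi_scale_descriptor(img, scales=(224, 320, 448)):
    per_scale = []
    with torch.no_grad():
        for s in scales:
            x = F.interpolate(img, size=(s, s), mode="bilinear",
                              align_corners=False)
            fmap = features(x)                      # (1, C, h, w) activations
            v = fmap.mean(dim=(2, 3)).squeeze(0)    # pool dense local features
            per_scale.append(F.normalize(v, dim=0)) # scale-wise normalization
    return torch.stack(per_scale).mean(dim=0)       # merge into one vector

img = torch.rand(1, 3, 224, 224)                    # stand-in image tensor
print(multi_scale_descriptor(img).shape)            # torch.Size([512])
```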

Keywords: deep convolutional features, Fisher Vector, multiple scales, scale-specific normalization

Procedia PDF Downloads 325
13447 The Systems Biology Verification Endeavor: Harness the Power of the Crowd to Address Computational and Biological Challenges

Authors: Stephanie Boue, Nicolas Sierro, Julia Hoeng, Manuel C. Peitsch

Abstract:

Systems biology relies on large numbers of data points and sophisticated methods to extract biologically meaningful signal and mechanistic understanding. For example, analyses of transcriptomics and proteomics data give insights into the molecular differences between tissues exposed to diverse stimuli or test items. Whereas the interpretation of endpoints that specifically measure a mechanism is relatively straightforward, the interpretation of big data is more complex and benefits from comparing results obtained with diverse analysis methods. The sbv IMPROVER project was created to implement solutions for verifying systems biology data, methods, and conclusions. Computational challenges leveraging the wisdom of the crowd allow the benchmarking of methods for specific tasks, such as signature extraction and sample classification. Four challenges have already been conducted successfully and confirmed that the aggregation of predictions often leads to better results than individual predictions, and that methods perform best in specific contexts. Whenever the scientific question of interest has no gold standard but may greatly benefit from the scientific community coming together to discuss approaches and results, datathons are set up. The inaugural sbv IMPROVER datathon was held in Singapore on 23-24 September 2016. It allowed bioinformaticians and data scientists to consolidate their ideas and work on the most promising methods as teams, after having initially reflected on the problem on their own. The outcome is a set of visualization and analysis methods that will be shared with the scientific community via the Garuda platform, an open connectivity platform that provides a framework to navigate through different applications, databases, and services in biology and medicine. We will present the results we obtained when analyzing data with our network-based method and introduce a datathon that will take place in Japan to encourage the analysis of the same datasets with other methods, allowing conclusions to be consolidated.

Keywords: big data interpretation, datathon, systems toxicology, verification

Procedia PDF Downloads 273
13446 Using Information Theory to Observe Natural Intelligence and Artificial Intelligence

Authors: Lipeng Zhang, Limei Li, Yanming Pearl Zhang

Abstract:

This paper takes a philosophical view as its axiom and reveals the relationship between information theory and natural and artificial intelligence under real-world conditions. It also derives the relationship between natural intelligence and nature. According to the communication principle of information theory, natural intelligence can be divided into a real part and a virtual part. Based on the information-theoretic principle that information does not increase, a restriction mechanism for the creativity of natural intelligence is derived. This restriction mechanism reveals the limits of natural intelligence and artificial intelligence, and the paper thus provides a new angle from which to observe both.

Keywords: natural intelligence, artificial intelligence, creativity, information theory, restriction of creativity

Procedia PDF Downloads 379
13445 King versus God: An Introduction to Dhanujatra of Odisha

Authors: Kailash Pattanaik, Giribala Mohanty

Abstract:

Dhanujatra is a folk performance of Odisha, India, that transports participants and onlookers alike into a mythical atmosphere for eleven days and nights. In this performance, the whole town becomes the stage. The uniqueness of the festival lies in the fact that all the episodes of this Jatra are enacted in different parts of the town, making it the largest open-air theatre in the world. The paper emphasizes the uniqueness and the impact of this performance. Different episodes are enacted at different places in the region, so Dhanujatra does not confine itself to a fixed, static, or dead stage, as in the case of other Jatras; it rather becomes the stage for the world at large. For that reason, it is said that the world's biggest open-air theatre is held in the tiny town called Bargarh in the western part of Odisha. The play moves sequentially day after day, and the audience moves from locale to locale; here it is analogous to the Ramleela of Ramnagar, Benares. Parallel enactment is a significant feature of this Jatra. From the second day, parallel performances take place in both Bargarh town and Ambapalli, epitomising 'Mathura' and 'Gokul' respectively. Krishna is born in the prison on the second day of the Jatra; Basudeb exchanges the child with Nanda's newborn baby in Gokul. In this way, parallel performances go on in both Mathura and Gokul. The ordinary persons who act as mythological characters, historical heroes, or legendary saints and bhaktas in a Jatra in the evening lead the lives of ordinary persons during the daytime, and their dramatic personas are shed with the end of the Jatra. By contrast, the persons who act as the main characters of Dhanujatra are exceptions in this regard: they are identified with the characters they enact for the whole period of the performance, both in the evenings and during the daytime. It is worth mentioning that folk performances generally offer ample scope to touch upon, interpret, comment on, or satirize issues of contemporary relevance with the purpose of conveying a specific message; Dhanujatra is no exception.

Keywords: folk performance, Jatra, parallel enactment, open-air stage, Odisha

Procedia PDF Downloads 282
13444 Thermochemical Modelling for Extraction of Lithium from Spodumene and Prediction of Promising Reagents for the Roasting Process

Authors: Allen Yushark Fosu, Ndue Kanari, James Vaughan, Alexandre Changes

Abstract:

Spodumene is a lithium-bearing mineral of great interest due to the increasing demand for lithium in emerging electric and hybrid vehicles. The conventional method of processing the mineral requires an unavoidable thermal transformation of the α-phase to the β-phase, followed by roasting with suitable reagents to produce lithium salts for downstream processes. The selection of an appropriate roasting reagent is key to the success of the process and overall lithium recovery. Several studies have been conducted to identify reagents that improve process efficiency, establishing sulfation, alkaline roasting, chlorination, fluorination, and carbonizing as the methods of lithium recovery from the mineral. HSC Chemistry is thermochemical software that can be used to model the feasibility of metallurgical processes and predict possible reaction products prior to experimental investigation. The software was employed to investigate and explain the characteristics of the various reagents employed in the literature for spodumene roasting up to 1200°C. The simulation indicated that all reagents used for sulfation and alkaline roasting were feasible in the direction of lithium salt production. Chlorination was only feasible when Cl2 and CaCl2 were used as chlorination agents, but not with NaCl or KCl. Depending on the kind of lithium salt formed during carbonizing and fluorination, the process was either spontaneous or nonspontaneous throughout the temperature range investigated. The HSC software was further used to simulate and predict some promising reagents that may be equally good for roasting the mineral for efficient lithium extraction but have not yet been considered by researchers.
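The feasibility screening described here boils down to the sign of the Gibbs free energy over the roasting temperature range. A hedged sketch of that check follows; the enthalpy and entropy values are placeholders, not HSC Chemistry data for any actual spodumene reaction.

```python
def gibbs_kJ(dH_kJ, dS_J_per_K, T_kelvin):
    # dG = dH - T*dS, with dS converted from J/(mol*K) to kJ/(mol*K)
    return dH_kJ - T_kelvin * dS_J_per_K / 1000.0

dH, dS = -150.0, 45.0               # placeholder reaction values
for T in range(298, 1474, 200):     # ~25 C up to ~1200 C
    dG = gibbs_kJ(dH, dS, T)
    verdict = "spontaneous" if dG < 0 else "non-spontaneous"
    print(f"T = {T:4d} K   dG = {dG:7.1f} kJ/mol   {verdict}")
```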

Keywords: thermochemical modelling, HSC chemistry software, lithium, spodumene, roasting

Procedia PDF Downloads 154
13443 An Unsupervised Domain-Knowledge Discovery Framework for Fake News Detection

Authors: Yulan Wu

Abstract:

With the rapid development of social media, the issue of fake news has gained considerable prominence, drawing the attention of both the public and governments. The widespread dissemination of false information poses a tangible threat across multiple domains of society, including politics, the economy, and health. However, much research has concentrated on supervised models trained within specific domains, and their effectiveness diminishes when they are applied to identify fake news across multiple domains. To address this, approaches based on domain labels have been proposed: by assigning news to its specific domain in advance, domain-specific judgments of fake news may be more accurate. However, these approaches disregard the fact that news records can pertain to multiple domains, resulting in a significant loss of valuable information. In addition, all datasets used for training must be domain-labeled, which creates unnecessary complexity. To solve these problems, an unsupervised domain-knowledge discovery framework for fake news detection is proposed. First, to effectively retain the multi-domain knowledge of the text, a low-dimensional vector capturing domain embeddings is generated for each news text. Subsequently, a feature extraction module utilizing the unsupervisedly discovered domain embeddings is used to extract comprehensive news features. Finally, a classifier is employed to determine the authenticity of the news. To verify the proposed framework, tests were conducted on existing, widely used datasets, and the experimental results demonstrate that this method improves detection performance for fake news across multiple domains. Moreover, even on datasets that lack domain labels, the method can still effectively transfer domain knowledge, reducing the time consumed by tagging without sacrificing detection accuracy.
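The shape of such a pipeline can be sketched in a few lines: an unsupervised, low-dimensional "domain embedding" per article (here, NMF topic weights standing in for the framework's embedding module, so multi-domain membership stays soft) is concatenated with text features and passed to a classifier. The toy texts, labels, and component count are invented; the paper's actual modules are not specified here.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF
from sklearn.linear_model import LogisticRegression

texts = ["vaccine cures all illness overnight",
         "markets rally on strong jobs data",
         "election results faked, sources say",
         "new drug passes phase 3 clinical trial"]
labels = [1, 0, 1, 0]                      # 1 = fake, 0 = real (toy labels)

X = TfidfVectorizer().fit_transform(texts)
domains = NMF(n_components=2, init="nndsvda").fit_transform(X)  # soft domains
X_full = np.hstack([X.toarray(), domains])  # keep multi-domain membership
clf = LogisticRegression(max_iter=1000).fit(X_full, labels)
print(clf.predict(X_full))
```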

Keywords: fake news, deep learning, natural language processing, multiple domains

Procedia PDF Downloads 88
13442 Fault-Tolerant Configuration for T-Type Nested Neutral Point Clamped Converter

Authors: S. Masoud Barakati, Mohsen Rahmani Haredasht

Abstract:

Recently, the use of the T-type nested neutral point clamped (T-NNPC) converter has increased in medium-voltage applications. However, the reliability and continuous operation of the T-NNPC architecture are put at risk by its semiconductor switches, which are prone to open-circuit faults. As a result, fault-tolerant converters are required to improve the system's reliability and continuous functioning. The primary goal of this study is to provide a fault-tolerant T-NNPC converter configuration. The proposed design uses the cold reservation approach: a redundant phase is included, which replaces a faulty phase once the fault is diagnosed. The suggested fault-tolerant configuration can be easily implemented in practical applications due to its simple PWM control mechanism. Performance evaluation of the proposed configuration under different scenarios in the MATLAB-Simulink environment proves its efficiency.
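At the supervisory level, the cold-reservation policy amounts to a small rerouting state machine: three active legs plus one spare, with the faulty phase's gate signals redirected to the spare on diagnosis. The toy sketch below shows only that routing logic; fault diagnosis and PWM generation, which the paper handles in hardware/Simulink, are abstracted away, and all names are hypothetical.

```python
class ColdReserveConverter:
    def __init__(self):
        self.mapping = {"A": "legA", "B": "legB", "C": "legC"}
        self.spare_free = True

    def report_open_circuit(self, phase):
        # reroute the faulty phase's gate signals to the redundant leg
        if self.spare_free:
            self.mapping[phase] = "legR"
            self.spare_free = False
        else:
            raise RuntimeError("no redundant leg left; shut down safely")

    def gate_target(self, phase):
        return self.mapping[phase]       # where PWM signals are routed

conv = ColdReserveConverter()
conv.report_open_circuit("B")            # open-circuit fault diagnosed on B
print(conv.gate_target("B"))             # -> legR
```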

Keywords: T-type nested neutral point clamped converter, reliability, continuous operation, open-circuit faults, fault-tolerant converters

Procedia PDF Downloads 112
13441 ‘Saying’ the Nuclear Power in France: Evolution of the Images and Perceptions of a Sensitive Theme

Authors: Jandot Aurélia

Abstract:

As nuclear power is a sensitive field that generates controversy, the quality of communication about it is important. In France, between 1965 and 1981, this communication gradually changed. The change is studied here in the main French news magazine, L'Express, in connection with several parameters. As this represents a huge number of copies and occurrences, and thus a considerable amount of information, this paper focuses on the main articles as well as the main "mental images". These are important, as they aim to direct readers' thinking, and they have led public awareness to evolve. Over these 17 years, two trends were in confrontation: the first promoting the perception of nuclear power, the second discrediting it. These trends are organized along two axes: the evolution of engineering, and the risks. In both cases, the changes in language allow one to discern the deeper intentions of the magazine's editors, over a period when nuclear technology, until then a laboratory object surrounded by mystery and secrecy, became a social issue seemingly open to all.

Keywords: French news magazine, mental images, nuclear power, public awareness

Procedia PDF Downloads 301
13440 The Efficacy of Video Education to Improve Treatment or Illness-Related Knowledge in Patients with a Long-Term Physical Health Condition: A Systematic Review

Authors: Megan Glyde, Louise Dye, David Keane, Ed Sutherland

Abstract:

Background: Typically, patient education is provided verbally, in the form of written material, or with a multimedia-based tool such as videos, CD-ROMs, DVDs, or the internet. Providing patients with effective educational tools can help meet their information needs, empower them, and allow them to participate in medical decision-making. Video education may have some distinct advantages compared to other modalities. For instance, whilst eHealth is emerging as a promising modality of patient education, an individual's ability to access, read, and navigate websites or online modules varies dramatically with health literacy levels. Literacy levels may also limit patients' ability to understand written education, whereas video education can be watched passively and does not require high literacy skills. Other benefits of video education are that the same information is provided consistently to each patient, that it can be cost-effective after the initial cost of producing the video, that patients can choose to watch the videos by themselves or in the presence of others, and that they can pause and re-watch videos to suit their needs. Health information videos are not only viewed by patients in formal educational sessions but are increasingly viewed on websites such as YouTube. Whilst there is a lot of anecdotal and sometimes misleading information on YouTube, videos from government organisations and professional associations contain trustworthy, high-quality information and could enable YouTube to become a powerful information dissemination platform for patients and carers. This systematic review will examine the efficacy of video education to improve treatment- or illness-related knowledge in patients with various long-term conditions, in comparison to other modalities of education. Methods: Only studies matching the following criteria will be included: participants have a long-term physical health condition; video education aims to improve treatment- or illness-related knowledge and is tested in isolation; and the study is a randomised controlled trial. Knowledge will be the primary outcome measure, with modality preference, anxiety, and behaviour change as secondary measures. Searches have been conducted in the following databases: OVID Medline, OVID PsycInfo, OVID Embase, CENTRAL, and ProQuest, and hand searching for relevant published and unpublished studies has also been carried out. Screening and data extraction will be conducted independently by two researchers. Included studies will be assessed for risk of bias in accordance with Cochrane guidelines, and heterogeneity will be assessed before deciding whether a meta-analysis is appropriate. Results and Conclusions: An appropriate synthesis of the studies in relation to each outcome measure will be reported, along with conclusions and implications.

Keywords: long-term condition, patient education, systematic review, video

Procedia PDF Downloads 108
13439 A QoE-driven Cross-layer Resource Allocation Scheme for High Traffic Service over Open Wireless Network Downlink

Authors: Liya Shan, Qing Liao, Qinyue Hu, Shantao Jiang, Tao Wang

Abstract:

In this paper, a Quality of Experience (QoE)-driven cross-layer resource allocation scheme for high-traffic services over an Open Wireless Network (OWN) downlink is proposed, and the related problem of serving users across the whole cell, including users in the overlap region of different cells, is solved. A method for calculating the Mean Opinion Score (MOS) of high-traffic services is introduced, adopting assessment models for the best-effort service and a no-reference assessment algorithm for the video service. The cross-layer architecture jointly considers parameters in the application layer, the media access control layer, and the physical layer. Based on this architecture and the MOS value, the Binary Constrained Particle Swarm Optimization (B_CPSO) algorithm is used to solve the cross-layer resource allocation problem. Simulation results show that the proposed scheme significantly outperforms other schemes in maximizing the average MOS value of users across the whole system as well as in maintaining fairness among users.
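Binary PSO for this kind of allocation problem can be sketched as follows: each particle encodes a 0/1 user-by-resource assignment, velocities are squashed through a sigmoid to sample bits, and a repair step enforces a one-user-per-resource constraint after every move. The log-utility MOS stand-in, the repair rule, and all parameters are illustrative assumptions, not the paper's QoE models.

```python
import numpy as np

def mos(assign, rates):                    # toy QoE: log-utility of user rate
    return np.log1p((assign * rates).sum(axis=1)).sum()

def b_cpso(rates, n_particles=20, n_iter=60, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    U, R = rates.shape
    def repair(x):                         # constraint: one user per resource
        for r in range(R):
            if x[:, r].sum() != 1:
                x[:, r] = 0
                x[rng.integers(U), r] = 1
        return x
    X = np.array([repair(rng.integers(0, 2, (U, R)).astype(float))
                  for _ in range(n_particles)])
    V = rng.normal(0, 1, X.shape)
    P, pbest = X.copy(), np.array([mos(x, rates) for x in X])
    G = P[np.argmax(pbest)].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(X.shape), rng.random(X.shape)
        V = w * V + c1 * r1 * (P - X) + c2 * r2 * (G - X)
        X = (rng.random(X.shape) < 1.0 / (1.0 + np.exp(-V))).astype(float)
        X = np.array([repair(x) for x in X])
        fit = np.array([mos(x, rates) for x in X])
        better = fit > pbest
        P[better], pbest[better] = X[better], fit[better]
        G = P[np.argmax(pbest)].copy()
    return G, pbest.max()

rates = np.random.rand(4, 8)               # 4 users, 8 resource blocks
assignment, best_mos = b_cpso(rates)
```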

Keywords: high traffic service, cross-layer resource allocation, QoE, B_CPSO, OWN

Procedia PDF Downloads 537
13438 Multi-Temporal Mapping of Built-up Areas Using Daytime and Nighttime Satellite Images Based on Google Earth Engine Platform

Authors: S. Hutasavi, D. Chen

Abstract:

The built-up area is a significant proxy for measuring regional economic growth and reflects the Gross Provincial Product (GPP). However, an up-to-date and reliable database of built-up areas is not always available, especially in developing countries. Cloud-based geospatial analysis platforms such as Google Earth Engine (GEE) provide the accessibility and computational power for those countries to generate built-up data. This study therefore aims to extract the built-up areas in the Eastern Economic Corridor (EEC), Thailand, using daytime and nighttime satellite imagery on the GEE platform. Normalized indices were generated from the Landsat 8 surface reflectance dataset, including the Normalized Difference Built-up Index (NDBI), the Built-up Index (BUI), and the Modified Built-up Index (MBUI), and applied to identify built-up areas in the EEC. The results show that MBUI performs better than BUI and NDBI, with the highest accuracy of 0.85 and a Kappa of 0.82. Moreover, after adding nighttime light data from the Visible Infrared Imaging Radiometer Suite (VIIRS) Day/Night Band (DNB), the overall classification accuracy improved from 79% to 90%, and the error in the total built-up area decreased from 29% to 0.7%. The results suggest that MBUI with nighttime light imagery is appropriate for built-up area extraction and can be utilized for further study of the socioeconomic impacts of regional development policy over the EEC region.
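The index step translates to a few lines against the GEE Python API. The sketch below computes NDBI, the simplest of the three indices, from a Landsat 8 Collection 2 Level-2 median composite; the dataset ID and band names follow standard GEE conventions, while the date range, the EEC anchor point, and the 0.0 threshold are assumptions for illustration (MBUI's exact formulation is not reproduced here).

```python
import ee  # Google Earth Engine Python API; run ee.Authenticate() once

ee.Initialize()

# Median Landsat 8 surface-reflectance composite over an assumed EEC point
l8 = (ee.ImageCollection("LANDSAT/LC08/C02/T1_L2")
        .filterDate("2020-01-01", "2020-12-31")
        .filterBounds(ee.Geometry.Point(101.0, 13.0))
        .median())

# NDBI = (SWIR1 - NIR) / (SWIR1 + NIR); built-up surfaces push it positive
ndbi = l8.normalizedDifference(["SR_B6", "SR_B5"]).rename("NDBI")
builtup = ndbi.gt(0.0)    # simple threshold; the paper tunes this step
```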

Keywords: built-up area extraction, google earth engine, adaptive thresholding method, rapid mapping

Procedia PDF Downloads 119