Search results for: restricted Boltzmann machine
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3382

1072 Online Learning for Modern Business Models: Theoretical Considerations and Algorithms

Authors: Marian Sorin Ionescu, Olivia Negoita, Cosmin Dobrin

Abstract:

This scientific communication reports and discusses learning models adaptable to modern business problems and to models specific to digital concepts and paradigms. In the PAC (probably approximately correct) learning model, the learning process begins by receiving a batch of learning examples; this set is used to acquire a hypothesis, and once learning is complete, the hypothesis is used to predict new operational examples. For complex business models, many candidate models must be introduced and evaluated so that the totality of the results can be used to develop a predictive rule that anticipates the choice of new models. In contrast, in online learning there is no separation between the learning (training) phase and the prediction phase. Each time a business model is approached, it is treated as a test example: a prediction is made first, and only afterwards is the label with the logical value "true" revealed, from the point of view of the business decision. The labeled example is then used for learning (training), which helps to improve the prediction mechanism for future business models.
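The online protocol described above, predict first, then observe the true label and update the hypothesis, can be sketched with a classic online learner. The perceptron below is an illustrative stand-in (the abstract does not name a specific algorithm), and the data stream is synthetic:

```python
import numpy as np

def online_perceptron(stream):
    """Online learning protocol sketch: predict each example as it arrives,
    then observe the true label and update the hypothesis.
    `stream` yields (x, y) pairs with y in {-1, +1}."""
    w = None
    mistakes = 0
    for x, y in stream:
        if w is None:
            w = np.zeros_like(x, dtype=float)
        y_hat = 1 if w @ x >= 0 else -1   # predict BEFORE seeing the label
        if y_hat != y:                    # label revealed; hypothesis updated
            mistakes += 1
            w += y * x
    return w, mistakes

# Linearly separable toy stream: the label is the sign of the first coordinate.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
X[:, 0] += np.where(X[:, 0] >= 0, 1.0, -1.0)   # widen the margin
ys = np.where(X[:, 0] >= 0, 1, -1)
w, mistakes = online_perceptron(zip(X, ys))
print(mistakes)   # stays small on separable data (perceptron mistake bound)
```

Unlike a PAC-style batch learner, no separate training set exists here: every example is first a test case and only then a training case.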

Keywords: machine learning, business models, convex analysis, online learning

Procedia PDF Downloads 136
1071 Use of Improved Genetic Algorithm in Cloud Computing to Reduce Energy Consumption in Migration of Virtual Machines

Authors: Marziyeh Bahrami, Hamed Pahlevan Hsseini, Behnam Ghamami, Arman Alvanpour, Hamed Ezzati, Amir Salar Sadeghi

Abstract:

One of the ways to increase the efficiency of services in multi-agent systems and, of course, in the world of cloud computing, is to use virtualization techniques. The aim of this research is to introduce changes in cloud computing services that reduce, as much as possible, the energy consumed by the migration of virtual machines and by the allocation of resources, and thereby reduce the amount of pollution. Several methods have so far been proposed to increase the efficiency of cloud computing services in order to save energy in the cloud environment. The method presented in this article tries to minimize the energy consumed by data centers, and the consequent production of carbon and biological pollutants, by increasing the efficiency of cloud computing services. The results show that the proposed algorithm, using improved virtualization techniques together with a genetic algorithm, increases the efficiency of cloud services in migrating virtual machines and ultimately saves energy.
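As a rough illustration of the kind of search involved (the paper's actual algorithm, energy model and improvements are not reproduced here), the sketch below evolves VM-to-host assignments under a hypothetical energy cost that charges for each migration and penalizes imbalanced hosts; all cost constants and sizes are invented for the example:

```python
import random

def migration_energy(assignment, current, migrate_cost=10.0, load_cost=1.0):
    """Hypothetical energy model: each VM moved off its current host pays a
    migration cost; uneven host loads pay a sum-of-squared-loads penalty."""
    moves = sum(a != c for a, c in zip(assignment, current))
    loads = {}
    for host in assignment:
        loads[host] = loads.get(host, 0) + 1
    return migrate_cost * moves + load_cost * sum(l * l for l in loads.values())

def genetic_vm_placement(n_vms=12, n_hosts=4, pop=30, gens=60, seed=1):
    rng = random.Random(seed)
    current = [rng.randrange(n_hosts) for _ in range(n_vms)]
    # Seed the population with the current placement so "do nothing" competes.
    population = [current[:]] + [
        [rng.randrange(n_hosts) for _ in range(n_vms)] for _ in range(pop - 1)
    ]
    for _ in range(gens):
        population.sort(key=lambda a: migration_energy(a, current))
        survivors = population[: pop // 2]              # elitist selection
        children = []
        while len(survivors) + len(children) < pop:
            p1, p2 = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_vms)               # one-point crossover
            child = p1[:cut] + p2[cut:]
            if rng.random() < 0.2:                      # mutation
                child[rng.randrange(n_vms)] = rng.randrange(n_hosts)
            children.append(child)
        population = survivors + children
    best = min(population, key=lambda a: migration_energy(a, current))
    return best, migration_energy(best, current), migration_energy(current, current)

best, best_e, baseline_e = genetic_vm_placement()
print(best_e <= baseline_e)   # the GA never does worse than not migrating
```

A production fitness function would instead use measured power draw, SLA constraints and live-migration overheads.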

Keywords: consumption reduction, cloud computing, genetic algorithm, live migration, virtual Machine

Procedia PDF Downloads 55
1070 Using Speech Emotion Recognition as a Longitudinal Biomarker for Alzheimer’s Diseases

Authors: Yishu Gong, Liangliang Yang, Jianyu Zhang, Zhengyu Chen, Sihong He, Xusheng Zhang, Wei Zhang

Abstract:

Alzheimer’s disease (AD) is a progressive neurodegenerative disorder that affects millions of people worldwide and is characterized by cognitive decline and behavioral changes. People living with Alzheimer’s disease often find it hard to complete routine tasks. However, there are limited objective assessments that aim to quantify the difficulty of certain tasks for AD patients compared to non-AD people. In this study, we propose to use speech emotion recognition (SER), especially the frustration level, as a potential biomarker for quantifying the difficulty patients experience when describing a picture. We build an SER model using data from the IEMOCAP dataset and apply the model to the DementiaBank data to detect the AD/non-AD group difference and perform longitudinal analysis to track the AD disease progression. Our results show that the frustration level detected from the SER model can possibly be used as a cost-effective tool for objective tracking of AD progression in addition to the Mini-Mental State Examination (MMSE) score.
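The pipeline described, train an SER model on one corpus and then score another population over time, can be caricatured as follows. The features, labels and drift are synthetic stand-ins; the real study uses acoustic features from IEMOCAP for training and DementiaBank recordings for application:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Hypothetical acoustic feature vectors stand in for real SER inputs: the
# study trains on IEMOCAP and applies the model to DementiaBank recordings.
X_train = rng.normal(size=(300, 8))
y_train = (X_train[:, 0] + 0.3 * rng.normal(size=300) > 0).astype(int)  # 1 = frustrated
ser = LogisticRegression().fit(X_train, y_train)

# Longitudinal tracking: score one speaker's yearly recordings, then fit a
# linear trend to the mean predicted frustration probability.
years = np.arange(5)
frustration = []
for t in years:
    visit = rng.normal(size=(20, 8))        # 20 utterances at this visit
    visit[:, 0] += 0.6 * t                  # simulated drift as the task gets harder
    frustration.append(ser.predict_proba(visit)[:, 1].mean())
slope = np.polyfit(years, frustration, 1)[0]
print(slope > 0)                            # a rising slope flags progression
```

The key design point mirrored here is that the frustration score is a per-visit aggregate, so it can be regressed against time alongside MMSE scores.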

Keywords: Alzheimer’s disease, speech emotion recognition, longitudinal biomarker, machine learning

Procedia PDF Downloads 109
1069 Benefits of Tourist Experiences for Families: A Systematic Literature Review Using Nvivo

Authors: Diana Cunha, Catarina Coelho, Ana Paula Relvas, Elisabeth Kastenholz

Abstract:

Context: Tourist experiences have a recognized impact on the well-being of individuals. However, studies on the specific benefits of tourist experiences for families are scattered across different disciplines. This study aims to systematically review the literature to synthesize the evidence on the benefits of tourist experiences for families. Research Aim: The main objective is to systematize the evidence in the literature regarding the benefits of tourist experiences for families. Methodology: A systematic literature review was conducted using NVivo, analyzing 33 scientific studies obtained from various databases. The search terms used were "family"/"couple" and "tourist experience". The studies included quantitative, qualitative, mixed methods, and literature reviews. All works prior to the year 2000 were excluded, and the search was restricted to full text. A language filter was also used, considering articles in Portuguese, English, and Spanish. For NVivo analysis, information was coded based on both deductive and inductive perspectives. To minimize the subjectivity of the selection and coding process, two of the authors discussed the process and agreed on criteria that would make the coding more objective. Once the coding process in NVivo was completed, the data relating to the identification/characterization of the works were exported to the Statistical Package for the Social Sciences (SPSS) to characterize the sample. Findings: The results highlight that tourist experiences have several benefits for family systems, including the strengthening of family and marital bonds, the creation of family memories, and overall well-being and life satisfaction. These benefits contribute to both immediate relationship quality improvement and long-term family identity construction and transgenerational transmission. Theoretical Importance: This study emphasizes the systemic nature of the effects and relationships within family systems. 
It also shows that no harm was reported within these experiences, only some challenges associated with otherwise positive outcomes. Data Collection and Analysis Procedures: The study collected data from 33 scientific studies published predominantly after 2013. The data were analyzed using NVivo, employing a systematic review approach. Question Addressed: The study addresses the question of the benefits of tourist experiences for families and how these experiences contribute to family functioning and individual well-being. Conclusion: Tourist experiences provide opportunities for families to enhance their interpersonal relationships and create lasting memories. The findings suggest that formal evidence-based interventions could further enhance the potential benefits of these experiences and be a valuable preventive tool in therapeutic interventions.

Keywords: family systems, individual and family well-being, marital satisfaction, tourist experiences

Procedia PDF Downloads 65
1068 Removal of Copper from Wastewaters by Nano-Micro Bubble Ion Flotation

Authors: R. Ahmadi, A. Khodadadi, M. Abdollahi

Abstract:

The removal of copper from a dilute synthetic wastewater (10 mg/L) was studied by ion flotation at laboratory scale. Anionic sodium dodecyl sulfate (SDS) was used as a collector and ethanol as a frother. Different parameters such as pH, collector and frother concentrations, foam height and bubble size distribution (multi-bubble ion flotation) were tested to determine the optimum flotation conditions in a Denver-type flotation machine. To investigate the effect of bubble size distribution, a nano-micro bubble generator was designed for this work. The nano- and microbubbles generated in this way were combined with normal-size bubbles generated mechanically. Under the optimum conditions (SDS concentration: 192 mg/L, ethanol: 0.5% v/v, pH: 4 and froth height: 12.5 cm), the best removal obtained for the Cu/SDS system with a dry foam (water recovery: 15.5%) was 85.6%. Coalescence of nano-microbubbles with the normal-size bubbles of the mechanical flotation cell improved the removal of Cu to a maximum floatability of 92.8% and reduced the water recovery to 13.1%. The flotation time decreased considerably, by 37.5%, when multi-bubble ion flotation was used.

Keywords: froth flotation, copper, water treatment, optimization, recycling

Procedia PDF Downloads 497
1067 1D Convolutional Networks to Compute Mel-Spectrogram, Chromagram, and Cochleogram for Audio Networks

Authors: Elias Nemer, Greg Vines

Abstract:

Time-frequency transformations and spectral representations of audio signals are commonly used in various machine learning applications. Training networks on frequency features such as the Mel-spectrogram or cochleogram has proven more effective and convenient than training on time samples. In practical realizations, these features are created on a different processor and/or pre-computed and stored on disk, requiring additional effort and making it difficult to experiment with different features. In this paper, we provide a PyTorch framework for creating various spectral features, as well as time-frequency transformations and time-domain filter banks, using the built-in trainable conv1d() layer. This allows these features to be computed on the fly as part of a larger network, enabling easier experimentation with various combinations and parameters. Our work extends previous work developed to that end, first by adding more of these features, and also by allowing kernels either to start from initialized values or to be trained from random ones. The code is written as a template of classes and scripts that users may integrate into their own PyTorch classes, or simply use as is and add more layers for various applications.
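A minimal sketch of the core idea, a conv1d() layer whose kernels are initialized to a time-frequency basis (here a framed DFT) yet remain trainable, might look like this; the paper's actual classes and parameters are not reproduced, and all names are illustrative:

```python
import math
import torch

def dft_conv1d(n_fft=64, hop=16):
    """A Conv1d whose kernels are the real/imaginary DFT basis vectors, so a
    forward pass computes a framed DFT. The kernels start initialized but stay
    trainable, matching the paper's premise (this realization is a sketch)."""
    n = torch.arange(n_fft, dtype=torch.float32)
    freqs = torch.arange(n_fft // 2 + 1, dtype=torch.float32)
    angle = 2 * math.pi * freqs[:, None] * n[None, :] / n_fft
    kernels = torch.cat([torch.cos(angle), -torch.sin(angle)], dim=0)  # real; imag
    conv = torch.nn.Conv1d(1, kernels.shape[0], n_fft, stride=hop, bias=False)
    with torch.no_grad():
        conv.weight.copy_(kernels.unsqueeze(1))   # (2*(n_fft//2+1), 1, n_fft)
    return conv

n_fft, hop = 64, 16
conv = dft_conv1d(n_fft, hop)
t = torch.arange(256, dtype=torch.float32)
x = torch.sin(2 * math.pi * 5 * t / n_fft).reshape(1, 1, -1)   # 5 cycles per window
out = conv(x)                              # (1, 2*(n_fft//2+1), n_frames)
re, im = out.chunk(2, dim=1)
power = re ** 2 + im ** 2                  # per-frame power spectrum
peak_bin = power[0, :, 0].argmax().item()
print(peak_bin)  # → 5, the bin of the 5-cycles-per-window sinusoid
```

A Mel filter bank or chroma mapping can then be another conv/linear layer stacked on this output, which is exactly what makes the whole front end differentiable.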

Keywords: neural networks, Mel-spectrogram, chromagram, cochleogram, discrete Fourier transform, PyTorch conv1d()

Procedia PDF Downloads 227
1066 Improving Security in Healthcare Applications Using Federated Learning System With Blockchain Technology

Authors: Aofan Liu, Qianqian Tan, Burra Venkata Durga Kumar

Abstract:

Data security is of the utmost importance in healthcare, as sensitive patient information is constantly transmitted and analyzed by many different parties. Federated learning, which enables data to be evaluated locally on devices rather than transferred to a central server, has emerged as a potential solution for protecting the privacy of user information. However, federated learning alone might not be adequate to protect against data breaches and unauthorized access. In this context, the application of blockchain technology could provide the system with extra protection. This study proposes a distributed federated learning system built on blockchain technology to enhance security in healthcare. It makes it possible for a wide variety of healthcare providers to work together on data analysis without raising concerns about the confidentiality of the data. The technical aspects of the system, including the design and implementation of distributed learning algorithms, consensus mechanisms, and smart contracts, are also investigated. The proposed technique is a workable solution that addresses healthcare security concerns while also fostering collaborative research and the exchange of data.
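The federated part of such a system can be sketched with plain federated averaging (the blockchain and consensus layers are omitted; the logistic model, the four "hospital" clients and their data below are toy stand-ins, not the paper's system):

```python
import numpy as np

def local_update(w, X, y, lr=0.1, epochs=5):
    """One client's local training: a few gradient steps of logistic
    regression on its private data; only the weights leave the device."""
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-(X @ w)))
        w = w - lr * X.T @ (p - y) / len(y)
    return w

def federated_averaging(clients, rounds=20, dim=3):
    """FedAvg sketch: each round, clients train locally and the server
    averages their weights, weighted by local dataset size."""
    w = np.zeros(dim)
    total = sum(len(y) for _, y in clients)
    for _ in range(rounds):
        w = sum(len(y) / total * local_update(w.copy(), X, y) for X, y in clients)
    return w

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0, 0.5])
clients = []
for _ in range(4):                        # four "hospitals"; raw data is never pooled
    X = rng.normal(size=(50, 3))
    y = (X @ true_w + 0.1 * rng.normal(size=50) > 0).astype(float)
    clients.append((X, y))
w = federated_averaging(clients)
acc = np.mean([((X @ w > 0) == (y > 0.5)).mean() for X, y in clients])
print(acc > 0.9)
```

In the proposed architecture, the weight exchange performed by the server here would instead be mediated by smart contracts, with the ledger providing an audit trail for each round.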

Keywords: data privacy, distributed system, federated learning, machine learning

Procedia PDF Downloads 118
1065 Towards an Intelligent Ontology Construction Cost Estimation System: Using BIM and New Rules of Measurement Techniques

Authors: F. H. Abanda, B. Kamsu-Foguem, J. H. M. Tah

Abstract:

Construction cost estimation is one of the most important aspects of construction project design. For generations, the process of cost estimating has been manual, time-consuming and error-prone. This has partly led to most cost estimates being unclear and riddled with inaccuracies that at times lead to over- or under-estimation of construction cost. The development of standard sets of measurement rules that are understandable by all those involved in a construction project has not fully resolved these challenges. Emerging Building Information Modelling (BIM) technologies can exploit standard measurement methods to automate the cost estimation process and improve accuracy. This requires standard measurement methods to be structured in an ontological, machine-readable format so that BIM software packages can easily read them. Most standard measurement methods are still text-based in textbooks and require manual editing into tables or spreadsheets during cost estimation. The aim of this study is to explore the development of an ontology based on the New Rules of Measurement (NRM) commonly used in the UK for cost estimation. The methodology adopted is Methontology, one of the most widely used ontology engineering methodologies. The challenges encountered in this exploratory study are also reported, and recommendations for future studies proposed.
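A toy illustration of the intended direction, measurement rules captured in a machine-readable structure that estimating code can query directly rather than text to be re-keyed, might look like this; the element names, units and rates are invented for the sketch, not quoted from NRM, and a real ontology would use OWL/RDF rather than a dictionary:

```python
# Illustrative only: a dict stands in for a full ontology, and the element
# names, units and rates are invented, not quoted from NRM.
nrm_rules = {
    "Wall": {
        "unit": "m2",
        "measurement": "area of one face, large openings deducted",
    },
    "Door": {"unit": "nr", "measurement": "enumerated by type and size"},
}

def estimate(quantities, rates):
    """Join a BIM-style quantity take-off against machine-readable rules and
    unit rates -- no manual re-keying from a text-based rulebook."""
    total = 0.0
    for element, qty in quantities.items():
        rule = nrm_rules[element]
        line = qty * rates[element]
        total += line
        print(f"{element}: {qty} {rule['unit']} -> {line:.2f}")
    return total

cost = estimate({"Wall": 120.0, "Door": 6}, {"Wall": 45.0, "Door": 180.0})
print(f"total: {cost:.2f}")   # → total: 6480.00
```

The point of the ontology is that the `unit` and `measurement` semantics travel with the rule, so a BIM package can apply them without human interpretation.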

Keywords: BIM, construction projects, cost estimation, NRM, ontology

Procedia PDF Downloads 547
1064 Stress Analysis of Vertebra Using Photoelastic and Finite Element Methods

Authors: Jamal A. Hassan, Ali Q. Abdulrazzaq, Sadiq J. Abass

Abstract:

In this study, both the photoelastic and the finite element methods are used to study the stress distribution within a human vertebra (L4) under forces similar to those that occur during normal life. Two- and three-dimensional models of the vertebra were created with the software AutoCAD. The coordinates obtained were fed into a computer numerical control (CNC) tensile machine to fabricate the models from photoelastic sheets. Completed models were placed in a transmission polariscope and loaded with a static force (up to 1500 N). Stresses can be quantified and localized by counting the number of fringes. In both methods, the principal stresses were calculated at different regions. The results show that the maximum von Mises stress occurs on the extreme superior vertebral body surface and on the facet surface, where both the normal stress (σ) and the shear stress (τ) are high. The facets and other posterior elements have a load-bearing function, helping to support the weight of the upper body and anything it carries, and are also acted upon by spinal muscle forces. The numerical FE results were compared with the experimental photoelasticity results, showing good agreement between experiment and simulation.
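For reference, the von Mises equivalent stress reported above follows from the deviatoric part of the stress tensor, σ_vm = sqrt(3 J₂); a small sketch with hypothetical stress values (not the study's measured data):

```python
import numpy as np

def von_mises(s):
    """Von Mises equivalent stress from a 3x3 Cauchy stress tensor (MPa)."""
    s = np.asarray(s, dtype=float)
    dev = s - np.trace(s) / 3 * np.eye(3)        # deviatoric part
    return np.sqrt(1.5 * np.sum(dev * dev))      # sqrt(3 J2) = sqrt(1.5 dev:dev)

# Illustrative state: a normal stress sigma combined with a shear tau, as on
# the superior vertebral body / facet surfaces discussed above.
sigma, tau = 12.0, 5.0                            # hypothetical values, MPa
s = np.array([[sigma, tau, 0.0],
              [tau,   0.0, 0.0],
              [0.0,   0.0, 0.0]])
vm = von_mises(s)
print(round(vm, 2))   # → 14.8, i.e. sqrt(sigma**2 + 3*tau**2)
```

For this plane-stress state the tensor formula reduces to the familiar sqrt(σ² + 3τ²), which is a quick hand check on FE post-processing output.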

Keywords: photoelasticity, stress, load, finite element

Procedia PDF Downloads 284
1063 Machine Learning-Driven Prediction of Cardiovascular Diseases: A Supervised Approach

Authors: Thota Sai Prakash, B. Yaswanth, Jhade Bhuvaneswar, Marreddy Divakar Reddy, Shyam Ji Gupta

Abstract:

Across the globe, there are a lot of chronic diseases, and heart disease stands out as one of the most perilous. Sadly, many lives are lost to this condition, even though early intervention could prevent such tragedies. However, identifying heart disease in its initial stages is not easy. To address this challenge, we propose an automated system aimed at predicting the presence of heart disease using advanced techniques. By doing so, we hope to empower individuals with the knowledge needed to take proactive measures against this potentially fatal illness. Our approach towards this problem involves meticulous data preprocessing and the development of predictive models utilizing classification algorithms such as Support Vector Machines (SVM), Decision Tree, and Random Forest. We assess the efficiency of every model based on metrics like accuracy, ensuring that we select the most reliable option. Additionally, we conduct thorough data analysis to reveal the importance of different attributes. Among the models considered, Random Forest emerges as the standout performer with an accuracy rate of 96.04% in our study.
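A minimal sketch of the modelling step, fitting SVM, Decision Tree and Random Forest classifiers and comparing their accuracy, might look like the following. The synthetic table stands in for the study's heart-disease dataset (not public here), so the printed accuracies will not match the reported 96.04%:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# Synthetic stand-in: 13 features, roughly the width of common heart datasets.
X, y = make_classification(n_samples=600, n_features=13, n_informative=8,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

models = {
    "SVM": SVC(),
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "Random Forest": RandomForestClassifier(random_state=0),
}
scores = {name: accuracy_score(y_te, m.fit(X_tr, y_tr).predict(X_te))
          for name, m in models.items()}
for name, acc in scores.items():
    print(f"{name}: {acc:.3f}")          # held-out accuracy per model
```

Feature importance (the attribute analysis mentioned above) is then available directly from the fitted forest via `feature_importances_`.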

Keywords: support vector machines, decision tree, random forest

Procedia PDF Downloads 33
1062 Time-Frequency Feature Extraction Method Based on Micro-Doppler Signature of Ground Moving Targets

Authors: Ke Ren, Huiruo Shi, Linsen Li, Baoshuai Wang, Yu Zhou

Abstract:

Since discriminative features are required for ground moving target classification, we propose a new feature extraction method based on the micro-Doppler signature. First, time-frequency analysis of measured data indicates that the time-frequency spectrograms of three kinds of ground moving targets, i.e., a single walking person, two people walking and a moving wheeled vehicle, are discriminative. Then, a three-dimensional time-frequency feature vector is extracted from the time-frequency spectrograms to capture these differences. Finally, a Support Vector Machine (SVM) classifier is trained with the proposed three-dimensional feature vector. The classification accuracy on the measured data, categorizing ground moving targets into the three classes, is found to be over 96%, which demonstrates the good discriminative ability of the proposed micro-Doppler features.
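The pipeline, spectrogram, a three-dimensional time-frequency feature vector, then an SVM, can be sketched as below. The simulated echoes and the particular three features are illustrative assumptions, not the paper's definitions or data:

```python
import numpy as np
from scipy.signal import spectrogram
from sklearn.svm import SVC

rng = np.random.default_rng(0)
fs = 1000.0

def echo(micro_amp):
    """Toy radar return: a 100 Hz body-Doppler line plus a sinusoidal
    micro-Doppler (limb-swing) phase term of strength `micro_amp`."""
    t = np.arange(0, 1, 1 / fs)
    phase = 2 * np.pi * (100 * t) + micro_amp * np.sin(2 * np.pi * 2 * t)
    return np.cos(phase) + 0.1 * rng.normal(size=t.size)

def features(x):
    """A three-dimensional time-frequency feature vector (mean spectral
    centroid, its spread over time, mean bandwidth), echoing the abstract."""
    f, t, S = spectrogram(x, fs=fs, nperseg=128, noverlap=96)
    centroid = (f[:, None] * S).sum(axis=0) / S.sum(axis=0)
    bw = np.sqrt((((f[:, None] - centroid) ** 2) * S).sum(axis=0) / S.sum(axis=0))
    return [centroid.mean(), centroid.std(), bw.mean()]

# Class 0: vehicle-like return (no limb modulation); class 1: walker-like.
X0 = [features(echo(0.0)) for _ in range(20)]
X1 = [features(echo(20.0)) for _ in range(20)]
clf = SVC().fit(X0[:15] + X1[:15], [0] * 15 + [1] * 15)
acc = (clf.predict(X0[15:] + X1[15:]) == [0] * 5 + [1] * 5).mean()
print(acc)
```

The walker's limb modulation spreads energy across Doppler bins over time, so the centroid-spread feature alone already separates the two toy classes well.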

Keywords: micro-Doppler, time-frequency analysis, feature extraction, radar target classification

Procedia PDF Downloads 402
1061 An Efficient Algorithm for Solving the Transmission Network Expansion Planning Problem Integrating Machine Learning with Mathematical Decomposition

Authors: Pablo Oteiza, Ricardo Alvarez, Mehrdad Pirnia, Fuat Can

Abstract:

To effectively combat climate change, many countries around the world have committed to a decarbonisation of their electricity, along with promoting a large-scale integration of renewable energy sources (RES). While this trend represents a unique opportunity to effectively combat climate change, achieving a sound and cost-efficient energy transition towards low-carbon power systems poses significant challenges for the multi-year Transmission Network Expansion Planning (TNEP) problem. The objective of the multi-year TNEP is to determine the necessary network infrastructure to supply the projected demand in a cost-efficient way, considering the evolution of the new generation mix, including the integration of RES. The rapid integration of large-scale RES increases the variability and uncertainty in the power system operation, which in turn increases short-term flexibility requirements. To meet these requirements, flexible generating technologies such as energy storage systems must be considered within the TNEP as well, along with proper models for capturing the operational challenges of future power systems. As a consequence, TNEP formulations are becoming more complex and difficult to solve, especially for its application in realistic-sized power system models. To meet these challenges, there is an increasing need for developing efficient algorithms capable of solving the TNEP problem with reasonable computational time and resources. In this regard, a promising research area is the use of artificial intelligence (AI) techniques for solving large-scale mixed-integer optimization problems, such as the TNEP. In particular, the use of AI along with mathematical optimization strategies based on decomposition has shown great potential. In this context, this paper presents an efficient algorithm for solving the multi-year TNEP problem. The algorithm combines AI techniques with Column Generation, a traditional decomposition-based mathematical optimization method. 
One of the challenges of using Column Generation for solving the TNEP problem is that the subproblems are of a mixed-integer nature, and therefore solving them requires significant amounts of time and resources. Hence, in this proposal we solve a linearly relaxed version of the subproblems and train a binary classifier that determines the values of the binary variables based on the results obtained from the linearized version. A key feature of the proposal is that the binary classifier is integrated into the optimization algorithm in such a way that the optimality of the solution can be guaranteed. The results of a case study based on the HRP 38-bus test system show that the binary classifier has an accuracy above 97% in estimating the values of the binary variables. Since the linearly relaxed version of the subproblems can be solved in significantly less time than its integer programming counterpart, integrating the binary classifier into the Column Generation algorithm allowed us to reduce the computational time required for solving the problem by 50%. The final version of this paper will contain a detailed description of the proposed algorithm, the AI-based binary classifier technique and its integration into the CG algorithm. To demonstrate the capabilities of the proposal, we evaluate the algorithm in case studies with different scenarios, as well as on other power system models.
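The classifier idea, learning the integer optimum's binary values from the linear relaxation, can be illustrated on a stand-in subproblem. The knapsack model, the single-feature classifier and the instance sizes below are invented for the sketch and are far simpler than the TNEP subproblems inside Column Generation:

```python
import numpy as np
from scipy.optimize import linprog
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def knapsack_instance(n=10):
    """Stand-in subproblem: max v.x s.t. w.x <= C, x binary. (The paper's
    subproblems are transmission-investment decisions; this is illustrative.)"""
    v, w = rng.uniform(1, 10, n), rng.uniform(1, 10, n)
    return v, w, 0.5 * w.sum()

def lp_relaxation(v, w, C):
    """Cheap linear relaxation: x is allowed to be fractional in [0, 1]."""
    res = linprog(-v, A_ub=[w], b_ub=[C], bounds=[(0, 1)] * len(v))
    return res.x

def exact_binary(v, w, C):
    """Brute-force integer optimum, used only to label training data."""
    best, best_x = -1.0, None
    for m in range(2 ** len(v)):
        x = np.array([(m >> i) & 1 for i in range(len(v))])
        if w @ x <= C and v @ x > best:
            best, best_x = v @ x, x
    return best_x

# Train: relaxed value of each variable -> its value in the integer optimum.
feats, labels = [], []
for _ in range(30):
    v, w, C = knapsack_instance()
    feats += list(lp_relaxation(v, w, C))
    labels += list(exact_binary(v, w, C))
clf = LogisticRegression().fit(np.array(feats).reshape(-1, 1), labels)

# Evaluate on fresh instances.
correct = total = 0
for _ in range(10):
    v, w, C = knapsack_instance()
    pred = clf.predict(np.array(lp_relaxation(v, w, C)).reshape(-1, 1))
    correct += (pred == exact_binary(v, w, C)).sum()
    total += len(v)
print(correct / total)   # relaxed values are highly informative about the optimum
```

The optimality guarantee in the paper would come from verifying the predicted integer solution inside the CG loop and falling back to the exact subproblem when the prediction fails; that fallback is not shown here.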

Keywords: integer optimization, machine learning, mathematical decomposition, transmission planning

Procedia PDF Downloads 82
1060 Cytotoxic Effect of Biologically Transformed Propolis on HCT-116 Human Colon Cancer Cells

Authors: N. Selvi Gunel, L. M. Oktay, H. Memmedov, B. Durmaz, H. Kalkan Yildirim, E. Yildirim Sozmen

Abstract:

Objective: Propolis, which consists of compounds accepted as antioxidant, antimicrobial, antiseptic, antibacterial, anti-inflammatory, anti-mutagenic, immune-modulatory and cytotoxic, is frequently used in current therapeutic applications. However, some applications result in allergic side effects, causing consumption to be restricted. Our group has previously succeeded in producing a new biotechnological product that was less allergenic. In this study, we aim to optimize the production conditions of this biologically-transformed propolis and determine the cytotoxic effects of the obtained products on a colon cancer cell line (HCT-116). Method: First, solid propolis samples were dissolved in water after weighing, grinding and sizing (sieve: 35 mesh) and subjected to 40 kHz/10 min ultrasonication. Samples were prepared by inoculation with Lactobacillus plantarum in two different proportions (2.5% and 3.5%). Chromatographic analyses of propolis were performed on a UPLC-MS/MS (Waters, Milford, MA) system, and the results were analysed with the system's MassLynx™ 4.1 software. HCT-116 cells were treated with propolis samples at 25-1000 µg/ml concentrations, and cytotoxicity was measured using the WST-8 assay at 24, 48, and 72 hours. Biologically transformed samples were compared with non-transformed control samples. Our experimental groups were formed as follows: untreated (group 1); propolis dissolved in water and ultrasonicated at 40 kHz/10 min (group 2); propolis dissolved in water, ultrasonicated at 40 kHz/10 min and inoculated with 2.5% L. plantarum strain L1 (group 3); propolis dissolved in water, ultrasonicated at 40 kHz/10 min and inoculated with 3.5% L. plantarum strain L3 (group 4). The obtained data were processed with GraphPad Software V5 and analyzed by two-way ANOVA followed by the Bonferroni test. Result: Our study evaluated the cytotoxic effect of the propolis samples on HCT-116 cells. 
At the 1000 µg/ml concentration, there was a 7.21-fold increase in group 3 compared to group 2, and a 6.66-fold increase in group 3 compared to group 1, at the end of 24 hours. At the end of 48 hours, at the 500 µg/ml concentration, a 4.7-fold increase was determined in group 4 compared to group 3. At the same time point, at the 750 µg/ml concentration, a 2.01-fold increase was determined in group 4 compared to group 3 and, at the same concentration, a 3.1-fold increase in group 4 compared to group 2. Also, at 72 hours, at the 750 µg/ml concentration, a 2.42-fold increase was determined in group 3 compared to group 2 and, at the same time point, at the 1000 µg/ml concentration, a 2.13-fold increase in group 4 compared to group 2. According to the cytotoxicity results, the group ultrasonicated at 40 kHz/10 min and inoculated with 3.5% L. plantarum strain L3 had the higher cytotoxic effect. Conclusion: It is known that the bioavailability of propolis is halved within six months. Our data indicate that biologically-transformed propolis had a greater cytotoxic effect on colon cancer cells than the non-transformed group. Consequently, we suggest that L. plantarum transformation both reduces allergenicity and extends the bioavailability period by enhancing healthful polyphenols.

Keywords: bio-transformation, propolis, colon cancer, cytotoxicity

Procedia PDF Downloads 131
1059 Open Joint Surgery for Temporomandibular Joint Internal Derangement: Wilkes Stages III-V

Authors: T. N. Goh, M. Hashmi, O. Hussain

Abstract:

Temporomandibular joint (TMJ) dysfunction (TMD) is a condition that may affect patients via restricted mouth opening, significant pain during normal functioning, and/or reproducible joint noise. TMD includes myofascial pain, TMJ functional derangements (internal derangement, dislocation), and TMJ degenerative/inflammatory joint disease. Internal derangement (ID) is the most common cause of TMD-related clicking and locking. These patients are managed in a stepwise approach, from patient education (homecare advice and analgesia), splint therapy, physiotherapy and botulinum toxin treatment to arthrocentesis. Arthrotomy is offered when the aforementioned treatment options fail to alleviate symptoms and improve quality of life. The aim of this prospective study was to review the outcomes of open jaw joint surgery in TMD patients. Patients who presented from 2015-2022 at the Oral and Maxillofacial Surgery Department in the Doncaster NHS Foundation Trust, UK, with a Wilkes classification of III-V were included. These patients underwent either i) discopexy with bone-anchoring suture (9); ii) interpositional temporalis flap (ITF) with bone-anchoring suture (3); iii) eminoplasty and discopexy with suturing to the capsule (3); iv) discectomy + ITF with bone-anchoring suture (1); v) discoplasty + bone-anchoring suture (1); or vi) ITF alone (1). Maximum incisal opening (MIO) was assessed pre-operatively and at each follow-up. Pain score, determined via the visual analogue scale (VAS, with 0 being no pain and 10 being the worst pain), was also recorded. A total of 18 eligible patients were identified, with a mean age of 45 (range 22-79), of whom 16 were female. The patients were scored by Wilkes classification as III (14), IV (1), or V (4). Twelve patients had anterior disc displacement without reduction (66%) and six had degenerative/arthritic changes (33%) to the TMJ. 
The open joint procedure resulted in an increase in MIO and a reduction in pain VAS for the majority of patients, across all Wilkes classifications. Pre-procedural MIO was 22.9 ± 7.4 mm and VAS was 7.8 ± 1.5. At three months post-procedure there was an increase in MIO to 34.4 ± 10.4 mm (p < 0.01) and a decrease in VAS to 1.5 ± 2.9 (p < 0.01). Three patients were lost to follow-up before six months. Six were discharged at the six-month review, and five patients were discharged at the 12-month review, as they were asymptomatic with good mouth opening. Four patients are still attending for annual botulinum toxin treatment. Two patients (Wilkes III and V) subsequently underwent TMJ replacement (11%). One of these patients (Wilkes III) initially improved to an MIO of 40 mm, but subsequently relapsed to less than 20 mm owing to a lack of compliance with the jaw rehabilitation device post-operatively. Clinical improvements were found in 89% of patients within the study group, with a return to a near-normal MIO range and a reduced pain score. Intraoperatively, the operator found the bone-anchoring suture used for discopexy/discoplasty more secure than the soft tissue anchoring suturing technique.

Keywords: bone anchoring suture, open temporomandibular joint surgery, temporomandibular joint, temporomandibular joint dysfunction

Procedia PDF Downloads 103
1058 Optical and Near-UV Spectroscopic Properties of Low-Redshift Jetted Quasars in the Main Sequence Context

Authors: Shimeles Terefe Mengistue, Ascensión Del Olmo, Paola Marziani, Mirjana Pović, María Angeles Martínez-Carballo, Jaime Perea, Isabel M. Árquez

Abstract:

Quasars have historically been classified into two distinct classes, radio-loud (RL) and radio-quiet (RQ), according to the presence or absence of relativistic radio jets, respectively. The absence of spectra with a high S/N ratio led to the impression that all quasars (QSOs) are spectroscopically similar. Although different attempts have been made to unify these two classes, there is a long-standing open debate on the possibility of a real physical dichotomy between RL and RQ quasars. In this work, we present new high-S/N spectra of 11 extremely powerful jetted quasars, with radio-to-optical flux density ratios > 1000, that simultaneously cover the low-ionization emission of Mg II λ2800 and Hβ as well as the Fe II blends in the redshift range 0.35 < z < 1, observed at Calar Alto Observatory (Spain). This work aims to quantify broad emission line differences between RL and RQ quasars by using the four-dimensional eigenvector 1 (4DE1) parameter space and its main sequence (MS), and to check the effect of powerful radio ejection on the low-ionization broad emission lines. The emission lines are analysed with two complementary approaches: a multicomponent non-linear fitting to account for the individual components of the broad emission lines, and an analysis of the full line profiles through parameters such as total widths, centroid velocities at different fractional intensities, and asymmetry and kurtosis indices. It is found that the broad emission lines show large redward asymmetry in both Hβ and Mg II λ2800. The location of our RL sources in the UV plane looks similar to the optical one, with weak UV Fe II emission and broad Mg II λ2800. We supplement the 11 sources with large samples from previous work to gain some general inferences. 
The results show that, compared to RQ quasars, our extreme RL quasars have larger median Hβ full width at half maximum (FWHM), weaker Fe II emission, larger M_BH, lower L_bol/L_Edd, and a restricted space occupation in the optical and UV MS planes. The differences become more elusive when the comparison is restricted to the RQ population in the region of the MS occupied by RL quasars, albeit an unbiased comparison matching M_BH and L_bol/L_Edd suggests that the most powerful RL quasars show the highest redward asymmetries in Hβ.

Keywords: galaxies: active, line: profiles, quasars: emission lines, supermassive black holes

Procedia PDF Downloads 58
1057 The OLOS® Way to Cultural Heritage: User Interface with Anthropomorphic Characteristics

Authors: Daniele Baldacci, Remo Pareschi

Abstract:

Augmented Reality and Augmented Intelligence are radically changing information technology. The path that starts from the keyboard and then, passing through milestones such as Siri, Alexa and other vocal avatars, reaches a more fluid and natural communication with computers, thus converting the dichotomy between man and machine into a harmonious interaction, now heads unequivocally towards a new IT paradigm, where holographic computing will play a key role. The OLOS® platform contributes substantially to this trend in that it infuses computers with human features, by transferring the gestures and expressions of persons of flesh and bones to anthropomorphic holographic interfaces which in turn will use them to interact with real-life humans. In fact, we could say, boldly but with a solid technological background to back the statement, that OLOS® gives reality to an altogether new entity, placed at the exact boundary between nature and technology, namely the holographic human being. Holographic humans qualify as the perfect carriers for the virtual reincarnation of characters handed down from history and tradition. Thus, they provide for an innovative and highly immersive way of experiencing our cultural heritage as something alive and pulsating in the present.

Keywords: digital cinematography, human-computer interfaces, holographic simulation, interactive museum exhibits

Procedia PDF Downloads 112
1056 Cloud Computing in Data Mining: A Technical Survey

Authors: Ghaemi Reza, Abdollahi Hamid, Dashti Elham

Abstract:

Cloud computing poses a diversity of challenges for data mining operations, arising from the dynamic structure of data distribution as opposed to the typical database scenarios of conventional architectures. Because an immense number of users seek data on a daily basis, there are serious security concerns for cloud providers as well as for the data providers who place their data in the cloud computing environment. Big data analytics uses compute-intensive data mining algorithms (hidden Markov models, MapReduce parallel programming, the Mahout project, the Hadoop distributed file system, K-Means and K-Medoids, Apriori) that require efficient high-performance processors to produce timely results and to solve for or optimize the model parameters. The challenges the operation has to encounter are establishing successful transactions with the existing virtual machine environment and keeping the databases under control. Several factors have driven the move from normal, centralized mining to distributed data mining. The approach is offered as a SaaS that uses multi-agent systems to implement the different tasks of the system. There remain open problems in data mining based on cloud computing, including the design and selection of data mining algorithms.
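As a minimal illustration of one of the compute-intensive algorithms listed above, the following is a plain-Python K-Means sketch (Lloyd's algorithm) on a toy dataset; production cloud deployments would run such kernels under MapReduce or Mahout across many nodes rather than on a single machine as here.

```python
import random

random.seed(0)

def kmeans(points, k, iters=20):
    """Plain Lloyd's algorithm for 2-D points: a single-node sketch of the
    kind of kernel that Mahout/MapReduce parallelise in the cloud."""
    # Spread the initial centroids across the data to avoid degenerate starts.
    step = max(1, len(points) // k)
    centroids = [points[i * step] for i in range(k)]
    for _ in range(iters):
        # Assignment step: attach each point to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: (p[0] - centroids[c][0]) ** 2 +
                                            (p[1] - centroids[c][1]) ** 2)
            clusters[i].append(p)
        # Update step: move each centroid to the mean of its cluster.
        for i, cl in enumerate(clusters):
            if cl:
                centroids[i] = (sum(p[0] for p in cl) / len(cl),
                                sum(p[1] for p in cl) / len(cl))
    return centroids

# Two well-separated blobs; K-Means should recover one centre per blob.
pts = [(random.gauss(0, 0.5), random.gauss(0, 0.5)) for _ in range(100)] + \
      [(random.gauss(5, 0.5), random.gauss(5, 0.5)) for _ in range(100)]
centres = sorted(kmeans(pts, 2))
print(centres)
```

In a MapReduce setting, the assignment step maps each point to its nearest centroid and the update step reduces each cluster to its mean, which is exactly the structure this loop mimics.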

Keywords: cloud computing, data mining, computing models, cloud services

Procedia PDF Downloads 476
1055 Design of Cartesian Robot for Electric Vehicle Wireless Charging Systems

Authors: Kaan Karaoglu, Raif Bayir

Abstract:

In this study, a Cartesian robot is developed to improve the performance and efficiency of wireless charging of electric vehicles. The Cartesian robot has three axes, each of which moves linearly. Magnetic positioning is used to align the transmitter charging pad carried by the Cartesian robot. There are two different wireless charging methods for electric vehicles, static and dynamic. The current state of charge (SOC) and location information are received wirelessly from the electric vehicle. Based on this information, the power to be transmitted is determined, and the transmitter and receiver charging pads are aligned for maximum efficiency. With this study, a fully automated Cartesian robot structure will be used to charge electric vehicles with the highest possible efficiency. With the wireless communication established between the electric vehicle and the charging station, the charging status will be monitored in real time. The Cartesian robot developed in this study is a fully automatic system that can be easily used in static wireless charging systems with vehicle-machine communication.

Keywords: electric vehicle, wireless charging systems, energy efficiency, cartesian robot, location detection, trajectory planning

Procedia PDF Downloads 67
1054 Analyzing the Feasibility of Low-Cost Composite Wind Turbine Blades for Residential Energy Production

Authors: Aravindhan Nepolean, Chidamabaranathan Bibin, Rajesh K., Gopinath S., Ashok Kumar R., Arun Kumar S., Sadasivan N.

Abstract:

Wind turbine blades are an important parameter for surging renewable energy production. Optimizing blade profiles and developing new materials for wind turbine blades take a lot of time and effort. Even though many standards for wind turbine blades have been developed for large-scale applications, they are less effective in small-scale applications. In this study, we used acrylonitrile-butadiene-styrene (ABS) to make small-scale wind turbine blades. We chose the material because it is inexpensive and easy to machine into the desired form, and it also offers outstanding chemical, stress, and creep resistance. The blade measures 332 mm in length and has a 664 mm rotor diameter. A modal study of the blades is carried out, as well as a comparison with current e-glass fiber blades. According to the findings, they were able to balance the output with less vibration. QBlade software is used to simulate the rotor output. Modal analysis testing and prototype testing of the wind turbine blades were used for experimental validation.

Keywords: acrylonitrile-butadiene-styrene, e-glass fiber, modal, renewable energy, q-blade

Procedia PDF Downloads 154
1053 Rehabilitation Robot in Primary Walking Pattern Training for SCI Patient at Home

Authors: Taisuke Sakaki, Toshihiko Shimokawa, Nobuhiro Ushimi, Koji Murakami, Yong-Kwun Lee, Kazuhiro Tsuruta, Kanta Aoki, Kaoru Fujiie, Ryuji Katamoto, Atsushi Sugyo

Abstract:

Recently, attention has been focused on incomplete spinal cord injuries (SCI) to the central spine caused by pressure on parts of the white matter conduction pathway, such as the pyramidal tract. In this paper, we focus on a training robot designed to assist with primary walking-pattern training. The target patient for this training robot is relearning the basic functions of the usual walking pattern; it is meant especially for those with incomplete-type SCI to the central spine, who are capable of standing by themselves but not of performing walking motions. From the perspective of human engineering, we monitored the operator's actions on the robot and investigated the movement of the joints of the lower extremities, the circumference of the lower extremities, and the exercise intensity with the machine. The concept of the device was to provide mild training without any sudden changes in heart rate or blood pressure, which will be particularly useful for the elderly and disabled. The mechanism of the robot is kept simple and lightweight with the expectation that it will be used at home.

Keywords: training, rehabilitation, SCI patient, welfare, robot

Procedia PDF Downloads 421
1052 A Deep Learning Based Method for Faster 3D Structural Topology Optimization

Authors: Arya Prakash Padhi, Anupam Chakrabarti, Rajib Chowdhury

Abstract:

Topology or layout optimization often yields better-performing, economical structures and is very helpful in the conceptual design phase. Traditionally, however, it is done with finite element-based optimization schemes which, although they give good results, are very time-consuming, especially for 3D structures. Among the alternatives, machine learning, and especially deep learning-based methods, has very good potential for resolving this computational issue. Here a 3D convolutional neural network (3D-CNN) based variational autoencoder (VAE) is trained using a dataset generated from the commercially available topology optimization code ABAQUS Tosca, using the solid isotropic material with penalization (SIMP) method for compliance minimization. The encoded data in the latent space is then fed to a 3D generative adversarial network (3D-GAN) to generate the outcome at 64×64×64 resolution. The network consists of 3D volumetric CNN layers with rectified linear unit (ReLU) activations in between and a sigmoid activation at the end. The proposed network is seen to provide almost optimal results with significantly reduced computational time, as no iteration is involved.
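The SIMP interpolation used to generate the training data can be shown in a few lines: Young's modulus is interpolated as E(x) = E_min + x^p (E_0 − E_min), so intermediate densities are penalised and the optimizer is pushed toward 0/1 designs. This is only the standard material-interpolation step (with typical default values for E_min and p), not a full topology-optimization loop.

```python
E0, Emin, p = 1.0, 1e-9, 3.0  # solid modulus, void modulus, penalisation power

def simp_modulus(x, p=p):
    """SIMP interpolation: stiffness of an element with density x in [0, 1]."""
    return Emin + x**p * (E0 - Emin)

# With p = 3 an element at half density contributes only 1/8 of the stiffness
# it would at full density, so intermediate ("grey") material is uneconomical
# and the optimiser drives element densities toward 0 or 1.
for x in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"x = {x:4.2f}  E = {simp_modulus(x):.4f}")
```

The small positive E_min keeps void elements numerically stable in the finite element solve without contributing meaningful stiffness.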

Keywords: 3D generative adversarial network, deep learning, structural topology optimization, variational auto encoder

Procedia PDF Downloads 169
1051 Robust Inference with a Skew T Distribution

Authors: M. Qamarul Islam, Ergun Dogan, Mehmet Yazici

Abstract:

There is a growing body of evidence that non-normal data is more prevalent in nature than normal data. Examples can be quoted from, but are not restricted to, the areas of economics, finance, and actuarial science. The non-normality considered here is expressed in terms of the fat-tailedness and asymmetry of the relevant distribution. In this study, a skew t distribution that can be used to model data that exhibit inherently non-normal behavior is considered. This distribution has tails fatter than a normal distribution, and it also exhibits skewness. Although maximum likelihood estimates can be obtained by iteratively solving the likelihood equations, which are non-linear in form, this can be problematic in terms of convergence and in many other respects as well. Therefore, it is preferred to use the method of modified maximum likelihood, in which the estimates are derived by expressing the intractable non-linear likelihood equations in terms of standardized ordered variates and replacing the intractable terms with their linear approximations obtained from the first two terms of a Taylor series expansion about the quantiles of the distribution. These estimates, called modified maximum likelihood estimates, are obtained in closed form. Hence, they are easy to compute and to manipulate analytically. In fact, the modified maximum likelihood estimates are asymptotically equivalent to the maximum likelihood estimates. Even in small samples, the modified maximum likelihood estimates are found to be approximately the same as the maximum likelihood estimates obtained iteratively. It is shown in this study that the modified maximum likelihood estimates are not only unbiased but substantially more efficient than the commonly used moment estimates or least square estimates, which are known to be biased and inefficient in such cases.
Furthermore, in conventional regression analysis, it is assumed that the error terms are normally distributed, and hence the well-known least square method is considered a suitable and preferred method for making the relevant statistical inferences. However, a number of empirical studies have shown that non-normal errors are more prevalent, and even transforming and/or filtering techniques may not produce normally distributed residuals. Here, a study is done for multiple linear regression models with random errors having a non-normal pattern. Through an extensive simulation, it is shown that the modified maximum likelihood estimates of the regression parameters are plausibly robust to the distributional assumptions and to various data anomalies, as compared to the widely used least square estimates. Relevant tests of hypotheses are developed and explored for desirable properties in terms of their size and power. The tests based upon the modified maximum likelihood estimates are found to be substantially more powerful than the tests based upon the least square estimates. Several examples are provided from the areas of economics and finance where such distributions are interpretable in terms of the efficient market hypothesis with respect to asset pricing, portfolio selection, risk measurement, capital allocation, etc.
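The inefficiency of least-squares-type estimators under fat-tailed errors, which motivates the modified maximum likelihood approach, can be seen in a small Monte Carlo experiment. The sketch below compares the sample mean (the LS estimator of location) with a 10% trimmed mean on Student-t errors with 3 degrees of freedom; it illustrates the phenomenon only and is not the authors' MML estimator.

```python
import math
import random
import statistics

random.seed(1)

def student_t(df):
    """Draw from Student's t: Z / sqrt(chi2_df / df), built from normals."""
    z = random.gauss(0, 1)
    chi2 = sum(random.gauss(0, 1) ** 2 for _ in range(df))
    return z / math.sqrt(chi2 / df)

def trimmed_mean(xs, frac=0.1):
    """Drop the smallest and largest `frac` of the sample, average the rest."""
    xs = sorted(xs)
    k = int(len(xs) * frac)
    core = xs[k:len(xs) - k]
    return sum(core) / len(core)

# Estimate the true location 0 from fat-tailed samples, many times over, and
# compare the sampling variance of the two estimators.
reps, n = 2000, 50
ls_est = [statistics.fmean(student_t(3) for _ in range(n)) for _ in range(reps)]
tm_est = [trimmed_mean([student_t(3) for _ in range(n)]) for _ in range(reps)]

var_ls = statistics.pvariance(ls_est)
var_tm = statistics.pvariance(tm_est)
print(f"var(mean) = {var_ls:.4f}  var(trimmed mean) = {var_tm:.4f}")
```

The trimmed mean, which downweights the extreme order statistics just as the MML linearization does, has a markedly smaller sampling variance than the mean when the tails are fat.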

Keywords: least square estimates, linear regression, maximum likelihood estimates, modified maximum likelihood method, non-normality, robustness

Procedia PDF Downloads 394
1050 Application of IED to Condition Based Maintenance of Medium Voltage GCB/VCB

Authors: Ming-Ta Yang, Jyh-Cherng Gu, Chun-Wei Huang, Jin-Lung Guan

Abstract:

Time-based maintenance (TBM) is conventionally applied by power utilities to maintain circuit breakers (CBs), transformers, bus bars, and cables, which may result in under-maintenance or over-maintenance. As the information and communication technology (ICT) industry develops, the maintenance policies of many power utilities have gradually changed from TBM to condition-based maintenance (CBM) to improve system operating efficiency, operating cost, and power supply reliability. This paper discusses the feasibility of using intelligent electronic devices (IEDs) to construct a CB CBM management platform. CBs in power substations can be monitored using IEDs with additional logic configuration and wire connections. The CB monitoring data can be sent through an intranet to a control center and be analyzed and integrated by the Elipse Power Studio software. Finally, a human-machine interface (HMI) of a supervisory control and data acquisition (SCADA) system can be designed to construct a CBM management platform that provides maintenance decision information for maintenance personnel, management personnel, and CB manufacturers.

Keywords: circuit breaker, condition base maintenance, intelligent electronic device, time base maintenance, SCADA

Procedia PDF Downloads 323
1049 Examining the European Central Bank's Marginal Attention to Human Rights Concerns during the Eurozone Crisis through the Lens of Organizational Culture

Authors: Hila Levi

Abstract:

Respect for human rights is a fundamental element of the European Union's (EU) identity and law. Surprisingly, however, the protection of human rights has been significantly restricted in the austerity programs ordered by the International Monetary Fund (IMF), the European Central Bank (ECB), and the European Commission (EC) (often labeled 'the Troika') in return for financial aid to the crisis-hit countries. This paper focuses on the role of the ECB in the crisis management. While other international financial institutions, such as the IMF or the World Bank, may opt to neglect human rights obligations, one might expect greater respect for human rights from the ECB, which is bound by the EU Charter of Fundamental Rights. However, this paper argues that ECB officials made no significant effort to protect human rights or to strike an adequate balance between competing financial and human rights needs while coping with the crisis. ECB officials were preoccupied with the need to stabilize the economy and prevent a collapse of the Eurozone, and paid only marginal attention to human rights concerns in the design and implementation of the Troika's programs. This paper explores the role of organizational culture (OC) in explaining this marginalization. While international relations (IR) research on the behavior of intergovernmental organizations (IGOs) has traditionally focused on the external interests of powerful member states and on national and economic considerations, this study focuses on particular institutions' internal factors and independent processes. OC characteristics have been identified in the OC literature as an important determinant of organizational behavior. This paper suggests that cultural characteristics are also vital for the examination of IGOs, and particularly for understanding the ECB's behavior during the crisis.
In order to assess the OC of the ECB and the impact it had on its policies and decisions during the Eurozone crisis, the paper uses the results of numerous qualitative interviews conducted with high-ranking officials and staff members of the ECB involved in the crisis management. It further reviews primary sources of the ECB (such as the ECB statutes and the Memoranda of Understanding signed between the crisis countries and the Troika) and secondary sources (such as the report of the UN High Commissioner for Human Rights on austerity measures and economic, social, and cultural rights). It thus analyzes the interaction between the ECB's culture and the almost complete absence of human rights considerations in the Eurozone crisis resolution scheme. This paper highlights the importance and influence of internal ideational factors on IGO behavior. From a more practical perspective, this paper may contribute to understanding one of the obstacles in the process of human rights implementation in international organizations and provide instruments for better protection of social and economic rights.

Keywords: European central bank, eurozone crisis, intergovernmental organizations, organizational culture

Procedia PDF Downloads 152
1048 Modeling and Simulating Productivity Loss Due to Project Changes

Authors: Robert Pellerin, Michel Gamache, Remi Trudeau, Nathalie Perrier

Abstract:

The context of large engineering projects is particularly favorable to the appearance of engineering changes and contractual modifications. These elements are potential causes of claims. In this paper, we investigate one of the critical components of the claim management process: the calculation of the impacts of changes in terms of losses of productivity due to the need to accelerate some project activities. When project changes are initiated, delays can arise. Indeed, project activities are often executed in fast-tracking mode in an attempt to respect the completion date. But the acceleration of project execution and the resulting rework can entail important costs as well as induce productivity losses. In the past, numerous methods have been proposed to quantify the duration of delays, the gains achieved by project acceleration, and the loss of productivity. The calculations related to those changes can be divided into two categories: direct costs and indirect costs. Direct costs are easily quantifiable, as opposed to indirect costs, which are rarely taken into account during the calculation of the cost of an engineering change or contract modification, even though several research projects have addressed this subject. However, the proposed models have not yet been accepted by companies, nor have they been accepted in court. Those models require extensive data and are often seen as too specific to be used for all projects. These techniques also ignore the resource constraints and the interdependencies between the causes of delays and the delays themselves. To resolve this issue, this research proposes a simulation model that mimics how major engineering changes or contract modifications are handled in large construction projects. The model replicates the use of overtime in a reactive scheduling mode in order to simulate the loss of productivity present when a project change occurs.
Multiple tests were conducted to compare the results of the proposed simulation model with statistical analyses conducted by other researchers. Different scenarios were also run in order to determine the impact of the number of activities, the time of occurrence of the change, the availability of resources, and the type of project change on productivity loss. Our results demonstrate that the number of activities in the project is a critical variable influencing the productivity of a project. When changes occur, the presence of a large number of activities leads to a much lower productivity loss than a small number of activities: productivity declines about 25 percent faster in 30-activity projects than in 120-activity projects. The moment of occurrence of a change also shows a significant impact on productivity. Indeed, the sooner the change occurs, the lower the productivity of the labor force. The availability of resources also impacts the productivity of a project when a change is implemented: there is a higher loss of productivity when the amount of resources is restricted.
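A toy version of the reactive-overtime mechanism can be sketched as follows. The model, parameters, and efficiency penalty below are hypothetical illustrations of the setting described, not the authors' simulation: a change of fixed scope triggers rework proportional to the fraction of activities still remaining, executed on overtime where each hour yields less output.

```python
def project_productivity(n_activities, change_at, change_scope=8.0,
                         overtime_efficiency=0.8):
    """Toy reactive-overtime model (hypothetical parameters). A change of
    fixed scope hits at activity `change_at`; the rework it triggers scales
    with the fraction of activities still remaining and is executed on
    overtime, where each hour yields less output. Returns output per hour."""
    remaining_fraction = (n_activities - change_at) / n_activities
    rework = change_scope * remaining_fraction
    overtime_hours = rework / overtime_efficiency
    base_hours = n_activities * 1.0  # 1 nominal hour per activity
    return n_activities / (base_hours + overtime_hours)

early_big   = project_productivity(120, change_at=10)
late_big    = project_productivity(120, change_at=100)
early_small = project_productivity(30, change_at=10)

print(f"120 activities, early change: {early_big:.3f}")
print(f"120 activities, late change:  {late_big:.3f}")
print(f" 30 activities, early change: {early_small:.3f}")
```

Even this toy model reproduces the two qualitative findings: an earlier change leaves more downstream work to redo (lower productivity), and a fixed-scope change is diluted in a project with many activities (smaller loss for 120 activities than for 30).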

Keywords: engineering changes, indirect costs, overtime, productivity, scheduling, simulation

Procedia PDF Downloads 235
1047 Predictive Modelling Approach to Identify Spare Parts Inventory Obsolescence

Authors: Madhu Babu Cherukuri, Tamoghna Ghosh

Abstract:

Factory supply chain management spends billions of dollars every year to procure and manage equipment spare parts. Due to technology and process changes, some of these spares become obsolete, or 'dead', inventory. Factories have huge dead inventories, worth millions of dollars, accumulating over time. This is due to the lack of a scientific methodology to identify them and send the inventory back to the suppliers on a timely basis. The standard approach followed across industries is: if a part is not used for a pre-defined period of time, it is declared dead. This leads to the accumulation of dead parts over time, and these parts cannot be sold back to the suppliers, as by then it is too late per the contract agreement. Our main idea is that the time period for identifying a part as dead cannot be a fixed pre-defined duration across all parts. Rather, it should depend on various properties of the part, such as its historical consumption pattern, the type of part, how many machines it is used in, and whether it is a preventive maintenance part. We have designed a predictive algorithm that predicts part obsolescence well in advance with reasonable accuracy and can help save millions.
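The part-dependent threshold idea can be sketched as a simple feature-based risk score. The feature names, weights, and thresholds below are hypothetical placeholders for whatever a trained model would learn from historical consumption data; the point is only that the verdict depends on the part's properties, not on a single fixed idle period.

```python
from dataclasses import dataclass

@dataclass
class SparePart:
    days_since_last_use: int
    avg_monthly_consumption: float  # units issued per month, trailing year
    machines_using: int             # how many machines still need this part
    is_preventive_maintenance: bool

def obsolescence_risk(part: SparePart) -> float:
    """Hypothetical heuristic score in [0, 1]; a trained classifier would
    replace these hand-set weights. Higher = more likely dead inventory."""
    risk = min(part.days_since_last_use / 730.0, 1.0) * 0.5
    risk += 0.3 if part.avg_monthly_consumption == 0 else 0.0
    risk += 0.2 if part.machines_using == 0 else 0.0
    # PM parts are consumed on a schedule, so idle spells are less alarming.
    if part.is_preventive_maintenance:
        risk *= 0.5
    return risk

fast_mover = SparePart(30, 4.0, 12, False)
idle_part  = SparePart(700, 0.0, 0, False)
pm_part    = SparePart(700, 0.0, 0, True)

for name, p in [("fast mover", fast_mover), ("idle", idle_part),
                ("idle PM", pm_part)]:
    print(f"{name:10s} risk = {obsolescence_risk(p):.2f}")
```

Note how the two parts with identical idle time (700 days) receive very different scores once the preventive-maintenance flag is taken into account, which is exactly what a fixed time-based rule cannot do.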

Keywords: obsolete inventory, machine learning, big data, supply chain analytics, dead inventory

Procedia PDF Downloads 315
1046 Performance of Constant Load Feed Machining for Robotic Drilling

Authors: Youji Miyake

Abstract:

In aircraft assembly, a large number of preparatory holes are required for screw and rivet joints. Currently, many holes are drilled manually because it is difficult to machine them using conventional computerized numerical control (CNC) machines. The application of industrial robots to drill the holes has been considered as an alternative to CNC machines. However, the rigidity of robot arms is so low that vibration is likely to occur during drilling. In this study, constant-load feed machining is proposed as a method to perform high-precision drilling while minimizing the thrust force, which is considered to be the cause of the vibration. In this method, the drill feed is realized by a constant load applied to the tool, so that the thrust force is theoretically kept below the applied load. The performance of the proposed method was experimentally examined through the deep-hole drilling of plastic and the simultaneous drilling of metal/plastic stack plates. It was confirmed that deep-hole drilling and simultaneous drilling could be performed without generating vibration by controlling the tool feed rate within the appropriate range.
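The core idea, that the thrust force cannot exceed the applied load because the load itself drives the feed, can be contrasted with a simple rate-control sketch: a feed loop that throttles back whenever the modelled thrust would exceed a limit. The force model (thrust proportional to feed rate) and all constants below are hypothetical, chosen only to show the bound.

```python
def drill_with_feed_limit(target_depth, load_limit, k_thrust=50.0,
                          max_feed=0.2, dt=0.01):
    """Hypothetical 1-D drilling loop: thrust is modelled as proportional to
    feed rate (F = k * f). The feed is capped so that the modelled thrust
    never exceeds the applied load, mimicking constant-load feed machining."""
    feed_cap = load_limit / k_thrust        # feed rate at which F == load
    depth, peak_thrust, t = 0.0, 0.0, 0.0
    while depth < target_depth:
        feed = min(max_feed, feed_cap)      # throttle to respect the load
        thrust = k_thrust * feed
        peak_thrust = max(peak_thrust, thrust)
        depth += feed * dt                  # advance the drill
        t += dt
    return t, peak_thrust

time_taken, peak = drill_with_feed_limit(target_depth=10.0, load_limit=5.0)
print(f"time: {time_taken:.1f} s  peak thrust: {peak:.2f} N")
```

The trade-off is visible directly: tightening `load_limit` lowers the thrust bound (and hence the vibration) at the cost of a longer drilling time, which is why the feed rate must be controlled within an appropriate range.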

Keywords: constant load feed machining, robotic drilling, deep hole, simultaneous drilling

Procedia PDF Downloads 189
1045 Deep Learning Based Fall Detection Using Simplified Human Posture

Authors: Kripesh Adhikari, Hamid Bouchachia, Hammadi Nait-Charif

Abstract:

Falls are one of the major causes of injury and death among elderly people aged 65 and above. Support systems to identify such abnormal activities have become extremely important with the increase in the ageing population. Pose estimation is a challenging task, and it is even more challenging for the atypical poses that may occur during a fall. The location of the body provides a clue as to where the person is at the time of the fall. This paper presents a vision-based tracking strategy in which the available joints are grouped into three different feature points depending upon the section of the body in which they are located. The three feature points, derived from different joint combinations, represent the upper or head region, the mid-region or torso, and the lower or leg region. Tracking is always challenging when motion is involved. Hence, the idea is to locate the regions of the body in every frame and use this as the tracking strategy. Grouping these joints can be beneficial for achieving a stable region to track. The location of the body parts provides crucial information for distinguishing normal activities from falls.
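The three-region grouping described above can be sketched directly: whatever joints are available are partitioned into head, torso, and leg groups, and each group is reduced to a single feature point (its centroid), giving a stable triple to track across frames. The joint names and coordinates here are illustrative, not a specific pose estimator's output.

```python
# Illustrative joint names; a real pose estimator (e.g. a COCO-style model)
# would supply its own keypoint set, possibly with some joints missing.
REGIONS = {
    "head":  ["nose", "left_eye", "right_eye", "left_ear", "right_ear"],
    "torso": ["left_shoulder", "right_shoulder", "left_hip", "right_hip"],
    "legs":  ["left_knee", "right_knee", "left_ankle", "right_ankle"],
}

def region_feature_points(joints):
    """Collapse whatever joints were detected into up to three feature
    points: the centroid of each region's available joints."""
    features = {}
    for region, names in REGIONS.items():
        pts = [joints[n] for n in names if n in joints]
        if pts:  # a region with no detected joints yields no feature point
            features[region] = (sum(x for x, _ in pts) / len(pts),
                                sum(y for _, y in pts) / len(pts))
    return features

# One synthetic frame of an upright person (image coordinates, y grows down).
frame = {
    "nose": (100, 40), "left_eye": (95, 35), "right_eye": (105, 35),
    "left_shoulder": (85, 80), "right_shoulder": (115, 80),
    "left_hip": (90, 150), "right_hip": (110, 150),
    "left_knee": (92, 200), "right_knee": (108, 200),
    "left_ankle": (93, 250), "right_ankle": (107, 250),
}
feats = region_feature_points(frame)
print(feats)
```

For an upright pose the head centroid lies above the torso centroid, which lies above the leg centroid (smaller y is higher in the image); a fall collapses this vertical ordering, which is the kind of cue that separates normal activity from a fall.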

Keywords: fall detection, machine learning, deep learning, pose estimation, tracking

Procedia PDF Downloads 183
1044 An Efficient Architecture for Dynamic Customization and Provisioning of Virtual Appliance in Cloud Environment

Authors: Rajendar Kandan, Mohammad Zakaria Alli, Hong Ong

Abstract:

Cloud computing is a business model that provides easier management of computing resources. Cloud users can request a virtual machine and install and configure additional software if needed. However, a user can also request a virtual appliance, which allows an application to be deployed much faster, as it is a ready-built operating system image with the necessary software installed and configured. Large numbers of virtual appliances are available in different image formats. Users can download available appliances from a public marketplace and start using them. However, the information published about virtual appliances differs across providers, making it difficult to choose the required appliance, as each is composed of a specific OS with fixed software versions. Moreover, even if the user chooses an appliance from a given provider, the user has no flexibility to choose their own set of software with the required OS and applications. In this paper, we propose a reference architecture for dynamically customizing virtual appliances and provisioning them in an easier manner. We also share our experience in integrating the proposed architecture with a public marketplace and Mi-Cloud, a cloud management software.

Keywords: cloud computing, marketplace, virtualization, virtual appliance

Procedia PDF Downloads 291
1043 Assessing the Nutritional Characteristics and Habitat Modeling of the Comorian’s Yam (Dioscorea comorensis) in a Fragmented Landscape

Authors: Mounir Soule, Hindatou Saidou, Razafimahefa, Mohamed Thani Ibouroi

Abstract:

High levels of habitat fragmentation and loss are the main drivers of plant species extinction. They reduce habitat quality, which is a determining factor in the reproduction of plant species, and generate strong selective pressures for habitat selection, with impacts on the reproduction and survival of individuals. The Comorian yam (Dioscorea comorensis) is one of the most threatened plant species of the Comoros archipelago. The species faces one of the highest rates of habitat loss worldwide (9.3% per year) and is classified as Endangered on the IUCN Red List. Despite the nutritional potential of this tuber, Comorian yam cultivation remains neglected by local populations, probably due to a lack of knowledge of its nutritional importance and of the factors driving its spatial distribution and development. In this study, we assessed the nutritional characteristics of Dioscorea comorensis and the drivers of its spatial distribution and abundance in order to propose conservation measures and improve crop yields. To determine the nutritional characteristics, the Kjeldahl method, the Soxhlet method, and Atwater's specific calorific coefficients were applied to analyze proteins, lipids, and caloric energy, respectively. In addition, atomic absorption spectrometry was used to measure mineral content. By combining species occurrences with ecological (habitat types), climatic (temperature, rainfall, etc.), and physicochemical (soil types and quality) variables, we assessed the habitat suitability and spatial distribution of the species, and the factors explaining its origin, persistence, distribution, and competitive capacity, using a species distribution modeling (SDM) method. The results showed that the species contains 83.37% carbohydrates, 6.37% protein, and 0.45% lipids.
In 100 grams, the quantities of calcium, sodium, zinc, iron, copper, potassium, phosphorus, magnesium, and manganese are, respectively, 422.70, 599.41, 223.11, 252.32, 332.20, 780.41, 444.17, 287.71, and 220.73 mg. Its PRAL index is negative (−9.80 mEq/100 g), and its Ca/P (0.95) and Na/K (0.77) ratios are less than 1. This species provides an energy value of 357.46 kcal per 100 g thanks to its carbohydrates and minerals, and it is distinguished from other species by its high protein content, offering benefits for cardiovascular health. According to our SDM, the species has a very limited distribution, restricted to forests with higher biomass, humidity, and clay content. Our findings highlight how distribution patterns are related to ecological and environmental factors. They also emphasize the nutritional quality of the Comorian yam. Our results represent basic knowledge that will help scientists and decision-makers develop conservation strategies and improve crop yields.
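The reported mineral ratios can be checked directly from the per-100 g values above; this snippet only reproduces that arithmetic.

```python
# Mineral content of Dioscorea comorensis tuber, mg per 100 g (from the study).
minerals = {
    "Ca": 422.70, "Na": 599.41, "Zn": 223.11, "Fe": 252.32, "Cu": 332.20,
    "K": 780.41, "P": 444.17, "Mg": 287.71, "Mn": 220.73,
}

ca_p = minerals["Ca"] / minerals["P"]   # calcium-to-phosphorus ratio
na_k = minerals["Na"] / minerals["K"]   # sodium-to-potassium ratio

print(f"Ca/P = {ca_p:.2f}  Na/K = {na_k:.2f}")  # both below 1, as reported
```

Both ratios match the reported values of 0.95 and 0.77 to two decimal places.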

Keywords: Dioscorea comorensis, nutritional characteristics, species distribution modeling, conservation strategies, crop yields improvement

Procedia PDF Downloads 22