Search results for: high-speed data transmission

25476 Data Mining and Knowledge Management Application to Enhance Business Operations: An Exploratory Study

Authors: Zeba Mahmood

Abstract:

Modern business organizations are adopting technological advancements to achieve a competitive edge and satisfy their consumers. Developments in information technology systems have changed the way business is conducted today. Business operations rely increasingly on the data they obtain, and this data is continuously growing in volume. Data stored in different locations is difficult to find and use without the effective implementation of data mining and knowledge management techniques. Organizations that smartly identify, obtain, and convert data into useful formats for decision making and operational improvement create additional value for their customers and enhance their operational capabilities. Marketing and customer relationship departments of firms use data mining techniques to make relevant decisions. This paper focuses on identifying the data mining and knowledge management techniques applied in different business industries; the challenges and issues involved in executing these techniques are also discussed and critically analyzed.

Keywords: knowledge, knowledge management, knowledge discovery in databases, business, operational, information, data mining

Procedia PDF Downloads 533
25475 Indexing and Incremental Approach Using Map Reduce Bipartite Graph (MRBG) for Mining Evolving Big Data

Authors: Adarsh Shroff

Abstract:

Big data is a collection of datasets so large and complex that it becomes difficult to process them using conventional database management tools. Operations such as search, analysis, and visualization on big data are performed using data mining, the process of extracting patterns or knowledge from large data sets. Over time, however, the results of data mining applications become stale and obsolete. Incremental processing is a promising approach to refreshing mining results: it utilizes previously saved states to avoid the expense of re-computation from scratch. This project uses i2MapReduce, an incremental processing extension to MapReduce, the most widely used framework for mining big data. i2MapReduce performs key-value pair level incremental processing rather than task-level re-computation, supports not only one-step computation but also the more sophisticated iterative computation widely used in data mining applications, and incorporates a set of novel techniques to reduce the I/O overhead of accessing preserved fine-grain computation states. To optimize the mining results, i2MapReduce is evaluated using a one-step algorithm and three iterative algorithms with diverse computation characteristics.
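For illustration, the key idea of key-value pair level incremental processing (preserving per-key state and folding in only new records instead of recomputing from scratch) can be sketched in plain Python; this is not the i2MapReduce API, and the word-count example, function names, and data are purely illustrative.

```python
from collections import Counter, defaultdict

# Preserved fine-grain state: one aggregate per key, analogous to the
# key-value pair level states kept between runs.
saved_state = defaultdict(int)

def map_phase(records):
    """Emit (key, value) pairs; here a simple word count."""
    for record in records:
        for word in record.split():
            yield word, 1

def incremental_reduce(delta_records):
    """Fold only the newly arrived records into the saved per-key state,
    avoiding a full re-computation over the historical data."""
    delta = Counter()
    for key, value in map_phase(delta_records):
        delta[key] += value
    for key, value in delta.items():
        saved_state[key] += value          # touch only affected keys
    return dict(saved_state)

# The first batch computes from scratch; later batches are applied incrementally.
print(incremental_reduce(["big data mining", "map reduce"]))
print(incremental_reduce(["incremental big data processing"]))
```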

Keywords: big data, map reduce, incremental processing, iterative computation

Procedia PDF Downloads 345
25474 GIS-Based Identification of Overloaded Distribution Transformers and Calculation of Technical Electric Power Losses

Authors: Awais Ahmed, Javed Iqbal

Abstract:

Pakistan has for many years faced extreme challenges from an energy deficit caused by a shortage of power generation relative to increasing demand. Part of this deficit is contributed by the power lost in the transmission and distribution network. Unfortunately, distribution companies are not equipped with modern technologies and methods to identify and eliminate these losses. According to estimates, the total energy lost in the early 2000s was between 20 and 26 percent. To address this issue, the present research study was designed with the objectives of developing a standalone GIS application for distribution companies capable of loss calculation as well as identification of overloaded transformers. For this purpose, the Hilal Road feeder of the Faisalabad Electric Supply Company (FESCO) was selected as the study area. An extensive GPS survey was conducted to identify each consumer, link it to the secondary pole of its transformer, geo-reference the equipment, and document conductor sizes. To identify overloaded transformers, the accumulated kWh readings of the consumers on each transformer were compared with a threshold kWh value. Technical losses of the 11 kV and 220 V lines were calculated using data from the substation and the network resistance computed from the geo-database. To automate the process, a standalone GIS application with engineering analysis capabilities was developed using ArcObjects. The application uses the GIS database developed for the 11 kV and 220 V lines to display and query spatial data and presents the results in the form of graphs. The results show a technical loss of about 14% on the high tension (HT) and low tension (LT) networks, while 4 out of 15 general-duty transformers were found to be overloaded. The study shows that GIS can be a very effective tool for distribution companies in the management and planning of their distribution networks.
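For illustration, the two checks described above (accumulating consumer kWh per transformer against a threshold, and estimating resistive technical losses from conductor resistance) can be sketched as follows; the records, threshold, and values are hypothetical and not taken from the FESCO geo-database.

```python
# Hypothetical records: consumers linked to their transformer, as produced
# by the GPS survey, plus conductor resistance from the geo-database.
consumers = [
    {"transformer": "T1", "kwh": 950.0},
    {"transformer": "T1", "kwh": 820.0},
    {"transformer": "T2", "kwh": 310.0},
]
threshold_kwh = 1500.0       # illustrative per-transformer threshold

# 1) Identify overloaded transformers by accumulating consumer kWh.
loads = {}
for c in consumers:
    loads[c["transformer"]] = loads.get(c["transformer"], 0.0) + c["kwh"]
overloaded = [t for t, kwh in loads.items() if kwh > threshold_kwh]
print("Overloaded transformers:", overloaded)

# 2) Estimate technical (I^2 * R) losses on a line segment.
def segment_loss_kw(current_a, resistance_ohm, phases=3):
    """Resistive loss in kW for a balanced line segment."""
    return phases * (current_a ** 2) * resistance_ohm / 1000.0

print("Segment loss:", segment_loss_kw(current_a=120.0, resistance_ohm=0.35), "kW")
```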

Keywords: geographical information system, GIS, power distribution, distribution transformers, technical losses, GPS, SDSS, spatial decision support system

Procedia PDF Downloads 372
25473 Soliton Solutions in (3+1)-Dimensions

Authors: Magdy G. Asaad

Abstract:

Solitons are among the most useful solutions in science and technology because of their applicability in physical settings, including plasmas, energy transport along protein molecules, wave transport along polyacetylene molecules, ocean waves, the construction of optical communication systems, the transmission of information through optical fibers, and Josephson junctions. In this talk, we apply the bilinear technique to generate a class of soliton solutions to the (3+1)-dimensional nonlinear soliton equation of Jimbo-Miwa type. Examples of the resulting soliton solutions are computed, and a few of them are plotted.
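For reference, a commonly cited form of the (3+1)-dimensional Jimbo-Miwa equation, quoted here from the standard literature rather than from the abstract itself, is:

```latex
u_{xxxy} + 3\,u_{y}u_{xx} + 3\,u_{x}u_{xy} + 2\,u_{yt} - 3\,u_{xz} = 0
```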

Keywords: Pfaffian solutions, N-soliton solutions, soliton equations, Jimbo-Miwa

Procedia PDF Downloads 448
25472 Analyzing Large Scale Recurrent Event Data with a Divide-And-Conquer Approach

Authors: Jerry Q. Cheng

Abstract:

Analyzing large-scale recurrent event data currently presents many challenges, such as memory limitations and unscalable computing time. In this research, a divide-and-conquer method is proposed using parametric frailty models. Specifically, the data are randomly divided into many subsets, and the maximum likelihood estimator is obtained from each individual subset. A weighted method is then proposed to combine these individual estimators into the final estimator. It is shown that this divide-and-conquer estimator is asymptotically equivalent to the estimator based on the full data. Simulation studies are conducted to demonstrate the performance of the proposed method. The approach is applied to a large real dataset of repeated heart failure hospitalizations.
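For illustration, the divide-and-conquer combination step can be sketched with a toy estimator: each subset yields an estimate and an estimated variance, and the final estimator is an inverse-variance weighted average. The sketch below uses a sample mean as a stand-in; the parametric frailty models of the paper are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
full_data = rng.exponential(scale=2.0, size=100_000)   # stand-in for event data

def subset_estimate(x):
    """Toy per-subset estimator: sample mean and its estimated variance."""
    return x.mean(), x.var(ddof=1) / len(x)

# Divide: random split into K subsets; conquer: estimate on each subset.
K = 10
subsets = np.array_split(rng.permutation(full_data), K)
estimates, variances = zip(*(subset_estimate(s) for s in subsets))

# Combine: inverse-variance weighted average of the subset estimators.
weights = 1.0 / np.asarray(variances)
combined = np.sum(weights * np.asarray(estimates)) / weights.sum()

print("combined estimate:", combined)
print("full-data estimate:", full_data.mean())
```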

Keywords: big data analytics, divide-and-conquer, recurrent event data, statistical computing

Procedia PDF Downloads 161
25471 A Group Setting of IED in Microgrid Protection Management System

Authors: Jyh-Cherng Gu, Ming-Ta Yang, Chao-Fong Yan, Hsin-Yung Chung, Yung-Ruei Chang, Yih-Der Lee, Chen-Min Chan, Chia-Hao Hsu

Abstract:

A number of distributed generation (DG) units are installed in a microgrid, which may produce diverse paths and directions of power flow or fault current. The overcurrent protection scheme of the traditional radial distribution system therefore no longer meets the needs of microgrid protection. Integrating intelligent electronic devices (IEDs) and a supervisory control and data acquisition (SCADA) system with the IEC 61850 communication protocol, this paper proposes a microgrid protection management system (MPMS) to protect the power system from faults. In the proposed method, the MPMS performs logic programming of each IED to coordinate their tripping sequence. The GOOSE message defined in IEC 61850 is used as the transmission medium among the IEDs. Moreover, to cope with the difference in microgrid fault currents between grid-connected mode and islanded mode, the proposed MPMS applies the group-setting feature of the IEDs, giving the protection system robust adaptability. Whenever the microgrid topology changes, the MPMS recalculates the fault currents and updates the group settings of the IEDs. When a fault occurs, the IEDs isolate it at once. Finally, Matlab/Simulink and Elipse Power Studio software are used to simulate and demonstrate the feasibility of the proposed method.
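For illustration, the group-setting logic (switching every IED to a different, pre-stored setting group when the microgrid moves between grid-connected and islanded operation) can be sketched as follows; the IED names and pickup values are hypothetical, and the IEC 61850 messaging itself is not modeled.

```python
# Hypothetical setting groups per IED: overcurrent pickup values (A) for
# grid-connected and islanded operation of the microgrid.
SETTING_GROUPS = {
    "IED_feeder1": {"grid_connected": 800.0, "islanded": 250.0},
    "IED_feeder2": {"grid_connected": 650.0, "islanded": 200.0},
}

def select_group(islanded: bool) -> str:
    return "islanded" if islanded else "grid_connected"

def update_ied_settings(islanded: bool) -> dict:
    """Return the pickup setting each IED should switch to after a topology
    change; a real MPMS would push this change over IEC 61850 services."""
    group = select_group(islanded)
    return {ied: groups[group] for ied, groups in SETTING_GROUPS.items()}

# Grid-connected fault currents are high; after islanding they drop, so the
# MPMS switches every IED to the more sensitive setting group.
print(update_ied_settings(islanded=False))
print(update_ied_settings(islanded=True))
```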

Keywords: IEC 61850, IED, group setting, microgrid

Procedia PDF Downloads 454
25470 Ethically Integrating Robots to Assist Elders and Patients with Dementia

Authors: Suresh Lokiah

Abstract:

The emerging trend of integrating robots into elderly care, particularly for assisting patients with dementia, holds the potential to greatly transform the sector. Assisted living facilities, which house a significant number of elderly individuals and dementia patients, constantly strive to engage their residents in stimulating activities. However, due to staffing shortages, they often rely on volunteers to introduce new activities. Despite the availability of social interaction, these residents, frequently overlooked in society, are in desperate need of additional support. Robots designed for elder care are categorized based on their design and functionality. These categories include companion robots, telepresence robots, health monitoring robots, and rehab robots. However, the integration of such robots raises significant ethical concerns, notably regarding privacy, autonomy, and the risk of dehumanization. Privacy issues arise as these robots may need to continually monitor patient activities. There is also a risk of patients becoming overly dependent on these robots, potentially undermining their autonomy. Furthermore, the replacement of human touch with robotic interaction may lead to the dehumanization of care. This paper delves into the ethical considerations of incorporating robotic assistance in eldercare. It proposes a series of guidelines and strategies to ensure the ethical deployment of these robots. These guidelines suggest involving patients in the design and development process of the robots and emphasize the critical need for human oversight to respect the dignity and rights of the elderly and dementia patients. The paper also recommends implementing robust privacy measures, including secure data transmission and data anonymization. In conclusion, this paper offers a thorough examination of the ethical implications of using robotic assistance in elder care. It provides a strategic roadmap to ensure this technology is utilized ethically, thereby maximizing its potential benefits and minimizing any potential harm.

Keywords: human-robot interaction, robots for eldercare, ethics, health, dementia

Procedia PDF Downloads 90
25469 Adoption of Big Data by Global Chemical Industries

Authors: Ashiff Khan, A. Seetharaman, Abhijit Dasgupta

Abstract:

The new era of big data (BD) is influencing chemical industries tremendously, providing several opportunities to reshape the way they operate and helping them shift towards intelligent manufacturing. Even with the availability of free software and the large amount of real-time data generated and stored in process plants, chemical industries are still in the early stages of big data adoption. The industry is just starting to realize the importance of the large amount of data it owns for making the right decisions and supporting its strategies. This article explores the importance of the professional competencies and data science capabilities that influence BD adoption in chemical industries and that can help the industry move towards intelligent manufacturing quickly and reliably. The article draws on a literature review and identifies potential applications in the chemical industry for moving from conventional methods to a data-driven approach. The scope of this document is limited to the adoption of BD in chemical industries and the variables identified in this article. To achieve this objective, government, academia, and industry must work together to overcome all present and future challenges.

Keywords: chemical engineering, big data analytics, industrial revolution, professional competence, data science

Procedia PDF Downloads 81
25468 Mesoporous Material Nanofibers by Electrospinning

Authors: Sh. Sohrabnezhad, A. Jafarzadeh

Abstract:

In this paper, MCM-41 mesoporous material nanofibers were synthesized by an electrospinning technique. The nanofibers were characterized by scanning electron microscopy (SEM), transmission electron microscopy (TEM), X-ray diffraction (XRD), and nitrogen adsorption-desorption measurements. Tetraethyl orthosilicate (TEOS) and polyvinyl alcohol (PVA) were used as the silica source and the fiber-forming source, respectively. TEM and SEM images showed the synthesis of MCM-41 nanofibers with a diameter of 200 nm. The pore diameter and surface area of the calcined MCM-41 nanofibers were 2.2 nm and 970 m2/g, respectively. The morphology of the MCM-41 nanofibers depended on the spinning voltage.

Keywords: electrospinning, electron microscopy, fiber technology, porous materials, X-ray techniques

Procedia PDF Downloads 245
25467 Secure Multiparty Computations for Privacy Preserving Classifiers

Authors: M. Sumana, K. S. Hareesha

Abstract:

Secure computations are essential while performing privacy-preserving data mining. Distributed privacy-preserving data mining involves two or more sites that cannot pool their data with a third party, because doing so would violate laws protecting the individual. Hence, in order to model the private data without compromising privacy or losing information, secure multiparty computations are used. Secure computations of the product, mean, variance, dot product, and sigmoid function using the additive and multiplicative homomorphic properties are discussed. The computations are performed on vertically partitioned data, with a single site holding the class value.
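For illustration, the flavour of a secure multiparty computation can be shown with a secure sum based on additive secret sharing, a simpler primitive than the homomorphic-property-based protocols discussed in the abstract; the values and party count below are illustrative.

```python
import random

PRIME = 2_147_483_647          # arithmetic is done modulo a public prime

def share(secret, n_parties=3):
    """Split a secret into additive shares that individually reveal nothing."""
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    return sum(shares) % PRIME

# Two sites hold private values; each value is split into shares, the shares
# are added locally by each party, and only the final sum is reconstructed.
site_a_value, site_b_value = 1200, 345
shares_a, shares_b = share(site_a_value), share(site_b_value)
sum_shares = [(a + b) % PRIME for a, b in zip(shares_a, shares_b)]
print("secure sum:", reconstruct(sum_shares))   # 1545, without revealing inputs
```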

Keywords: homomorphic property, secure product, secure mean and variance, secure dot product, vertically partitioned data

Procedia PDF Downloads 408
25466 Comparison of Fundamental Frequency Model and PWM Based Model for UPFC

Authors: S. A. Al-Qallaf, S. A. Al-Mawsawi, A. Haider

Abstract:

Among all FACTS devices, the unified power flow controller (UPFC) is considered to be the most versatile. This is due to its capability to control all the transmission system parameters (impedance, voltage magnitude, and phase angle). With the growing interest in the UPFC, attention to developing mathematical models for it has increased, and several models have been introduced in the literature for different types of power system studies. This paper presents a novel comparison study between two dynamic models of the UPFC, a fundamental frequency model and a PWM-based model, together with their proposed control strategies.

Keywords: FACTS, UPFC, dynamic modeling, PWM, fundamental frequency

Procedia PDF Downloads 341
25465 Performance Enrichment of Deep Feed Forward Neural Network and Deep Belief Neural Networks for Fault Detection of Automobile Gearbox Using Vibration Signal

Authors: T. Praveenkumar, Kulpreet Singh, Divy Bhanpuriya, M. Saimurugan

Abstract:

This study analysed the classification accuracy for gearbox faults using machine learning techniques. Gearboxes are widely used for mechanical power transmission in rotating machines. Their rotating components, such as bearings, gears, and shafts, tend to wear due to prolonged usage, causing fluctuating vibrations. Increasing the dependability of mechanical components like a gearbox is hampered by their sealed design, which makes visual inspection difficult. One way of detecting impending failure is to detect a change in the vibration signature. The current study applies various machine learning algorithms to these vibration signals to obtain the fault classification accuracy of an automotive 4-speed synchromesh gearbox. Experimental data in the form of vibration signals were acquired from the 4-speed synchromesh gearbox using a data acquisition system (DAQ). Statistical features were extracted from the acquired vibration signals under various operating conditions, and the extracted features were then given as input to the algorithms for fault classification. Supervised machine learning algorithms such as Support Vector Machines (SVM) and unsupervised algorithms such as the Deep Feed Forward Neural Network (DFFNN) and Deep Belief Network (DBN) were used for fault classification. A fusion of the DBN and DFFNN classifiers was architected to further enhance the classification accuracy and to reduce the computational complexity. The fault classification accuracy for each algorithm was thoroughly studied, tabulated, and graphically analysed for the fused and individual algorithms. In conclusion, the fusion of the DBN and DFFNN algorithms yielded the best classification accuracy and was selected for fault detection due to its faster computational processing and greater efficiency.
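For illustration, the statistical-feature extraction step followed by a simple supervised classifier can be sketched as below; the vibration segments are synthetic stand-ins, an SVM is used in place of the DBN/DFFNN fusion, and the feature list is only an assumed, typical choice.

```python
import numpy as np
from scipy.stats import kurtosis, skew
from sklearn.svm import SVC

def statistical_features(signal):
    """Common statistical features extracted from one vibration segment."""
    return np.array([
        signal.mean(),
        signal.std(),
        np.sqrt(np.mean(signal ** 2)),   # RMS
        kurtosis(signal),
        skew(signal),
        signal.max() - signal.min(),     # peak-to-peak
    ])

rng = np.random.default_rng(0)
# Synthetic stand-ins for healthy and faulty gearbox vibration segments.
healthy = [rng.normal(0, 1.0, 2048) for _ in range(40)]
faulty = [rng.normal(0, 1.0, 2048) + 0.8 * np.sin(np.linspace(0, 300, 2048))
          for _ in range(40)]

X = np.array([statistical_features(s) for s in healthy + faulty])
y = np.array([0] * len(healthy) + [1] * len(faulty))

clf = SVC(kernel="rbf").fit(X, y)
print("training accuracy:", clf.score(X, y))
```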

Keywords: deep belief networks, DBN, deep feed forward neural network, DFFNN, fault diagnosis, fusion of algorithm, vibration signal

Procedia PDF Downloads 108
25464 Unequal Error Protection of VQ Image Transmission System

Authors: Khelifi Mustapha, A. Moulay lakhdar, I. Elawady

Abstract:

We study unequal error protection for vector-quantized (VQ) image transmission. Reed-Solomon (RS) codes are used for channel coding because they offer better channel error correction performance over a binary output channel. Such a channel (binary input and output) is the appropriate model when coding is applied at the application layer, because the application layer sits on top of all the layers located below it, layers in which it is usually not feasible to make changes.
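For illustration, unequal error protection with Reed-Solomon codes can be sketched using the third-party Python package reedsolo (assumed available via pip); the split of the VQ bit stream into an important part and a refinement part, and the parity lengths, are illustrative choices rather than the paper's configuration.

```python
# Unequal error protection: stronger RS code for the perceptually important
# part of the VQ bit stream, weaker code for the less important part.
from reedsolo import RSCodec

strong_rs = RSCodec(32)   # corrects up to 16 byte errors per block
weak_rs = RSCodec(8)      # corrects up to 4 byte errors per block

important_bits = bytes(range(64))        # e.g. coarse VQ indices (illustrative)
refinement_bits = bytes(range(64, 192))  # e.g. refinement data (illustrative)

protected = strong_rs.encode(important_bits) + weak_rs.encode(refinement_bits)
print("payload:", len(important_bits) + len(refinement_bits),
      "bytes -> transmitted:", len(protected), "bytes")
```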

Keywords: vector quantization, channel error correction, Reed-Solomon channel coding, application

Procedia PDF Downloads 361
25463 Analyzing the Perception of Identity in Bilingual Communities: Case Study of Eritrean Immigrants in Switzerland

Authors: Warsa Melles

Abstract:

This study examines the way second-generation Eritrean immigrants living in the French-speaking part of Switzerland behave linguistically and culturally. The aim of this research is to demonstrate how the participants deal with their bilingualism (Tigrinya and French). More precisely, how does their language use correlate with their socio-cultural attitudes, and how do these aspects (re)construct their identity? Data for this research were collected via questionnaires and semi-structured interviews. Participants were asked to answer questions regarding their linguistic habits, their perception of being bilingual, and their cultural identity. The major findings demonstrate that the second generation relates more to the host country's language, since French is used as the main language in their daily interactions. On the other hand, because they have never lived in Eritrea yet were raised in a foreign country by Eritrean-born parents, it is more difficult for them to identify unambiguously with just one culture. In that sense, intergenerational transmission plays a major role in the perception of identity. All the participants have at least a basic knowledge of Tigrinya, but their use of the two languages varies according to purpose. Proficiency in the native language and sense of belonging can be correlated with the frequency of visits to Eritrea. In conclusion, the question of identity in the second-generation Eritrean community cannot be given a categorical, clear-cut answer; instead, the new self-image that this social group aims to build is shaped by different factors that are essential to take into consideration.

Keywords: biculturalism, identity, language, migration

Procedia PDF Downloads 244
25462 Cross Project Software Fault Prediction at Design Phase

Authors: Pradeep Singh, Shrish Verma

Abstract:

Software fault prediction models are created using the source code, processed metrics from the same or a previous version of the code, and related fault data. Some companies do not store or keep track of all the artifacts required for software fault prediction. For such companies, training data from other projects can be one potential solution for constructing a fault prediction model. The earlier a fault is predicted, the less it costs to correct. The training data consist of metrics data and related fault data at the function/module level. This paper investigates fault prediction at an early stage using cross-project data, focusing on design metrics. In this study, an empirical analysis is carried out to validate design metrics for cross-project fault prediction. The machine learning technique used for evaluation is Naïve Bayes. The design-phase metrics of other projects can be used as an initial guideline for projects where no previous fault data are available. We analyze seven data sets from the NASA Metrics Data Program, which offer design as well as code metrics. Overall, the results of cross-project prediction are comparable to learning from within-company data.
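For illustration, the cross-project setting (train a Naïve Bayes model on one project's design metrics, test on another) can be sketched as follows; the metrics here are synthetic stand-ins rather than the NASA Metrics Data Program data.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def synthetic_project(n_modules, shift=0.0):
    """Stand-in for one project's design metrics and fault labels; real data
    would come from the NASA Metrics Data Program."""
    X = rng.normal(loc=shift, scale=1.0, size=(n_modules, 4))
    y = (X[:, 0] + X[:, 1] + rng.normal(0, 0.5, n_modules) > shift * 2).astype(int)
    return X, y

# Train on one project, test on a different one (cross-project setting).
X_train, y_train = synthetic_project(300, shift=0.0)
X_test, y_test = synthetic_project(200, shift=0.3)

model = GaussianNB().fit(X_train, y_train)
print("cross-project accuracy:", accuracy_score(y_test, model.predict(X_test)))
```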

Keywords: software metrics, fault prediction, cross project, within project

Procedia PDF Downloads 338
25461 Generation of ZnO-Au Nanocomposite in Water Using Pulsed Laser Irradiation

Authors: Elmira Solati, Atousa Mehrani, Davoud Dorranian

Abstract:

Generation of ZnO-Au nanocomposites under laser irradiation of a mixture of ZnO and Au colloidal suspensions is experimentally investigated. In this work, ZnO and Au nanoparticles were first prepared by pulsed laser ablation of the corresponding metals in water using the 1064 nm wavelength of an Nd:YAG laser. In a second step, the produced ZnO and Au colloidal suspensions were mixed in different volumetric ratios and irradiated using the second harmonic of an Nd:YAG laser operating at a 532 nm wavelength. The changes in the size of the nanostructures and the optical properties of the ZnO-Au nanocomposites are studied as a function of the volumetric ratio of the ZnO and Au colloidal suspensions. The crystalline structure of the ZnO-Au nanocomposites was analyzed by X-ray diffraction (XRD). The optical properties of the samples were examined at room temperature with a UV-Vis-NIR absorption spectrophotometer. Transmission electron microscopy (TEM) was performed by placing a drop of the concentrated suspension on a carbon-coated copper grid. To further confirm the morphology of the ZnO-Au nanocomposites, scanning electron microscopy (SEM) analysis was performed. Room-temperature photoluminescence (PL) of the ZnO-Au nanocomposites was measured to characterize their luminescence properties, and the nanocomposites were also characterized by Fourier transform infrared (FTIR) spectroscopy. The X-ray diffraction pattern shows that the ZnO-Au nanocomposites contain the polycrystalline structure of Au. The behavior observed in the transmission electron microscope images reveals that the soldering of the Au and ZnO nanoparticles involves their adhesion. The plasmon peak of the ZnO-Au nanocomposites is red-shifted and broadened in comparison with that of pure Au nanoparticles. Using Tauc's equation, the band gap energy of the ZnO-Au nanocomposites is calculated to be 3.15-3.27 eV. The formation of the ZnO-Au nanocomposites shifts the FTIR peaks of the metal oxide bands to higher wavenumbers. The PL spectra of the ZnO-Au nanocomposites show several weak peaks in the ultraviolet region and several relatively strong peaks in the visible region. The SEM images indicate that the ZnO-Au nanocomposites produced in water are spherical in morphology. The TEM images demonstrate that the adhesion increases with increasing volumetric ratio of the Au colloidal suspension, and the size distribution graphs show that a larger fraction of the nanocomposites falls at smaller sizes as this ratio increases.

Keywords: Au nanoparticles, pulsed laser ablation, ZnO-Au nanocomposites, ZnO nanoparticles

Procedia PDF Downloads 337
25460 Comparing Emotion Recognition from Voice and Facial Data Using Time Invariant Features

Authors: Vesna Kirandziska, Nevena Ackovska, Ana Madevska Bogdanova

Abstract:

The problem of emotion recognition is a challenging one. It is still an open problem from the perspective of both intelligent systems and psychology. In this paper, both voice features and facial features are used to build an emotion recognition system. Support Vector Machine classifiers are built using raw data from video recordings. The results obtained for emotion recognition are given, and a discussion about the validity and the expressiveness of different emotions is presented. A comparison is made between classifiers built from facial data only, voice data only, and the combination of both. The need for a better combination of the information from facial expressions and voice data is argued.

Keywords: emotion recognition, facial recognition, signal processing, machine learning

Procedia PDF Downloads 312
25459 Data Recording for Remote Monitoring of Autonomous Vehicles

Authors: Rong-Terng Juang

Abstract:

Autonomous vehicles offer the possibility of significant benefits to social welfare. However, fully automated cars might not arrive in the near future. To speed the adoption of self-driving technologies, many governments worldwide are passing laws requiring data recorders for the testing of autonomous vehicles. Currently, a self-driving vehicle (e.g., a shuttle bus) has to be monitored from a remote control center. When an autonomous vehicle encounters an unexpected driving environment, such as road construction or an obstruction, it should request assistance from a remote operator. Nevertheless, large amounts of data, including images, radar and lidar data, etc., have to be transmitted from the vehicle to the remote center. Therefore, this paper proposes a data compression method for in-vehicle networks to support remote monitoring of autonomous vehicles. Firstly, the time-series data are rearranged into a multi-dimensional signal space. Upon arrival, data from the controller area network (CAN) are mapped onto a time-data two-dimensional space associated with the specific CAN identity. Secondly, the data are sampled based on differential sampling. Finally, the whole set of data is encoded using existing algorithms such as Huffman, arithmetic, and codebook encoding methods. To evaluate system performance, the proposed method was deployed on an in-house built autonomous vehicle. The testing results show that the amount of data can be reduced to as little as 1/7 of the raw data.
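For illustration, the differential-sampling idea can be sketched as follows; zlib (DEFLATE) stands in for the Huffman, arithmetic, and codebook coders mentioned in the abstract, and the CAN signal values are hypothetical.

```python
import struct
import zlib

# Hypothetical time series from one CAN identity (e.g. wheel speed samples).
samples = [1500, 1502, 1503, 1503, 1507, 1510, 1511, 1511, 1515, 1516] * 100

# Differential sampling: store the first value plus successive differences,
# which concentrates the signal into small, highly repetitive deltas.
deltas = [samples[0]] + [b - a for a, b in zip(samples, samples[1:])]

raw = struct.pack(f"<{len(samples)}i", *samples)
delta_packed = struct.pack(f"<{len(deltas)}i", *deltas)

# Entropy coding stage; zlib stands in for the Huffman/arithmetic/codebook
# encoders named in the abstract.
print("raw, compressed:        ", len(zlib.compress(raw)), "bytes")
print("delta-coded, compressed:", len(zlib.compress(delta_packed)), "bytes")
```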

Keywords: autonomous vehicle, data compression, remote monitoring, controller area networks (CAN), Lidar

Procedia PDF Downloads 158
25458 Prevalence of Seropositivity for Cytomegalovirus in Patients with Hereditary Bleeding Diseases in West Azerbaijan of Iran

Authors: Zakieh Rostamzadeh, Zahra Shirmohammadi

Abstract:

Human cytomegalovirus (HCMV) is a species of the genus Cytomegalovirus, which in turn is a member of the viral family known as Herpesviridae, the herpesviruses. Although they may be found throughout the body, HCMV infections are frequently associated with the salivary glands. HCMV infection typically goes unnoticed in healthy people but can be life-threatening for the immunocompromised, such as HIV-infected persons, organ transplant recipients, or newborn infants. After infection, HCMV has the ability to remain latent within the body over long periods. Cytomegalovirus (CMV) causes infection in the immunocompromised, in hemophilia patients, and in those who frequently receive blood transfusions. This study aimed at determining the prevalence of cytomegalovirus (CMV) antibodies in hemophilia patients. Materials and Methods: A retrospective observational study was carried out in Urmia, in the northwest of Iran. The study population comprised a sample of 50 hemophilic patients born after 1985 who had received blood factors in West Azerbaijan. The exclusion criteria were drug abuse, high-risk sexual contact, vertical mother-to-fetus transmission, and suspicious needle exposure. All samples were evaluated by the ELISA method, using the same kit and the same laboratory. Results: Fifty hemophiliacs out of the 250 patients registered with the Urmia Hemophilia Society were enrolled in the study, including 43 (86%) males and 7 (14%) females. The mean age of the patients was 10.3 years, range 3 to 25 years. None of the patients had the risk factors mentioned above. Among the studied population, 34 (68%) had hemophilia A, 1 (2%) hemophilia B, 8 (16%) VWF, 3 (6%) factor VII deficiency, 1 (2%) factor V deficiency, 1 (2%) factor X deficiency, and 1 (2%). Sera of the 50 patients were investigated for CMV-specific immunoglobulin G (IgG) and IgM: 91.89% of patients were anti-CMV IgG positive, and 40.54% were seropositive for anti-CMV IgM. 37.8% of patients had serological evidence of reactivation, and 2.7% had primary infection. Discussion: There was no relationship between the antibody titer and drug abuse, high-risk sexual contact, vertical mother-to-fetus transmission, or suspicious needle exposure.

Keywords: bioinformatics, biomedicine, cytomegalovirus, immunocompromise

Procedia PDF Downloads 355
25457 Multimedia Data Fusion for Event Detection in Twitter by Using Dempster-Shafer Evidence Theory

Authors: Samar M. Alqhtani, Suhuai Luo, Brian Regan

Abstract:

Data fusion technology can be the best way to extract useful information from multiple sources of data. It has been widely applied in various applications. This paper presents a multimedia data fusion approach for event detection in Twitter using Dempster-Shafer evidence theory. The methodology applies a mining algorithm to detect the event. There are two types of data in the fusion. The first is features extracted from text using the bag-of-words method, weighted by term frequency-inverse document frequency (TF-IDF). The second is visual features extracted by applying the scale-invariant feature transform (SIFT). The Dempster-Shafer theory of evidence is applied in order to fuse the information from these two sources. Our experiments indicate that, compared to approaches using an individual data source, the proposed data fusion approach can increase the prediction accuracy of event detection. The experimental results show that the proposed method achieved a high accuracy of 0.97, compared with 0.93 using text only and 0.86 using images only.
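For illustration, Dempster's rule of combination for two evidence sources (text and visual) over a two-hypothesis frame can be sketched as follows; the mass values are illustrative and not taken from the experiments.

```python
from itertools import product

# Frame of discernment: an event either occurred ('E') or not ('N');
# frozenset({'E', 'N'}) is the "unknown" hypothesis Theta.
THETA = frozenset({"E", "N"})

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two basic probability assignments."""
    combined, conflict = {}, 0.0
    for (a, w1), (b, w2) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + w1 * w2
        else:
            conflict += w1 * w2                 # mass assigned to disjoint sets
    return {h: v / (1.0 - conflict) for h, v in combined.items()}

# Illustrative masses: evidence from the text (TF-IDF) and visual (SIFT) sources.
m_text = {frozenset({"E"}): 0.6, frozenset({"N"}): 0.1, THETA: 0.3}
m_visual = {frozenset({"E"}): 0.5, frozenset({"N"}): 0.2, THETA: 0.3}

print(dempster_combine(m_text, m_visual))
```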

Keywords: data fusion, Dempster-Shafer theory, data mining, event detection

Procedia PDF Downloads 408
25456 Legal Issues of Collecting and Processing Big Health Data in the Light of European Regulation 679/2016

Authors: Ioannis Iglezakis, Theodoros D. Trokanas, Panagiota Kiortsi

Abstract:

This paper aims to explore major legal issues arising from the collection and processing of Health Big Data in the light of the new European secondary legislation for the protection of personal data of natural persons, placing emphasis on the General Data Protection Regulation 679/2016. Whether Big Health Data can be characterised as 'personal data' or not is really the crux of the matter. The legal ambiguity is compounded by the fact that, even though the processing of Big Health Data is premised on the de-identification of the data subject, the possibility of a combination of Big Health Data with other data circulating freely on the web or from other data files cannot be excluded. Another key point is that the application of some provisions of the GDPR to Big Health Data may both absolve the data controller of his legal obligations and deprive the data subject of his rights (e.g., the right to be informed), ultimately undermining the fundamental right to the protection of personal data of natural persons. Moreover, data subjects' rights (e.g., the right not to be subject to a decision based solely on automated processing) are heavily impacted by the use of AI, algorithms, and technologies that reclaim health data for further use, resulting in sometimes ambiguous results that have a substantial impact on individuals. On the other hand, as the COVID-19 pandemic has revealed, Big Data analytics can offer crucial sources of information. In this respect, this paper identifies and systematises the legal provisions concerned, offering interpretative solutions that tackle dangers concerning data subjects' rights while embracing the opportunities that Big Health Data has to offer. In addition, particular attention is attached to the scope of 'consent' as a legal basis for the collection and processing of Big Health Data, as the application of data analytics to Big Health Data signals the construction of new data and subject profiles. Finally, the paper addresses the knotty problem of role assignment (i.e., distinguishing between controller and processor/joint controllers and joint processors) in an era of extensive Big Health Data sharing. The findings are the fruit of a current research project conducted by a three-member research team at the Faculty of Law of the Aristotle University of Thessaloniki and funded by the Greek Ministry of Education and Religious Affairs.

Keywords: big health data, data subject rights, GDPR, pandemic

Procedia PDF Downloads 124
25455 Investigation of Existing Guidelines for Four-Legged Angular Telecommunication Tower

Authors: Sankara Ganesh Dhoopam, Phaneendra Aduri

Abstract:

Lattice towers are lightweight structures whose design is primarily governed by the effects of wind loading. Ensuring a precise assessment of wind loads on the tower structure, antennas, and associated equipment is vital for the safety and efficiency of tower design. Earlier, Indian standards were not available for the design of telecom towers. Instead, the industry conventionally relied on the general building wind loading standard for calculating loads on tower components and on the transmission line tower design standard for designing the angular members of the towers. Subsequently, the Bureau of Indian Standards (BIS) revised these standards as well as the angular member design standard. While transmission line towers designed to the above standard are proven by a full-scale model test, telecom angular towers are designed to the same standard with an overload factor/factor of safety but without full-scale tower model testing. The general construction in steel design code follows a limit state design approach and is applicable to the design of general structures involving angles and tubes, but it is not used for the angle member design of towers. Recently, in response to evolving industry needs, the Bureau of Indian Standards (BIS) introduced a new standard titled 'Isolated Towers, Masts, and Poles using Structural Steel - Code of Practice' for the design of telecom towers. This study focuses on a 40 m four-legged angular tower to compare loading calculations and member designs between the old and new standards. Additionally, a comparative analysis of the new code provisions against international loading and design standards, with a specific focus on American standards, has been carried out. This paper elaborates the code-based provisions used for load and member design calculations, including the influence of the 'Ka' area averaging factor introduced in the new wind load provisions.
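For illustration, the general form of the design wind pressure calculation in IS 875 (Part 3):2015, including the 'Ka' area averaging factor discussed above, can be sketched as follows; the factor values are illustrative and not taken from the study's 40 m tower design.

```python
def design_wind_pressure(v_basic, k1, k2, k3, k4, kd, ka, kc):
    """Design wind pressure (N/m^2) following the general form of
    IS 875 (Part 3):2015: Vz = Vb*k1*k2*k3*k4, pz = 0.6*Vz^2, pd = Kd*Ka*Kc*pz."""
    v_z = v_basic * k1 * k2 * k3 * k4      # design wind speed, m/s
    p_z = 0.6 * v_z ** 2                   # wind pressure, N/m^2
    return kd * ka * kc * p_z

# Illustrative inputs only: basic wind speed 47 m/s and assumed modification
# factors; a real design would take these from the code tables and site data.
pd = design_wind_pressure(v_basic=47.0, k1=1.0, k2=1.05, k3=1.0, k4=1.0,
                          kd=0.9, ka=0.9, kc=1.0)
print(f"design wind pressure: {pd:.1f} N/m^2")
```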

Keywords: telecom, angular tower, PLS tower, GSM antenna, microwave antenna, IS 875(Part-3):2015, IS 802(Part-1/sec-2):2016, IS 800:2007, IS 17740:2022, ANSI/TIA-222G, ANSI/TIA-222H

Procedia PDF Downloads 79
25454 Adaptive Data Approximations Codec (ADAC) for AI/ML-based Cyber-Physical Systems

Authors: Yong-Kyu Jung

Abstract:

The fast growth in information technology has led to demands to access and process data. Cyber-physical systems (CPSs) heavily depend on the timing of hardware/software operations and communication over the network (i.e., real-time/parallel operations in CPSs such as autonomous vehicles). Data processing is an important means of confronting data management issues, reducing the gap between technological growth on the one hand and data complexity and channel bandwidth on the other. An adaptive perpetual data approximation method is introduced to manage the actual entropy of the digital spectrum. The ADAC, implemented as an accelerator and/or as apps for servers and smart-connected devices, adaptively rescales digital content (by 62.8% on average) and reduces data processing/access time and energy as well as encryption/decryption overheads in AI/ML applications (e.g., facial ID/recognition).

Keywords: adaptive codec, AI, ML, HPC, cyber-physical, cybersecurity

Procedia PDF Downloads 75
25453 Knowledge regarding Sexual and Reproductive Health among Adolescents in Higher Secondary School

Authors: Kopila Shrestha

Abstract:

Adolescent sexual and reproductive health is one of the most important issues in the world. Reproductive maturity is occurring at an earlier age, and adolescents are increasingly engaging in risk-taking behaviors. A descriptive cross-sectional study was conducted in the Kathmandu valley to assess knowledge regarding sexual and reproductive health among adolescents. A total of 200 respondents were selected through a non-probability convenience sampling technique. Self-administered written questionnaires using semi-structured questions were used. The collected data were analyzed using descriptive statistics, such as frequency, percentage, mean, and standard deviation, and inferential statistics, such as the Chi-square test. The findings revealed that most respondents had adequate knowledge regarding the transmission of and protection against HIV/AIDS and STIs, but some respondents still held misconceptions. Few respondents knew the legal age for marriage and the minimum age for first childbearing. The statistical analysis revealed a total mean knowledge score with standard deviation of 45.02±8.674. Nearly half of the respondents (49.5%) had a moderate level of knowledge regarding sexual and reproductive health, followed by an inadequate level (29.5%) and an adequate level (21.0%). There was a significant association between level of knowledge and area of residence (p-value .002), but no association with age (p-value .067), sex (p-value .999), religion (p-value .082), or ethnicity (p-value .114). Nearly half of the participants possess some knowledge about sexual and reproductive health, but effective educational interventions are still required in higher secondary schools to encourage more sensible and healthy behaviour.

Keywords: adolescents, higher secondary school, knowledge, sexual and reproductive health

Procedia PDF Downloads 281
25452 The Ontological Memory in Bergson as a Conceptual Tool for the Analysis of the Digital Conjuncture

Authors: Douglas Rossi Ramos

Abstract:

The current digital conjuncture, referred to by some authors as the 'Internet of Things' (IoT), 'Web 2.0', or even 'Web 3.0', consists of a network that encompasses any communication of objects and entities, such as data, information, technologies, and people. At this juncture, especially characterized by an 'object socialization,' communication can no longer be represented as a simple informational flow of messages from a sender, crossing a channel or medium, to reach a receiver. The idea of communication must therefore be thought of more broadly, in a way that makes it possible to analyze the communicative process in terms of interactions between humans and nonhumans. To think about this complexity, that is, a communicative process encompassing both humans and other communicating beings or entities (objects and things), it is necessary to constitute a new epistemology of communication and to rethink concepts and notions commonly attributed to humans, such as 'memory.' This research aims to contribute to this epistemological constitution through a discussion of the notion of memory according to the complex ontology of Henri Bergson. Among the results (the notion of memory in Bergson presents itself as a conceptual tool for the analysis of posthumanism and of the anthropomorphic conjuncture of the new digital advent), there emerged the need to think of an ontological memory, analyzed as a being in itself (the being-in-itself of memory), as a strategy for understanding the forms of interaction and communication that constitute the new digital conjuncture, in which communicating beings or entities tend to interact with each other. Rethinking the idea of communication beyond the dimension of transmission in informative sequences paves the way for an ecological perspective on the condition of digital dwelling.

Keywords: communication, digital, Henri Bergson, memory

Procedia PDF Downloads 158
25451 Scanning Transmission Electron Microscopic Analysis of Gamma Ray Exposed Perovskite Solar Cells

Authors: Aleksandra Boldyreva, Alexander Golubnichiy, Artem Abakumov

Abstract:

Various perovskite materials have surprisingly high resistance towards high-energy electrons, protons, and hard ionizing radiation such as X-rays and gamma-rays. Superior radiation hardness makes the family of perovskite semiconductors an attractive candidate for single- and multijunction solar cells for the space environment and as X-ray and gamma-ray detectors. One of the methods to study the radiation hardness of different materials is to expose them to gamma photons with high energies (above 500 keV). Herein, we have explored the recombination dynamics and defect concentration of a mixed-cation mixed-halide perovskite, Cs0.17FA0.83PbI1.8Br1.2, with a 1.74 eV bandgap after exposure to a gamma-ray source (2.5 Gy/min). We performed an advanced STEM EDX analysis to reveal the different types of defects formed during gamma exposure. It was found that a 10 kGy dose results in a significant improvement of perovskite crystallinity and a homogeneous distribution of I ions. While the absorber layer withstood the gamma exposure, the hole transport layer (PTAA) as well as the indium tin oxide (ITO) were significantly damaged, which increased the interface recombination rate and reduced the fill factor of the solar cells. Thus, STEM analysis is a powerful technique that can reveal defects formed by gamma exposure in perovskite solar cells. Methods: Data will be collected from perovskite solar cells (PSCs) and thin films exposed to a gamma-ray source. For the thin films, 50 μL of the Cs0.17FA0.83PbI1.8Br1.2 solution in DMF was deposited (dynamically) at 3000 rpm, followed by quenching with 100 μL of ethyl acetate (dropped 10 s after the perovskite precursor) applied at the same spin-coating frequency. The deposited Cs0.17FA0.83PbI1.8Br1.2 films were annealed for 10 min at 100 °C, which led to the development of a dark brown color. For the solar cells, a 10% suspension of SnO2 nanoparticles (Alfa Aesar) was deposited at 4000 rpm, followed by annealing in air at 170 °C for 20 min. Next, the samples were introduced into a nitrogen glovebox for the deposition of all remaining layers. The perovskite film was applied in the same way as for the thin films described earlier. A solution of poly(triarylamine) (PTAA, Sigma Aldrich; 4 mg in chlorobenzene) was applied at 1000 rpm atop the perovskite layer. Next, 30 nm of VOx was deposited atop the PTAA layer over the whole sample surface using the physical vapor deposition (PVD) technique. Silver electrodes (100 nm) were evaporated in a high vacuum (10^-6 mbar) through a shadow mask, defining the active area of each device as ~0.16 cm2. The prepared samples (thin films and solar cells) were packed in Al lamination foil inside the argon glove box. The set of samples consisted of 6 thin films and 6 solar cells, which were exposed to 6, 10, and 21 kGy (2 samples per dose) with a 137Cs gamma-ray source (E = 662 keV) at a dose rate of 2.5 Gy/min. The exposed samples will be studied with a focused ion beam (FIB) on a dual-beam scanning electron microscope from ThermoFisher, the Helios G4 Plasma FIB Uxe, operating with a xenon plasma.

Keywords: perovskite solar cells, transmission electron microscopy, radiation hardness, gamma irradiation

Procedia PDF Downloads 16
25450 Real-Time Visualization Using GPU-Accelerated Filtering of LiDAR Data

Authors: Sašo Pečnik, Borut Žalik

Abstract:

This paper presents a technique for real-time visualization and filtering of classified LiDAR point clouds. The visualization is capable of displaying filtered information organized in layers by the classification attribute saved within the LiDAR data sets. We explain the data structure and data management used, which enable real-time presentation of layered LiDAR data. Real-time visualization is achieved with LOD optimization based on the distance from the observer, without loss of quality. The filtering process is done in two steps, is executed entirely on the GPU, and is implemented using programmable shaders.
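For illustration, filtering a classified point cloud into visible layers by its classification attribute can be sketched on the CPU with NumPy (the paper performs the equivalent selection on the GPU in programmable shaders); the point cloud below is synthetic.

```python
import numpy as np

# Synthetic stand-in for a classified LiDAR point cloud: x, y, z plus the
# per-point classification attribute stored with the data set.
rng = np.random.default_rng(0)
points = rng.uniform(0, 100, size=(1_000_000, 3)).astype(np.float32)
classification = rng.choice([2, 3, 4, 5, 6], size=len(points))  # e.g. ground, vegetation, building

def filter_layers(points, classification, visible_classes):
    """Keep only the points whose class belongs to the currently visible layers."""
    mask = np.isin(classification, list(visible_classes))
    return points[mask]

visible = filter_layers(points, classification, visible_classes={2, 6})
print("visible points:", len(visible), "of", len(points))
```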

Keywords: filtering, graphics, level-of-details, LiDAR, real-time visualization

Procedia PDF Downloads 301
25449 Wind Power Forecasting Using Echo State Networks Optimized by Big Bang-Big Crunch Algorithm

Authors: Amir Hossein Hejazi, Nima Amjady

Abstract:

In recent years, due to environmental issues, traditional energy sources have increasingly been replaced by renewable ones. Wind energy, the fastest-growing renewable energy, accounts for a considerable share of the energy traded in electricity markets. With this fast worldwide growth of wind energy, owners and operators of wind farms, transmission system operators, and energy traders need reliable and secure forecasts of wind energy production. In this paper, a new forecasting strategy is proposed for short-term wind power prediction based on Echo State Networks (ESN). The forecast engine utilizes a state-of-the-art training process, including a dynamical reservoir with a high capability to learn the complex dynamics of wind power or wind vector signals. The study is made more interesting by incorporating the prediction of wind direction into the forecast strategy. The Big Bang-Big Crunch (BB-BC) evolutionary optimization algorithm is adopted for adjusting the free parameters of the ESN-based forecaster. The proposed method is tested on real-world hourly data to show the efficiency of the forecasting engine in predicting both the wind vector and the wind power output of aggregated wind power production.
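For illustration, a minimal echo state network with a fixed random reservoir and a ridge-regression readout can be sketched as follows; the "wind power" series is synthetic, the hyperparameters are illustrative, and the BB-BC optimization of the free parameters is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed random reservoir; only the linear readout is trained.
n_in, n_res = 1, 200
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # scale spectral radius below 1

def run_reservoir(u_seq):
    """Drive the reservoir with the input sequence and collect its states."""
    x, states = np.zeros(n_res), []
    for u in u_seq:
        x = np.tanh(W_in @ np.atleast_1d(u) + W @ x)
        states.append(x.copy())
    return np.array(states)

# Toy "wind power" series: train the readout to predict one step ahead.
t = np.arange(1200)
series = 0.5 + 0.4 * np.sin(2 * np.pi * t / 24) + 0.05 * rng.standard_normal(len(t))
X, y = run_reservoir(series[:-1]), series[1:]

ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)  # ridge readout
pred = X @ W_out
print("training RMSE:", np.sqrt(np.mean((pred - y) ** 2)))
```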

Keywords: wind power forecasting, echo state network, big bang-big crunch, evolutionary optimization algorithm

Procedia PDF Downloads 570
25448 The Nature and the Structure of Scientific and Innovative Collaboration Networks

Authors: Afshin Moazami, Andrea Schiffauerova

Abstract:

The objective of this work is to investigate the development and the role of collaboration networks in the creation of knowledge and innovations in the US and Canada, with a special focus on Quebec. To create the scientific networks, data on journal articles were extracted from Scopus, and the networks were built based on the co-authorship of the journal papers. For the innovation networks, the USPTO database was used, and the networks were built on patent co-inventorship. Various indicators characterizing the evolution of the network structure and the positions of the researchers and inventors in the networks were calculated, and a comparison between the United States, Canada, and Quebec was then carried out. The preliminary results show that the nature of scientific collaboration networks differs from that seen in innovation networks. Scientists work in bigger teams and are mostly interconnected within one giant network component, whereas the innovation network is much more clustered and fragmented: inventors work more repetitively with the same partners, often in smaller isolated groups. In both Canada and the US, an increasing tendency towards collaboration was observed, and it was found that the networks are getting bigger and more centralized over time. Moreover, a declining share of knowledge transfers per scientist was detected, suggesting an increasing specialization of science. The US collaboration networks tend to be more centralized than the Canadian ones. Quebec shares many features with the Canadian network, but some differences were observed; for example, Quebec inventors rely more on knowledge transmission through intermediaries.
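For illustration, building a co-authorship network from paper author lists and computing a few structural indicators of the kind mentioned above can be sketched with networkx; the author names are fictitious stand-ins for Scopus or USPTO records.

```python
import itertools
import networkx as nx

# Illustrative records: each paper's author list (real data would come from
# Scopus co-authorship or USPTO co-inventorship).
papers = [
    ["Tremblay", "Gagnon", "Roy"],
    ["Gagnon", "Smith"],
    ["Smith", "Johnson"],
    ["Roy", "Tremblay"],
]

G = nx.Graph()
for authors in papers:
    for a, b in itertools.combinations(authors, 2):   # link every co-author pair
        G.add_edge(a, b)

# Indicators characterizing network structure and researcher positions.
print("largest component size:", len(max(nx.connected_components(G), key=len)))
print("degree centrality:", nx.degree_centrality(G))
print("average clustering coefficient:", nx.average_clustering(G))
```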

Keywords: Canada, collaboration, innovation network, scientific network, Quebec, United States

Procedia PDF Downloads 197
25447 Dengue Virus Infection Rate in Mosquitoes Collected in Thailand Related to Environmental Factors

Authors: Chanya Jetsukontorn

Abstract:

Dengue hemorrhagic fever is the most important mosquito-borne disease and a major public health problem in Thailand. The most important vector is Aedes aegypti. Environmental factors such as temperature, relative humidity, and biting rate affect dengue virus infection. The most effective preventive measure is the control of vector mosquitoes. In addition, surveillance of field-caught mosquitoes is imperative for determining the natural vector and can provide an early warning sign of transmission risk in an area. In this study, Aedes aegypti mosquitoes were collected in Amphur Muang, Phetchabun Province, Thailand. The mosquitoes were collected in the rainy season and the dry season, both indoors and outdoors. During mosquito collection, environmental factors such as temperature, humidity, and breeding sites were observed and recorded. After being identified to species, the mosquitoes were pooled according to genus/species and sampling location, with each pool consisting of a maximum of 10 Aedes mosquitoes. Seventy pools comprising 675 Aedes aegypti were screened with RT-PCR for flaviviruses. To confirm individual infection and determine the true infection rate, mosquitoes that gave positive flavivirus results were tested individually for dengue virus by RT-PCR. The infection rate was 5.93% (4 positive individuals from 675 mosquitoes). The probability of detecting dengue virus in mosquitoes at neighbouring houses was 1.25 times higher, especially where the distance between the neighbouring houses and the patients' houses was less than 50 meters. The relative humidity in dengue-infected villages with dengue-infected mosquitoes was significantly higher than in villages free from dengue-infected mosquitoes. The indoor biting rate of Aedes aegypti was 14.87 times higher than the outdoor rate, and the biting periods of 09.00-10.00, 10.00-11.00, and 11.00-12.00 yielded 1.77, 1.46, and 0.68 mosquitoes/man-hour, respectively. These findings confirm that environmental factors are related to dengue infection in Thailand. The data obtained from this study will be useful for the prevention and control of the disease.

Keywords: Aedes aegypti, Dengue virus, environmental factors, one health, PCR

Procedia PDF Downloads 141