Search results for: data personalization
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 25155

25095 Employee Well-being in the Age of AI: Perceptions, Concerns, Behaviors, and Outcomes

Authors: Soheila Sadeghi

Abstract:

The growing integration of Artificial Intelligence (AI) into Human Resources (HR) processes has transformed the way organizations manage recruitment, performance evaluation, and employee engagement. While AI offers numerous advantages—such as improved efficiency, reduced bias, and hyper-personalization—it raises significant concerns about employee well-being, job security, fairness, and transparency. The study examines how AI shapes employee perceptions, job satisfaction, mental health, and retention. Key findings reveal that: (a) while AI can enhance efficiency and reduce bias, it also raises concerns about job security, fairness, and privacy; (b) transparency in AI systems emerges as a critical factor in fostering trust and positive employee attitudes; and (c) AI systems can both support and undermine employee well-being, depending on how they are implemented and perceived. The research introduces an AI–Employee Well-Being Interaction Framework, illustrating how AI influences employee perceptions, behaviors, and outcomes. Organizational strategies, such as (a) clear communication, (b) upskilling programs, and (c) employee involvement in AI implementation, are identified as crucial for mitigating negative impacts and enhancing positive outcomes. The study concludes that the successful integration of AI in HR requires a balanced approach that (a) prioritizes employee well-being, (b) facilitates human-AI collaboration, and (c) ensures ethical and transparent AI practices alongside technological advancement.

Keywords: artificial intelligence, human resources, employee well-being, job satisfaction, organizational support, transparency in AI

Procedia PDF Downloads 29
25094 Integrating AI in Education: Enhancing Learning Processes and Personalization

Authors: Waleed Afandi

Abstract:

Artificial intelligence (AI) has rapidly transformed various sectors, including education. This paper explores the integration of AI in education, emphasizing its potential to revolutionize learning processes, enhance teaching methodologies, and personalize education. We examine the historical context of AI in education, current applications, and the potential challenges and ethical considerations associated with its implementation. By reviewing a wide range of literature, this study aims to provide a comprehensive understanding of how AI can be leveraged to improve educational outcomes and the future directions of AI-driven educational innovations. Additionally, the paper discusses the impact of AI on student engagement, teacher support, and administrative efficiency. Case studies highlighting successful AI applications in diverse educational settings are presented, showcasing the practical benefits and real-world implications. The analysis also addresses potential disparities in access to AI technologies and suggests strategies to ensure equitable implementation. Through a balanced examination of the promises and pitfalls of AI in education, this study seeks to inform educators, policymakers, and technologists about the optimal pathways for integrating AI to foster an inclusive, effective, and innovative educational environment.

Keywords: artificial intelligence, education, personalized learning, teaching methodologies, educational outcomes, AI applications, student engagement, teacher support, administrative efficiency, equity in education

Procedia PDF Downloads 31
25093 DesignChain: Automated Design of Products Featuring a Large Number of Variants

Authors: Lars Rödel, Jonas Krebs, Gregor Müller

Abstract:

Growing price pressure due to the increasing number of global suppliers, the growing individualization of products, and ever-shorter delivery times are upcoming challenges in industry. In this context, Mass Personalization stands for the individualized production of customer products in batch size 1 at the price of standardized products. The possibilities of digitalizing and automating technical order processing give companies the opportunity to significantly reduce their complexity costs and lead times and thus enhance their competitiveness. Many companies already use a range of CAx tools and configuration solutions today. Often, however, the expert knowledge of employees is hidden in "knowledge silos" and is rarely networked across processes. DesignChain describes the automated digital process from the recording of individual customer requirements, through design and technical preparation, to production. Configurators offer the possibility of mapping variant-rich products within the DesignChain. This transformation of customer requirements into product features makes it possible to generate even complex CAD models, such as those for large-scale plants, in a rule-based manner. With the aid of an automated CAx chain, production-relevant documents are transferred digitally to production. This fully automatable process ensures that variants are always generated on the basis of current version statuses.

Keywords: automation, design, CAD, CAx

Procedia PDF Downloads 76
25092 Implementation of an IoT Sensor Data Collection and Analysis Library

Authors: Jihyun Song, Kyeongjoo Kim, Minsoo Lee

Abstract:

Due to the development of information technology and the wireless Internet, diverse data are being generated in many fields. These data are advantageous in that they provide real-time information to users; however, when the data are accumulated and analyzed, much richer information can be extracted. In addition, the development and dissemination of boards such as Arduino and Raspberry Pi have made it easy to test various sensors, and sensor data can be collected directly using database tools such as MySQL. Such directly collected data can be used for various research purposes and are useful for data mining. However, collecting data with these boards is difficult when the user is not a programmer or is using them for the first time, and even when data are collected, a lack of expert knowledge or experience may cause difficulties in data analysis and visualization. In this paper, we aim to construct a library for sensor data collection and analysis to overcome these problems.
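
As an illustration of the kind of workflow such a library targets, the minimal Python sketch below stores simulated readings in SQLite (a stand-in for the Arduino/Raspberry Pi plus MySQL setup described above) and clusters them with k-means and DBSCAN, the algorithms named in the keywords; all table and column names are hypothetical.

```python
import sqlite3
import numpy as np
from sklearn.cluster import KMeans, DBSCAN

# Store simulated sensor readings (SQLite stands in for the MySQL setup described in the abstract).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (ts REAL, temperature REAL, humidity REAL)")
rng = np.random.default_rng(0)
rows = [(float(t), float(20 + rng.normal()), float(50 + rng.normal())) for t in range(200)]
conn.executemany("INSERT INTO readings VALUES (?, ?, ?)", rows)

# Pull the readings back out and cluster them with the algorithms named in the keywords.
data = np.array(conn.execute("SELECT temperature, humidity FROM readings").fetchall())
kmeans_labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(data)
dbscan_labels = DBSCAN(eps=0.8, min_samples=5).fit_predict(data)
print("k-means cluster sizes:", np.bincount(kmeans_labels))
print("DBSCAN clusters found:", len(set(dbscan_labels) - {-1}))
```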

Keywords: clustering, data mining, DBSCAN, k-means, k-medoids, sensor data

Procedia PDF Downloads 378
25091 Government (Big) Data Ecosystem: Definition, Classification of Actors, and Their Roles

Authors: Syed Iftikhar Hussain Shah, Vasilis Peristeras, Ioannis Magnisalis

Abstract:

Organizations, including governments, generate (big) data that are high in volume, velocity, and veracity and come from a variety of sources. Public administrations are using (big) data, implementing base registries, and enforcing data sharing across government to deliver (big) data related integrated services, provide insights to users, and support good governance. Government (big) data ecosystem actors represent distinct entities that provide data, consume data, manipulate data to offer paid services, and extend data services such as data storage and hosting to other actors. In this research work, we perform a systematic literature review. The key objectives of this paper are to propose a robust definition of the government (big) data ecosystem and a classification of government (big) data ecosystem actors and their roles. We showcase a graphical view of actors, roles, and their relationships in the government (big) data ecosystem, and we discuss our research findings. We did not find many published research articles about the government (big) data ecosystem, including its definition and the classification of actors and their roles. Therefore, we borrowed ideas for the government (big) data ecosystem from numerous areas in the literature, including scientific research data, humanitarian data, open government data, and industry data.

Keywords: big data, big data ecosystem, classification of big data actors, big data actors roles, definition of government (big) data ecosystem, data-driven government, eGovernment, gaps in data ecosystems, government (big) data, public administration, systematic literature review

Procedia PDF Downloads 162
25090 Government Big Data Ecosystem: A Systematic Literature Review

Authors: Syed Iftikhar Hussain Shah, Vasilis Peristeras, Ioannis Magnisalis

Abstract:

Data that are high in volume, velocity, and veracity and come from a variety of sources are generated in all sectors, including the government sector. Globally, public administrations are pursuing (big) data as a new technology and trying to adopt a data-centric architecture for hosting and sharing data. Properly executed, big data and data analytics in the government (big) data ecosystem can lead to data-driven government and have a direct impact on the way policymakers work and citizens interact with governments. In this research paper, we conduct a systematic literature review. The main aims of this paper are to highlight essential aspects of the government (big) data ecosystem and to explore the most critical socio-technical factors that contribute to its successful implementation. The essential aspects of the government (big) data ecosystem include its definition, data types, data lifecycle models, and actors and their roles. We also discuss the potential impact of (big) data in public administration and gaps in the government data ecosystems literature. As this is a new topic, we did not find specific articles on the government (big) data ecosystem and therefore focused our research on various relevant areas such as humanitarian data, open government data, scientific research data, and industry data.

Keywords: applications of big data, big data, big data types, big data ecosystem, critical success factors, data-driven government, eGovernment, gaps in data ecosystems, government (big) data, literature review, public administration, systematic review

Procedia PDF Downloads 228
25089 A Machine Learning Decision Support Framework for Industrial Engineering Purposes

Authors: Anli Du Preez, James Bekker

Abstract:

Data is currently one of the most critical and influential emerging technologies. However, the true potential of data is yet to be exploited since, currently, about 1% of generated data are ever actually analyzed for value creation. There is a data gap where data is not explored due to the lack of data analytics infrastructure and the required data analytics skills. This study developed a decision support framework for data analytics by following Jabareen’s framework development methodology. The study focused on machine learning algorithms, which is a subset of data analytics. The developed framework is designed to assist data analysts with little experience, in choosing the appropriate machine learning algorithm given the purpose of their application.

Keywords: data analytics, industrial engineering, machine learning, value creation

Procedia PDF Downloads 168
25088 Goal-Setting in a Peer Leader HIV Prevention Intervention to Improve Preexposure Prophylaxis Access among Black Men Who Have Sex with Men

Authors: Tim J. Walsh, Lindsay E. Young, John A. Schneider

Abstract:

Background: The disproportionate rate of HIV infection among Black men who have sex with men (BMSM) in the United States suggests the importance of Preexposure Prophylaxis (PrEP) interventions for this population. As such, there is an urgent need for innovative outreach strategies that extend beyond the traditional patient-provider relationship to reach at-risk populations. Training members of the BMSM community as peer change agents (PCAs) is one such strategy. An important piece of this training is goal-setting. Goal-setting not only encourages PCAs to define the parameters of the intervention according to their lived experience, but also helps them plan courses of action. Therefore, the aims of this mixed-methods study are to: (1) characterize the goals that BMSM set at the end of their PrEP training and (2) assess the relationship between goal types and PCA engagement. Methods: Between March 2016 and July 2016, preliminary data were collected from 68 BMSM, ages 18-33, in Chicago as part of an ongoing PrEP intervention. Once enrolled, PCAs participate in a half-day training in which they learn about PrEP, practice initiating conversations about PrEP, and identify strategies for supporting at-risk peers through the PrEP adoption process. Training culminates with a goal-setting exercise, whereby participants establish a goal related to their role as a PCA. Goals were coded for features that either emerged from the data themselves or existed in the extant goal-setting literature. The main outcomes were (1) the number of PrEP conversations PCAs self-report during booster conversations two weeks following the intervention and (2) the number of peers PCAs recruit into the study who completed the PrEP workshop. Results: PCA goals (N=68) were characterized in terms of four features: specificity, target population, personalization, and defined purpose. To date, PCAs report a collective 52 PrEP conversations. 56%, 25%, and 6% of PrEP conversations occurred with friends, family, and sexual partners, respectively. PCAs with specific goals had more PrEP conversations with at-risk peers than those with vague goals (58% vs. 42%); PCAs with personalized goals had more PrEP conversations than those with de-personalized goals (60% vs. 53%); and PCAs with goals that defined a purpose had more PrEP conversations than those whose goals did not (75% vs. 52%). 100% of PCAs with goals that defined a purpose recruited peers into the study, compared to 45% of PCAs whose goals did not. Conclusion: Our preliminary analysis demonstrates that BMSM are motivated to set and work toward a diverse set of goals to support peers in PrEP adoption. PCAs with goals involving a clearly defined purpose had more PrEP conversations and greater peer recruitment than those with goals lacking a defined purpose. This may indicate that PCAs who define their purpose at the outset of their participation will be more engaged in the study than those who do not. Goal-setting may be considered as a component of future HIV prevention interventions to advance intervention goals and as an indicator of PCAs' understanding of the intervention.

Keywords: HIV prevention, MSM, peer change agent, preexposure prophylaxis

Procedia PDF Downloads 196
25087 Providing Security to Private Cloud Using Advanced Encryption Standard Algorithm

Authors: Annapureddy Srikant Reddy, Atthanti Mahendra, Samala Chinni Krishna, N. Neelima

Abstract:

In our present world we generate a lot of data, and we need specific devices to store it. Generally, we store data on pen drives, hard drives, etc., and we may sometimes lose data due to the corruption of these devices. To overcome these issues, we implemented a cloud space for storing data, which provides more security for the data and allows it to be accessed over the internet from anywhere in the world. We implemented all of this in Java using the NetBeans IDE. Once a user uploads data, they do not have any rights to change it. Uploaded files are stored in the cloud with the system time as the file name, in a directory created with some random words. The cloud accepts the data only if the size of the file is less than 2 MB.
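
The abstract does not show the Java implementation, but the Python sketch below illustrates the storage rules it describes (time-stamped file name, randomly named directory, 2 MB limit) together with an AES encryption step using the pycryptodome package; the directory naming and key handling here are simplified assumptions.

```python
import os, time, secrets
from Crypto.Cipher import AES          # pycryptodome
from Crypto.Random import get_random_bytes

MAX_SIZE = 2 * 1024 * 1024  # the 2 MB upload limit described in the abstract

def store_encrypted(plaintext: bytes, key: bytes, root: str = "cloud_store") -> str:
    if len(plaintext) >= MAX_SIZE:
        raise ValueError("files of 2 MB or more are rejected")
    # Directory named with random words (random hex as a stand-in), file named by system time.
    directory = os.path.join(root, secrets.token_hex(4))
    os.makedirs(directory, exist_ok=True)
    path = os.path.join(directory, f"{int(time.time())}.bin")
    # Encrypt with AES before writing, storing nonce and tag alongside the ciphertext.
    cipher = AES.new(key, AES.MODE_EAX)
    ciphertext, tag = cipher.encrypt_and_digest(plaintext)
    with open(path, "wb") as f:
        f.write(cipher.nonce + tag + ciphertext)
    return path

key = get_random_bytes(16)  # AES-128 key
print(store_encrypted(b"example report contents", key))
```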

Keywords: cloud space, AES, FTP, NetBeans IDE

Procedia PDF Downloads 206
25086 Business Intelligence for Profiling of Telecommunication Customer

Authors: Rokhmatul Insani, Hira Laksmiwati Soemitro

Abstract:

Business intelligence is a methodology that systematically exploits data to produce information and knowledge, and it can support the decision-making process. Two methods in business intelligence are data warehousing and data mining. A data warehouse can store historical data derived from transactional data; for data modelling in the data warehouse, we apply Kimball's dimensional modelling. Data mining is used to extract patterns and gain insight from the data. Data mining has many techniques, one of which is segmentation. For profiling telecommunication customers, we segment customers according to their service usage, invoices, and payments. Customers can thus be grouped according to their characteristics, and the profitable customers can be identified. We apply the K-Means clustering algorithm for segmentation, using the RFM (Recency, Frequency, Monetary) model as its input variables. The entire data mining process is carried out with the IBM SPSS Modeler tool.
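
The paper performs this step in IBM SPSS Modeler; as an illustration only, the Python sketch below builds RFM features from hypothetical usage/invoice records (column names are invented) and segments them with K-Means.

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Hypothetical usage/invoice records; column names are illustrative only.
records = pd.DataFrame({
    "customer_id": [1, 1, 2, 2, 3, 3, 3],
    "days_since_last_use": [5, 5, 40, 40, 2, 2, 2],
    "invoice_amount": [30.0, 45.0, 10.0, 12.0, 80.0, 75.0, 90.0],
})

# Build RFM features per customer: recency, frequency, monetary value.
rfm = records.groupby("customer_id").agg(
    recency=("days_since_last_use", "min"),
    frequency=("invoice_amount", "count"),
    monetary=("invoice_amount", "sum"),
)

# Standardize the features and segment the customers with K-Means.
X = StandardScaler().fit_transform(rfm)
rfm["segment"] = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(rfm)
```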

Keywords: business intelligence, customer segmentation, data warehouse, data mining

Procedia PDF Downloads 483
25085 Imputation Technique for Feature Selection in Microarray Data Set

Authors: Younies Saeed Hassan Mahmoud, Mai Mabrouk, Elsayed Sallam

Abstract:

Analysing DNA microarray data sets is a great challenge for bioinformaticians due to the complexity of the statistical and machine learning techniques involved. The challenge is doubled when the microarray data sets contain missing data, which happens regularly, because these techniques cannot deal with missing values. One of the most important analysis processes on microarray data sets is feature selection, which finds the genes most strongly associated with a certain disease. In this paper, we introduce a technique for imputing the missing data in microarray data sets while performing feature selection.
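
The abstract does not specify which imputation method the paper uses; purely as an illustration of combining imputation with feature selection on a gene-expression matrix, here is a minimal scikit-learn sketch with k-nearest-neighbour imputation followed by a univariate filter (all data are synthetic).

```python
import numpy as np
from sklearn.impute import KNNImputer
from sklearn.feature_selection import SelectKBest, f_classif

# Toy expression matrix: rows are samples, columns are genes; NaN marks missing values.
rng = np.random.default_rng(1)
X = rng.normal(size=(20, 50))
X[rng.random(X.shape) < 0.05] = np.nan          # inject ~5% missing entries
y = rng.integers(0, 2, size=20)                 # disease / control labels

# Impute the missing values, then select the most informative genes.
X_imputed = KNNImputer(n_neighbors=3).fit_transform(X)
selector = SelectKBest(score_func=f_classif, k=10).fit(X_imputed, y)
print("selected gene indices:", np.flatnonzero(selector.get_support()))
```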

Keywords: DNA microarray, feature selection, missing data, bioinformatics

Procedia PDF Downloads 574
25084 PDDA: Priority-Based, Dynamic Data Aggregation Approach for Sensor-Based Big Data Framework

Authors: Lutful Karim, Mohammed S. Al-kahtani

Abstract:

Sensors are being used in various applications such as agriculture, health monitoring, air and water pollution monitoring, and traffic monitoring and control, and hence play a vital role in the growth of big data. However, sensors collect redundant data, so aggregating and filtering sensor data is significantly important for designing an efficient big data framework. Current research does not focus on aggregating and filtering data at multiple layers of a sensor-based big data framework. Thus, this paper introduces (i) a three-layer data aggregation framework for big data and (ii) a priority-based, dynamic data aggregation scheme (PDDA) for the lowest layer, at the sensors. Simulation results show that PDDA outperforms existing tree- and cluster-based data aggregation schemes in terms of overall network energy consumption and end-to-end data transmission delay.
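
The abstract does not give PDDA's exact rules, so the snippet below is only a rough sketch of the general idea of priority-based aggregation at the sensor layer: critical readings are forwarded unchanged while redundant low-priority readings are collapsed into a single aggregate before transmission; the priority levels and threshold are invented.

```python
from statistics import mean

# Each reading: (priority, value). Higher priority means more critical (e.g., alarm conditions).
readings = [(2, 71.0), (0, 20.1), (0, 20.2), (1, 45.0), (0, 20.1), (2, 70.5)]

def aggregate(readings, high=2):
    """Forward critical readings as-is; collapse redundant lower-priority ones into one mean value."""
    critical = [v for p, v in readings if p >= high]
    routine = [v for p, v in readings if p < high]
    return critical + ([mean(routine)] if routine else [])

print("values transmitted:", aggregate(readings))  # far fewer values than the raw readings
```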

Keywords: big data, clustering, tree topology, data aggregation, sensor networks

Procedia PDF Downloads 346
25083 Control the Flow of Big Data

Authors: Shizra Waris, Saleem Akhtar

Abstract:

Big data is a research area receiving attention from academia and the IT community. In the digital world, the amounts of data produced and stored have grown enormously within a short period of time, and this rapidly increasing volume of data has created many challenges. In this paper, we use the functionalism and structuralism paradigms to analyze the genesis of big data applications and their current trends. The paper presents a complete discussion of state-of-the-art big data technologies based on group and stream data processing, and analyzes the strengths and weaknesses of these technologies. This study also covers big data analytics techniques, processing methods, some reported case studies from different vendors, several open research challenges, and the opportunities brought about by big data. The similarities and differences of these techniques and technologies, based on important limitations, are also investigated. Emerging technologies are suggested as a solution for big data problems.

Keywords: computer, IT community, industry, big data

Procedia PDF Downloads 194
25082 High Performance Computing and Big Data Analytics

Authors: Branci Sarra, Branci Saadia

Abstract:

Because of the massive growth of data, many computer science tools have been developed to process and analyze these big data. High-performance computing architectures have been designed to meet the processing needs of big data, from the standpoints of transaction processing and strategic and tactical analytics. The purpose of this article is to provide a historical and global perspective on the recent trend of high-performance computing architectures, especially as it relates to analytics and data mining.

Keywords: high performance computing, HPC, big data, data analysis

Procedia PDF Downloads 520
25081 A Landscape of Research Data Repositories in Re3data.org Registry: A Case Study of Indian Repositories

Authors: Prashant Shrivastava

Abstract:

The purpose of this study is to explore the re3data.org registry to identify the research data repository registration workflow, and further to depict the present development of research data repositories in India. The study starts with an approach to understanding the re3data.org registry framework and schema design and then proceeds to explore the status of Indian research data repositories in the registry. Research data repositories are gaining wider relevance due to e-research concepts. The re3data.org registry is a good tool for users and researchers to identify appropriate research data repositories for their research requirements. In the Indian environment, a compatible National Research Data Policy is needed to boost the management of research data. A registry for research data repositories is a crucial tool for discovering specific information in a specific domain, yet research data repositories in India have not been studied. Both the re3data.org registry and the status of Indian research data repositories are discussed in this study.

Keywords: research data, research data repositories, research data registry, re3data.org

Procedia PDF Downloads 324
25080 A Study of Cloud Computing Solution for Transportation Big Data Processing

Authors: Ilgin Gökaşar, Saman Ghaffarian

Abstract:

The need to rapidly process big data from transportation ridership (e.g., smartcard data) and traffic operations (e.g., traffic detector data), which requires a lot of computational power, is incontrovertible in Intelligent Transportation Systems. Nowadays, cloud computing is one of the important and popular information technology solutions for data processing. It enables users to process enormous amounts of data without having their own computing infrastructure; thus, it can also be a good choice for transportation big data processing. This paper examines how cloud computing can enhance transportation big data processing by contrasting its advantages and disadvantages and discussing cloud computing features.

Keywords: big data, cloud computing, Intelligent Transportation Systems, ITS, traffic data processing

Procedia PDF Downloads 467
25079 Harmonic Data Preparation for Clustering and Classification

Authors: Ali Asheibi

Abstract:

The rapid increase in the size of the databases required to store power quality monitoring data has demanded new techniques for analysing and understanding the data. One technique suggested to assist in this analysis is data mining. Preparing raw data to be ready for data mining exploration takes up most of the effort and time spent in the whole data mining process. Clustering is an important technique in data mining and machine learning in which underlying and meaningful groups of data are discovered. Large amounts of harmonic data have been collected from an actual harmonic monitoring system in a distribution system in Australia over three years. This amount of acquired data makes it difficult to identify operational events that significantly impact the harmonics generated on the system. In this paper, harmonic data preparation processes for a better understanding of the data are presented. Underlying classes in these data have then been identified using a clustering technique based on the Minimum Message Length (MML) method. The underlying operational information contained within the clusters can be rapidly visualised by engineers. The C5.0 algorithm was used for classification and interpretation of the generated clusters.
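
Neither the MML-based clustering nor C5.0 is part of common Python libraries, so the sketch below only mirrors the workflow on synthetic harmonic magnitudes: a model-based clustering step (a Gaussian mixture, standing in for the MML method) followed by a decision tree (standing in for C5.0) to make the clusters interpretable.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy harmonic measurements: e.g., 5th and 7th harmonic magnitudes per monitoring interval.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal([1.0, 0.5], 0.1, (50, 2)),
               rng.normal([3.0, 2.0], 0.2, (50, 2))])

# Model-based clustering (a stand-in for the MML-based method used in the paper).
labels = GaussianMixture(n_components=2, random_state=0).fit_predict(X)

# Interpret the resulting clusters with a decision tree (a stand-in for C5.0).
tree = DecisionTreeClassifier(max_depth=2).fit(X, labels)
print(export_text(tree, feature_names=["h5_magnitude", "h7_magnitude"]))
```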

Keywords: data mining, harmonic data, clustering, classification

Procedia PDF Downloads 248
25078 Linguistic Summarization of Structured Patent Data

Authors: E. Y. Igde, S. Aydogan, F. E. Boran, D. Akay

Abstract:

Patent data have an increasingly important role in economic growth, innovation, technical advantage, and business strategy, and even in competition between countries. Analyzing patent data is crucial since patents cover a large part of the world's technological information. In this paper, we have used the linguistic summarization technique to prove the validity of hypotheses related to patent data stated in the literature.
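
As a toy illustration of fuzzy linguistic summarization (the paper's actual protoforms and membership functions are not given in the abstract), the sketch below computes the Yager-style degree of truth of a summary such as "Most patents are recent" on invented filing years.

```python
import numpy as np

# Toy patent records: filing years (illustrative data only).
years = np.array([2001, 2005, 2010, 2015, 2018, 2019, 2020, 2021])

def mu_recent(year, start=2010, full=2020):
    """Fuzzy membership of 'recent': 0 before 2010, 1 from 2020, linear in between."""
    return np.clip((year - start) / (full - start), 0.0, 1.0)

def mu_most(proportion):
    """Fuzzy quantifier 'most' (a common piecewise-linear choice)."""
    return np.clip((proportion - 0.3) / (0.8 - 0.3), 0.0, 1.0)

# Truth degree of the linguistic summary "Most patents are recent".
truth = mu_most(mu_recent(years).mean())
print(f"T('Most patents are recent') = {truth:.2f}")
```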

Keywords: data mining, fuzzy sets, linguistic summarization, patent data

Procedia PDF Downloads 272
25077 Proposal of Data Collection from Probes

Authors: M. Kebisek, L. Spendla, M. Kopcek, T. Skulavik

Abstract:

In our paper we describe the security capabilities of data collection. Data are collected with probes located in the near and distant surroundings of the company. Considering numerous obstacles, e.g., forests, hills, and urban areas, the data collection is realized in several ways: via wireless communication, LAN networks, and GSM networks, and in certain areas data are collected using vehicles. In order to ensure the connection to the server, most of the probes have the ability to communicate in several ways. Collected data are archived and subsequently used in supervisory applications. To ensure the collection of the required data, it is necessary to propose algorithms that allow the probes to select a suitable communication channel.
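
The paper's actual channel-selection algorithm is not described in the abstract; the snippet below is just a minimal sketch of the fallback idea, trying the channels the probes are said to support in a fixed preference order (the channel names and availability map are hypothetical).

```python
# Hypothetical channel preference order and availability checks; names are illustrative.
CHANNELS = ["wifi", "lan", "gsm", "vehicle_pickup"]

def select_channel(available: dict[str, bool]) -> str:
    """Pick the first reachable channel in preference order, mirroring the multi-path idea."""
    for channel in CHANNELS:
        if available.get(channel, False):
            return channel
    raise RuntimeError("no communication channel reachable; buffer data locally")

print(select_channel({"wifi": False, "lan": False, "gsm": True}))
```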

Keywords: communication, computer network, data collection, probe

Procedia PDF Downloads 360
25076 A Review on Big Data Movement with Different Approaches

Authors: Nay Myo Sandar

Abstract:

With the growth of technologies and applications, a large amount of data is being produced at an increasing rate from various sources such as social media networks, sensor devices, and other information-serving devices. This massive, complex, and exponentially growing collection of datasets is called big data. Traditional database systems cannot store and process such data due to its size and complexity. Consequently, cloud computing is a potential solution for data storage and processing, since it can provide a pool of server and storage resources. However, moving large amounts of data to and from the cloud is a challenging issue, since it can incur high latency due to the large data size. This paper reviews the literature on the big data movement problem, discusses research issues, and identifies approaches for dealing with it.

Keywords: big data, cloud computing, big data movement, network techniques

Procedia PDF Downloads 86
25075 Optimized Approach for Secure Data Sharing in Distributed Database

Authors: Ahmed Mateen, Zhu Qingsheng, Ahmad Bilal

Abstract:

In the current age of technology, information is the most precious asset of a company. Today, companies hold large amounts of data, and as the data grow larger, access to particular information becomes slower day by day. Processing data quickly to shape it into information is the biggest issue. The major problems in distributed databases are the efficiency, response time, and security of data distribution. For these problems, we propose a strategy that can maximize the efficiency of data distribution and also improve its response time. The technique gives better results for secure data distribution from multiple heterogeneous sources and facilitates companies in sharing data securely, efficiently, and quickly.

Keywords: ER-schema, electronic record, P2P framework, API, query formulation

Procedia PDF Downloads 333
25074 Data Mining Algorithms Analysis: Case Study of Price Predictions of Lands

Authors: Julio Albuja, David Zaldumbide

Abstract:

Data analysis is an important step before making decisions about money. The aim of this work is to analyze, through data mining algorithms, the factors that influence the final price of houses. To the best of our knowledge, previous work was conducted only to compare results. Before using the data set, the Z-transformation was used to standardize the data into the same range. The data were then classified into two groups to visualize them in a readable format. A decision tree was built, and the data are displayed graphically so that the results and the influence of the factors are easy to see. The definitions of these methods are described, as well as descriptions of the results. Finally, conclusions and recommendations are presented related to the results of our research, making it easier to apply these algorithms using a customized data set.
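
A minimal sketch of the two steps named above, on invented housing features: a z-score standardization followed by a decision tree, here a scikit-learn regressor so that the factors driving the predicted price can be read off the tree.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeRegressor, export_text

# Toy housing records: [area_m2, rooms, distance_to_center_km]; feature names are illustrative.
X = np.array([[80, 3, 5.0], [120, 4, 2.0], [60, 2, 8.0], [150, 5, 1.0], [95, 3, 4.0]])
prices = np.array([120_000, 210_000, 90_000, 300_000, 150_000])

# Z-transformation: standardize every attribute to zero mean and unit variance.
X_z = StandardScaler().fit_transform(X)

# The decision tree shows which standardized factors drive the predicted price.
tree = DecisionTreeRegressor(max_depth=2, random_state=0).fit(X_z, prices)
print(export_text(tree, feature_names=["area_z", "rooms_z", "distance_z"]))
```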

Keywords: algorithms, data, decision tree, transformation

Procedia PDF Downloads 374
25073 Application of Blockchain Technology in Geological Field

Authors: Mengdi Zhang, Zhenji Gao, Ning Kang, Rongmei Liu

Abstract:

The management and application of geological big data is an important part of China's national big data strategy, and with the implementation of that strategy, geological big data management becomes more and more critical. At present, there are still many technological barriers as well as conceptual confusion in many aspects of geological big data management and application, such as data sharing, intellectual property protection, and application technology. Therefore, it is a key task to make better use of new technologies for deeper exploration and wider application of geological big data. In this paper, we briefly introduce the basic principles of blockchain technology and then analyze the application dilemmas of geological data. Based on this analysis, we bring forward some feasible patterns and scenarios for applying blockchain to geological big data and put forward several suggestions for future work in geological big data management.

Keywords: blockchain, intellectual property protection, geological data, big data management

Procedia PDF Downloads 89
25072 Frequent Item Set Mining for Big Data Using MapReduce Framework

Authors: Tamanna Jethava, Rahul Joshi

Abstract:

Frequent item sets play an essential role in many data mining tasks that try to find interesting patterns in databases. Typically, a frequent item set is a set of items that frequently appear together in a transaction dataset. Several mining algorithms are used for frequent item set mining, yet most do not scale to the type of data we are presented with today, so-called big data: collections of very large data sets. Our approach is to perform frequent item set mining over large datasets in a scalable and speedy way, using MapReduce along with HDFS to find frequent item sets from big data on a large cluster. This paper focuses on using a pre-processing and mining algorithm as a hybrid approach for big data on the Hadoop platform.
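
In spirit, the MapReduce formulation looks like the small, self-contained Python sketch below: a mapper emits candidate item pairs per transaction and a reducer sums their counts and applies the minimum-support threshold. Here everything runs in one process; on Hadoop the same functions would run over HDFS input splits, and the transactions and threshold are invented.

```python
from itertools import combinations
from collections import defaultdict

# Toy transaction database; on Hadoop each transaction would arrive as one input line from HDFS.
transactions = [
    {"bread", "milk"},
    {"bread", "butter", "milk"},
    {"butter", "milk"},
    {"bread", "butter"},
]
MIN_SUPPORT = 2

def mapper(transaction):
    """Emit (itemset, 1) for every candidate pair in one transaction."""
    for pair in combinations(sorted(transaction), 2):
        yield pair, 1

def reducer(pairs):
    """Sum the counts per itemset and keep those meeting minimum support."""
    counts = defaultdict(int)
    for itemset, one in pairs:
        counts[itemset] += one
    return {k: v for k, v in counts.items() if v >= MIN_SUPPORT}

emitted = (kv for t in transactions for kv in mapper(t))  # shuffle phase omitted for brevity
print(reducer(emitted))
```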

Keywords: frequent item set mining, big data, Hadoop, MapReduce

Procedia PDF Downloads 435
25073 The Role of Data Gathering in NGOs

Authors: Hussaini Garba Mohammed

Abstract:

Background/Significance: The lack of data gathering affects NGOs worldwide in obtaining good information about educational and health-related issues among communities in any country and around the world. For example, HIV/AIDS, smoking-related illness (tuberculosis), and COVID-19 infection are becoming serious public health problems, especially among older men and women. Yet in some countries there is no detailed survey data from communities, villages, and rural areas to show the percentage of victims and patients, especially for COVID-19. These data are essential to inform programming targets, strategies, and priorities and to obtain good information through data gathering in any society.

Keywords: reliable information, data assessment, data mining, data communication

Procedia PDF Downloads 179
25070 Residential and Care Model for Elderly People Based on “Internet Plus”

Authors: Haoyi Sheng

Abstract:

China's aging trend is becoming increasingly severe, which leads to the embarrassing situation of "getting old before getting wealthy". The traditional pension model no longer meets today's needs. Relying on "Internet Plus", information and resources can be efficiently integrated to meet the personalized needs of elderly care, reduce the operating cost of community elderly care facilities, and lay a technical foundation for providing better services for the elderly. The key to helping the elderly in the future is to effectively integrate technology, make good use of it, and improve the efficiency of elderly care services. The effective integration of traditional home care, community care, intelligent elderly care equipment, and medical resources to create an "Internet Plus" community intelligent pension service mode has become the future development trend of aging care. The research method of this paper is, first, to collect literature and conduct theoretical research on community pensions; second, to elaborate the combination of age-friendly design and "Internet Plus"; and finally, to describe the current level of intelligent technology in old-age care and look into the future by examining multiple levels of "Internet Plus". The development of a community intelligent pension mode and content under "Internet Plus" has enormous potential. In addition to the characteristics and functions of ordinary houses, the residential design of endowment housing has higher requirements for comfort and personalization, with people-oriented design as its guiding principle.

Keywords: ageing tendency, 'Internet Plus', community intelligent elderly care, elderly care service model, technology

Procedia PDF Downloads 137
25069 The Application of Data Mining Technology in Building Energy Consumption Data Analysis

Authors: Liang Zhao, Jili Zhang, Chongquan Zhong

Abstract:

Energy consumption data, in particular those involving public buildings, are impacted by many factors: the building structure, climate/environmental parameters, construction, system operating conditions, and user behavior patterns. Traditional methods for data analysis are insufficient. This paper delves into data mining technology to determine its application in the analysis of building energy consumption data, including energy consumption prediction, fault diagnosis, and optimal operation. Recent literature is reviewed and summarized, the problems faced by data mining technology in the area of energy consumption data analysis are enumerated, and research directions for future studies are given.
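
As a toy example of the prediction task surveyed here (not a method from the paper), the sketch below fits a simple regression of hourly consumption on invented weather and occupancy features.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Toy hourly records: [outdoor_temp_C, occupancy, hour_of_day]; feature names are illustrative.
X = np.array([[30, 200, 14], [28, 180, 15], [22, 50, 20], [18, 10, 2], [25, 150, 11]])
energy_kwh = np.array([310.0, 295.0, 160.0, 90.0, 250.0])

# A simple consumption-prediction model of the kind surveyed in the paper.
model = LinearRegression().fit(X, energy_kwh)
print("predicted kWh for a hot, busy afternoon:", model.predict([[29, 190, 13]])[0].round(1))
```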

Keywords: data mining, data analysis, prediction, optimization, building operational performance

Procedia PDF Downloads 852
25068 The Use of Gender-Fair Language in CS National Exams

Authors: Moshe Leiba, Doron Zohar

Abstract:

Computer science (CS) and programming are still considered a boys' club and a male-dominated profession. This is also the case in high schools and higher education. In Israel, not unlike the rest of the world, fewer than 35% of the CS students who take the matriculation exams are female. The Israeli matriculation exams are written in masculine-form language. Gender-fair language (GFL) aims at reducing gender stereotyping and discrimination. There are several strategies that can be employed to make languages gender-fair and to treat women and men symmetrically (especially in languages with grammatical gender), among them neutralization and using the plural form. This research aims at exploring computer science teachers' beliefs regarding the use of gender-fair language in exams. An exploratory quantitative research methodology was employed to collect the data. A questionnaire was administered to 353 computer science teachers, 58% female and 42% male. 86% have been teaching for at least 3 years, and 59% of them have 7 years of teaching experience. 71% of the teachers teach in high school, and 82% of them prepare students for the matriculation exam in computer science. The questionnaire contained 2 matriculation exam questions from previous years and open-ended questions. Teachers were asked which form they think is best suited: (a) the existing (masculine) form, (b) both gender full forms (e.g., he/she), (c) both gender short forms, (d) the plural form, (e) the neutral form, or (f) the female form. 84% of the teachers recognized the need to change the existing masculine form in the matriculation exams, and about 50% of them thought that using the plural form was the best-suited option. When comparing the teachers who are pro-change and those who are against it, no differences in gender or teaching experience were found. The teachers who are pro gender-fair language justified it as making the exam more personal and motivating for female students. Those who thought that the masculine form should remain argued that the female students do not complain and that a change in form will not influence female students' decision to study computer science. Some even argued that the change will not affect the students but can only improve their sense of identity or feeling toward the profession (which seems like a misconception). This research suggests that the teachers are pro-change and believe that re-formulating the matriculation exams is the right step towards encouraging more female students to choose computer science as their major study track and towards bridging the gap in gender equality. This indicates a bottom-up approach, as not long after this research was conducted, the Israeli Ministry of Education decided to change the matriculation exams to gender-fair language using the plural form. In the coming years, with the transition to web-based examination, it is suggested to use personalization and adjust the language form in accordance with the student's gender.

Keywords: computer science, gender-fair language, teachers, national exams

Procedia PDF Downloads 112
25067 To Handle Data-Driven Software Development Projects Effectively

Authors: Shahnewaz Khan

Abstract:

Machine learning (ML) techniques are often used in projects for creating data-driven applications. These tasks typically demand additional research and analysis, and the proper technique and strategy must be chosen to ensure the success of data-driven projects; otherwise, even with a lot of effort, the necessary development might not always be possible. This paper examines the workflow of data-driven software development projects and its implementation process in order to describe how to manage such a project successfully, which will assist in minimizing the added workload.

Keywords: data, data-driven projects, data science, NLP, software project

Procedia PDF Downloads 83
25066 3D Medical Printing the Key Component in Future of Medical Applications

Authors: Zahra Asgharpour, Eric Renteria, Sebastian De Boodt

Abstract:

There is a growing trend towards personalization of medical care, as evidenced by the emphasis on outcomes-based medicine, the latest developments in CT and MR imaging, and personalized treatment in a variety of surgical disciplines. 3D printing has been introduced and applied in the medical field since 2000; the first applications were in the field of dental implants and custom prosthetics. According to recent publications, 3D printing in the medical field has been used in a wide range of applications, which can be organized into several categories including implants, prosthetics, anatomical models, and tissue bioprinting. Some of these categories are still at the proof-of-concept stage, while others, such as the design and manufacturing of customized implants and prostheses, are in the application phase. 3D printing in this category has been successfully used in the health care sector to make both standard and complex implants within a reasonable amount of time. In this study, some of the clinical applications of 3D printing in the design and manufacturing of a patient-specific hip implant are explained. In cases where patients have complex bone geometries or are undergoing a complex hip replacement revision, traditional surgical methods are not efficient, and hence these patients require patient-specific approaches. There are major advantages in using this new technology for medical applications; however, for this technology to be widely accepted in the medical device industry, more acceptance needs to be gained from medical device regulatory bodies. This is an ongoing challenge, and overcoming it will help the technology find its way, in the end, as an accepted manufacturing method for the medical device industry on an international scale. The discussion concludes with some examples describing the future directions of 3D medical printing.

Keywords: CT/MRI, image processing, 3D printing, medical devices, patient specific implants

Procedia PDF Downloads 298