Search results for: data center
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 26222

25352 Steps towards the Development of National Health Data Standards in Developing Countries

Authors: Abdullah I. Alkraiji, Thomas W. Jackson, Ian Murray

Abstract:

The health data standards proliferating today overlap and conflict with one another, resulting in market confusion and growing proprietary interests. The government's role and support in the standardization of health data are thought to be crucial to establishing credible standards for the next decade, maximizing interoperability across the health sector, and decreasing the risks associated with implementing non-standard systems. The normative literature has not explored the steps a government must take towards the development of national health data standards. Based on the lessons learned from a qualitative study investigating the issues affecting the adoption of health data standards in the major tertiary hospitals in Saudi Arabia, together with the opinions and feedback of experts in data exchange, standards, and medical informatics in Saudi Arabia and the UK, a list of steps required for the development of national health data standards was constructed. The main steps are the existence of a national formal reference for health data standards, an agreed national strategic direction for medical data exchange, a national medical information management plan, and a national accreditation body; most important is change management at the national and organizational levels. The outcome of this study can be used by academics and practitioners planning health data standards, particularly in developing countries.

Keywords: interoperability, medical data exchange, health data standards, case study, Saudi Arabia

Procedia PDF Downloads 337
25351 A Proposal for U-City (Smart City) Service Method Using Real-Time Digital Map

Authors: SangWon Han, MuWook Pyeon, Sujung Moon, DaeKyo Seo

Abstract:

Recently, technologies based on three-dimensional (3D) space information have been developed, and quality of life is improving as a result. Research on the real-time digital map (RDM) is now being conducted to provide 3D space information. RDM is a service that creates and supplies 3D space information in real time based on location/shape detection. Research subjects on RDM include the construction of 3D space information with matching image data, complementing the weaknesses of image acquisition using multi-source data, and data collection methods using big data. Using RDM will be effective for spatial analysis based on 3D space information in a U-City and for other space information utilization technologies.

Keywords: RDM, multi-source data, big data, U-City

Procedia PDF Downloads 430
25350 Agile Methodology for Modeling and Design of Data Warehouses (AM4DW)

Authors: Nieto Bernal Wilson, Carmona Suarez Edgar

Abstract:

Organizations hold structured and unstructured information in different formats, sources, and systems. Part of this information comes from ERP systems under OLTP processing that support the information system; at the OLAP processing level, however, these organizations show deficiencies. Part of the problem lies in the lack of interest in extracting knowledge from their data sources, as well as the absence of the operational capabilities to tackle this kind of project. Data warehouses and their applications are considered non-proprietary tools of great interest for business intelligence, since they are the repository basis for creating models or patterns (behavior of customers, suppliers, products, social networks, and genomics) and facilitate corporate decision-making and research. This paper presents a structured, simple methodology inspired by agile development models such as Scrum, XP, and AUP, along with object-relational and spatial data models and a baseline of data modeling under UML and big data. In this way, we sought to deliver an agile methodology for developing data warehouses that is simple and easy to apply. The methodology naturally takes into account processes for information analysis, visualization, and data mining, particularly for generating patterns and models derived from structured fact objects.

Keywords: data warehouse, data model, big data, fact object, object-relational fact, data warehouse development process

Procedia PDF Downloads 402
25349 Kidney Stones in Individuals Living with Diabetes Mellitus at King Abdul-Aziz Medical City - Tertiary Care Center, Jeddah, Saudi Arabia: A Retrospective Cohort Study

Authors: Suhaib Radi, Ibrahim Basem Nafadi, Abdullah Ahmed Alsulami, Nawaf Faisal Halabi, Abdulrhman Abdullah Alsubhi, Sami Wesam Maghrabi, Waleed Saad Alshehri

Abstract:

Background: Kidney stones greatly affect individuals, but the range of these effects across multiple kidney stone factors (size, presence of obstruction, and modality of treatment) in stone formers with and without diabetes has not, to the best of the authors' knowledge, been well explored in the literature. Our goal is to investigate this unexplored correlation between diabetes and kidney stones by conducting a retrospective cohort study to precisely evaluate the effects of this condition and the existence of complications in adults with diabetes in Saudi Arabia in comparison to a non-diabetic control group. Methodology: This is a retrospective cohort study evaluating the range of effects of kidney stones in a group of adults diagnosed with type 2 diabetes mellitus and adults without diabetes between 2017 and 2019 in Jeddah, Saudi Arabia. IRB approval was granted for this study. The data were analyzed using SPSS and were collected from 1 December 2022 until 1 March 2023. Results: A total of 254 individuals diagnosed with kidney stones were included, 127 of whom were adults with type 2 diabetes and 127 non-diabetics. Our study shows that individuals with diabetes were more likely to have larger kidney stones than individuals without diabetes (13.12 mm vs. 10.53 mm, p-value = 0.03). Moreover, individuals with hypertension and dyslipidemia also had significantly larger stones. On the other hand, no significant difference was found in the presence of obstruction or the modality of treatment between the two groups. Conclusion: This study, done in Saudi Arabia, found that individuals with kidney stones who concurrently had diabetes formed larger kidney stones and were also found to have other comorbidities such as HTN, dyslipidemia, obesity, and renal disease. The significance of these findings could assist in the future primary and secondary prevention of renal stones.

Keywords: kidney stone, type 2 DM, metabolic syndrome, lithotripsy

Procedia PDF Downloads 106
25348 Identifying Models to Predict Deterioration of Water Mains Using Robust Analysis

Authors: Go Bong Choi, Shin Je Lee, Sung Jin Yoo, Gibaek Lee, Jong Min Lee

Abstract:

In South Korea, it is difficult to obtain the data needed for statistical pipe assessment. To address this issue, we examine how the various statistical models presented in earlier work behave when data are mixed with noise, and whether they are applicable in South Korea. Three major types of model are studied; where data are presented in the original paper, we add noise to those data and observe how the model response changes. Moreover, we generate data from the models in the papers and analyse the effect of noise. From this, we can determine each model's robustness and its applicability in Korea.
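
As a concrete illustration of the kind of robustness check described here, the following Python sketch generates pipe failure ages from a Weibull survival model, perturbs them with Gaussian noise, and refits the model to see how far the recovered parameters drift. The parameter values and the choice of a Weibull model are illustrative assumptions, not taken from the paper.

```python
# Illustrative robustness check (assumed workflow, not the paper's own code):
# simulate failure ages, add noise, refit, compare recovered parameters.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_shape, true_scale = 2.0, 40.0   # hypothetical deterioration parameters
ages = stats.weibull_min.rvs(true_shape, scale=true_scale,
                             size=500, random_state=rng)

for noise_sd in (0.0, 1.0, 5.0):
    noisy = np.clip(ages + rng.normal(0.0, noise_sd, ages.size), 0.01, None)
    shape, _, scale = stats.weibull_min.fit(noisy, floc=0.0)
    print(f"noise sd={noise_sd:4.1f} -> shape={shape:.2f}, scale={scale:.1f}")
```

A model whose fitted parameters stay close to the noise-free values under increasing noise would count as robust in the sense used above.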

Keywords: proportional hazard model, survival model, water main deterioration, ecological sciences

Procedia PDF Downloads 740
25347 Automated Testing to Detect Instance Data Loss in Android Applications

Authors: Anusha Konduru, Zhiyong Shan, Preethi Santhanam, Vinod Namboodiri, Rajiv Bagai

Abstract:

Mobile applications are growing significantly in number, each addressing the requirements of many users. However, quick development and enhancement cycles result in many underlying defects. Android apps create and handle a large variety of 'instance' data that has to persist across runs, such as the current navigation route, workout results, antivirus settings, or game state. Due to the nature of Android, an app can be paused, sent into the background, or killed at any time. If the instance data is not saved and restored between runs, then in addition to data loss, partially saved or corrupted data can crash the app upon resume or restart. It is difficult for the programmer to test this issue manually for all activities, so data entered by the user may not be saved when an interruption occurs. This can degrade the user experience, because the user must re-enter the information after each interruption. Automated testing to detect such data loss is therefore important for improving the user experience. This research proposes DroidDL, a data loss detector for Android, which detects instance data loss in a given Android application. We tested 395 applications and found 12 with data loss issues. The approach proved highly accurate and reliable in finding apps with this defect, and can be used by Android developers to avoid such errors.
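
The detection idea lends itself to a small driver script. The sketch below, in Python over the standard adb command-line tool, enters a marker value into an app, force-stops it, relaunches it, and checks whether the value survived; the package name and marker text are hypothetical, and DroidDL's actual mechanism is not described beyond this abstract.

```python
# Minimal sketch of instance-data-loss probing via adb (hypothetical app).
import subprocess, time

PKG = "com.example.notes"   # hypothetical package under test

def adb(*args):
    out = subprocess.run(["adb", "shell", *args],
                         capture_output=True, text=True)
    return out.stdout

adb("monkey", "-p", PKG, "-c", "android.intent.category.LAUNCHER", "1")
time.sleep(2)
adb("input", "text", "probe123")        # type a marker into the focused field
adb("am", "force-stop", PKG)            # simulate the OS killing the app
adb("monkey", "-p", PKG, "-c", "android.intent.category.LAUNCHER", "1")
time.sleep(2)
ui = adb("uiautomator", "dump", "/dev/tty")   # dump UI hierarchy to stdout
print("instance data lost" if "probe123" not in ui else "instance data restored")
```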

Keywords: Android, automated testing, activity, data loss

Procedia PDF Downloads 233
25346 Big Data: Appearance and Disappearance

Authors: James Moir

Abstract:

The mainstay of Big Data is prediction, in that it allows practitioners, researchers, and policy analysts to predict trends based upon the analysis of large and varied sources of data, ranging from changing social and political opinions to patterns in crime and consumer behaviour. Big Data has therefore shifted the criterion of success in science from causal explanation to predictive modelling and simulation. Nineteenth-century science sought to capture phenomena and explain their appearance through causal mechanisms, while twentieth-century science attempted to save the appearances and relinquish causal explanations. Now twenty-first-century science, in the form of Big Data, is concerned with the prediction of appearances and nothing more. However, this pulls social science back in the direction of a more rule- or law-governed model of reality and away from a consideration of the internal nature of rules in relation to various practices. In effect, Big Data offers us no more than a world of surface appearance, and in doing so it makes any context-specific conceptual sensitivity disappear.

Keywords: big data, appearance, disappearance, surface, epistemology

Procedia PDF Downloads 417
25345 From Data Processing to Experimental Design and Back Again: A Parameter Identification Problem Based on FRAP Images

Authors: Stepan Papacek, Jiri Jablonsky, Radek Kana, Ctirad Matonoha, Stefan Kindermann

Abstract:

FRAP (Fluorescence Recovery After Photobleaching) is a widely used measurement technique for determining the mobility of fluorescent molecules within living cells. While the experimental setup and protocol for FRAP experiments are usually fixed, the data processing part is still under development. In this paper, we formulate and solve the problem of data selection, which enhances the processing of FRAP images. We introduce the concept of the irrelevant data set, i.e., data which contribute almost nothing to reducing the confidence interval of the estimated parameters and can thus be neglected. Based on sensitivity analysis, we both solve the problem of optimal data space selection and find specific conditions for optimizing an important experimental design factor, e.g., the radius of the bleach spot. Finally, a theorem stating that the integrated data approach is less precise than the full data case is proven; i.e., we claim that the data set represented by the FRAP recovery curve leads to a larger confidence interval than the spatio-temporal (full) data.
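
The link between sensitivities and confidence intervals that underlies the notion of an irrelevant data set can be made explicit with the standard least-squares relations below; this is textbook estimation theory, stated here as an assumed reconstruction rather than the paper's own formulation.

```latex
% Standard nonlinear least-squares sensitivity relations (textbook result,
% assumed here; the paper's exact formulation may differ).
S_{ij} = \frac{\partial y(x_i, t_i; \theta)}{\partial \theta_j},
\qquad
\operatorname{cov}(\hat{\theta}) \approx \sigma^{2} \left( S^{\top} S \right)^{-1},
\qquad
\Delta \theta_j \propto \sqrt{\left[ \left( S^{\top} S \right)^{-1} \right]_{jj}}
```

Measurements whose rows of S are nearly zero leave S'S, and hence the confidence intervals, essentially unchanged; those are precisely the irrelevant data that can be neglected.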

Keywords: FRAP, inverse problem, parameter identification, sensitivity analysis, optimal experimental design

Procedia PDF Downloads 274
25344 Analysis and Evaluation of the Public Responses to Traffic Congestion Pricing Schemes in Urban Streets

Authors: Saeed Sayyad Hagh Shomar

Abstract:

Traffic congestion pricing in urban streets is one of the most suitable options for solving traffic problems and environmental pollution in the cities of the country. Despite its acceptable outcomes, there are problems concerning the public's reluctance to pay. Given that public response is highly influential in the success of this strategy, studying public response and behavior to obtain feedback and improve the strategies is of great importance. In this study, a questionnaire was used to examine public reactions to the traffic congestion pricing schemes in the center of the Tehran metropolis, and the factors involved in people's acceptance or rejection of the congestion pricing schemes were assessed based on the questionnaire data as well as international experience. Then, by analyzing and comparing the schemes, guidelines to reduce public objections to them are discussed. The results of reviewing and evaluating the public reactions show that all the pros and cons must be considered to guarantee the success of these projects. Consequently, with targeted public education and consciousness-raising advertisements prior to initiating a scheme, and by ensuring the mechanism of implementation after the start of the project, initial opposition is reduced and, with the gradual emergence of the real and tangible benefits of implementation, users' satisfaction will increase.

Keywords: demand management, international experiences, traffic congestion pricing, public acceptance, public reactions, public objection

Procedia PDF Downloads 241
25343 Exploring the Feasibility of Utilizing Blockchain in Cloud Computing and AI-Enabled BIM for Enhancing Data Exchange in Construction Supply Chain Management

Authors: Tran Duong Nguyen, Marwan Shagar, Qinghao Zeng, Aras Maqsoodi, Pardis Pishdad, Eunhwa Yang

Abstract:

Construction supply chain management (CSCM) involves the collaboration of many disciplines and actors, which generates vast amounts of data. However, inefficient, fragmented, and non-standardized data storage often hinders this data exchange. The industry has adopted building information modeling (BIM), a digital representation of a facility's physical and functional characteristics, to improve collaboration, enhance transmission security, and provide a common data exchange platform. Still, the volume and complexity of the data require tailored information categorization aligned with stakeholders' preferences and demands. To address this, artificial intelligence (AI) can be integrated to handle the data's magnitude and complexity. This research aims to develop an integrated and efficient approach for data exchange in CSCM by utilizing AI. The paper covers five main objectives: (1) investigate existing frameworks and BIM adoption; (2) identify challenges in data exchange; (3) propose an integrated framework; (4) enhance data transmission security; and (5) develop data exchange in CSCM. The proposed framework demonstrates how integrating BIM with other technologies, such as cloud computing, blockchain, and AI applications, can significantly improve the efficiency and accuracy of data exchange in CSCM.

Keywords: construction supply chain management, BIM, data exchange, artificial intelligence

Procedia PDF Downloads 16
25342 Representing Data without Losing Compression Properties in Time Series: A Review

Authors: Nabilah Filzah Mohd Radzuan, Zalinda Othman, Azuraliza Abu Bakar, Abdul Razak Hamdan

Abstract:

Uncertain data is believed to be an important issue in building a prediction model. The main objective of time series uncertainty analysis is to formulate uncertain data in order to gain knowledge and fit a low-dimensional model prior to a prediction task. This paper discusses the performance of a number of techniques for dealing with uncertain data, specifically those which handle the uncertain data condition while minimizing the loss of compression properties.
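
One widely used compressed representation in this family is the piecewise aggregate approximation (PAA); the Python sketch below is an illustrative choice of ours, not necessarily one of the techniques compared in the review.

```python
# Piecewise Aggregate Approximation: compress a series to n segment means.
import numpy as np

def paa(series: np.ndarray, n_segments: int) -> np.ndarray:
    """Average equal-width windows, reducing len(series) to n_segments."""
    return np.array([seg.mean() for seg in np.array_split(series, n_segments)])

t = np.linspace(0, 4 * np.pi, 1000)
noisy = np.sin(t) + np.random.default_rng(1).normal(0, 0.2, t.size)
print(paa(noisy, 20).round(2))   # 1000 points reduced to 20 means
```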

Keywords: compression properties, uncertainty, uncertain time series, mining technique, weather prediction

Procedia PDF Downloads 424
25341 Valuing Academic Excellence in Higher Education: The Case of Establishing a Human Development Unit in a European Start-up University

Authors: Eleftheria Atta, Yianna Vovides, Marios Katsioloudes

Abstract:

In the fusion of neoliberalism and globalization, Higher Education (HE) is becoming increasingly complex. The changing patterns of the economy worldwide have driven the development of a high-value-added economy, and HE has been viewed as a social investment, significant for the development of knowledge-based societies and economies. In order to contribute to economic competitiveness, universities are required to produce local and employable workers who fit into the neoliberal economic environment. The emergence of neoliberal performativity, which measures outcomes, is a key aspect of the neoliberal era. It facilitates the redesign of institutions, leading organizations and individuals to think about themselves in relation to their performance. Performativity and performance management systems push academics to become more effective, advance professionally, improve, and outdo others, and therefore to act competitively. Besides the aforementioned complexities, universities also face the challenge of maintaining a set of values to guide an institution's actions, values that have always been highly respected in developing an HE institution. The formulation of a clear set of values also determines the institutional culture that will be maintained. It is evident that values create a significant framework for the workplace and may determine positive institutional results. Universities are required to engage in capacity-building activities that improve their students' competence and offer administrative and academic staff opportunities to develop professionally in light of neoliberal performativity. Additionally, the university is now considered an innovation ecosystem, playing a significant role in providing education, research, and innovation that help create solutions to social, environmental, and economic challenges. Thus, universities become central in orchestrating multi-actor innovation networks. This presentation will discuss the establishment of an institutional unit entitled the 'Human Development Unit' (HDU) at a European start-up university. The activities of the HDU are envisioned as drivers of innovation that will enable the university as a whole to maintain its position in a fast-changing world and be ready to face adaptive challenges. In addition, the HDU provides students, staff, and faculty with opportunities to advance their academic and professional development through engagement in programs that align with institutional values. It also serves as a connector with the broader community. The presentation will highlight the functions of the three centers the unit will coordinate, namely the Student Development Center (SDC), the Faculty & Staff Development Center (FSDC), and the Continuing Education Center (CEC). The presentation aligns with the aim of the conference, which welcomes discussion of innovations and challenges encountered in HE. In particular, it discusses the establishment of an innovative unit at a start-up university, which will contribute to creating an institutional culture shaped by the value of academic excellence for students and staff alike, shaping and defining the functions and activities of the unit. The establishment of the proposed unit is crucial for a start-up university, both to differentiate it from competitors and to sustain its presence given the pressures of a neoliberal HE context.

Keywords: academic excellence, globalization, human development unit, neoliberalism

Procedia PDF Downloads 141
25340 Data Mining as a Tool for Knowledge Management: A Review

Authors: Maram Saleh

Abstract:

Knowledge has become an essential resource in today's economy and the most important asset for maintaining competitive advantage in organizations. The importance of knowledge has led organizations to manage their knowledge assets and resources through all the stages of knowledge management: knowledge creation, knowledge storage, knowledge sharing, and knowledge use. Research on data mining has continued to grow in recent years in both business and educational fields. Data mining is one of the most important steps of the knowledge discovery in databases process, aiming to extract implicit, unknown, but useful knowledge, and it is considered a significant subfield of knowledge management. Data mining has great potential to help organizations focus on extracting the most important information from their data warehouses. Data mining tools and techniques can predict future trends and behaviors, allowing businesses to make proactive, knowledge-driven decisions. This review paper explores the applications of data mining techniques in supporting the knowledge management process as an effective knowledge discovery technique. In this paper, we identify the relationship between data mining and knowledge management, and then introduce some applications of data mining techniques in knowledge management for several real-life domains.

Keywords: data mining, knowledge management, knowledge discovery, knowledge creation

Procedia PDF Downloads 203
25339 Anomaly Detection Based Fuzzy K-Mode Clustering for Categorical Data

Authors: Murat Yazici

Abstract:

Anomalies are irregularities found in data that do not adhere to a well-defined standard of normal behavior. The identification of outliers or anomalies in data has been a subject of study in statistics since the 1800s, and over time a variety of anomaly detection techniques have been developed in several research communities. Cluster analysis can be used to detect anomalies. It is the process of grouping data into clusters whose members are as similar as possible while the clusters themselves are as dissimilar from one another as possible. Many traditional clustering algorithms have limitations in dealing with data sets containing categorical attributes. To detect anomalies in categorical data, a fuzzy clustering approach can be used to advantage. The fuzzy k-modes (FKM) clustering algorithm, one of the fuzzy clustering approaches and an extension of the k-means algorithm, has been reported for clustering datasets with categorical values. It is a form of soft clustering: each point can be associated with more than one cluster. In this paper, anomaly detection is performed on two simulated datasets using the FKM clustering algorithm. A significant feature of the study is that, in contrast to numerous anomaly detection algorithms, the FKM clustering algorithm determines anomalies together with their degree of abnormality. According to the results, the FKM clustering algorithm performed well in detecting anomalies in data containing both a single anomaly and multiple anomalies.
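
A minimal Python sketch of the FKM idea is given below, assuming the usual simple-matching dissimilarity and the standard fuzzy membership update; the abnormality degree of each record is read off as one minus its largest cluster membership. The paper's own implementation and simulations are not reproduced here.

```python
# Fuzzy k-modes sketch: soft memberships over categorical data,
# with 1 - max membership used as the degree of abnormality.
import numpy as np

def fuzzy_k_modes(X, k=2, m=1.5, iters=30, seed=0):
    rng = np.random.default_rng(seed)
    modes = X[rng.choice(len(X), k, replace=False)].copy()
    for _ in range(iters):
        # simple-matching dissimilarity of every point to every mode (k x n)
        d = np.array([(X != z).sum(axis=1) for z in modes]).astype(float)
        d = np.maximum(d, 1e-9)             # guard exact matches
        u = d ** (-1.0 / (m - 1.0))
        u /= u.sum(axis=0)                  # fuzzy memberships
        for c in range(k):                  # mode update, attribute-wise
            w = u[c] ** m
            modes[c] = [max(set(col), key=lambda v: w[col == v].sum())
                        for col in X.T]
    return modes, u

# toy data: two clear groups plus one record that fits neither
X = np.array([["a","x"], ["a","x"], ["a","y"],
              ["b","z"], ["b","z"], ["c","q"]])
_, u = fuzzy_k_modes(X, k=2)
print("abnormality degree:", (1 - u.max(axis=0)).round(2))
```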

Keywords: fuzzy k-mode clustering, anomaly detection, noise, categorical data

Procedia PDF Downloads 49
25338 Big Data Analytics and Data Security in the Cloud via Fully Homomorphic Encryption Scheme

Authors: Victor Onomza Waziri, John K. Alhassan, Idris Ismaila, Noel Dogonyara

Abstract:

This paper describes the problem of building secure computational services for encrypted information in the cloud: computing without decrypting the encrypted data. This meets the aspiration for a computational encryption model that can enhance the security of big data with respect to privacy, confidentiality, availability, and integrity of the data and the user's security. The cryptographic model applied to computation on the encrypted data is the fully homomorphic encryption scheme. We contribute a theoretical presentation of high-level computational processes based on number theory derived from abstract algebra, which can easily be integrated and leveraged in a cloud computing interface, together with detailed theoretical mathematical concepts for fully homomorphic encryption models. This contribution supports the full implementation of big data analytics based on cryptographic security algorithms.
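
Fully homomorphic schemes are far too involved for a short example, but the core property, computing on ciphertexts, can be shown with the Paillier cryptosystem, which is homomorphic for addition only. The Python toy below uses deliberately tiny primes; it is a didactic sketch of the homomorphic property, not of the scheme discussed in the paper.

```python
# Paillier in miniature: E(m1) * E(m2) mod n^2 decrypts to m1 + m2.
# Toy primes only; real deployments use keys thousands of bits long.
import math, random

p, q = 1009, 1013
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)                     # valid because g = n + 1

def enc(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    return ((pow(c, lam, n2) - 1) // n) * mu % n

c1, c2 = enc(41), enc(1)
print(dec(c1 * c2 % n2))                 # 42, computed without decrypting
```

FHE schemes extend this idea to both addition and multiplication, and hence arbitrary circuits, at the cost of noise-management techniques such as the bootstrapping mentioned in the keywords.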

Keywords: big data analytics, security, privacy, bootstrapping, Fully Homomorphic Encryption Scheme

Procedia PDF Downloads 478
25337 Vehicular Speed Detection Camera System Using Video Stream

Authors: C. A. Anser Pasha

Abstract:

In this paper, a new vehicular speed detection camera system (SDCS), applicable as an alternative to traditional radar with the same or better accuracy, is presented. Real-time measurement and analysis of various traffic parameters, such as speed and number of vehicles, are increasingly required in traffic control and management. Image processing techniques are now considered an attractive and flexible method for automatic analysis and data collection in traffic engineering, and various algorithms based on them have been applied to detect and track multiple vehicles. The SDCS process can be divided into three successive phases. The first is the object detection phase, which uses a hybrid algorithm combining an adaptive background subtraction technique with a three-frame differencing algorithm, rectifying the major drawback of using adaptive background subtraction alone. The second is the object tracking phase, which consists of three successive operations: object segmentation, object labeling, and object center extraction. Tracking takes into consideration the different possible scenarios of a moving object: simple tracking, the object leaving the scene, the object entering the scene, the object being crossed by another object, and one object leaving as another enters the scene. The third is the speed calculation phase, in which speed is calculated from the number of frames the object consumes to pass through the scene.
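
The phase-one hybrid detector can be sketched in a few lines of Python with OpenCV: an adaptive background subtractor is OR-combined with a three-frame difference mask. The file name and threshold values are assumptions for illustration.

```python
# Hybrid foreground detection: adaptive background subtraction OR'd with
# three-frame differencing (input and thresholds are illustrative).
import cv2

cap = cv2.VideoCapture("traffic.mp4")              # hypothetical input clip
bg = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
ok, f0 = cap.read()
ok, f1 = cap.read()

while True:
    ok, f2 = cap.read()
    if not ok:
        break
    g0, g1, g2 = (cv2.cvtColor(f, cv2.COLOR_BGR2GRAY) for f in (f0, f1, f2))
    # three-frame differencing: a moving pixel differs from both neighbours
    diff = cv2.bitwise_and(cv2.absdiff(g1, g0), cv2.absdiff(g2, g1))
    _, diff = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    mask = cv2.bitwise_or(bg.apply(f2), diff)      # hybrid foreground mask
    # downstream: segment/label blobs, track centers, and convert the frame
    # count an object spends crossing a known distance into speed.
    f0, f1 = f1, f2
```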

Keywords: radar, image processing, detection, tracking, segmentation

Procedia PDF Downloads 464
25336 An Approximation of Daily Rainfall by Using a Pixel Value Data Approach

Authors: Sarisa Pinkham, Kanyarat Bussaban

Abstract:

This research aims to approximate the amount of daily rainfall by using a pixel value data approach. The daily rainfall maps from the Thailand Meteorological Department for the period January to December 2013 were the data used in this study. The results showed that this approach can approximate the amount of daily rainfall with an RMSE of 3.343.
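
The approach reduces to two steps: translate each map pixel's colour into a rainfall amount via the map legend, then score the result against gauge observations with the root mean square error. The Python sketch below uses invented legend colours and readings purely to show the mechanics.

```python
# Nearest-legend-colour lookup plus RMSE scoring (all values illustrative).
import numpy as np

legend = {(200, 200, 255): 5.0,    # BGR colour -> rainfall in mm
          (100, 100, 255): 20.0,
          (0, 0, 255): 50.0}
colors = np.array(list(legend.keys()), dtype=float)
values = np.array(list(legend.values()))

def pixel_to_rain(px):
    return values[np.argmin(np.linalg.norm(colors - px, axis=1))]

estimated = np.array([pixel_to_rain(np.array([90, 105, 250], dtype=float))])
observed = np.array([18.2])                        # hypothetical gauge value
print("RMSE =", float(np.sqrt(np.mean((estimated - observed) ** 2))))
```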

Keywords: daily rainfall, image processing, approximation, pixel value data

Procedia PDF Downloads 383
25335 Implementation of Traffic Engineering Using MPLS Technology

Authors: Vishal H. Shukla, Sanjay B. Deshmukh

Abstract:

Traffic engineering, at its core, is the ability to steer traffic so that traffic from a congested link is moved onto unused capacity on another link; it ensures the best possible use of the resources. To support traffic engineering in today's networks, Multiprotocol Label Switching (MPLS) is used, which is very helpful for reliable packet delivery in ongoing internet services. Here, a topology is implemented in GNS3 to analyze the communication taking place from one site to another through the ISP. A comparison is made between the IP network and the MPLS network based on bandwidth and jitter, two of the key performance parameters, using the JPERF simulator.
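
Of the two performance parameters, jitter is the less self-explanatory. JPERF-style UDP tests typically report the RFC 3550 running jitter estimate, sketched below in Python with invented timestamps.

```python
# RFC 3550 interarrival jitter: a smoothed average of transit-time variation.
def rfc3550_jitter(send_times, recv_times):
    j, prev_transit = 0.0, None
    for s, r in zip(send_times, recv_times):
        transit = r - s
        if prev_transit is not None:
            j += (abs(transit - prev_transit) - j) / 16.0
        prev_transit = transit
    return j

send = [0.00, 0.02, 0.04, 0.06, 0.08]              # packet send times (s)
recv = [0.010, 0.031, 0.049, 0.074, 0.090]         # arrivals with queueing delay
print(f"jitter ~= {rfc3550_jitter(send, recv) * 1000:.2f} ms")
```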

Keywords: GNS3, JPERF, MPLS, traffic engineering, VMware

Procedia PDF Downloads 482
25334 A Next-Generation Blockchain-Based Data Platform: Leveraging Decentralized Storage and Layer 2 Scaling for Secure Data Management

Authors: Kenneth Harper

Abstract:

The rapid growth of data-driven decision-making across various industries necessitates advanced solutions to ensure data integrity, scalability, and security. This study introduces a decentralized data platform built on blockchain technology to improve data management processes in high-volume environments such as healthcare and financial services. The platform integrates blockchain networks using Cosmos SDK and Polkadot Substrate alongside decentralized storage solutions like IPFS and Filecoin, coupled with a decentralized computing infrastructure built on top of Avalanche. By leveraging advanced consensus mechanisms, we create a scalable, tamper-proof architecture that supports both structured and unstructured data. Key features include secure data ingestion, cryptographic hashing for robust data lineage, and zero-knowledge proof mechanisms that enhance privacy while ensuring compliance with regulatory standards. Additionally, we implement performance optimizations through Layer 2 scaling solutions, including ZK-Rollups, which provide low-latency data access and trustless data verification across a distributed ledger. The findings demonstrate significant improvements in data accessibility, reduced operational costs, and enhanced data integrity when tested in real-world scenarios. This platform reference architecture offers a decentralized alternative to traditional centralized data storage models, providing scalability, security, and operational efficiency.
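
Of the listed features, cryptographic hashing for data lineage is the most compact to illustrate. In the Python sketch below, each lineage record commits to its payload and to the hash of its predecessor, so altering any historical record invalidates every later hash; the record layout is an assumption, since the abstract does not specify one.

```python
# Hash-chained data lineage: tampering anywhere breaks every later link.
import hashlib, json

def link(prev_hash: str, payload: dict) -> dict:
    body = json.dumps(payload, sort_keys=True)
    digest = hashlib.sha256((prev_hash + body).encode()).hexdigest()
    return {"prev": prev_hash, "payload": payload, "hash": digest}

genesis = link("0" * 64, {"event": "dataset registered", "cid": "Qm..."})
update = link(genesis["hash"], {"event": "schema updated", "cid": "Qm..."})
print(update["hash"])   # auditing lineage = recomputing hashes down the chain
```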

Keywords: blockchain, cosmos SDK, decentralized data platform, IPFS, ZK-Rollups

Procedia PDF Downloads 16
25333 The Effect of Damper Attachment on Tennis Racket Vibration: A Simulation Study

Authors: Kuangyou B. Cheng

Abstract:

Tennis is among the most popular sports worldwide. During ball-racket impact, the substantial vibration transmitted to the hand and arm may be a cause of 'tennis elbow'. Although it is common for athletes to attach a 'vibration damper' to the string-bed, its effect remains unclear. To avoid subjective factors and errors in data recording, the effect of damper attachment on vibration at the handle end of the racket was investigated with computer simulation. The tennis racket was modeled as a beam with free-free ends (similar to loosely holding the racket), and the finite difference method with 40 segments was used to simulate the ball-racket impact response. Attaching a damper was modeled by increasing the mass of one segment. It was found that the damper has the largest effect when installed at the center of the string-bed; however, this is not a practical location because it interferes with ball-racket impact. Vibration amplitude changed very slightly when the damper was near the top or bottom of the string-bed, and the damper worked only slightly better at the bottom than at the top. In addition, heavier dampers work better than lighter ones. These simulation results were comparable with experimental recordings in which the selection of damper locations was restricted by ball impact locations. It was concluded that mathematical model simulations can objectively investigate the effect of damper attachment on racket vibration. Moreover, given the very slight difference in grip-end vibration amplitude between a damper attached at the top and one at the bottom of the string-bed, whether the effect can really be felt by athletes is questionable.
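
As a much-simplified stand-in for the free-free beam model (a lumped mass-spring chain rather than the Euler-Bernoulli bending model, with arbitrary parameter values), the Python sketch below reproduces the experiment's logic: the damper is modeled by adding mass to one of 40 segments, and vibration is compared at the handle end.

```python
# Simplified free-free chain: damper = extra mass on one segment.
# Rigid-body drift is removed by measuring displacement relative to the
# center of mass, so only vibration amplitude is compared.
import numpy as np

N, k, dt, steps = 40, 5e4, 1e-5, 20000

def handle_rms(damper_node=None, damper_mass=0.005):
    m = np.full(N, 0.3 / N)            # ~300 g racket over 40 segments
    if damper_node is not None:
        m[damper_node] += damper_mass
    u, v = np.zeros(N), np.zeros(N)
    v[N // 2] = 1.0                    # impact-like initial velocity
    rec = []
    for _ in range(steps):
        f = np.zeros(N)
        s = k * (u[1:] - u[:-1])       # spring forces between neighbours
        f[:-1] += s
        f[1:] -= s
        v += dt * f / m
        u += dt * v
        rec.append(u[0] - np.average(u, weights=m))  # handle end vs. CoM
    return np.sqrt(np.mean(np.square(rec)))

for label, node in [("no damper", None), ("center", N // 2), ("near end", 4)]:
    print(f"{label:9s}: handle-end RMS = {handle_rms(node):.4f}")
```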

Keywords: finite difference, impact, modeling, vibration amplitude

Procedia PDF Downloads 258
25332 Structural Performance Evaluation of Electronic Road Sign Panels Reflecting Damage Scenarios

Authors: Junwon Seo, Bipin Adhikari, Euiseok Jeong

Abstract:

This paper evaluates the structural performance of welded electronic road signs under various damage scenarios (DSs) using a finite element (FE) model calibrated with full-scale ultimate load testing results. The tested electronic road sign specimen was built with a back skin made of 5052 aluminum and two channels and a frame made of 6061 aluminum, the back skin being connected to the frame by welding. The tested specimen was 1.52 m long, 1.43 m wide, and 0.28 m deep. An actuator applied vertical loads at the center of the back skin of the specimen, producing a displacement of 158.7 mm at an ultimate load of 153.46 kN. Using these testing data, an FE model of the tested specimen was generated and calibrated in ABAQUS; the difference in ultimate load between the calibrated model simulation and the full-scale testing was only 3.32%. Six different DSs were then simulated by diminishing the areas of the welded connection in the calibrated model. It was found that the corners of the back skin-frame joint were prone to connection failure in all the DSs, and that failure of the back skin-frame connection initiated markedly at the distant edges.

Keywords: computational analysis, damage scenarios, electronic road signs, finite element, welded connections

Procedia PDF Downloads 90
25331 The Effect of Measurement Distribution on System Identification and Detection of Behavior of Nonlinearities of Data

Authors: Mohammad Javad Mollakazemi, Farhad Asadi, Aref Ghafouri

Abstract:

In this paper, we consider and apply parametric modeling to experimental data from a dynamical system. We investigate different distributions of output measurements from several dynamical systems. By variance processing of the experimental data, we obtain the region of nonlinearity in the data, and identification of the output section is then applied under different situations and data distributions. Finally, the effect of the spread of the measurements, such as the variance, on identification, and the limitations of this approach, are explained.
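
One concrete way to realize the variance processing described here (an assumed reconstruction, since the abstract gives no formulas) is to fit a linear model globally and flag the input region where the windowed variance of the residuals inflates, as in the Python sketch below.

```python
# Locating a nonlinearity region via windowed residual variance.
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0, 10, 400)
y = 2 * x + np.where(x > 6, 0.8 * (x - 6) ** 2, 0) + rng.normal(0, 0.3, x.size)

resid = y - np.polyval(np.polyfit(x, y, 1), x)     # residuals of a linear fit
w = 40
var = np.array([resid[i:i + w].var() for i in range(x.size - w)])
baseline = var[:50].mean()                          # variance where model fits
onset = x[np.argmax(var > 3 * baseline)]            # first inflated window
print(f"nonlinearity region begins near x = {onset:.1f}")
```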

Keywords: Gaussian process, nonlinearity distribution, particle filter, system identification

Procedia PDF Downloads 509
25330 Prognosis, Clinical Outcomes and Short Term Survival Analyses of Patients with Cutaneous Melanomas

Authors: Osama Shakeel

Abstract:

The objective of this paper is to study the clinicopathological factors, survival, recurrence rate, metastatic rate, risk factors, and management of cutaneous malignant melanoma at Shaukat Khanum Memorial Cancer Hospital and Research Center. Methodology: From 2014 to 2017, all patients with a diagnosis of cutaneous malignant melanoma (CMM) were included in the study. Demographic variables were collected, and short- and long-term oncological outcomes were recorded. All data were entered and analyzed in SPSS version 21. Results: A total of 28 patients were included. Median age was 46.5 +/- 15.9 years. There were 16 male and 12 female patients. A family history of melanoma was present in 7.1% (n=2) of the patients. Mean survival was 13.43 +/- 9.09 months. The lower limb was the commonest site, accounting for 46.4% (n=13). On histopathological analysis, ulceration was seen in 53.6% (n=15) of patients. Unclassified tumor type was present in 75% (n=21) of the patients, followed by nodular in 21.4% (n=6) and superficial spreading in 3.5% (n=1). Clark level IV was the commonest presentation, constituting 46.4% (n=13). Metastases were seen in 50% (n=14) of the patients, and local recurrence was observed in 60.7% (n=17). 64.3% (n=18) were alive one year after treatment. Conclusion: CMM is a fatal disease. Although it is predominantly a disease of fair-skinned individuals, the incidence of CMM is also rising in this part of the world. Management rests on early diagnosis and prompt treatment; however, the mortality associated with this disease remains unfavorable.

Keywords: malignant cancer of skin, cutaneous malignant melanoma, skin cancer, survival analyses

Procedia PDF Downloads 165
25329 Building a Scalable Telemetry Based Multiclass Predictive Maintenance Model in R

Authors: Jaya Mathew

Abstract:

Many organizations are faced with the challenge of how to analyze and build machine learning models using their sensitive telemetry data. In this paper, we discuss how users can leverage the power of R without having to move their big data around, as well as a cloud-based solution for organizations willing to host their data in the cloud. By using ScaleR technology to benefit from parallelization and remote computing, with R Services on premises or in the cloud, users can leverage the power of R at scale without having to move their data.

Keywords: predictive maintenance, machine learning, big data, cloud based, on-premises solution, R

Procedia PDF Downloads 371
25328 Trusting the Big Data Analytics Process from the Perspective of Different Stakeholders

Authors: Sven Gehrke, Johannes Ruhland

Abstract:

Data is the oil of our time; without it, progress would come to a halt [1]. On the other hand, mistrust of data mining is increasing [2]. The paper at hand examines different aspects of the concept of trust and describes the information asymmetry between the typical stakeholders of a data mining project using the CRISP-DM phase model. Based on the identified influencing factors in relation to trust, problematic aspects of the current approach are verified in interviews with the stakeholders. The results of the interviews confirm the theoretically identified weak points of the phase model with regard to trust and point to potential research areas.

Keywords: trust, data mining, CRISP DM, stakeholder management

Procedia PDF Downloads 90
25327 Food Irradiation in the Third Sector: Development and Validation of a Questionnaire as a Standard Measuring Instrument for Evaluation of Acceptance and Sensory Analysis of Irradiated Foods

Authors: Juliana Sagretti, Susy Sabato

Abstract:

Despite poverty in the world, a third of all food produced worldwide is wasted. FAO, the Food and Agriculture Organization of the United Nations, points out the need to combine actions and new technologies to combat hunger and waste, in contrast to the world's high food production. Ionizing radiation applied to food has brought many positive results, such as longer shelf life and control of insect infestation. Food banks are organizations that act at various points of the food chain to collect and distribute food to the needy. The aim of this study was therefore to initiate a partnership between irradiation and the food bank through the development of a questionnaire to evaluate and disseminate knowledge and acceptance of irradiated food among individuals at a food bank in Brazil. In addition, this study aimed to standardize a base questionnaire for future research assessing irradiated foods. For the construction of the questionnaire as a measuring instrument, a comprehensive and rigorous literature review was made, covering qualitative research, questionnaires, sensory evaluation, and food irradiation. Three stages of pre-tests were necessary, and experts from related fields were consulted. As a result, the questionnaire has three parts (personal questions, assertive statements, and multiple-choice questions), plus a final informative question. The questionnaire was applied at the CEAGESP food bank, the biggest food center in Brazil (data not shown).

Keywords: food bank, food irradiation, food waste, sustainability

Procedia PDF Downloads 324
25326 Wireless Transmission of Big Data Using Novel Secure Algorithm

Authors: K. Thiagarajan, K. Saranya, A. Veeraiah, B. Sudha

Abstract:

This paper presents a novel algorithm for the secure, reliable, and flexible transmission of big data in two-hop wireless networks using a cooperative jamming scheme. Two-hop wireless networks consist of source, relay, and destination nodes. Big data has to be transmitted from source to relay and from relay to destination, with security deployed at the physical layer. The cooperative jamming scheme makes the transmission of big data more secure by protecting it from eavesdroppers and malicious nodes at unknown locations. The novel algorithm, which ensures secure and energy-balanced transmission of big data, includes selecting the data transmission region, segmenting the selected region, determining a probability ratio for each node (capture node, non-capture node, and eavesdropper node) in every segment, and evaluating the probability using a binary evaluation. If the transmission is secure, the two-hop transmission of big data resumes; otherwise, the attackers are countered by the cooperative jamming scheme and the data is then transmitted in two-hop transmission.
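
Since the abstract gives no formulas, the decision logic can only be illustrated under assumptions: the Python sketch below treats the probability ratio as the share of capture nodes among capture and eavesdropper nodes per segment, with a binary threshold test, and jams whenever any segment fails.

```python
# Illustrative per-segment security check (ratio and threshold are assumed).
def segment_secure(capture, non_capture, eavesdroppers, threshold=0.8):
    ratio = capture / max(capture + eavesdroppers, 1)
    return ratio >= threshold            # binary evaluation

segments = [
    {"capture": 9, "non_capture": 3, "eavesdroppers": 1},
    {"capture": 7, "non_capture": 2, "eavesdroppers": 0},
]
if all(segment_secure(**s) for s in segments):
    print("secure: resume two-hop transmission of the big data")
else:
    print("insecure: activate cooperative jamming, then retransmit")
```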

Keywords: big data, two-hop transmission, physical layer wireless security, cooperative jamming, energy balance

Procedia PDF Downloads 484
25325 Improving the Quality of Higher Education for Students with Disability in Universities of Pakistan

Authors: Nasir Sulman

Abstract:

In Pakistan, the inclusion of persons with disabilities in higher education institutions has increased significantly with every passing year, and a sizeable number of these students can be observed in each faculty. This study conducts a baseline survey measuring faculty understanding of the special needs and experiences of students with disabilities and the support provided by university administration for teaching these students effectively. The researcher used mixed methods, and the University of Karachi was selected through a non-probability sampling method. This university is one of the largest in Pakistan, with more than 40,000 students enrolled. Data were gathered through a questionnaire and focus group discussions from three stakeholder groups: students with disabilities, faculty members, and members of the university administration. The key findings show that students with disabilities experience a number of problems related to the accommodation of their special needs. The most encouraging factors identified, however, are the attitude, support, and motivation they received from various faculty members and the university administration. On the basis of the findings, the researcher prepared a faculty guidebook and established a 'Model Learning Assistance Centre for Students with Disabilities' in the Department of Special Education, University of Karachi. Both efforts will help improve support services for students with disabilities and strengthen the existing laws, policies, and practices in institutions of higher education.

Keywords: persons with disabilities, higher education, learning assistance center, faculty guidebook

Procedia PDF Downloads 147
25324 Migrant and Population Health, Two Sides of a Coin: A Descriptive Study

Authors: A. Sottomayor, M. Perez Duque, M. C. Henriques

Abstract:

Introduction: Migration is not a new phenomenon; nomads often traveled seeking better living conditions, including food and water. The increase in migration affects all countries, raising health-related challenges. In Portugal, migrant movements over recent decades have tracked economic conditions. Irregular immigrants are detained in the Santo António detention center of the Portuguese Immigration and Borders Service (UHSA-SEF) in Porto until a court decision, for a maximum of 60 days; it is the only officially designated long-stay detention center for immigrants in Portugal. Immigrant health is important for public health (PH): it affects and is affected by the community. The XXVII Portuguese Government considered immigrant integration, including access to health, health promotion, protection, and the reduction of inequities, a political priority. Many curative, psychological, and legal services are provided for detainees, but until 2015 no structured health promotion or prevention actions were being held at UHSA-SEF. That year, the Porto Occidental PH Local Unit started to provide vaccination and health literacy on this theme for detainees and SEF workers. Our activities include a vaccine lecture and a medical consultation with vaccine prescription and administration, along with documented proof of vaccination. All vaccines are voluntary and free of charge. This action reduces the risk of importation and transmission of diseases, contributing to global eradication and elimination programs. We aimed to characterize the demography of irregular immigrants detained at UHSA-SEF and describe our activity. Methods: All data were provided by the Porto Occidental Public Health Unit. All paper registers of vaccination were uploaded to Microsoft Excel. We included all registers and collected demographic variables, nationality, vaccination date, category, and administered vaccines. Descriptive analysis was performed using Microsoft Excel. Results: From 2015 to 2018, we delivered care to 256 individuals (179 immigrants; 77 workers). Among the immigrants, 72% were male, and 8 (16%) of the women were pregnant. 85% were between 20 and 54 years old (mean = 30.8 y; range 2-71 y), and 11 did not report an age. Migrants came from 48 countries, with India contributing the highest number (9%). MMR and tetanus vaccines had vaccination rates above 90%, while poliomyelitis, hepatitis B, and flu vaccines had rates of around 85%. We had a consistent number of refusals. Conclusion: Our irregular migrant population comes from many different countries, which increases the risk of disease importation. Pregnant women constitute a particular subset of irregular migrants, and vaccination protects both them and their babies. Vaccination of migrants is valuable for them and for the countries through which they pass; it contributes to universal health coverage, to eradication programmes, and to the accomplishment of the Sustainable Development Goals. Peer influence may be a determinant of refusals, so we must consistently educate migrants before vaccination. Further studies would be valuable, particularly on migrant trajectories, duration of stay, destination after the court decision, and health impact.

Keywords: migrants, public health, universal health coverage, vaccination

Procedia PDF Downloads 121
25323 One Step Further: Pull-Process-Push Data Processing

Authors: Romeo Botes, Imelda Smit

Abstract:

In today's age of technology, vast amounts of data need to be processed in real time to keep users satisfied. These data come from various sources and in many formats, including electronic and mobile devices such as GPRS modems and GPS devices, which use different protocols, including TCP, UDP, and HTTP/S, for data communication to web servers and eventually to users. The data obtained from these devices may provide valuable information to users but are mostly in an unreadable format, which needs to be processed to yield information and business intelligence. These data are not always current; they are mostly historical, and they are not subject to the consistency and redundancy measures that most other data usually are. Most important to the users is that the data be pre-processed into a readable format when entered into the database. To accomplish this, programmers build processing programs and scripts to decode and process the information stored in databases. Programmers use various techniques in such programs, but sometimes neglect the effect some of these techniques may have on database performance. One technique generally used is to pull data from the database server, process it, and push it back to the database server in one single step. Since processing the data usually takes some time, this keeps the database busy and locked while the processing takes place, decreasing the overall performance of the database server and therefore of the system. This paper follows on from a paper discussing the performance increase that may be achieved by utilizing array lists along with a pull-process-push data processing technique split into three steps. The purpose of this paper is to expand the number of clients when comparing the two techniques, to establish the impact this may have on CPU usage, storage, and processing time.
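
The contrast between the two techniques can be made concrete with Python's standard sqlite3 module: in the three-step variant, the read finishes and the table is released before any CPU-bound processing begins, and results return in one short write transaction. Table and column names are illustrative.

```python
# Pull-process-push in three separate steps (illustrative schema).
import sqlite3

conn = sqlite3.connect("telemetry.db")

# Step 1: pull - one quick read into an in-memory list, then the DB is free.
rows = conn.execute("SELECT id, raw_payload FROM readings").fetchall()

# Step 2: process - decoding happens outside any database lock.
decoded = [(payload.strip().upper(), rid) for rid, payload in rows]

# Step 3: push - a single batched write transaction.
with conn:
    conn.executemany("UPDATE readings SET decoded = ? WHERE id = ?", decoded)
conn.close()
```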

Keywords: performance measures, algorithm techniques, data processing, push data, process data, array list

Procedia PDF Downloads 238