Search results for: continuous data
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 26741

23321 Estimation of the Upper Tail Dependence Coefficient for Insurance Loss Data Using an Empirical Copula-Based Approach

Authors: Adrian O'Hagan, Robert McLoughlin

Abstract:

Considerable focus in the world of insurance risk quantification is placed on modeling loss values from lines of business (LOBs) that possess upper tail dependence. Copulas such as the Joe, Gumbel and Student-t copula may be used for this purpose. The copula structure imparts a desired level of tail dependence on the joint distribution of claims from the different LOBs. Alternatively, practitioners may possess historical or simulated data that already exhibit upper tail dependence, through the impact of catastrophe events such as hurricanes or earthquakes. In these circumstances, it is not desirable to induce additional upper tail dependence when modeling the joint distribution of the loss values from the individual LOBs. Instead, it is of interest to accurately assess the degree of tail dependence already present in the data. The empirical copula and its associated upper tail dependence coefficient are presented in this paper as robust, efficient means of achieving this goal.
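
As a rough illustration of the estimator discussed above, the sketch below computes an empirical upper tail dependence coefficient from paired loss observations via rank-based pseudo-observations; the threshold u and the simulated losses are illustrative assumptions, not the paper's data or its exact estimator.

```python
import numpy as np

def empirical_upper_tail_dependence(x, y, u=0.95):
    """Estimate lambda_U = lim_{u->1} P(V > u | U > u) from paired losses.

    x, y : arrays of loss values from two lines of business.
    u    : quantile threshold close to 1 (a tuning choice).
    """
    n = len(x)
    # Pseudo-observations: empirical ranks scaled into (0, 1)
    ranks_x = np.argsort(np.argsort(x)) + 1
    ranks_y = np.argsort(np.argsort(y)) + 1
    u_x, u_y = ranks_x / (n + 1), ranks_y / (n + 1)
    # Conditional joint exceedance probability at threshold u
    joint = np.mean((u_x > u) & (u_y > u))
    return joint / (1.0 - u)

# Example with simulated, positively dependent losses (illustrative only)
rng = np.random.default_rng(0)
z = rng.exponential(size=5000)
losses_a = z + rng.exponential(size=5000)
losses_b = z + rng.exponential(size=5000)
print(empirical_upper_tail_dependence(losses_a, losses_b, u=0.95))
```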

Keywords: empirical copula, extreme events, insurance loss reserving, upper tail dependence coefficient

Procedia PDF Downloads 284
23320 Implementing a Strategy of Reliability Centred Maintenance (RCM) in the Libyan Cement Industry

Authors: Khalid M. Albarkoly, Kenneth S. Park

Abstract:

The substantial development of the construction industry has forced the cement industry, its major support, to focus on achieving maximum productivity to meet the growing demand for this material. Statistics indicate that the demand for cement rose from 1.6 billion metric tons (bmt) in 2000 to 4 bmt in 2013. This means that the reliability of a production system needs to be at the highest level that can be achieved by good maintenance. This paper studies the extent to which the implementation of RCM is needed as a strategy for increasing the reliability of production system components, thus ensuring continuous productivity. In a case study of four Libyan cement factories, 80 employees were surveyed and 12 top and middle managers interviewed. It is evident that these factories usually break down more often than once per month, which has led to a decline in productivity; they cannot produce more than 50% of their designed capacity. This has resulted from the poor reliability of their production systems, caused by poor or insufficient maintenance. It has been found that most of the factories' employees misunderstand maintenance and its importance. The main cause of this problem is the lack of qualified and trained staff; in addition, most employees are not motivated, owing to a lack of management support and interest. In response to these findings, it is suggested that the RCM strategy should be implemented in the four factories. The paper shows the importance of considering the development of maintenance strategies through the implementation of RCM in these factories, the purpose being to overcome the problems that reduce the reliability of the production systems. This study could be a useful source of information for academic researchers and for industrial organisations that are still experiencing problems in maintenance practices.

Keywords: Libyan cement industry, reliability centred maintenance, maintenance, production, reliability

Procedia PDF Downloads 390
23319 Blockchain in Saudi E-Government: A Systematic Literature Review

Authors: Haitham Assiri, Priyadarsi Nanda

Abstract:

The world is gradually entering the fourth industrial revolution. E-Government services are scaling government operations across the globe. However, as promising as an e-Government system may be, it is also susceptible to malicious attacks if not properly secured. This study found that, in Saudi Arabia, the e-Government portal, Yesser, is vulnerable to external attacks, which can lead to a breach of data integrity and privacy. In this paper, a Systematic Literature Review (SLR) was conducted to explore possible ways the Kingdom of Saudi Arabia can take the necessary measures to strengthen its e-Government system using blockchain. Blockchain is one of the emerging technologies shaping the world through its applications in finance, elections, healthcare, etc. It secures systems and brings more transparency. A total of 28 papers were selected for this SLR, and 19 of them significantly showed that blockchain could enhance the security and privacy of Saudi Arabia's e-government system. Other papers also concluded that blockchain is effective, albeit with the integration of other technologies such as IoT, AI, and big data. These papers have been analysed to sift out the findings and set the stage for future research into the subject.

Keywords: blockchain, data integrity, e-government, security threats

Procedia PDF Downloads 250
23318 Geospatial Information for Smart City Development

Authors: Simangele Dlamini

Abstract:

Smart city development is seen as a way of facing the challenges brought about by the growing urban population the world over. Research indicates that cities have a role to play in combating urban challenges such as crime, waste disposal, and greenhouse gas emissions, and in improving resource efficiency. Such solutions should not make city management less sustainable; they should be solutions-driven, cost- and resource-efficient, and smart. This study explores how the City of Johannesburg, South Africa, can use Geographic Information Systems (GIS), Big Data, and the Internet of Things (IoT) to identify opportune areas in which to initiate smart city initiatives such as smart safety, smart utilities, smart mobility, and smart infrastructure in an integrated manner. The study will combine Big Data from real-time sources to identify hotspot areas that will benefit from ICT interventions. The GIS intervention will assist the city in avoiding a silo approach in its smart city development initiatives, an approach that has led to the failure of smart city development in other countries.

Keywords: smart cities, internet of things, geographic information systems, Johannesburg

Procedia PDF Downloads 149
23317 Language Errors Used in “The Space between Us” Movie and Their Effects on Translation Quality: Translation Study toward Discourse Analysis Approach

Authors: Mochamad Nuruz Zaman, Mangatur Rudolf Nababan, M. A. Djatmika

Abstract:

Both society and education teach good communication in order to build interpersonal skills. Everyone has the capacity to understand something new, whether with good comprehension or poor understanding. Poor understanding gives rise to language errors when people interact for the first time without knowing each other beforehand, as happens across distant places. The movie “The Space between Us” tells a love-adventure story between a boy from Mars and a girl from Earth. There are many miscommunications between them because of their different climates and environments, and moviegoers must also focus on the subtitles in order to enjoy the movie fully. Furthermore, the Indonesian subtitles and the English dialogue in the movie still show overlapping understanding in the translation. Translation here consists of the source language (SL, the English dialogue) and the target language (TL, the Indonesian subtitles). This research gap is formulated in the research questions of how the language errors occur in the movie and how they affect translation quality, analyzed in depth through a translation study with a discourse analysis approach. The research goal is to describe the language errors and their translation quality in order to create a good atmosphere in movie media. The research is designed as embedded research in a qualitative design. The research locus consists of setting, participants, and events as the focused, determined boundary. The sources of data are “The Space between Us” movie and informants (translation quality raters). The sampling is criterion-based (purposive) sampling. Data collection techniques use content analysis and questionnaires. Data validation applies data source and method triangulation. Data analysis covers domain, taxonomy, componential, and cultural theme analysis. The language errors found in the movie are referential, register, social, textual, receptive, expressive, individual, group, analogical, transfer, local, and global errors. The discussion of their effects on translation quality is organized by the translation techniques applied to these findings: amplification, borrowing, description, discursive creation, established equivalent, generalization, literal translation, modulation, particularization, reduction, substitution, and transposition.

Keywords: discourse analysis, language errors, The Space between Us movie, translation techniques, translation quality instruments

Procedia PDF Downloads 219
23316 A Coupling Study of Public Service Facilities and Land Price Based on Big Data Perspective in Wuxi City

Authors: Sisi Xia, Dezhuan Tao, Junyan Yang, Weiting Xiong

Abstract:

Under the background of Chinese urbanization shifting from incremental development to stock development, the completeness of urban public service facilities is essential to urban spatial quality. As public service facilities form a huge and complicated system, clarifying the internal rules by which the various facility types are associated with land market prices is key to optimizing the spatial layout. This paper takes Wuxi City as a representative sample location and establishes a digital analysis platform using urban land price data and several high-precision big data acquisition methods. On this basis, it analyzes the coupling relationship between different public service categories and land price, summarizing the coupling patterns between the distribution of urban public facilities and urban land price fluctuations. Finally, the internal mechanism linking the two elements is explored, providing a reference for the optimal layout of urban planning and public service facilities.

Keywords: public service facilities, land price, urban spatial morphology, big data

Procedia PDF Downloads 215
23315 Structural Damage Detection Using Modal Data Employing Teaching Learning Based Optimization

Authors: Subhajit Das, Nirjhar Dhang

Abstract:

Structural damage detection is a challenging task in the field of structural health monitoring (SHM). Damage detection methods mainly focus on determining the location and severity of the damage. Model updating is a well-known method to locate and quantify damage. In this method, an error function is defined in terms of the difference between the signal measured in the ‘experiment’ and the signal obtained from the undamaged finite element model. This error function is minimised with a proper algorithm, and the finite element model is updated accordingly to match the measured response. Thus, the damage location and severity can be identified from the updated model. In this paper, an error function is defined in terms of modal data, viz. frequencies and the modal assurance criterion (MAC), which is derived from eigenvectors. This error function is minimised by the teaching-learning-based optimization (TLBO) algorithm, and the finite element model is updated accordingly to locate and quantify the damage. Damage is introduced into the model by reducing the stiffness of a structural member. The ‘experimental’ data are simulated by finite element modelling, and the error due to experimental measurement is introduced into the synthetic ‘experimental’ data by adding random noise following a Gaussian distribution. The efficiency and robustness of the method are demonstrated through three examples: a truss, a beam, and a frame problem. The results show that the TLBO algorithm is efficient in detecting the damage location as well as the severity of damage using modal data.
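
A minimal sketch of the kind of modal-data error function described above is given below, combining relative frequency errors with (1 − MAC) terms; the weights, the fe_model interface, and the names are hypothetical, and the TLBO optimizer itself is not reproduced here.

```python
import numpy as np

def mac(phi_exp, phi_num):
    """Modal Assurance Criterion between two mode-shape vectors."""
    num = np.abs(phi_exp @ phi_num) ** 2
    den = (phi_exp @ phi_exp) * (phi_num @ phi_num)
    return num / den

def model_updating_error(stiffness_factors, measured_freqs, measured_shapes, fe_model):
    """Error between 'experimental' modal data and the FE model updated with
    candidate stiffness-reduction factors (to be minimized by TLBO or any optimizer).

    fe_model(stiffness_factors) is assumed to return (frequencies, mode_shapes).
    The unit weights on the two terms are illustrative choices.
    """
    freqs, shapes = fe_model(stiffness_factors)
    freq_term = np.sum(((measured_freqs - freqs) / measured_freqs) ** 2)
    mac_term = np.sum([1.0 - mac(measured_shapes[:, i], shapes[:, i])
                       for i in range(shapes.shape[1])])
    return 1.0 * freq_term + 1.0 * mac_term
```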

Keywords: damage detection, finite element model updating, modal assurance criteria, structural health monitoring, teaching learning based optimization

Procedia PDF Downloads 215
23314 Visco - Plastic Transition and Transfer of Plastic Material with SGF in case of Linear Dry Friction Contact on Steel Surfaces

Authors: Lucian Capitanu, Virgil Florescu

Abstract:

Modeling specific tribological processes for laboratory studies often raises special problems. One such problem is reproducing the extremely high temperatures and contact pressures at which the injection or extrusion processing of thermoplastic materials takes place. Tribological problems occur mainly with thermoplastic materials reinforced with glass fibers, which produce severe wear of the barrels and screws of processing machines within a short time. Obtaining temperatures around 210 °C and higher, as well as pressures around 100 MPa, is very difficult in the laboratory. This paper reports a simple and convenient solution for reaching these conditions, using sliding friction couples with linear contact between cylindrical liners of glass-fiber-filled plastic and polished, super-finished flat steel samples. C120 steel, a steel for moulds, and Rp3 steel, a high-speed tool steel, were used. The pressure was obtained by continuously loading the liner in rotational movement up to its elastic limit, when the dry friction coefficient reaches or exceeds the hardness value of 0.5 HB. By dissipating the power lost through friction on the flat steel sample, contact temperatures at the metal surface reach and exceed 230 °C, placing them within the temperature range of injection processing. Contact pressures ranging from 16.3 to 36.4 MPa were obtained under the load and material conditions used, depending on the plastic material and its glass fiber content.

Keywords: plastics with glass fibers, dry friction, linear contact, contact temperature, contact pressure, experimental simulation

Procedia PDF Downloads 302
23313 Structure of the Working Time of Nurses in Emergency Departments in Polish Hospitals

Authors: Jadwiga Klukow, Anna Ksykiewicz-Dorota

Abstract:

An analysis of the distribution of nurses' working time constitutes vital information for management in planning employment. The objective of the study was to analyze the distribution of nurses' working time in an emergency department. The study was conducted in the emergency department of a teaching hospital in Lublin, in Southeast Poland. The catalogue of activities performed by nurses was compiled by means of continuous observation. Identified activities were classified into four groups: direct care, indirect care, coordination of work in the department, and personal activities. The distribution of nurses' working time was determined by work sampling observation (Tippett) at random intervals. The research project was approved by the Research Ethics Committee of the Medical University of Lublin (Protocol 0254/113/2010). On average, nurses spent 31% of their working time on direct care, 47% on indirect care, 12% on coordinating work in the department, and 10% on personal activities. The most frequently performed direct care tasks were diagnostic activities (29.23%) and treatment-related activities (27.69%). The study has provided information on the complexity of the activities performed and the utilization of nurses' working time. Enhancing the effectiveness of nursing actions requires working out a strategy for improved management of the time nurses spend at work. Increasing the involvement of auxiliary staff and optimizing communication processes within the team may reduce the time devoted to indirect care for the benefit of direct care.
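
For illustration, the sketch below shows how work-sampling observation counts translate into proportion estimates with normal-approximation 95% confidence intervals; the counts are hypothetical and only roughly echo the percentages reported above.

```python
import math

def work_sampling_estimate(category_counts, z=1.96):
    """Proportion of working time per activity category, with approximate 95% CIs,
    from random-interval work sampling observations."""
    n = sum(category_counts.values())
    results = {}
    for category, k in category_counts.items():
        p = k / n
        half_width = z * math.sqrt(p * (1 - p) / n)
        results[category] = (p, p - half_width, p + half_width)
    return results

# Hypothetical observation counts, roughly matching the reported shares
counts = {"direct care": 310, "indirect care": 470,
          "coordination": 120, "personal": 100}
for name, (p, lo, hi) in work_sampling_estimate(counts).items():
    print(f"{name}: {p:.1%} (95% CI {lo:.1%} to {hi:.1%})")
```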

Keywords: emergency nurses, nursing care, workload, work sampling

Procedia PDF Downloads 334
23312 Deployed Confidence: The Testing in Production

Authors: Shreya Asthana

Abstract:

Testers only know that a feature they tested on staging works perfectly in production after the release has gone live. Sometimes something breaks in production, and testers learn of it through a bug raised by an end user. Panic sets in when staging test results do not reflect current production behavior, and testers start doubting their skills when a user finally reports a bug. Testers can deploy their confidence on release day by testing in production. Once testing in production starts, test results become more accurate because tests run on real-time data, and execution is a little faster than on staging due to the elimination of bad data. Feature flagging, canary releases, and data cleanup can help achieve this testing technique. This paper explains the steps to achieve production testing before making a feature live and how to modify an IT company's testing procedure so that testers can provide a bug-free experience to end users. This study is beneficial because many people still think that testing should be done in staging but not in production, and it is high time to move them from this old testing mindset into a new testing world. At the end of the day, all that matters is whether the features work in production or not.
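
As a minimal sketch of the feature-flagging idea mentioned above (the names and rollout percentages are hypothetical, not a specific product's API), a deterministic hash bucket lets a small share of production users, or designated test accounts, exercise the new code path first:

```python
import hashlib

# Hypothetical flag configuration: share of production users who see the new feature.
FEATURE_FLAGS = {"new_checkout_flow": {"enabled": True, "rollout_percent": 5}}

def is_feature_enabled(flag_name: str, user_id: str) -> bool:
    """Deterministically bucket a user so the same user always gets the same variant."""
    flag = FEATURE_FLAGS.get(flag_name)
    if not flag or not flag["enabled"]:
        return False
    bucket = int(hashlib.sha256(f"{flag_name}:{user_id}".encode()).hexdigest(), 16) % 100
    return bucket < flag["rollout_percent"]

# Test accounts (or internal testers) can be routed to the canary first, verified
# against real production data, and the rollout percentage increased gradually.
print(is_feature_enabled("new_checkout_flow", "tester-001"))
```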

Keywords: bug free production, new testing mindset, testing strategy, testing approach

Procedia PDF Downloads 77
23311 Cross-Validation of the Data Obtained for ω-6 Linoleic and ω-3 α-Linolenic Acids Concentration of Hemp Oil Using Jackknife and Bootstrap Resampling

Authors: Vibha Devi, Shabina Khanam

Abstract:

Hemp (Cannabis sativa) possesses a rich content of ω-6 linoleic and ω-3 α-linolenic essential fatty acids in the ratio of 3:1, a rare and highly desired ratio that enhances the quality of hemp oil. These components are beneficial for cell development and body growth, strengthen the immune system, possess anti-inflammatory action, lower the risk of heart problems owing to their anti-clotting property, and serve as a remedy for arthritis and various disorders. The present study applies a supercritical fluid extraction (SFE) approach to hemp seed at various parameter settings: temperature (40–80 °C), pressure (200–350 bar), flow rate (5–15 g/min), particle size (0.430–1.015 mm), and amount of co-solvent (0–10% of the solvent flow rate), arranged through a central composite design (CCD). The CCD suggested 32 sets of experiments, which were carried out. As the SFE process includes a large number of variables, the present study recommends the application of resampling techniques for cross-validation of the obtained data. Cross-validation refits the model on each resampled dataset to obtain information regarding the error, variability, deviation, etc. Bootstrap and jackknife are the most popular resampling techniques; they create a large number of datasets by resampling from the original dataset and analyze them to check the validity of the obtained data. Jackknife resampling is based on eliminating one observation at a time from the original sample of size N, without replacement. For jackknife resampling, the sample size is therefore 31 (one observation eliminated), repeated 32 times. Bootstrap is a frequently used statistical approach for estimating the sampling distribution of an estimator by resampling with replacement from the original sample. For bootstrap resampling, the sample size is 32, repeated 100 times. The estimands considered for these resampling techniques are the mean, standard deviation, coefficient of variation, and standard error of the mean. For the ω-6 linoleic acid concentration, the mean value was approximately 58.5 for both resampling methods, which is the average (central value) of the sample means of all data points. Similarly, for the ω-3 α-linolenic acid concentration, a mean of 22.5 was observed with both resampling methods. Variance expresses the spread of the data around its mean; a greater variance indicates a larger range of output data, which is 18 for ω-6 linoleic acid (ranging from 48.85 to 63.66%) and 6 for ω-3 α-linolenic acid (ranging from 16.71 to 26.2%). Further, the low standard deviation (approximately 1%), low standard error of the mean (< 0.8), and low coefficient of variation (< 0.2) reflect the accuracy of the sample for prediction. All estimator values of the coefficient of variation, standard deviation, and standard error of the mean fall within the 95% confidence interval.
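
A minimal sketch of the two resampling schemes described above is given below, applied to a hypothetical stand-in for the 32 CCD runs (the real concentration values are not reproduced here).

```python
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical stand-in for the 32 CCD runs of omega-6 linoleic acid concentration (%)
sample = rng.normal(loc=58.5, scale=1.0, size=32)

def summarize(x):
    mean = x.mean()
    sd = x.std(ddof=1)
    return mean, sd, sd / mean, sd / np.sqrt(len(x))   # mean, SD, CV, SE of the mean

# Jackknife: leave one observation out (sample size 31), repeated 32 times
jackknife_means = np.array([np.delete(sample, i).mean() for i in range(len(sample))])

# Bootstrap: resample 32 values with replacement, repeated 100 times
bootstrap_means = np.array([rng.choice(sample, size=len(sample), replace=True).mean()
                            for _ in range(100)])

print("original sample (mean, SD, CV, SE):", summarize(sample))
print("jackknife mean of means:", jackknife_means.mean())
print("bootstrap mean of means:", bootstrap_means.mean())
```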

Keywords: resampling, supercritical fluid extraction, hemp oil, cross-validation

Procedia PDF Downloads 141
23310 Evaluation of Hydrocarbon Prospects of 'ADE' Field, Niger Delta

Authors: Oluseun A. Sanuade, Sanlinn I. Kaka, Adesoji O. Akanji, Olukole A. Akinbiyi

Abstract:

Prospect evaluation of the ‘ADE’ field was carried out using 3D seismic data and well log data. The field is located in the offshore Niger Delta, where the water depth ranges from 450 to 800 m. The objectives of this study are to explore deeper prospects and to ascertain the kinds of traps that are favorable for the accumulation of hydrocarbons in the field. Six horizons with major and minor faults were identified and mapped in the field. Time structure maps of these horizons were generated, and using the available check-shot data, the maps were converted to top structure maps, which were used to calculate the hydrocarbon volume. The results show that regional structural highs trending northeast-southwest (NE-SW) characterize a large portion of the field. These highs were observed across all horizons, revealing regional post-depositional deformation. Three prospects were identified and evaluated to understand the different opportunities in the field; these include stratigraphic pinch-out and bi-directional downlap. The results of this study show that the field has potential for new opportunities that could be explored in further studies.

Keywords: hydrocarbon, play, prospect, stratigraphy

Procedia PDF Downloads 270
23309 Enhancing Information Technologies with AI: Unlocking Efficiency, Scalability, and Innovation

Authors: Abdal-Hafeez Alhussein

Abstract:

Artificial Intelligence (AI) has become a transformative force in the field of information technologies, reshaping how data is processed, analyzed, and utilized across various domains. This paper explores the multifaceted applications of AI within information technology, focusing on three key areas: automation, scalability, and data-driven decision-making. We delve into how AI-powered automation is optimizing operational efficiency in IT infrastructures, from automated network management to self-healing systems that reduce downtime and enhance performance. Scalability, another critical aspect, is addressed through AI’s role in cloud computing and distributed systems, enabling the seamless handling of increasing data loads and user demands. Additionally, the paper highlights the use of AI in cybersecurity, where real-time threat detection and adaptive response mechanisms significantly improve resilience against sophisticated cyberattacks. In the realm of data analytics, AI models—especially machine learning and natural language processing—are driving innovation by enabling more precise predictions, automated insights extraction, and enhanced user experiences. The paper concludes with a discussion on the ethical implications of AI in information technologies, underscoring the importance of transparency, fairness, and responsible AI use. It also offers insights into future trends, emphasizing the potential of AI to further revolutionize the IT landscape by integrating with emerging technologies like quantum computing and IoT.

Keywords: artificial intelligence, information technology, automation, scalability

Procedia PDF Downloads 17
23308 D3Advert: Data-Driven Decision Making for Ad Personalization through Personality Analysis Using BiLSTM Network

Authors: Sandesh Achar

Abstract:

Personalized advertising holds greater potential for higher conversion rates compared to generic advertisements. However, its widespread application in the retail industry faces challenges due to complex implementation processes. These complexities impede the swift adoption of personalized advertisement on a large scale. Personalized advertisement, being a data-driven approach, necessitates consumer-related data, adding to its complexity. This paper introduces an innovative data-driven decision-making framework, D3Advert, which personalizes advertisements by analyzing personalities using a BiLSTM network. The framework utilizes the Myers–Briggs Type Indicator (MBTI) dataset for development. The employed BiLSTM network, specifically designed and optimized for D3Advert, classifies user personalities into one of the sixteen MBTI categories based on their social media posts. The classification accuracy is 86.42%, with precision, recall, and F1-Score values of 85.11%, 84.14%, and 83.89%, respectively. The D3Advert framework personalizes advertisements based on these personality classifications. Experimental implementation and performance analysis of D3Advert demonstrate a 40% improvement in impressions. D3Advert’s innovative and straightforward approach has the potential to transform personalized advertising and foster widespread personalized advertisement adoption in marketing.
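
For orientation, a minimal Keras sketch of a BiLSTM classifier over the sixteen MBTI classes is shown below; the vocabulary size, sequence length, layer widths, and preprocessing are illustrative assumptions, not the authors' tuned D3Advert architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers

VOCAB_SIZE, MAX_LEN, NUM_CLASSES = 20000, 200, 16   # illustrative choices

model = tf.keras.Sequential([
    layers.Input(shape=(MAX_LEN,)),
    layers.Embedding(VOCAB_SIZE, 128),                # token ids -> dense vectors
    layers.Bidirectional(layers.LSTM(64)),            # reads posts in both directions
    layers.Dropout(0.3),
    layers.Dense(NUM_CLASSES, activation="softmax"),  # one of the 16 MBTI types
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=5)
```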

Keywords: personalized advertisement, deep learning, MBTI dataset, BiLSTM network, NLP

Procedia PDF Downloads 44
23307 Drive Sharing with Multimodal Interaction: Enhancing Safety and Efficiency

Authors: Sagar Jitendra Mahendrakar

Abstract:

Exploratory testing is a dynamic and adaptable method of software quality assurance that is frequently praised for its ability to find hidden flaws and improve the overall quality of the product. Instead of using preset test cases, exploratory testing allows testers to explore the software application dynamically. In contrast to scripted testing methodologies, it relies primarily on tester intuition, creativity, and adaptability. Several tools and techniques can aid testers in the exploratory testing process, which we discuss in this talk. Tests of this kind are able to find bugs that are harder to find during structured testing or that other testing methods may have overlooked. The purpose of this abstract is to examine the nature and importance of exploratory testing in modern software development methods. It explores the fundamental ideas of exploratory testing, highlighting the value of domain knowledge and tester experience in spotting possible problems that may escape the notice of traditional testing methodologies. Throughout the software development lifecycle, exploratory testing promotes quick feedback loops and continuous improvement by giving testers the ability to make decisions in real time based on their observations. This abstract also clarifies the unique features of exploratory testing, such as its non-linearity and its capacity to replicate user behavior in real-world settings. Through impromptu exploration, testers can find intricate bugs, usability problems, and edge cases in software that might otherwise go undetected. Exploratory testing's flexible and iterative structure fits in well with agile and DevOps processes, allowing for a quicker time to market without sacrificing the quality of the final product.

Keywords: exploratory, testing, automation, quality

Procedia PDF Downloads 52
23306 Communication Infrastructure Required for a Driver Behaviour Monitoring System, ‘SiaMOTO’ IT Platform

Authors: Dogaru-Ulieru Valentin, Sălișteanu Ioan Corneliu, Ardeleanu Mihăiță Nicolae, Broscăreanu Ștefan, Sălișteanu Bogdan, Mihai Mihail

Abstract:

The SiaMOTO system is a communications and data processing platform for vehicle traffic. The human factor is the most important factor in the generation of this data, as the driver is the one who dictates the trajectory of the vehicle. Like any trajectory, it is described by specific parameters referring to position, speed, and acceleration, and constant knowledge of these parameters allows complex analyses. Roadways allow many vehicles to travel through their confined space, and the overlapping trajectories of several vehicles increase the likelihood of collision events, known as road accidents. Any such event has causes that lead to its occurrence, so the conditions for its occurrence are known. The human factor is predominant in deciding the trajectory parameters of the vehicle on the road, so monitoring it through the events reported by the DiaMOTO device over time will generate a guide for targeting potentially high-risk driving behavior and rewarding those who control the driving task well. In this paper, we focus on detailing the communication infrastructure between the DiaMOTO device and the traffic data collection server, the infrastructure through which the database to be used for complex AI/DLM analysis is built. The central element of this description is the data string in CODEC-8 format sent by the DiaMOTO device to the SiaMOTO collection server database. The data presented are specific to a functional infrastructure implemented at the experimental model stage by installing DiaMOTO devices with unique codes on 50 vehicles, integrating ADAS and GPS functions, through which vehicle trajectories can be monitored 24 hours a day.

Keywords: DiaMOTO, Codec-8, ADAS, GPS, driver monitoring

Procedia PDF Downloads 78
23305 Predictive Modeling of Bridge Conditions Using Random Forest

Authors: Miral Selim, May Haggag, Ibrahim Abotaleb

Abstract:

The aging of transportation infrastructure presents significant challenges, particularly concerning the monitoring and maintenance of bridges. This study investigates the application of Random Forest algorithms for predictive modeling of bridge conditions, utilizing data from the US National Bridge Inventory (NBI). The research is significant as it aims to improve bridge management through data-driven insights that can enhance maintenance strategies and contribute to overall safety. Random Forest is chosen for its robustness, ability to handle complex, non-linear relationships among variables, and its effectiveness in feature importance evaluation. The study begins with comprehensive data collection and cleaning, followed by the identification of key variables influencing bridge condition ratings, including age, construction materials, environmental factors, and maintenance history. Random Forest is utilized to examine the relationships between these variables and the predicted bridge conditions. The dataset is divided into training and testing subsets to evaluate the model's performance. The findings demonstrate that the Random Forest model effectively enhances the understanding of factors affecting bridge conditions. By identifying bridges at greater risk of deterioration, the model facilitates proactive maintenance strategies, which can help avoid costly repairs and minimize service disruptions. Additionally, this research underscores the value of data-driven decision-making, enabling better resource allocation to prioritize maintenance efforts where they are most necessary. In summary, this study highlights the efficiency and applicability of Random Forest in predictive modeling for bridge management. Ultimately, these findings pave the way for more resilient and proactive management of bridge systems, ensuring their longevity and reliability for future use.
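
A minimal scikit-learn sketch of the workflow described above is given below; the file name and feature columns are hypothetical placeholders for a cleaned NBI extract, not the study's actual variable list.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Hypothetical cleaned NBI extract; column names are illustrative
df = pd.read_csv("nbi_cleaned.csv")
features = ["age", "material_code", "adt", "climate_zone", "years_since_repair"]
X, y = df[features], df["condition_rating"]

# Divide into training and testing subsets to evaluate performance
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = RandomForestClassifier(n_estimators=300, random_state=0)
model.fit(X_train, y_train)

print(classification_report(y_test, model.predict(X_test)))
# Feature importance evaluation, one reason Random Forest is chosen for this task
print(sorted(zip(model.feature_importances_, features), reverse=True))
```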

Keywords: data analysis, random forest, predictive modeling, bridge management

Procedia PDF Downloads 23
23304 Interface Engineering of Short- and Ultrashort Period W-Based Multilayers for Soft X-Rays

Authors: A. E. Yakshin, D. Ijpes, J. M. Sturm, I. A. Makhotkin, M. D. Ackermann

Abstract:

Applications like synchrotron optics, soft X-ray microscopy, X-ray astronomy, and wavelength dispersive X-ray fluorescence (WD-XRF) rely heavily on short- and ultra-short-period multilayer (ML) structures. In WD-XRF, ML serves as an analyzer crystal to disperse emission lines of light elements. The key requirement for the ML is to be highly reflective while also providing sufficient angular dispersion to resolve specific XRF lines. For these reasons, MLs with periods ranging from 1.0 to 2.5 nm are of great interest in this field. Due to the short period, the reflectance of such MLs is extremely sensitive to interface imperfections such as roughness and interdiffusion. Moreover, the thickness of the individual layers is only a few angstroms, which is close to the limit of materials to grow a continuous film. MLs with a period between 2.5 nm and 1.0 nm, combining tungsten (W) reflector with B₄C, Si, and Al spacers, were created and examined. These combinations show high theoretical reflectance in the full range from C-Kα (4.48nm) down to S-Kα (0.54nm). However, the formation of optically unfavorable compounds, intermixing, and interface roughness result in limited reflectance. A variety of techniques, including diffusion barriers, seed layers, and ion polishing for sputter-deposited MLs, were used to address these issues. Diffuse scattering measurements, photo-electron spectroscopy analysis, and X-ray reflectivity measurements showed a noticeable reduction of compound formation, intermixing, and interface roughness. This also resulted in a substantial increase in soft X-ray reflectance for W/Si, W/B4C, and W/Al MLs. In particular, the reflectivity of 1 nm period W/Si multilayers at the wavelength of 0.84 nm increased more than 3-fold – propelling forward the applicability of such multilayers for shorter wavelengths.
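
For context, the trade-off between period and angular dispersion follows from the refraction-corrected Bragg condition for a periodic multilayer (a standard relation to first order in the optical constants, not taken from the paper):

$$ m\lambda = 2d\sin\theta\,\sqrt{1 - \frac{2\bar{\delta}}{\sin^{2}\theta}} $$

where m is the diffraction order, λ the wavelength, θ the grazing angle, d the multilayer period, and δ̄ the period-averaged refractive index decrement; shorter periods push a given wavelength to larger angles and hence to larger angular dispersion.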

Keywords: interface engineering, reflectance, short period multilayer structures, x-ray optics

Procedia PDF Downloads 51
23303 Validity and Reliability of Competency Assessment Implementation (CAI) Instrument Using Rasch Model

Authors: Nurfirdawati Muhamad Hanafi, Azmanirah Ab Rahman, Marina Ibrahim Mukhtar, Jamil Ahmad, Sarebah Warman

Abstract:

This study was conducted to generate empirical evidence on the validity and reliability of the items of the Competency Assessment Implementation (CAI) Instrument using the Rasch Model for polytomous data, aided by Winsteps software version 3.68. Construct validity was examined by analyzing the point-measure correlation index (PTMEA) and the infit and outfit MNSQ values, while reliability was examined by analyzing the item reliability index. A survey technique was used as the major method, applying the CAI instrument to 156 teachers from vocational schools. The results show that the reliability of the CAI Instrument items was between 0.80 and 0.98. The PTMEA correlations are positive, indicating that the items are able to distinguish between respondents of differing ability. The statistical data obtained show that, out of 154 items, 12 items are suggested to be omitted from the instrument. It is hoped that this study could bring a new direction to the process of data analysis in educational research.

Keywords: competency assessment, reliability, validity, item analysis

Procedia PDF Downloads 445
23302 Electronic Equipment Failure due to Corrosion

Authors: Yousaf Tariq

Abstract:

There are many factors involved in electronic equipment failure, e.g., temperature, humidity, dust, and smoke. Corrosive gases are another factor that may be involved in equipment failure. The sensitivity of electronic equipment increased when “lead-free” regulations were enforced on manufacturers. In data centers, equipment such as hard disks, servers, and printed circuit boards has been exposed to gaseous contamination due to this increase in sensitivity. There is a worldwide standard to protect industrial electronics from corrosive gases, well known as ANSI/ISA S71.04-1985, “Environmental Conditions for Control Systems: Airborne Contaminants”. ASHRAE Technical Committee (TC) 9.9 members also recommended the ISA standard in their whitepaper on gaseous and particulate contamination guidelines for data centers. TC 9.9 members represent some of the major IT equipment manufacturers, e.g., IBM, HP, and Cisco. As per standard practice, the first step is to monitor air quality in the data center. If the contamination level is higher than G1, gas-phase air filtration is required in addition to dust/smoke air filtration. It is important that fresh outside air entering the data center passes through a pressurization/recirculation process in order to absorb corrosive gases and maintain the level within the specified limit. It is also important that air quality monitoring is conducted once a year, and that temperature and humidity are monitored as per standard practice to maintain levels within the specified limits.

Keywords: corrosive gases, corrosion, electronic equipment failure, ASHRAE, hard disk

Procedia PDF Downloads 330
23301 Evaluation of Diagnosis Performance Based on Pairwise Model Construction and Filtered Data

Authors: Hyun-Woo Cho

Abstract:

Timely and intelligent production monitoring and diagnosis of industrial processes is quite important in terms of quality and safety issues. Compared with the monitoring task, fault diagnosis is the task of finding the process variables responsible for causing a specific fault in the process. It can be helpful to process operators, who should investigate and eliminate root causes more effectively and efficiently. This work focused on the active use of a nonlinear statistical technique combined with a preprocessing method in order to implement practical real-time fault identification schemes for data-rich cases. To compare its performance with existing identification schemes, a case study on a benchmark process was performed in several scenarios. The results showed that the proposed fault identification scheme produced more reliable diagnosis results than linear methods. In addition, the use of the filtering step improved the identification results for complicated processes with massive data sets.

Keywords: diagnosis, filtering, nonlinear statistical techniques, process monitoring

Procedia PDF Downloads 244
23300 Train Cross-Cultural Leaders in Higher Education

Authors: Sarah Abi Raad

Abstract:

Nowadays, one of the challenges faced by many institutions is the continuously changing psychosocial environment. This change affects resources and organizations and challenges the leadership and management of the people in charge. In fact, institutions of higher education differ from many organizations in requiring leadership to be a more shared phenomenon than in most profit-centered enterprises. In these colleges, leadership must be oriented toward empowering activities. This said, it is important to train students to take on leadership roles in their personal and professional lives. Thus, leadership training in higher education has to manage a cross-cultural environment in order to get the best out of the whole community that works and studies there. The main directions to follow are building a professional identity that engages a cross-cultural public while fostering personal fulfillment in the workplace. To do that, this communication proposal has three objectives: - Explain the logic of the cross-cultural leadership training offered to managers and chairs, allowing them to develop a technical leadership style of the passionate type alongside a managerial leadership style of the compassionate type. - Define the multiple factors on which leadership depends, including the department’s stage of development, the specific management function, the academic discipline, and the chair’s own style of leadership. - Emphasize the complex nature of leadership and the different facets that result from its role in higher education. Different situations, however, require a leader with particular characteristics that can be gathered into three categories: “the innovator”, “the implementer”, and “the pacifier”. Each category is linked to a problem organizations normally encounter. This leads to the concluding question: are gender, age, and culture taken into consideration during training?

Keywords: benevolent leadership, cross-cultural training, management, unprecedented existential crisis

Procedia PDF Downloads 124
23299 A Predictive Model of Supply and Demand in the State of Jalisco, Mexico

Authors: M. Gil, R. Montalvo

Abstract:

Business Intelligence (BI) has become a major source of competitive advantage for firms around the world. BI has been defined as the process of data visualization and reporting for understanding what happened and what is happening. Moreover, BI has been studied for its predictive capabilities in the context of trade and financial transactions. The current literature has identified that BI permits managers to identify market trends, understand customer relations, and predict demand for their products and services. This last capability of BI has been of special interest to academics, specifically because of its power to build predictive models adaptable to specific time horizons and geographical regions. However, the current BI literature focuses on predicting specific markets and industries because the impact of such predictive models has been relevant to specific industries or organizations. The existing literature has not yet developed a predictive BI model that takes into consideration the whole economy of a geographical area. This paper seeks to create a predictive BI model that shows the bigger picture of a geographical area. It uses a data set from the Secretary of Economic Development of the state of Jalisco, Mexico, which includes data from all the commercial transactions that occurred in the state in recent years. By analyzing this data set, it will be possible to generate a BI model that predicts supply and demand for specific industries around the state of Jalisco. This research makes at least three contributions. Firstly, a methodological contribution to the BI literature by generating the predictive supply and demand model. Secondly, a theoretical contribution to the current understanding of BI, since the model presented in this paper incorporates the whole picture of the economic field instead of focusing on a specific industry. Lastly, a practical contribution that may be relevant to local governments that seek to improve their economic performance by implementing BI in their policy planning.
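
As a rough sketch of how such a model could be prototyped (the file, the column names, and the per-industry linear trend are illustrative assumptions, not the model developed in the paper):

```python
import pandas as pd
from sklearn.linear_model import LinearRegression

# Hypothetical extract of the Jalisco transaction data set
df = pd.read_csv("jalisco_transactions.csv", parse_dates=["date"])
df["month_index"] = (df["date"].dt.year - df["date"].dt.year.min()) * 12 + df["date"].dt.month

forecasts = {}
for industry, grp in df.groupby("industry"):
    # Aggregate transaction volume per month, then fit a simple trend per industry
    monthly = grp.groupby("month_index")["amount"].sum().reset_index()
    model = LinearRegression().fit(monthly[["month_index"]], monthly["amount"])
    nxt = pd.DataFrame({"month_index": [monthly["month_index"].max() + 1]})
    forecasts[industry] = float(model.predict(nxt)[0])

print(forecasts)  # projected demand (transaction volume) per industry, next month
```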

Keywords: business intelligence, predictive model, supply and demand, Mexico

Procedia PDF Downloads 123
23298 A New Block Cipher for Resource-Constrained Internet of Things Devices

Authors: Muhammad Rana, Quazi Mamun, Rafiqul Islam

Abstract:

In the Internet of Things (IoT), many devices are connected and accumulate a sheer amount of data. These Internet-driven raw data need to be transferred securely to end-users via dependable networks. Consequently, the challenges of IoT security across the various IoT domains are paramount. Cryptography is applied to secure networks for authentication, confidentiality, data integrity, and access control. However, due to the resource-constrained nature of IoT devices, conventional ciphers may not be suitable in all IoT networks. This paper designs a robust and effective lightweight cipher to secure the IoT environment and meet the resource-constrained nature of IoT devices. We propose a symmetric, block-cipher-based lightweight cryptographic algorithm. The proposed algorithm increases the complexity of the block cipher while maintaining the lowest computational requirements possible. It efficiently constructs the key register updating technique, reduces the number of encryption rounds, and adds a new layer between the encryption and decryption processes.
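
The paper does not disclose the cipher's internals; the toy sketch below only illustrates the building blocks mentioned above (an S-box layer, round-key mixing, a reduced number of rounds, and a key register updated between rounds) and is neither secure nor the authors' design.

```python
# Toy 16-bit substitution-permutation round (illustration only, NOT secure)
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]   # PRESENT-style 4-bit S-box

def sub_nibbles(state: int) -> int:
    """Apply the 4-bit S-box to each nibble of the 16-bit state."""
    return sum(SBOX[(state >> (4 * i)) & 0xF] << (4 * i) for i in range(4))

def rotate_key(key: int, bits: int = 3) -> int:
    """Simple key-register update: rotate the 16-bit register each round."""
    return ((key << bits) | (key >> (16 - bits))) & 0xFFFF

def encrypt_block(block: int, key: int, rounds: int = 4) -> int:
    """Few rounds keep the computation low, as resource-constrained devices require."""
    state = block & 0xFFFF
    for _ in range(rounds):
        state = sub_nibbles(state ^ key)   # key mixing followed by the S-box layer
        key = rotate_key(key)              # key register updating between rounds
    return state ^ key                     # final key whitening

print(hex(encrypt_block(0x1234, 0xABCD)))
```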

Keywords: internet of things, cryptography block cipher, S-box, key management, security, network

Procedia PDF Downloads 113
23297 BodeACD: Buffer Overflow Vulnerabilities Detecting Based on Abstract Syntax Tree, Control Flow Graph, and Data Dependency Graph

Authors: Xinghang Lv, Tao Peng, Jia Chen, Junping Liu, Xinrong Hu, Ruhan He, Minghua Jiang, Wenli Cao

Abstract:

Buffer overflows are among the most dangerous vulnerabilities, so their effective detection is extremely necessary. Traditional detection methods are not accurate enough and consume too many resources to cope with today's complex and enormous code environments. In order to resolve these problems, we propose BodeACD, a method for buffer overflow detection based on the abstract syntax tree, control flow graph, and data dependency graph of C/C++ programs with source code. BodeACD first collects buffer overflow function samples that are available on GitHub and represents them as code representation sequences, which fuse the control flow, data dependency, and syntax structure of the source code to reduce information loss during code representation. Finally, BodeACD learns vulnerability patterns for vulnerability detection through deep learning. The experimental results show that BodeACD increased precision and recall by 6.3% and 8.5%, respectively, compared with the latest methods, effectively improving vulnerability detection and reducing the false-positive and false-negative rates.

Keywords: vulnerability detection, abstract syntax tree, control flow graph, data dependency graph, code representation, deep learning

Procedia PDF Downloads 170
23296 Fueling Efficient Reporting And Decision-Making In Public Health With Large Data Automation In Remote Areas, Neno Malawi

Authors: Wiseman Emmanuel Nkhomah, Chiyembekezo Kachimanga, Julia Huggins, Fabien Munyaneza

Abstract:

Background: Partners In Health – Malawi introduced one of its operational research studies, the Primary Health Care (PHC) Surveys, in 2020, which seeks to assess progress in the delivery of care in the district. The study consists of five long surveys, namely Facility Assessment, General Patient, Provider, Sick Child, and Antenatal Care (ANC), primarily conducted in four health facilities in Neno district. These facilities include Neno district hospital, Dambe health centre, Chifunga, and Matope. Usually, these annual surveys are conducted from January, and the target is to present the final report by June. Once data are collected and analyzed, a series of reviews takes place before the final report is reached. Initially, the manual process took over nine months to produce the final report, and initial findings showed that only about 76.9% of the data added up when cross-checked with paper-based sources. Purpose: The aim of this approach is to move away from manually pulling data, running fresh analyses, and reporting, a process associated not only with delays and reporting inconsistencies but also with poor data quality if not done carefully. This automation approach was meant to utilize features of new technologies to create visualizations, reports, and dashboards in Power BI that draw directly from the data source, CommCare, and therefore require only a single click of a ‘refresh’ button to populate the updated information in visualizations, reports, and dashboards at once. Methodology: We transformed paper-based questionnaires into electronic forms using the CommCare mobile application. We then connected CommCare directly to Power BI using an Application Programming Interface (API) connection as the data pipeline, which made it possible to create visualizations, reports, and dashboards in Power BI. Instead of manually collecting data on paper-based questionnaires, entering them into ordinary spreadsheets, and conducting analysis every time a report was prepared, the team utilized CommCare and Microsoft Power BI technologies. We utilized validations and logic in CommCare to capture data with fewer errors. We utilized Power BI features to host the reports online by publishing them through a cloud-computing process, and we switched from sharing ordinary report files to sharing links with recipients, giving them the freedom to dig deeper into the findings within the Power BI dashboards and to export to any format of their choice. Results: This data automation approach reduced the research timeline from the initial nine months to five. It also improved the quality of the data findings from the original 76.9% to 98.9%. This brought confidence to draw conclusions from the findings that help in decision-making and gave opportunities for further research. Conclusion: These results suggest that automating the research data process has the potential to reduce the overall amount of time spent and to improve the quality of the data. On this basis, the concept of data automation should be taken into serious consideration when conducting operational research for efficiency and decision-making.
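
A minimal sketch of the kind of API pull that feeds such a pipeline is shown below; the endpoint, field names, and pagination keys are placeholders for illustration, the real CommCare export URL and authentication come from the project's configuration, and in the deployed setup Power BI connects to the API directly rather than through a script.

```python
import requests
import pandas as pd

# Placeholder endpoint and credentials: the real CommCare export URL, parameters,
# and authentication must come from the project's configuration.
BASE_URL = "https://www.commcarehq.org/a/<project-space>/api/..."
HEADERS = {"Authorization": "ApiKey user@example.org:API_KEY"}

records, url = [], BASE_URL
while url:                                   # follow pagination until exhausted
    resp = requests.get(url, headers=HEADERS, timeout=60)
    resp.raise_for_status()
    payload = resp.json()
    records.extend(payload.get("objects", []))       # placeholder result key
    url = payload.get("meta", {}).get("next")        # placeholder pagination key

df = pd.DataFrame(records)
df.to_csv("phc_survey_export.csv", index=False)      # or feed directly into Power BI
print(f"Pulled {len(df)} survey records")
```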

Keywords: reporting, decision-making, power BI, commcare, data automation, visualizations, dashboards

Procedia PDF Downloads 116
23295 Analysis of Financial Time Series by Using Ornstein-Uhlenbeck Type Models

Authors: Md Al Masum Bhuiyan, Maria C. Mariani, Osei K. Tweneboah

Abstract:

In the present work, we develop a technique for estimating the volatility of financial time series by using stochastic differential equations. Taking the daily closing prices from developed and emergent stock markets as the basis, we argue that the incorporation of stochastic volatility into the time-varying parameter estimation significantly improves the forecasting performance via Maximum Likelihood Estimation. Using the technique, we observe the long-memory behavior of the data sets and the one-step-ahead predicted log-volatility with ±2 standard errors, despite the observed noise varying according to a Normal mixture distribution, because the financial data studied are not fully Gaussian. Also, the Ornstein-Uhlenbeck process followed in this work simulates the financial time series well, and the good convergence properties of the algorithm make our estimation procedure suitable for large data sets.
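
A minimal sketch of Ornstein-Uhlenbeck parameter estimation from a discretely observed path is shown below, using the exact AR(1) discretization; the simulated path stands in for a log-volatility series, and this plain least-squares/ML recipe is only an illustration, not the authors' full estimation algorithm.

```python
import numpy as np

def fit_ou(x, dt=1.0):
    """Estimate (theta, mu, sigma) for dX = theta*(mu - X) dt + sigma dW.

    Uses the exact discretization X_{t+1} = a + b*X_t + eps with b = exp(-theta*dt),
    so an AR(1) least-squares fit gives the maximum-likelihood estimates.
    """
    x0, x1 = x[:-1], x[1:]
    b, a = np.polyfit(x0, x1, 1)                 # slope and intercept of the AR(1) fit
    resid = x1 - (a + b * x0)
    s2 = resid.var(ddof=2)                       # residual variance
    theta = -np.log(b) / dt
    mu = a / (1.0 - b)
    sigma = np.sqrt(s2 * 2.0 * theta / (1.0 - b ** 2))
    return theta, mu, sigma

# Check on a simulated OU path standing in for log-volatility
rng = np.random.default_rng(1)
theta, mu, sigma, dt, n = 2.0, -1.0, 0.5, 1 / 252, 20000
x = np.empty(n); x[0] = mu
for t in range(1, n):
    x[t] = x[t-1] + theta * (mu - x[t-1]) * dt + sigma * np.sqrt(dt) * rng.normal()
print(fit_ou(x, dt))   # should recover values close to (2.0, -1.0, 0.5)
```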

Keywords: financial time series, maximum likelihood estimation, Ornstein-Uhlenbeck type models, stochastic volatility model

Procedia PDF Downloads 242
23294 Detecting Black Hole Attacks in Body Sensor Networks

Authors: Sara Alshehri, Bayan Alenzi, Atheer Alshehri, Samia Chelloug, Zainab Almry, Hussah Albugmai

Abstract:

This paper concerns body area sensor networks that collect signals around a human body. Black hole attacks are the main security challenge because data traffic can be dropped at any node. The focus of our proposed solution is to efficiently route data packets while detecting black hole nodes.

Keywords: body sensor networks, security, black hole, routing, broadcasting, OMNeT++

Procedia PDF Downloads 646
23293 hsa-miR-1204 and hsa-miR-639 Prominent Role in Tamoxifen's Molecular Mechanisms on the EMT Phenomenon in Breast Cancer Patients

Authors: Mahsa Taghavi

Abstract:

Tamoxifen is a regularly prescribed medication in the treatment of breast cancer. We studied the effect of tamoxifen on the EMT pathways of breast cancer patients to see whether it had any effect on the cancer cells' resistance to tamoxifen and to identify specific miRNAs associated with EMT. In this work, we used continuous and integrated bioinformatics analysis to choose the optimal GEO datasets. Once we had sorted the gene expression profiles, we looked at the signaling mechanisms, the gene ontology, and the protein interactions of each gene. We then used the GEPIA database to confirm the candidate genes, after which we investigated the critical miRNAs related to them. The gene expression profiles were categorized into two distinct groups: the first group was examined using the expression profile of genes down-regulated in the EMT pathway, and the second group represented the polar opposite of the first. A total of 253 genes from the first group and 302 genes from the second group were found to be common. Several genes in the first category were linked to cell death, focal adhesion, and cellular aging, while in the second group genes linked to distinct cell cycle stages were observed. Finally, proteins such as MYLK, SOCS3, and STAT5B from the first group and BIRC5, PLK1, and RAPGAP1 from the second group were selected as potential candidates linked to tamoxifen's influence on the EMT pathway. hsa-miR-1204 and hsa-miR-639 have a very close relationship with the candidate genes according to their node degrees and betweenness indices. With this, the action of tamoxifen on the EMT pathway is better understood; learning more about how tamoxifen's target genes and proteins work is important for a better understanding of the drug.

Keywords: tamoxifen, breast cancer, bioinformatics analysis, EMT, miRNAs

Procedia PDF Downloads 129
23292 Data Analytics of Electronic Medical Records Shows an Age-Related Differences in Diagnosis of Coronary Artery Disease

Authors: Maryam Panahiazar, Andrew M. Bishara, Yorick Chern, Roohallah Alizadehsani, Dexter Hadleye, Ramin E. Beygui

Abstract:

Early detection plays a crucial role in enhancing the outcome for a patient with coronary artery disease (CAD). We utilized a big data analytics platform on ~23,000 patients with CAD drawn from a total of 960,129 UCSF patients over 8 years. We traced the patients from their first encounter with a physician through the diagnosis and treatment of CAD. Characteristics such as demographic information, comorbidities, vital signs, lab tests, medications, and procedures are included. There are statistically significant gender-based differences in patients younger than 60 years old in the time from the first physician encounter to coronary artery bypass grafting (CABG), with a p-value of 0.03. There are no significant differences for patients between 60 and 80 years old (p-value = 0.8) or older than 80 (p-value = 0.4), at a 95% confidence interval. This recognition should prompt significant changes in the guidelines for referring patients expeditiously for diagnostic tests, improving outcomes by avoiding delays in treatment.

Keywords: electronic medical records, coronary artery disease, data analytics, young women

Procedia PDF Downloads 148