Search results for: multivariate failure-time data
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 25414


23614 Impact of Safety and Quality Considerations of Housing Clients on the Construction Firms’ Intention to Adopt Quality Function Deployment: A Case of Construction Sector

Authors: Saif Ul Haq

Abstract:

The current study examines the safety and quality considerations of clients of housing projects and their impact on the adoption of Quality Function Deployment (QFD) by construction firms. A mixed-methods research design was used to collect and analyze the data: a survey was conducted with 220 clients of housing projects in Saudi Arabia, followed by telephone and Skype interviews with 15 professionals working in the top ten real estate companies of Saudi Arabia. Data were analyzed using partial least squares (PLS) and thematic analysis techniques. Findings reveal that today's customers prioritize the safety and quality requirements of their houses and, as a result, construction firms adopt QFD to address these needs. The findings are of great importance for the clients of housing projects as well as for construction firms, which could apply QFD in housing projects to address the safety and quality concerns of their clients.

Keywords: construction industry, quality considerations, quality function deployment, safety considerations

Procedia PDF Downloads 125
23613 Customers’ Acceptability of Islamic Banking: Employees’ Perspective in Peshawar

Authors: Tahira Imtiaz, Karim Ullah

Abstract:

This paper incorporates bank employees' perspectives on the acceptability of Islamic banking among the customers of Peshawar. A qualitative approach was adopted, for which six in-depth interviews with employees of Islamic banks were conducted. The employees were asked to share their experiences regarding customers' attitudes toward Islamic banking. The collected data were analyzed through thematic analysis and synthesized with the current literature. Through data analysis, a theoretical framework was developed that highlights the factors which drive customers towards Islamic banking, as witnessed by the employees. The practical implication of the analysis is that a new model could be developed on the basis of four determinants of human preference, namely: inner satisfaction, time, faith, and market forces.

Keywords: customers’ attraction, employees’ perspective, Islamic banking, Riba

Procedia PDF Downloads 333
23612 Customized Design of Amorphous Solids by Generative Deep Learning

Authors: Yinghui Shang, Ziqing Zhou, Rong Han, Hang Wang, Xiaodi Liu, Yong Yang

Abstract:

The design of advanced amorphous solids, such as metallic glasses, with targeted properties through artificial intelligence signifies a paradigmatic shift in physical metallurgy and materials technology. Here, we developed a machine-learning architecture that facilitates the generation of metallic glasses with targeted multifunctional properties. Our architecture integrates a state-of-the-art unsupervised generative adversarial network model with supervised models, allowing the incorporation of general prior knowledge, derived from thousands of data points across a vast range of alloy compositions, into the creation of data points for a specific type of composition; this overcomes the issue of data scarcity typically encountered in the design of a given type of metallic glass. Using our generative model, we have successfully designed copper-based metallic glasses which display exceptionally high hardness or a remarkably low modulus. Notably, our architecture can not only explore uncharted regions in the targeted compositional space but also permits self-improvement after experimentally validated data points are added to the initial dataset for subsequent cycles of data generation, paving the way for the customized design of amorphous solids without human intervention.

Keywords: metallic glass, artificial intelligence, mechanical property, automated generation

Procedia PDF Downloads 56
23611 R Data Science for Technology Management

Authors: Sunghae Jun

Abstract:

Technology management (TM) is an important issue for a company seeking to improve its competitiveness. Among the many activities of TM, technology analysis (TA) is a key factor, because most decisions in the management of technology are based on the results of TA. TA analyzes the development of a target technology using statistics or the Delphi method. Delphi-based TA depends on experts' domain knowledge; in comparison, TA based on statistics and machine learning algorithms uses objective data, such as patents or papers, instead of experts' knowledge. Many quantitative TA methods based on statistics and machine learning have been studied and used for technology forecasting, technological innovation, and management of technology. They have applied diverse computing tools and analytical methods case by case, and it is not easy to select suitable software and statistical methods for a given TA task. So, in this paper, we propose a methodology for quantitative TA using the statistical computing software R and data science to construct a general framework for TA. Through a case study, we also show how our methodology is applied in the field. This research contributes to R&D planning and technology valuation in TM.
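As a toy illustration of the kind of quantitative TA the abstract describes, the sketch below counts how often a technology keyword appears in patent abstracts per filing year; the patents and the search term are hypothetical stand-ins, and Python is used here in place of the paper's R workflow.

```python
from collections import Counter

# Hypothetical patent abstracts tagged with filing year (illustrative data only)
patents = [
    (2018, "battery electrode material"),
    (2019, "deep learning image recognition"),
    (2020, "deep learning speech model"),
    (2020, "deep learning battery management"),
]

# Term frequency per year: a basic building block of quantitative TA on patent data
term = "deep learning"
trend = Counter(year for year, text in patents if term in text)
print(sorted(trend.items()))   # → [(2019, 1), (2020, 2)]
```

A real analysis would replace the keyword match with proper text mining (tokenization, TF-IDF, topic models) over thousands of patent records.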

Keywords: technology management, R system, R data science, statistics, machine learning

Procedia PDF Downloads 458
23610 Mixture Statistical Modeling for Predicting Mortality in Human Immunodeficiency Virus (HIV) and Tuberculosis (TB) Infected Patients

Authors: Mohd Asrul Affendi Bi Abdullah, Nyi Nyi Naing

Abstract:

The purpose of this study was to compare the negative binomial death rate (NBDR) model and the zero-inflated negative binomial death rate (ZINBDR) model for patients who died with HIV+TB+ and HIV+TB−. HIV and TB are serious worldwide problems, particularly in developing countries. Data were analyzed by applying the NBDR and ZINBDR models to determine which model is preferable. The ZINBDR model is able to account for the disproportionately large number of zeros within the data and is shown to be a consistently better fit than the NBDR model. Hence, the ZINBDR model is a superior fit to the data and provides additional information regarding the death mechanisms of HIV+TB patients. The ZINBDR model is shown to be a useful tool for analyzing death rates by age category.
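To make the NB-versus-ZINB comparison concrete, here is a minimal sketch (the data, parameter grids, and software are all stand-ins, not the authors'): both log-likelihoods are evaluated over a crude parameter grid on hypothetical zero-heavy death counts, and the models are compared by AIC = 2k − 2·log-likelihood. On data with excess zeros, the zero-inflated model should win despite its extra parameter.

```python
import math
from itertools import product

def nb_logpmf(y, r, p):
    # Negative binomial: P(Y=y) = C(y+r-1, y) * p**r * (1-p)**y, mean r(1-p)/p
    return (math.lgamma(y + r) - math.lgamma(r) - math.lgamma(y + 1)
            + r * math.log(p) + y * math.log(1 - p))

def zinb_logpmf(y, r, p, pi):
    # Zero-inflated NB adds an extra point mass pi at zero
    if y == 0:
        return math.log(pi + (1 - pi) * math.exp(nb_logpmf(0, r, p)))
    return math.log(1 - pi) + nb_logpmf(y, r, p)

counts = [0] * 12 + [3, 3, 4, 4, 5, 5]    # zero-heavy hypothetical death counts

ps = [i / 20 for i in range(1, 20)]       # crude grid search stands in for MLE
rs = [0.5, 1.0, 2.0, 4.0]
ll_nb = max(sum(nb_logpmf(y, r, p) for y in counts)
            for r, p in product(rs, ps))
ll_zinb = max(sum(zinb_logpmf(y, r, p, pi) for y in counts)
              for r, p, pi in product(rs, ps, ps))

aic_nb = 2 * 2 - 2 * ll_nb                # AIC = 2k - 2*loglik, k = parameters
aic_zinb = 2 * 3 - 2 * ll_zinb
print(aic_nb > aic_zinb)                  # ZINB wins on this zero-heavy sample
```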

Keywords: zero inflated negative binomial death rate, HIV and TB, AIC and BIC, death rate

Procedia PDF Downloads 433
23609 Efficient Reuse of Exome Sequencing Data for Copy Number Variation Callings

Authors: Chen Wang, Jared Evans, Yan Asmann

Abstract:

With the quick evolution of next-generation sequencing techniques, whole-exome or exome-panel data have become a cost-effective way to detect small exonic mutations, but there has been a growing desire to accurately detect copy number variations (CNVs) as well. In order to address these research and clinical needs, we developed a sequencing coverage pattern-based method for copy number detection, data integrity checks, CNV calling, and visualization reports. The developed methodology includes complete automation to increase usability, genome content-coverage bias correction, CNV segmentation, data quality reports, and publication-quality images. Poor-quality outlier samples are identified and removed automatically. Multiple experimental batches are routinely detected and further reduced to a clean subset of samples before analysis. Algorithm improvements were also made to improve both somatic CNV detection and germline CNV detection in trio families. Additionally, a set of utilities was included to help users produce CNV plots for genes of interest. We demonstrate the somatic CNV enhancements by accurately detecting CNVs in whole-exome data from The Cancer Genome Atlas cancer samples and a lymphoma case study with paired tumor and normal samples. We also show efficient reuse of existing exome sequencing data for improved germline CNV calling in a trio family from the phase-III study of the 1000 Genomes Project, detecting CNVs with various modes of inheritance. The performance of the developed method is evaluated by comparing CNV calling results with results from other orthogonal copy number platforms. Through our case studies, reusing exome sequencing data for calling CNVs provides several noticeable benefits, including better quality control for exome sequencing data, improved joint analysis with single nucleotide variant calls, and novel genomic discovery from under-utilized existing whole-exome and custom exome-panel data.
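The coverage pattern idea can be sketched in a few lines: normalize tumor and normal depths by each sample's median coverage, take per-exon log2 ratios, and threshold them into gain/loss/neutral calls. The depths and the ±0.5 cutoff are illustrative, not the authors' pipeline.

```python
import math

# Hypothetical per-exon read depths for a tumor/normal pair (illustrative only);
# three exons in the middle of the tumor sample are amplified.
normal = [100, 110, 95, 105, 100, 98, 102, 100]
tumor  = [105, 100, 98, 210, 205, 215, 100, 95]

def median(xs):
    s = sorted(xs)
    n = len(s)
    return (s[n // 2 - 1] + s[n // 2]) / 2 if n % 2 == 0 else s[n // 2]

# Normalize each sample by its median depth, then take log2 tumor/normal ratios
mn, mt = median(normal), median(tumor)
log2r = [math.log2((t / mt) / (n / mn)) for t, n in zip(tumor, normal)]

# Call a gain above +0.5 (~3 copies) and a loss below -0.5 (~1 copy)
calls = ["gain" if r > 0.5 else "loss" if r < -0.5 else "neutral" for r in log2r]
print(calls)
```

Real callers add GC-bias correction, batch normalization against a panel of normals, and statistical segmentation rather than a fixed threshold.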

Keywords: bioinformatics, computational genetics, copy number variations, data reuse, exome sequencing, next generation sequencing

Procedia PDF Downloads 257
23608 [Keynote]: No-Trust-Zone Architecture for Securing Supervisory Control and Data Acquisition

Authors: Michael Okeke, Andrew Blyth

Abstract:

Supervisory Control and Data Acquisition (SCADA) systems, as the state of the art in Industrial Control Systems (ICS), are used in many different critical infrastructures, from smart homes to energy systems and from locomotive train systems to planes. Security of SCADA systems is vital, since many lives depend on them for daily activities, and deviation from normal operation could be disastrous to the environment as well as to lives. This paper describes how a No-Trust-Zone (NTZ) architecture can be incorporated into SCADA systems in order to reduce the chances of malicious intent succeeding. The architecture is made up of two distinct parts. The first is the field devices, such as sensors, PLCs, pumps, and actuators. The second part is designed following the lambda architecture and comprises a detection algorithm based on Particle Swarm Optimization (PSO) and the Hadoop framework for data processing and storage. Apache Spark is part of the lambda architecture for real-time analysis of packets for anomaly detection.
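A minimal PSO, the optimizer underlying the detection algorithm named above, looks like the sketch below. The quadratic objective is a hypothetical stand-in for whatever anomaly score the real system would minimize over packet features.

```python
import random

# Toy objective: in the paper's setting f would score how anomalous a traffic
# window is; here a quadratic with minimum at (3, -1) stands in.
def f(x, y):
    return (x - 3) ** 2 + (y + 1) ** 2

random.seed(0)
n, w, c1, c2 = 20, 0.7, 1.5, 1.5          # swarm size, inertia, pull strengths
pos = [[random.uniform(-10, 10), random.uniform(-10, 10)] for _ in range(n)]
vel = [[0.0, 0.0] for _ in range(n)]
pbest = [p[:] for p in pos]               # each particle's best position so far
gbest = min(pbest, key=lambda p: f(*p))[:]  # swarm-wide best position

for _ in range(100):
    for i in range(n):
        for d in range(2):
            r1, r2 = random.random(), random.random()
            # Velocity blends inertia, pull toward pbest, and pull toward gbest
            vel[i][d] = (w * vel[i][d]
                         + c1 * r1 * (pbest[i][d] - pos[i][d])
                         + c2 * r2 * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        if f(*pos[i]) < f(*pbest[i]):
            pbest[i] = pos[i][:]
            if f(*pos[i]) < f(*gbest):
                gbest = pos[i][:]

print(gbest)   # converges near the minimum (3, -1)
```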

Keywords: industrial control system (ICS), no-trust-zone (NTZ), particle swarm optimisation (PSO), supervisory control and data acquisition (SCADA), swarm intelligence (SI)

Procedia PDF Downloads 345
23607 A Study on the Correlation Analysis between the Pre-Sale Competition Rate and the Apartment Unit Plan Factor through Machine Learning

Authors: Seongjun Kim, Jinwooung Kim, Sung-Ah Kim

Abstract:

The development of information and communication technology affects human cognition and thinking; in the field of design especially, new techniques are being tried. In architecture, new design methodologies such as machine learning or data-driven design are being applied. In particular, these methodologies are used in analyzing the factors related to the value of real estate or analyzing feasibility in the early planning stage of apartment housing. However, since the value of apartment buildings is often determined by external factors such as location and traffic conditions, rather than by the interior elements of the buildings, data are rarely used in the design process, and even where the technical conditions are provided, it is difficult to apply data-driven design to the internal elements of the apartment. As a result, the designers of apartment housing have been forced to rely on designer experience or modular design alternatives rather than data-driven design at the design stage, resulting in a uniform arrangement of space in apartment houses. The purpose of this study is to propose a methodology that supports designers in producing apartment unit plans with high consumer preference, by deriving through machine learning the correlation and importance of the floor plan elements preferred by consumers, and by reflecting this information in the early design process. Data on the pre-sale competition rate and the elements of the floor plan are collected, and the correlation between the pre-sale competition rate and the independent variables is analyzed through machine learning. This analytical model can be used to review the apartment unit plan produced by the designer and to assist the designer: by using the trained model during floor plan design, designers can receive feedback on unit plans and produce floor plans with high preference.
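The correlation step described above can be sketched with plain Pearson correlations between each plan factor and the pre-sale competition rate; the six records and the factor names below are invented for illustration, and the real study would use far more data and a proper machine learning estimator.

```python
# Hypothetical records: (unit area m2, bay count, storage ratio, competition rate)
rows = [
    (59, 2, 0.05, 3.1), (74, 3, 0.08, 8.5), (84, 3, 0.10, 12.2),
    (84, 4, 0.07, 15.0), (101, 4, 0.12, 21.3), (114, 5, 0.11, 24.8),
]

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

target = [r[3] for r in rows]
names = ["area", "bays", "storage_ratio"]
corrs = {name: round(pearson([r[i] for r in rows], target), 2)
         for i, name in enumerate(names)}
print(corrs)   # rank plan factors by association with the competition rate
```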

Keywords: apartment unit plan, data-driven design, design methodology, machine learning

Procedia PDF Downloads 268
23606 Nonparametric Truncated Spline Regression Model on the Data of Human Development Index in Indonesia

Authors: Kornelius Ronald Demu, Dewi Retno Sari Saputro, Purnami Widyaningsih

Abstract:

The Human Development Index (HDI) is a standard measurement of a country's human development. Several factors may influence it, such as life expectancy, gross domestic product (GDP) based on the province's annual expenditure, the number of poor people, and the percentage of illiterate people. The scatter plots between HDI and these factors do not follow a specific pattern or form, so the HDI data for Indonesia can be modeled with nonparametric regression. The estimation of the regression curve in a nonparametric regression model is flexible because it follows the shape of the data pattern. One nonparametric regression method is the truncated spline, a modification of segmented polynomial functions. The estimator of a truncated spline regression model depends on the selection of optimal knot points, the focus points of the truncated spline functions. The optimal knot points were determined by the minimum value of generalized cross validation (GCV). In this article, a truncated spline nonparametric regression model was applied to the Human Development Index data for Indonesia. The best truncated spline regression model for the HDI data was obtained with the combination of optimal knot points 5-5-5-4. Life expectancy and the percentage of illiterate people were the factors significantly related to the HDI in Indonesia. The coefficient of determination is 94.54%, which means the regression model fits the HDI data for Indonesia well.
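The knot-selection mechanism can be demonstrated on toy data: fit the truncated linear spline y = b0 + b1·x + b2·(x − k)₊ for each candidate knot k and keep the knot with the smallest GCV(k) = n·RSS / (n − p)². The data below have a known kink at x = 5 (not the HDI data), so GCV should recover it.

```python
import numpy as np

# Toy piecewise-linear data with a true knot at x = 5
x = np.arange(11, dtype=float)
y = np.where(x <= 5, x, 5 + 3 * (x - 5))

def gcv(k):
    # Design matrix for y = b0 + b1*x + b2*(x - k)_+
    X = np.column_stack([np.ones_like(x), x, np.maximum(0.0, x - k)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(np.sum((y - X @ beta) ** 2))
    n, p = len(x), X.shape[1]
    return n * rss / (n - p) ** 2

scores = {k: gcv(k) for k in range(2, 9)}   # candidate knot locations
best = min(scores, key=scores.get)
print(best)   # the knot with minimal GCV; here the true knot, 5
```

The article's model does the same search jointly over four predictors, which is how a knot combination like 5-5-5-4 arises.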

Keywords: generalized cross validation (GCV), Human Development Index (HDI), knots point, nonparametric regression, truncated spline

Procedia PDF Downloads 339
23605 Vegetation Assessment Under the Influence of Environmental Variables: A Case Study from the Yakhtangay Hill of the Himalayan Range, Pakistan

Authors: Hameed Ullah, Shujaul Mulk Khan, Zahid Ullah, Zeeshan Ahmad, Sadia Jahangir, Abdullah, Amin Ur Rahman, Muhammad Suliman, Dost Muhammad

Abstract:

The interrelationship between vegetation and abiotic variables inside an ecosystem is one of the main concerns of plant scientists. This study was designed to investigate the vegetation structure and species diversity, along with the environmental variables, in the Yakhtangay hill area of district Shangla in the Himalayan mountain series of Pakistan, using multivariate statistical analysis. The quadrat method was used: a total of 171 quadrats were laid down, 57 each for trees, shrubs, and herbs, to analyze the phytosociological attributes of the vegetation. The vegetation of the selected area was classified into different life and leaf forms according to the Raunkiaer classification, while PC-ORD software version 5 was used to classify the vegetation into different plant communities by two-way indicator species analysis (TWINSPAN). CANOCO version 4.5 was used for DCA and CCA analyses to find the variation directories of the vegetation with respect to different environmental variables. A total of 114 plant species belonging to 45 different families were recorded in the area. Rosaceae (12 species) was the dominant family, followed by Poaceae (10 species) and Asteraceae (7 species). Monocots were more dominant than dicots, and angiosperms were more dominant than gymnosperms. Among the life forms, hemicryptophytes and nanophanerophytes were dominant, followed by therophytes, while among the leaf forms, microphylls were dominant, followed by leptophylls. It is concluded that edaphic factors such as soil pH, soil organic matter concentration, calcium carbonate concentration, soil EC, and soil TDS, together with physiographic factors such as altitude and slope, affect the vegetation structure, species composition, and species diversity at a significant level (p-value ≤ 0.05). The vegetation of the selected area was classified into four major plant communities, and the indicator species for each community were recorded. The classification of plants into four different communities based upon edaphic gradients favors the individualistic hypothesis. Indicator species analysis (ISA) shows that the indicators of the study area are mostly indicators of the Himalayan or moist temperate ecosystem; furthermore, these indicators could be considered for micro-habitat conservation and the respective ecosystem management plans.

Keywords: species richness, edaphic gradients, canonical correspondence analysis (CCA), TWCA

Procedia PDF Downloads 153
23604 Impact of Protean Career Attitude on Career Success with the Mediating Effect of Career Insight

Authors: Prabhashini Wijewantha

Abstract:

This study looks at the impact of employees' protean career attitude on their career success, and at the mediating effect of career insight on that relationship. Career success is defined as the accomplishment of desirable work-related outcomes at any point in a person's work experiences over time, and it comprises two sub-variables, namely career satisfaction and perceived employability. Protean career attitude was measured using the eight items from the self-directedness subscale of the Protean Career Attitude scale developed by Briscoe and Hall, whereas career satisfaction was measured by the three-item scale developed by Martins, Eddleston, and Veiga. Perceived employability was also evaluated using three items, and career insight was measured using fourteen items adapted and used by De Vos and Soens. Data were collected from a sample of 300 mid-career executives in Sri Lanka using the survey strategy, and were analyzed using SPSS and AMOS version 20.0. A preliminary analysis was performed in which the data were screened and reliability and validity were ensured. Next, a simple regression analysis was performed to test the direct impact of protean career attitude on career success, and the hypothesis was supported. Baron and Kenny's four-step, three-regression approach to mediator testing was used to estimate the mediating effect of career insight on the above relationship, and a partial mediation was supported by the data. Finally, theoretical and practical implications are discussed.
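Baron and Kenny's three regressions can be sketched on simulated data (the effect sizes below are invented, chosen to produce partial mediation): step 1 regresses success on attitude, step 2 regresses insight on attitude, and step 3 regresses success on both attitude and insight, where a shrunken but nonzero direct effect indicates partial mediation.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 300

# Simulated sample (hypothetical): protean attitude X works partly through
# career insight M on career success Y
X = rng.normal(size=n)
M = 0.6 * X + rng.normal(scale=0.8, size=n)
Y = 0.3 * X + 0.5 * M + rng.normal(scale=0.8, size=n)

def ols(y, *cols):
    # Least-squares slopes with an intercept column prepended
    A = np.column_stack([np.ones(len(y)), *cols])
    return np.linalg.lstsq(A, y, rcond=None)[0][1:]   # drop the intercept

c, = ols(Y, X)          # step 1: total effect of X on Y
a, = ols(M, X)          # step 2: X must predict the mediator M
c_p, b = ols(Y, X, M)   # step 3: direct effect c' with M in the model

print(round(c, 2), round(c_p, 2))   # c' < c indicates partial mediation
```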

Keywords: career success, career insight, mid career MBAs, protean career attitude

Procedia PDF Downloads 360
23603 Studying the Influence of Systematic Pre-Occupancy Data Collection through Post-Occupancy Evaluation: A Shift in the Architectural Design Process

Authors: Noor Abdelhamid, Donovan Nelson, Cara Prosser

Abstract:

The architectural design process could be mapped out as a dialogue between designer and user that is constructed across multiple phases with the overarching goal of aligning design outcomes with user needs. Traditionally, this dialogue is bounded within a preliminary phase of determining factors that will direct the design intent, and a completion phase, of handing off the project to the client. Pre- and post-occupancy evaluations (P/POE’s) could provide an alternative process by extending this dialogue on both ends of the design process. The purpose of this research is to study the influence of systematic pre-occupancy data collection in achieving design goals by conducting post-occupancy evaluations of two case studies. In the context of this study, systematic pre-occupancy data collection is defined as the preliminary documentation of the existing conditions that helps portray stakeholders’ needs. When implemented, pre-occupancy occurs during the early phases of the architectural design process, utilizing the information to shape the design intent. Investigative POE’s are performed on two case studies with distinct early design approaches to understand how the current space is impacting user needs, establish design outcomes, and inform future strategies. The first case study underwent systematic pre-occupancy data collection and synthesis, while the other represents the traditional, uncoordinated practice of informally collecting data during an early design phase. POE’s target the dynamics between the building and its occupants by studying how spaces are serving the needs of the users. Data collection for this study consists of user surveys, audiovisual materials, and observations during regular site visits. Mixed methods of qualitative and quantitative analyses are synthesized to identify patterns in the data. 
The paper concludes by positioning value on both sides of the architectural design process: the integration of systematic pre-occupancy methods in the early phases and the reinforcement of a continued dialogue between building and design team after building completion.

Keywords: architecture, design process, pre-occupancy data, post-occupancy evaluation

Procedia PDF Downloads 164
23602 An Analysis of Oil Price Changes and Other Factors Affecting Iranian Food Basket: A Panel Data Method

Authors: Niloofar Ashktorab, Negar Ashktorab

Abstract:

Oil exports fund nearly half of Iran's government expenditures, and for many years other countries have imposed various sanctions against Iran. Sanctions that primarily target Iran's key energy sector have harmed Iran's economy, although the strategic effects of sanctions might diminish as Iran adjusts to them economically. In this study, we evaluate the impact of the oil price and of sanctions against Iran on food commodity prices by using a panel data method. We find that the food commodity prices, the oil price, and the real exchange rate are stationary. The results show a positive effect of oil price changes, the real exchange rate, and sanctions on food commodity prices.
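A within (fixed-effects) estimator, one common panel data method, can be sketched by demeaning each commodity's series before least squares; the toy numbers below (five commodities, an oil-price trend, and a sanctions dummy) are invented, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(1)
n_items, n_t = 5, 20

# Toy commodity-level panel: log food price driven by log oil price,
# a sanctions dummy switching on mid-sample, and a commodity fixed effect
oil = np.tile(np.linspace(3.5, 4.5, n_t), n_items)
sanc = np.tile((np.arange(n_t) >= 10).astype(float), n_items)
item = np.repeat(np.arange(n_items), n_t)
alpha = rng.normal(size=n_items)[item]               # commodity fixed effects
price = alpha + 0.8 * oil + 0.15 * sanc + rng.normal(scale=0.05, size=n_items * n_t)

def demean(v):
    # Subtract each commodity's own mean: removes the fixed effects
    out = v.astype(float).copy()
    for i in range(n_items):
        out[item == i] -= v[item == i].mean()
    return out

Xw = np.column_stack([demean(oil), demean(sanc)])
yw = demean(price)
beta = np.linalg.lstsq(Xw, yw, rcond=None)[0]
print(beta.round(2))   # close to the true effects 0.8 and 0.15
```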

Keywords: oil price, food basket, sanctions, panel data, Iran

Procedia PDF Downloads 356
23601 A Proposed Framework for Software Redocumentation Using Distributed Data Processing Techniques and Ontology

Authors: Laila Khaled Almawaldi, Hiew Khai Hang, Sugumaran A. l. Nallusamy

Abstract:

Legacy systems are crucial for organizations, but their intricacy and lack of documentation pose challenges for maintenance and enhancement. Redocumentation, the automatic or semi-automatic creation of documentation for software lacking sufficient records, is vital: it enhances system understandability, maintainability, and knowledge transfer. However, existing redocumentation methods need improvement in data processing performance and document generation efficiency, which stems from the need to efficiently handle the extensive and complex code of legacy systems. This paper outlines the challenges in legacy system redocumentation and proposes a method for semi-automatic redocumentation using semantic parallel processing and ontology. Leveraging parallel processing and ontology addresses the current challenges by distributing the workload and creating documentation with logically interconnected data, improving both efficiency and effectiveness.
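One way to picture the proposed distribution of the redocumentation workload: parse each legacy module in a separate worker and merge the per-module facts into a single ontology-like mapping. The modules below are hypothetical, and a thread pool stands in for whatever distributed framework the method would actually use.

```python
import ast
from concurrent.futures import ThreadPoolExecutor

# Hypothetical legacy modules (stand-ins for real source files)
sources = {
    "billing.py": "def invoice(x):\n    pass\ndef refund(x):\n    pass\n",
    "auth.py":    "def login(u, p):\n    pass\n",
    "reports.py": "def monthly():\n    pass\ndef yearly():\n    pass\n",
}

def extract(item):
    # Parse one module and list its defined functions
    name, code = item
    tree = ast.parse(code)
    funcs = [n.name for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)]
    return name, funcs

# Each worker handles one module; results merge into a module -> functions map,
# a crude stand-in for an ontology of system concepts
with ThreadPoolExecutor(max_workers=3) as pool:
    ontology = dict(pool.map(extract, sources.items()))

print(ontology["billing.py"])   # ['invoice', 'refund']
```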

Keywords: legacy systems, redocumentation, big data analysis, parallel processing

Procedia PDF Downloads 46
23600 Armenian Refugees in Early 20th Century Japan: Quantitative Analysis of Their Number Based on Japanese Historical Data, with a Comparison to Foreign Historical Data

Authors: Meline Mesropyan

Abstract:

At the beginning of the 20th century, Japan served as a transit point for Armenian refugees fleeing the 1915 Genocide. However, research on Armenian refugees in Japan is sparse, and the Armenian Diaspora has never taken root in Japan. Consequently, Japan has not been considered a relevant research site for studying Armenian refugees. The primary objective of this study is to shed light on the number of Armenian refugees who passed through Japan between 1915 and 1930. Quantitative analyses will be conducted based on newly uncovered Japanese archival documents. Subsequently, the Japanese data will be compared to American immigration data to estimate the potential number of refugees in Japan during that period. This under-researched area is relevant to both the Armenian Diaspora and refugee studies in Japan. By clarifying the number of refugees, this study aims to enhance understanding of Japan's treatment of refugees and the extent of humanitarian efforts conducted by organizations and individuals in Japan, contributing to the broader field of historical refugee studies.

Keywords: Armenian genocide, Armenian refugees, Japanese statistics, number of refugees

Procedia PDF Downloads 57
23599 Building Green Infrastructure Networks Based on Cadastral Parcels Using Network Analysis

Authors: Gon Park

Abstract:

Seoul, South Korea, established the 2030 Seoul City Master Plan, which contains green-link projects to connect critical green areas within the city. However, the plan does not include detailed analyses of green infrastructure that incorporate land-cover information into its many structural classes. This study maps the green infrastructure networks of Seoul to complement these green plans by identifying and ranking green areas. Hubs and links, the main elements of green infrastructure, were identified by incorporating cadastral data of 967,502 parcels into 135 land use maps using a geographic information system. Network analyses were used to rank the hubs and links of the green infrastructure map by applying a force-directed algorithm, weighted values, and binary relationships, with metrics of density, distance, and centrality. The results indicate that network analyses using cadastral parcel data can serve as a framework to identify and rank hubs, links, and networks for green infrastructure planning under variable scenarios of green areas in cities.
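Ranking hubs by weighted degree (strength), one simple centrality metric of the kind mentioned above, can be sketched as below; the node names and link weights are invented placeholders for the parcel-derived hubs and links.

```python
# Toy green-infrastructure graph: hubs as nodes, weighted links between them
edges = [
    ("park_A", "park_B", 3), ("park_A", "stream", 1),
    ("park_B", "stream", 4), ("park_B", "hill", 1),
    ("stream", "hill", 2),   ("hill",  "park_C", 1),
]

# Strength = sum of weights on all links touching a node
strength = {}
for u, v, w in edges:
    strength[u] = strength.get(u, 0) + w
    strength[v] = strength.get(v, 0) + w

ranking = sorted(strength, key=strength.get, reverse=True)
print(ranking)   # hubs ordered by weighted degree
```

A full analysis would add distance and betweenness-style metrics and a force-directed layout, e.g. via a graph library, on the parcel-scale network.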

Keywords: cadastral data, green Infrastructure, network analysis, parcel data

Procedia PDF Downloads 206
23598 Classification of Land Cover Usage from Satellite Images Using Deep Learning Algorithms

Authors: Shaik Ayesha Fathima, Shaik Noor Jahan, Duvvada Rajeswara Rao

Abstract:

Earth's environment and its evolution can be observed through satellite images in near real-time. Remote sensing data from satellite imagery provide crucial information that can be used for a variety of applications, including image fusion, change detection, land cover classification, agriculture, mining, disaster mitigation, and monitoring climate change. The objective of this project is to propose a method for classifying satellite images according to multiple predefined land cover classes. The proposed approach involves collecting data in image format, pre-processing it, feeding the processed data into the proposed algorithm, and analyzing the obtained result. Some of the algorithms used in satellite imagery classification are U-Net, Random Forest, DeepLabv3, CNN, ANN, ResNet, etc. In this project, we use the DeepLabv3 (atrous convolution) algorithm for land cover classification, with the DeepGlobe land cover classification dataset. DeepLabv3 is a semantic segmentation system that uses atrous convolution to capture multi-scale context, adopting multiple atrous rates in cascade or in parallel to determine the scale of segments.
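The atrous (dilated) convolution at the heart of DeepLabv3 is easy to show in 1D: the kernel taps are spaced `rate` samples apart, so the same three weights cover a wider receptive field. This is a didactic sketch of the operation, not DeepLabv3's 2D implementation.

```python
# Atrous (dilated) convolution in 1D: taps are `rate` samples apart, widening
# the receptive field without adding weights
def atrous_conv1d(signal, kernel, rate):
    k = len(kernel)
    span = (k - 1) * rate        # receptive field minus one
    return [sum(kernel[j] * signal[i + j * rate] for j in range(k))
            for i in range(len(signal) - span)]

signal = [0, 0, 1, 1, 1, 0, 0, 2, 2, 0]
edge = [-1, 0, 1]                # simple difference kernel

print(atrous_conv1d(signal, edge, rate=1))   # → [1, 1, 0, -1, -1, 2, 2, -2]
print(atrous_conv1d(signal, edge, rate=2))   # → [1, 0, -1, 1, 1, 0]
```

Running the same kernel at several rates in parallel, then merging the outputs, is exactly the multi-scale trick DeepLabv3's atrous spatial pyramid applies in 2D.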

Keywords: area calculation, atrous convolution, DeepGlobe land cover classification, DeepLabv3, land cover classification, ResNet-50

Procedia PDF Downloads 140
23597 The Effect of CPU Location in Total Immersion of Microelectronics

Authors: A. Almaneea, N. Kapur, J. L. Summers, H. M. Thompson

Abstract:

Meeting the growth in demand for digital services such as social media, telecommunications, and business and cloud services requires large-scale data centres, which has led to an increase in their end-use energy demand. Generally, over 30% of data centre power is consumed by the necessary cooling overhead, so energy use can be reduced by improving the cooling efficiency. Air and liquid can both be used as cooling media for the data centre. Traditional data centre cooling systems use air; however, liquid is recognised as a promising method that can handle more densely packed data centres. Liquid cooling can be classified into three methods: rack heat exchanger, on-chip heat exchanger, and full immersion of the microelectronics. This study quantifies the improvements in heat transfer, specifically for the case of immersed microelectronics, by varying the CPU and heat sink location. Immersion of the server is achieved by filling the gap between the microelectronics and a water jacket with a dielectric liquid, which convects the heat from the CPU to the water jacket on the opposite side. Heat transfer is governed by two physical mechanisms: natural convection within the fixed enclosure filled with dielectric liquid, and forced convection of the water that is pumped through the water jacket. The model in this study is validated against published numerical and experimental work and shows good agreement with previous results. The results show that the heat transfer performance and Nusselt number (Nu) are improved by 89% by placing the CPU and heat sink at the bottom of the microelectronics enclosure.
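The natural-convection side of the problem is characterised by the Rayleigh number Ra = gβΔT·L³/(να). The sketch below evaluates it with rough, illustrative property values for a generic dielectric coolant (not the paper's fluid or geometry) to show which convection regime such a gap can reach.

```python
# Rayleigh number for buoyancy-driven convection in a liquid-filled gap.
# Property values are rough, illustrative figures, not the study's data.
g     = 9.81      # m/s^2, gravitational acceleration
beta  = 1.3e-3    # 1/K, thermal expansion coefficient of the coolant
dT    = 50.0      # K, CPU surface minus water-jacket wall temperature
L     = 0.05      # m, gap width between electronics and water jacket
nu    = 4.0e-7    # m^2/s, kinematic viscosity
alpha = 4.0e-8    # m^2/s, thermal diffusivity

Ra = g * beta * dT * L**3 / (nu * alpha)
print(f"Ra = {Ra:.2e}")   # above ~1e9, natural convection is turbulent
```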

Keywords: CPU location, data centre cooling, heat sink in enclosures, immersed microelectronics, turbulent natural convection in enclosures

Procedia PDF Downloads 272
23596 A Macroeconomic Analysis of Defense Industry: Comparisons, Trends and Improvements in Brazil and in the World

Authors: J. Fajardo, J. Guerra, E. Gonzales

Abstract:

This paper outlines a study of Brazil's industrial base of defense (IDB), through a bibliographic research method combined with an analysis of macroeconomic data from several publicly available data platforms. It begins with a brief study of Brazilian national industry, including analyses of productivity, income, output, and jobs. Next, the research presents a study of the defense industry in Brazil, introducing the main national companies that operate in the aeronautical, army, and naval branches. After establishing the main features of the Brazilian defense industry, data on the productivity of the defense industries of the leading countries and of the Brazilian industry's main competitors are analyzed, in order to summarize major cases in Brazil within a comparative analysis. Concerning the methodology, bibliographic research and the exploration of historical data series were used to analyze information, identify trends, and make comparisons over time. The research finishes with the main trends for the development of the Brazilian defense industry, comparing the current situation with the points of view of several countries.

Keywords: economics of defence, industry, trends, market

Procedia PDF Downloads 156
23595 Measuring Fluctuating Asymmetry in Human Faces Using High-Density 3D Surface Scans

Authors: O. Ekrami, P. Claes, S. Van Dongen

Abstract:

Fluctuating asymmetry (FA) has been studied for many years as an indicator of developmental stability or ‘genetic quality’ based on the assumption that perfect symmetry is ideally the expected outcome for a bilateral organism. Further studies have also investigated the possible link between FA and attractiveness or levels of masculinity or femininity. These hypotheses have been mostly examined using 2D images, and the structure of interest is usually presented using a limited number of landmarks. Such methods have the downside of simplifying and reducing the dimensionality of the structure, which will in return increase the error of the analysis. In an attempt to reach more conclusive and accurate results, in this study we have used high-resolution 3D scans of human faces and have developed an algorithm to measure and localize FA, taking a spatially-dense approach. A symmetric spatially dense anthropometric mask with paired vertices is non-rigidly mapped on target faces using an Iterative Closest Point (ICP) registration algorithm. A set of 19 manually indicated landmarks were used to examine the precision of our mapping step. The protocol’s accuracy in measurement and localizing FA is assessed using simulated faces with known amounts of asymmetry added to them. The results of validation of our approach show that the algorithm is perfectly capable of locating and measuring FA in 3D simulated faces. With the use of such algorithm, the additional captured information on asymmetry can be used to improve the studies of FA as an indicator of fitness or attractiveness. This algorithm can especially be of great benefit in studies of high number of subjects due to its automated and time-efficient nature. Additionally, taking a spatially dense approach provides us with information about the locality of FA, which is impossible to obtain using conventional methods. 
It also enables us to analyze the asymmetry of a morphological structure in a multivariate manner; this can be achieved by using methods such as Principal Component Analysis (PCA) or Factor Analysis, which can be a step towards understanding the underlying processes of asymmetry. This method can also be used in combination with genome-wide association studies to help unravel the genetic bases of FA. To conclude, we introduced an algorithm to study and analyze asymmetry in human faces, with the possibility of extending the application to other morphological structures, in an automated, accurate, and multivariate framework.
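The spatially dense measurement described above can be sketched in simplified form: once the symmetric mask with paired vertices is mapped onto a face, each vertex is compared with the mirror image of its bilateral counterpart. This is a stand-in for the full non-rigid ICP pipeline, and it assumes the face has already been aligned so that the mid-sagittal plane is x = 0:

```python
import numpy as np

def fluctuating_asymmetry(vertices, pairs):
    """Per-pair asymmetry: distance between each vertex and the mirror
    image (x -> -x) of its bilateral counterpart on the mapped mask."""
    mirrored = vertices.copy()
    mirrored[:, 0] *= -1.0  # reflect across the assumed mid-sagittal plane
    left, right = pairs[:, 0], pairs[:, 1]
    return np.linalg.norm(vertices[left] - mirrored[right], axis=1)
```

A perfectly symmetric configuration yields zeros; residual per-pair distances give the local asymmetry map that can then feed PCA or Factor Analysis.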

Keywords: developmental stability, fluctuating asymmetry, morphometrics, 3D image processing

Procedia PDF Downloads 140
23594 Delineating Subsurface Linear Features and Faults Under Sedimentary Cover in the Bahira Basin Using Integrated Gravity and Magnetic Data

Authors: M. Lghoul, N. El Goumi, M. Guernouche

Abstract:

In order to predict the structural and tectonic framework of the Bahira basin and to build a 3D geological model of the basin, an integrated multidisciplinary study has been conducted using gravity, magnetic, and geological data. The objective of the current study is to delineate the subsurface features, faults, and geological limits of the Bahira basin through analysis of airborne magnetic and gravity data. To achieve this goal, we applied different enhancement techniques to the magnetic and gravity data: power spectral analysis, reduction to pole (RTP), upward continuation, analytical signal, tilt derivative, total horizontal derivative, 3D Euler deconvolution, and source parameter imaging. The major lineament/fault trends are NE–SW, NW–SE, ENE–WSW, and WNW–ESE. The 3D Euler deconvolution analysis highlighted a number of fault trends, mainly in the ENE–WSW and WNW–ESE directions. The depth to the top of the basement sources in the study area ranges from 200 m, in the southern and northern parts of the Bahira basin, to 5000 m in the eastern part of the basin.
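Of the enhancement filters listed, the tilt derivative admits a compact sketch: it is the arctangent of the vertical derivative over the total horizontal derivative, with the vertical derivative obtained in the wavenumber domain. The following is a simplified illustration on a regular grid, not the study's actual processing chain:

```python
import numpy as np

def tilt_derivative(field, dx, dy):
    """Tilt angle = arctan(dT/dz / sqrt((dT/dx)^2 + (dT/dy)^2)).
    Horizontal derivatives use finite differences; the vertical
    derivative is computed spectrally as F^-1[ |k| * F[T] ]."""
    gy, gx = np.gradient(field, dy, dx)          # axis-0 spacing first
    ky = np.fft.fftfreq(field.shape[0], dy) * 2 * np.pi
    kx = np.fft.fftfreq(field.shape[1], dx) * 2 * np.pi
    k = np.sqrt(kx[None, :] ** 2 + ky[:, None] ** 2)
    gz = np.real(np.fft.ifft2(np.fft.fft2(field) * k))
    thdr = np.hypot(gx, gy)                      # total horizontal derivative
    return np.arctan2(gz, thdr)
```

Because the output is bounded in [-π/2, π/2], the tilt derivative equalises the response of shallow and deep sources, which is why it is popular for tracing lineaments under sedimentary cover.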

Keywords: magnetic, gravity, structural trend, depth to basement

Procedia PDF Downloads 132
23593 Copyright Clearance for Artificial Intelligence Training Data: Challenges and Solutions

Authors: Erva Akin

Abstract:

The use of copyrighted material for machine learning purposes is a challenging issue in the field of artificial intelligence (AI). While machine learning algorithms require large amounts of data to train and to improve their accuracy and creativity, the use of copyrighted material without permission from the authors may infringe their intellectual property rights. In order to overcome the legal hurdles that copyright poses to the sharing, access, and re-use of data, the use of copyrighted material for machine learning purposes may be considered permissible under certain circumstances. For example, if the copyright holder has given permission to use the data through a licensing agreement, then use for machine learning purposes may be lawful. It is also argued that copying for non-expressive purposes that do not involve conveying expressive elements to the public, such as automated data extraction, should not be seen as infringing. The focus of such ‘copy-reliant technologies’ is on understanding language rules, styles, and syntax; no creative ideas are being used. However, the non-expressive use defense sits within the framework of the fair use doctrine, which allows the use of copyrighted material for research or educational purposes. Questions arise because the fair use doctrine is not available in EU law; instead, the InfoSoc Directive provides a rigid system of exclusive rights with a list of exceptions and limitations. One could only argue that non-expressive uses of copyrighted material for machine learning purposes do not constitute a ‘reproduction’ in the first place. Nevertheless, the use of machine learning with copyrighted material remains difficult because EU copyright law applies to the mere use of the works. Two solutions can be proposed to address the problem of copyright clearance for AI training data.
The first is to introduce a broad exception for text and data mining, either mandatory or limited to commercial and scientific purposes. The second is for copyright laws to permit the reproduction of works for non-expressive purposes, which opens the door to discussions regarding the transposition of the fair use principle from US into EU law. Both solutions aim to give AI developers more space to operate and encourage greater freedom, which could lead to more rapid innovation in the field. The Data Governance Act presents a significant opportunity to advance these debates. Finally, issues concerning the balance between general public interests and legitimate private interests in machine learning training data must be addressed. In my opinion, it is crucial that robot-created output should fall into the public domain. Machines depend on human creativity, innovation, and expression. To encourage technological advancement and innovation, freedom of expression and the freedom to conduct business must be prioritised.

Keywords: artificial intelligence, copyright, data governance, machine learning

Procedia PDF Downloads 83
23592 Biosorption of Phenol onto Water Hyacinth Activated Carbon: Kinetics and Isotherm Study

Authors: Manoj Kumar Mahapatra, Arvind Kumar

Abstract:

Batch adsorption experiments were carried out for the removal of phenol from aqueous solution using water hyacinth activated carbon (WHAC) as an adsorbent. The sorption kinetics were analysed using the pseudo-first-order and pseudo-second-order models, and the sorption data were observed to fit the pseudo-second-order model very well over the entire sorption time. The equilibrium data were analysed using the Langmuir and Freundlich isotherm models and fitted well to the Freundlich model, with a maximum biosorption capacity of 31.45 mg/g estimated using the Langmuir model. The adsorption intensity of 3.7975 represents a favourable adsorption condition.
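The pseudo-second-order fit reported above is commonly obtained from the linearised form t/qt = 1/(k₂qe²) + t/qe, where a straight-line fit of t/qt against t yields qe from the slope and k₂ from the intercept. A minimal sketch with synthetic data (the rate constant k₂ below is illustrative, not a value from the study):

```python
import numpy as np

def pseudo_second_order_fit(t, qt):
    """Fit the linearised model t/qt = 1/(k2*qe^2) + t/qe.
    Returns equilibrium capacity qe (mg/g) and rate constant k2."""
    slope, intercept = np.polyfit(t, t / qt, 1)
    qe = 1.0 / slope
    k2 = 1.0 / (intercept * qe ** 2)
    return qe, k2

# Synthetic kinetics generated from qe = 31.45 mg/g (the Langmuir capacity
# quoted in the abstract) and an illustrative k2 = 0.01 g/(mg min)
t = np.arange(1.0, 121.0)
qt = (31.45 ** 2 * 0.01 * t) / (1.0 + 31.45 * 0.01 * t)
qe, k2 = pseudo_second_order_fit(t, qt)
```

On real data, the quality of this linear fit (R² close to 1 across the whole contact time) is what supports the claim that sorption follows pseudo-second-order kinetics.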

Keywords: adsorption, isotherm, kinetics, phenol

Procedia PDF Downloads 446
23591 GC-MS-Based Untargeted Metabolomics to Study the Metabolism of Pectobacterium Strains

Authors: Magdalena Smoktunowicz, Renata Wawrzyniak, Malgorzata Waleron, Krzysztof Waleron

Abstract:

Pectobacterium spp. were previously classified in the genus Erwinia, founded in 1917 to unite all Gram-negative, fermentative, non-sporulating, peritrichously flagellated plant pathogenic bacteria known at that time. After the work of Waldee (1945), and following the Approved Lists of Bacterial Names and the bacteriology manuals of 1980, they were described under species names of either Erwinia or Pectobacterium. The genus Pectobacterium was formally described in 1998 on the basis of 265 Pectobacterium strains. Currently, the genus comprises 21 species, including Pectobacterium betavasculorum, described in 2003, which causes soft rot on sugar beet tubers. Biochemical experiments have shown that these bacteria are Gram-negative, catalase-positive, oxidase-negative, and facultatively anaerobic; they utilise gelatin and cause symptoms of soft rot on potato and sugar beet tubers. The very fact of growth on sugar beet may indicate a metabolism characteristic of this species only. Metabolomics, broadly defined as the biology of metabolic systems, allows comprehensive measurements of metabolites. Metabolomics and genomics are complementary tools for the identification of metabolites and their reactions, and thus for the reconstruction of metabolic networks. The aim of this study was to apply GC-MS-based untargeted metabolomics to study the metabolism of P. betavasculorum under different growing conditions. The metabolomic profiles of the biomass and of the growth media were determined. The following sample preparation protocol was used: 900 µl of a methanol:chloroform:water mixture (10:3:1, v:v) was added to 900 µl of biomass from the bottom of the tube, and likewise to 900 µl of nutrient medium separated from the bacterial biomass. After centrifugation (13,000 x g, 15 min, 4°C), 300 µL of the obtained supernatants were concentrated in a rotary vacuum and evaporated to dryness.
Afterwards, a two-step derivatization procedure was performed before the GC-MS analyses. The obtained results were subjected to statistical analysis using both uni- and multivariate tests and were evaluated against the KEGG database to assess which metabolic pathways are activated, and which genes are responsible for them, during the metabolism of the substrates present in the growth environment. The observed metabolic changes, combined with biochemical and physiological tests, may enable pathway discovery, regulatory inference, and an understanding of the homeostatic abilities of P. betavasculorum.

Keywords: GC-MS chromatography, metabolomics, metabolism, Pectobacterium strains, Pectobacterium betavasculorum

Procedia PDF Downloads 79
23590 A West Coast Estuarine Case Study: A Predictive Approach to Monitor Estuarine Eutrophication

Authors: Vedant Janapaty

Abstract:

Estuaries are wetlands where fresh water from streams mixes with salt water from the sea. Also known as the “kidneys of our planet”, they are extremely productive environments that filter pollutants, absorb floods from sea level rise, and shelter a unique ecosystem. However, eutrophication and loss of native species are ailing our wetlands. There is a lack of uniform data collection and sparse research on correlations between satellite data and in situ measurements. Remote sensing (RS) has shown great promise in environmental monitoring. This project attempts to use satellite data and correlate derived metrics with in situ observations collected at five estuaries. Satellite images were processed in Python to calculate seven spectral indices (SIs), and average SI values were calculated per month over 23 years. Publicly available data from six sites at ELK were used to obtain ten parameters (OPs), whose average values were likewise calculated per month over 23 years. Linear correlations between the seven SIs and ten OPs were computed and found to be inadequate (correlations of 1% to 64%). A Fourier transform analysis was then performed on the seven SIs; dominant frequencies and amplitudes were extracted, and a machine learning (ML) model was trained, validated, and tested for the ten OPs. Better correlations were observed between SIs and OPs with certain time delays (0-, 3-, 4-, and 6-month delays), and the ML analysis was repeated. The OPs saw improved R² values in the range of 0.2 to 0.93. This approach can be used to obtain periodic analyses of overall wetland health from satellite indices. It demonstrates that remote sensing can be used to develop correlations with critical in situ parameters that measure eutrophication, and can be used by practitioners to easily monitor wetland health.
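The Fourier step described above — extracting a dominant frequency and amplitude from each monthly index series — can be sketched for a single-peak case (the seasonal signal below is illustrative, not data from the study):

```python
import numpy as np

def dominant_frequency(series, sample_spacing=1.0):
    """Return (frequency, amplitude) of the strongest non-DC component."""
    centred = series - np.mean(series)
    spectrum = np.fft.rfft(centred)
    freqs = np.fft.rfftfreq(len(series), d=sample_spacing)
    amps = 2.0 * np.abs(spectrum) / len(series)  # one-sided amplitudes
    i = np.argmax(amps[1:]) + 1                  # skip the DC bin
    return freqs[i], amps[i]

# 23 years of monthly values with an annual cycle of amplitude 3
months = np.arange(276)
index = 3.0 * np.sin(2 * np.pi * months / 12) + 0.5
freq, amp = dominant_frequency(index)            # frequency in cycles/month
```

These (frequency, amplitude) pairs per index are the compact features that replaced the raw series as ML inputs, which is what improved the correlations with the in situ parameters.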

Keywords: estuary, remote sensing, machine learning, Fourier transform

Procedia PDF Downloads 104
23589 Agricultural Water Consumption Estimation in the Helmand Basin

Authors: Mahdi Akbari, Ali Torabi Haghighi

Abstract:

The Hamun Lakes, located in the Helmand Basin and consisting of four water bodies, were the largest (>8500 km2) freshwater bodies on the Iranian plateau but have almost entirely desiccated over the last 20 years. The desiccation of the lakes has caused dust storms in the region, with huge economic and health consequences for the inhabitants. The flow of the Hirmand (or Helmand) River, the most important feeding river, has decreased from 4 to 1.9 km3 downstream due to anthropogenic activities. In this basin, water is mainly consumed for farming. Due to the lack of in-situ data in the basin, this research utilizes remote-sensing data to show how croplands, and consequently the water consumed by the agricultural sector, have changed. Based on Landsat NDVI, we suggest using a threshold of around 0.35-0.4 to detect croplands in the basin. The croplands of this basin have doubled since 1990, especially downstream of the Kajaki Dam (the biggest dam in the basin). Using PML V2 Actual Evapotranspiration (AET) data and considering irrigation efficiency (≈0.3), we estimated the water consumed (CW) for farming and found that CW increased from 2.5 to over 7.5 km3 between 2002 and 2017 in this basin. In addition, the annual average Potential Evapotranspiration (PET) of the basin has shown a negative trend in recent years, although the AET over croplands has an increasing trend. In this research, using remote sensing data, we compensated for the lack of in-situ data in the studied area and highlighted the anthropogenic activities upstream that led to the desiccation of the lakes downstream.
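The NDVI thresholding suggested above can be sketched directly; pixel size and values below are illustrative (a Landsat pixel is 30 m x 30 m ≈ 0.0009 km²):

```python
import numpy as np

def cropland_area_km2(ndvi, pixel_area_km2=0.0009, threshold=0.35):
    """Classify pixels whose NDVI reaches the cropland threshold and
    convert the count to an area; NaN (masked) pixels compare False
    and therefore never qualify."""
    mask = ndvi >= threshold
    return np.count_nonzero(mask) * pixel_area_km2

# A toy 2x2 scene: two cropland pixels, one bare soil, one masked pixel
scene = np.array([[0.50, 0.20],
                  [0.41, np.nan]])
area = cropland_area_km2(scene)
```

Applying the same mask year by year is what yields the cropland-doubling trend, and summing AET over the masked pixels (divided by irrigation efficiency) gives the consumed-water estimate.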

Keywords: Afghanistan-Iran transboundary Basin, Iran-Afghanistan water treaty, water use, lake desiccation

Procedia PDF Downloads 131
23588 Burnout among Healthcare Workers in Poland during the COVID-19 Pandemic

Authors: Zbigniew Izdebski, Alicja Kozakiewicz, Maciej Białorudzki, Joanna Mazur

Abstract:

Work is an extremely important part of everyone's life and affects functioning in daily life. Healthcare workers (HCWs) suffer from negative experiences in and out of the workplace, such as harassment, abuse, long working hours, mental suffering, exhaustion, and professional burnout. Staff burnout is detrimental not only to individual employees but also to work with patients and to the healthcare institution as a whole. The purpose of this study was to explore the level of professional burnout among HCWs working in medical institutions during the COVID-19 pandemic in Poland. The extent to which selected sociodemographic factors and perceived stress increase the risk of professional burnout was assessed. In addition, the frequency of use of professional psychological help and of less formal support groups by HCWs is presented in relation to the level of professional burnout. The survey was conducted as part of a larger project on the humanization of medicine and clinical communication from February to April 2022. The study used a self-administered online survey (CAWI) technique and a PAPI (pen-and-paper interview) technique. The BAT-12 scale was used to measure burnout, the PSS-4 scale was used to measure stress, and questions formulated by the research team were also used. For the purpose of the analysis, the sample was limited to 2196 HCWs who worked with patients on a daily basis during the COVID-19 pandemic. Frequency distributions were analyzed, and multivariate logistic regression was performed. The mean job burnout scores, as measured by the BAT-12 scale, ranged among the professional groups from 2.15 (0.69) to 2.30 (0.69) and remained highest for the nurses' group. The groups differed significantly in levels of burnout (chi-sq=17.719; d.f.=8; p<0.023). In the final model, raised stress was the factor that most increased the risk of burnout (OR=3.88; 95% CI: 3.13-3.81; p<0.001).
Other significant predictors of burnout included a traumatic work-related experience (OR=1.91, p<0.001), mobbing (OR=1.83, p<0.001), and a higher workload than before the pandemic (OR=1.41, p=0.002). Only 7% of respondents decided to use any form of psychological support during the pandemic. HCWs experienced challenges in dealing with an unpredictable pandemic. Limited preparedness can lead to physical and psychological problems such as high stress levels, anxiety, fear, helplessness, hopelessness, anger, and stigma. The workload can lead to professional burnout, as well as threaten patient safety.
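The odds ratios quoted above come from exponentiating logistic-regression coefficients; a minimal sketch of that conversion (the coefficient and standard error below are illustrative, not the study's estimates):

```python
import math

def odds_ratio(beta, se, z=1.96):
    """Turn a logit coefficient and its standard error into an odds
    ratio with an approximate 95% Wald confidence interval."""
    return math.exp(beta), (math.exp(beta - z * se), math.exp(beta + z * se))

# Illustrative: a coefficient corresponding to OR = 3.88, se = 0.10
or_, (lo, hi) = odds_ratio(math.log(3.88), 0.10)
```

Because the interval is computed on the log scale and then exponentiated, it is asymmetric around the odds ratio, which is the usual presentation in epidemiological tables like the one summarised here.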

Keywords: burnout, work, healthcare, healthcare worker, stress

Procedia PDF Downloads 80
23587 Data-Driven Strategies for Enhancing Food Security in Vulnerable Regions: A Multi-Dimensional Analysis of Crop Yield Predictions, Supply Chain Optimization, and Food Distribution Networks

Authors: Sulemana Ibrahim

Abstract:

Food security remains a paramount global challenge, with vulnerable regions grappling with issues of hunger and malnutrition. This study embarks on a comprehensive exploration of data-driven strategies aimed at ameliorating food security in such regions. Our research employs a multifaceted approach, integrating data analytics to predict crop yields, optimizing supply chains, and enhancing food distribution networks. The study unfolds as a multi-dimensional analysis, commencing with the development of robust machine learning models harnessing remote sensing data, historical crop yield records, and meteorological data to foresee crop yields. These predictive models, underpinned by convolutional and recurrent neural networks, furnish critical insights into anticipated harvests, empowering proactive measures to confront food insecurity. Subsequently, the research scrutinizes supply chain optimization to address food security challenges, capitalizing on linear programming and network optimization techniques. These strategies intend to mitigate loss and wastage while streamlining the distribution of agricultural produce from field to fork. In conjunction, the study investigates food distribution networks with a particular focus on network efficiency, accessibility, and equitable food resource allocation. Network analysis tools, complemented by data-driven simulation methodologies, unveil opportunities for augmenting the efficacy of these critical lifelines. This study also considers the ethical implications and privacy concerns associated with the extensive use of data in the realm of food security. The proposed methodology outlines guidelines for responsible data acquisition, storage, and usage. The ultimate aspiration of this research is to forge a nexus between data science and food security policy, bestowing actionable insights to mitigate the ordeal of food insecurity. 
The holistic approach converging data-driven crop yield forecasts, optimized supply chains, and improved distribution networks aspires to revitalize food security in the most vulnerable regions, elevating the quality of life for millions worldwide.

Keywords: data-driven strategies, crop yield prediction, supply chain optimization, food distribution networks

Procedia PDF Downloads 62
23586 Forced-Choice Measurement Models of Behavioural, Social, and Emotional Skills: Theory, Research, and Development

Authors: Richard Roberts, Anna Kravtcova

Abstract:

Introduction: The realisation that personality can change over the course of a lifetime has led to a new companion model to the Big Five: the behavioural, emotional, and social skills approach (BESSA). BESSA hypothesizes that this set of skills represents how an individual thinks, feels, and behaves when the situation calls for it, as opposed to traits, which represent how someone tends to think, feel, and behave averaged across situations. The five major skill domains share parallels with the Big Five Factor (BFF) model: creativity and innovation (openness), self-management (conscientiousness), social engagement (extraversion), cooperation (agreeableness), and emotional resilience (emotional stability) skills. We point to noteworthy limitations in the current operationalisation of BESSA skills (i.e., via Likert-type items) and offer a different measurement approach: forced choice. Method: In this forced-choice paradigm, individuals were given three skill items (e.g., managing my time) and asked to select the one they believed they were “best at” and the one they were “worst at”. Thurstonian IRT models allow these responses to be placed on a normative scale. Two multivariate studies (N = 1178) were conducted with a 22-item forced-choice version of the BESSA, a published measure of the BFF, and various criteria. Findings: Confirmatory factor analysis of the forced-choice assessment showed acceptable model fit (RMSEA<0.06), while reliability estimates were reasonable (around 0.70 for each construct). Convergent validity evidence was as predicted (correlations between 0.40 and 0.60 for corresponding BFF and BESSA constructs). Notable was the extent to which the forced-choice BESSA assessment improved upon test-criterion relationships over and above the BFF.
For example, typical regression models find BFF personality accounting for 25% of the variance in life satisfaction scores; both studies showed incremental gains over the BFF exceeding 6% (i.e., BFF and BESSA together accounted for over 31% of the variance in both studies). Discussion: Forced-choice measurement models offer up the promise of creating equated test forms that may unequivocally measure skill gains and are less prone to fakability and reference bias effects. Implications for practitioners are discussed, especially those interested in selection, succession planning, and training and development. We also discuss how the forced choice method can be applied to other constructs like emotional immunity, cross-cultural competence, and self-estimates of cognitive ability.

Keywords: Big Five, forced-choice method, BFF, methods of measurement

Procedia PDF Downloads 94
23585 Automatic Thresholding for Data Gap Detection for a Set of Sensors in Instrumented Buildings

Authors: Houda Najeh, Stéphane Ploix, Mahendra Pratap Singh, Karim Chabir, Mohamed Naceur Abdelkrim

Abstract:

Building systems are highly vulnerable to different kinds of faults and failures. In fact, various faults, failures, and human behaviors can affect building performance. This paper tackles the detection of unreliable sensors in buildings. Different literature surveys on diagnosis techniques for sensor grids in buildings have been published, but all of them treat only bias and outliers; the occurrence of data gaps has not yet received adequate attention in academia. The proposed methodology comprises automatic thresholding for data gap detection for a set of heterogeneous sensors in instrumented buildings. Sensor measurements are usually treated as regular time series; in reality, however, sensor values are not uniformly sampled. The issue to solve is therefore: beyond which delay should each sensor be considered faulty? The use of time series is required for the detection of abnormal delays. The efficiency of the method is evaluated on measurements obtained from a real platform: an office at the Grenoble Institute of Technology equipped with 30 sensors.
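The core idea — flagging a sensor's abnormal delays against a threshold learned from its own sampling history — can be sketched as follows. The threshold rule here (median delay plus a multiple of the median absolute deviation) is one plausible automatic choice, not necessarily the paper's:

```python
import numpy as np

def detect_gaps(timestamps, k=3.0):
    """Flag inter-sample delays exceeding an automatic threshold
    (median delay + k * median absolute deviation of the delays).
    Returns the (start, end) times of detected gaps and the threshold."""
    t = np.sort(np.asarray(timestamps, dtype=float))
    delays = np.diff(t)
    med = np.median(delays)
    mad = np.median(np.abs(delays - med))
    threshold = med + k * mad
    gap_idx = np.nonzero(delays > threshold)[0]
    return [(t[i], t[i + 1]) for i in gap_idx], threshold
```

Running this per sensor adapts the threshold to each sensor's own sampling rate, which is what makes it usable across the heterogeneous sensor set described above.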

Keywords: building system, time series, diagnosis, outliers, delay, data gap

Procedia PDF Downloads 245