Search results for: parallel algorithms
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3037

787 Digital Joint Equivalent Channel Hybrid Precoding for Millimeter-Wave Massive Multiple Input Multiple Output Systems

Authors: Linyu Wang, Mingjun Zhu, Jianhong Xiang, Hanyu Jiang

Abstract:

To address the low spectral efficiency of hybrid precoding (HP) in current millimeter-wave (mmWave) massive multiple-input multiple-output (MIMO) systems, this paper proposes a digital joint equivalent channel hybrid precoding algorithm based on iterative updates of the digital encoding matrix. First, the objective function is expanded to obtain a relational equation, and a pseudo-inverse iterative function for the analog encoder is derived using the pseudo-inverse method; this avoids the large increase in computation caused by the rank deficiency of the digital encoding matrix and reduces the overall complexity of hybrid precoding. Second, the analog encoding matrix and the mmWave sparse channel matrix are combined into an equivalent channel, which is then factorized by Singular Value Decomposition (SVD) to obtain the digital encoding matrix, and the derived pseudo-inverse iterative function is used to regenerate the analog encoding matrix iteratively. Simulation results show that the proposed algorithm improves the system spectral efficiency by 10-20% compared with other algorithms and also improves stability.
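
The SVD-over-the-equivalent-channel step can be pictured with a short sketch. The following NumPy snippet is a minimal illustration under assumed array dimensions, not the authors' implementation: a hypothetical constant-modulus analog precoder F_RF is combined with the channel H into an equivalent channel, the digital precoder F_BB is taken from its leading right singular vectors, and a pseudo-inverse least-squares step of the kind described above refits the analog precoder.

```python
import numpy as np

# Assumed toy dimensions: Nt transmit antennas, Nr receive antennas, NRF RF chains, Ns streams
Nt, Nr, NRF, Ns = 64, 16, 4, 2
rng = np.random.default_rng(0)

H = (rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt))) / np.sqrt(2)  # channel
F_RF = np.exp(1j * 2 * np.pi * rng.random((Nt, NRF)))        # constant-modulus analog precoder

# Equivalent channel seen by the digital precoder, and its SVD
H_eq = H @ F_RF
U, S, Vh = np.linalg.svd(H_eq)
F_BB = Vh.conj().T[:, :Ns]                                   # digital precoder from right singular vectors
F_BB *= np.sqrt(Ns) / np.linalg.norm(F_RF @ F_BB, 'fro')     # total transmit power constraint

# Pseudo-inverse step: refit the analog precoder to the unconstrained optimal precoder
F_opt = np.linalg.svd(H)[2].conj().T[:, :Ns]
F_RF = np.exp(1j * np.angle(F_opt @ np.linalg.pinv(F_BB)))   # project back to constant modulus
print(F_RF.shape, F_BB.shape)
```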

Keywords: mmWave, massive MIMO, hybrid precoding, singular value decomposition, equivalent channel

Procedia PDF Downloads 73
786 Defect Classification of Hydrogen Fuel Pressure Vessels Using Deep Learning

Authors: Dongju Kim, Youngjoo Suh, Hyojin Kim, Gyeongyeong Kim

Abstract:

Acoustic Emission Testing (AET) is widely used to test the structural integrity of operational hydrogen storage containers, and clustering algorithms are frequently used in pattern recognition methods to interpret AET results. However, the interpretation of AET results can vary from user to user, as the tuning of the relevant parameters relies on the user's experience and knowledge of AET. Therefore, it is necessary to use a deep learning model that identifies patterns in acoustic emission (AE) signal data and classifies defects instead. In this paper, a deep learning-based model for classifying the types of defects in hydrogen storage tanks from AE sensor waveforms is proposed. As hydrogen storage tanks are commonly constructed from carbon fiber reinforced polymer composite (CFRP), a defect classification dataset is collected through a tensile test on a CFRP specimen with an AE sensor attached. The classification model, which uses a one-dimensional convolutional neural network (1-D CNN) with synthetic minority oversampling technique (SMOTE) data augmentation, achieved 91.09% accuracy for each defect type. It is expected that the deep learning classification model in this paper, used with AET, will help in evaluating the operational safety of hydrogen storage containers.
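
As a rough illustration of the pipeline described above, the sketch below balances the defect classes with SMOTE and trains a small 1-D CNN on AE waveform windows. The window length, class count, and layer sizes are assumptions for illustration and are not the architecture used in the study.

```python
import numpy as np
from imblearn.over_sampling import SMOTE
from tensorflow import keras

# Assumed data: 1,000 AE waveform windows of 1,024 samples, 4 imbalanced defect classes
X = np.random.randn(1000, 1024).astype("float32")
y = np.random.choice(4, size=1000, p=[0.7, 0.15, 0.1, 0.05])

# SMOTE works on flat feature vectors, so oversample before reshaping for the CNN
X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)
X_res = X_res[..., np.newaxis]                     # (samples, timesteps, channels)

model = keras.Sequential([
    keras.layers.Conv1D(16, 7, activation="relu", input_shape=(1024, 1)),
    keras.layers.MaxPooling1D(4),
    keras.layers.Conv1D(32, 5, activation="relu"),
    keras.layers.GlobalAveragePooling1D(),
    keras.layers.Dense(4, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(X_res, y_res, epochs=5, batch_size=32, validation_split=0.2)
```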

Keywords: acoustic emission testing, carbon fiber reinforced polymer composite, one-dimensional convolutional neural network, SMOTE data augmentation

Procedia PDF Downloads 69
785 Reconstructability Analysis for Landslide Prediction

Authors: David Percy

Abstract:

Landslides are a geologic phenomenon that affects a large number of inhabited places and are constantly being monitored and studied for the prediction of future occurrences. Reconstructability analysis (RA) is a methodology for extracting informative models from large volumes of data; it works exclusively with discrete data. While RA has been used extensively in medical applications and social science, we are introducing it to the spatial sciences through applications like landslide prediction. Because RA works only with discrete data, such as soil classification or bedrock type, continuous data, such as porosity, must be binned for inclusion in the model. RA constructs models of the data that pick out, from each layer, the most informative independent variables (IVs) for predicting the dependent variable (DV), landslide occurrence. Each layer included in the model retains its classification data as the primary encoding of the data. Unlike other machine learning algorithms that force the data into one-hot-encoding schemes, RA works directly with the data as it is encoded, with the exception of continuous data, which must be binned. The usual physical and derived layers are included in the model, and testing our results against other published methodologies, such as neural networks, yields similar accuracy but with the advantage of a completely transparent model. The result of an RA session with a data set is a report on every combination of variables and the associated probability of a landslide event, so that every informative state combination can be examined.
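
The binning and state-combination reporting can be pictured with a small pandas sketch. The layer names, bin count, and data below are hypothetical; the point is only that continuous layers are discretized and that landslide probability is then reported for every combination of discrete states.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 5000

# Hypothetical raster cells: one discrete layer, one continuous layer, and the DV
df = pd.DataFrame({
    "bedrock": rng.choice(["basalt", "sandstone", "shale"], size=n),
    "porosity": rng.uniform(0.05, 0.45, size=n),            # continuous -> must be binned
    "landslide": rng.integers(0, 2, size=n),                 # dependent variable
})

# Bin the continuous layer into discrete states, as RA requires
df["porosity_bin"] = pd.cut(df["porosity"], bins=3, labels=["low", "mid", "high"])

# Probability of a landslide event for every combination of discrete states
report = (df.groupby(["bedrock", "porosity_bin"], observed=True)["landslide"]
            .agg(count="size", p_landslide="mean")
            .reset_index())
print(report)
```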

Keywords: reconstructability analysis, machine learning, landslides, raster analysis

Procedia PDF Downloads 45
784 Glaucoma Detection in Retinal Tomography Using the Vision Transformer

Authors: Sushish Baral, Pratibha Joshi, Yaman Maharjan

Abstract:

Glaucoma is a chronic eye condition that causes irreversible vision loss. Because it can be asymptomatic, early detection and treatment are critical to preventing vision loss. Multiple deep learning algorithms are used for the identification of glaucoma. Transformer-based architectures, which use the self-attention mechanism to encode long-range dependencies and acquire highly expressive representations, have recently become popular. Convolutional architectures, on the other hand, lack knowledge of long-range dependencies in the image due to their intrinsic inductive biases. These observations motivate this work to investigate the viability of adopting transformer-based network designs for glaucoma detection. Developing a viable algorithm to assess the severity of glaucoma from retinal fundus images of the optic nerve head requires a large number of well-curated images. Initially, the dataset is enlarged by augmenting the ocular images. The ocular images are then pre-processed to make them ready for further processing. The system is trained on the pre-processed images and classifies input images as normal or glaucoma based on the features learned during training. The Vision Transformer (ViT) architecture is well suited to this task, as its self-attention mechanism supports structural modeling of the image. Extensive experiments are run on the common dataset, and the results are thoroughly validated and visualized.
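
A minimal sketch of a ViT-based binary classifier of the kind discussed above, using the timm library; the model variant, input size, and two-class head are illustrative assumptions rather than the configuration used in this work.

```python
import timm
import torch
from torch import nn

# Assumed: 224x224 RGB fundus images, two output classes (normal vs. glaucoma)
model = timm.create_model("vit_base_patch16_224", pretrained=False, num_classes=2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

def train_step(images, labels):
    """One optimization step on a batch of pre-processed fundus images."""
    model.train()
    optimizer.zero_grad()
    logits = model(images)            # (batch, 2)
    loss = criterion(logits, labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Random tensors stand in for a batch of pre-processed, augmented images
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))
print(train_step(images, labels))
```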

Keywords: glaucoma, vision transformer, convolutional architectures, retinal fundus images, self-attention, deep learning

Procedia PDF Downloads 170
783 A Methodological Approach to the Betterment of the Retail Store's Interior Design: The Example of Dereboyu Street, Nicosia

Authors: Nazanin Reza Nejad, Kamil Guley

Abstract:

Shopping is one of the most entertaining activities of daily life. In parallel to this, a successful store setting impresses customers and makes the space more appealing to its users. The design of the atmosphere is the language of the interior space, and this design directly affects users' emotions and perceptions. One of the goals of interior design is to increase the quality of the designed space. A well-designed venue satisfies the user and ensures happiness and safety, turning customers into frequent users of the store. Spaces without the right design negatively influence the user, so accurate interior design of stores becomes crucial. This study aims to act as a guideline for the betterment of the interior design of a newly designed or already existing clothing store located on the shopping streets of a city. In light of the relevant literature, the most important points in store interior design are the design and ambiance factors and how these factors are used in the interior space of the stores. Within the scope of this study, 27 clothing stores located on Dereboyu, the largest shopping street in Nicosia, the capital of North Cyprus, were examined. The examined stores were grouped into brand stores and non-brand stores selling products from different production sites. Observations of the interiors of the selected stores were analyzed through qualitative and quantitative research methods. The arrangements of the sub-functions in the stores were analyzed through various reading methods over the plan schemes and recorded images. The sub-functions of all examined stores were compared against the ambiance and design factors in the literature, and the results were interpreted accordingly. At the end of the study, the differences between stores that belong to a brand with an established identity and stores that have not yet established an identity were identified and compared. The results of the comparisons were used to offer implications for the betterment of the interior design of a future or already existing store on the street. The study thus serves as a guideline for people interested in store interior design.

Keywords: atmosphere, ambiance factors, clothing store, identity, interior design

Procedia PDF Downloads 181
782 PaSA: A Dataset for Patent Sentiment Analysis to Highlight Patent Paragraphs

Authors: Renukswamy Chikkamath, Vishvapalsinhji Ramsinh Parmar, Christoph Hewel, Markus Endres

Abstract:

Given a patent document, identifying distinct semantic annotations is an interesting research problem. Text annotation helps patent practitioners, such as examiners and patent attorneys, to quickly identify the key arguments of an invention, in turn providing a timely marking of the patent text. In manual patent analysis, marking paragraphs to flag their semantic role is common practice to attain better readability. This semantic annotation process is laborious and time-consuming. To alleviate this problem, we propose a dataset for training machine learning algorithms to automate the highlighting process. The contributions of this work are: i) we developed a multi-class dataset of 150k samples by traversing USPTO patents over a decade, ii) we articulated statistics and distributions of the data using exploratory data analysis, iii) baseline machine learning models are developed that utilize the dataset to address the patent paragraph highlighting task, and iv) a future path is outlined for extending this work with deep learning and domain-specific pre-trained language models to develop a highlighting tool. This work assists patent practitioners in highlighting semantic information automatically and aids in creating sustainable and efficient patent analysis using machine learning.
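
A baseline of the kind mentioned in point (iii) could look like the scikit-learn sketch below, which classifies patent paragraphs into semantic categories using TF-IDF features and logistic regression. The example paragraphs and label names are invented for illustration; the dataset itself is the contribution of the paper and is not reproduced here.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Hypothetical training samples: (paragraph text, semantic label)
paragraphs = [
    "The invention relates to a device for wireless power transfer.",
    "In prior art systems, coupling efficiency degrades with distance.",
    "According to claim 1, the resonator comprises a tunable capacitor.",
]
labels = ["summary", "prior_art", "claim_support"]

clf = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2), min_df=1)),
    ("logreg", LogisticRegression(max_iter=1000)),
])
clf.fit(paragraphs, labels)

# Predict the semantic category of a new paragraph
print(clf.predict(["Known solutions suffer from high switching losses."]))
```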

Keywords: machine learning, patents, patent sentiment analysis, patent information retrieval

Procedia PDF Downloads 67
781 Test Suite Optimization Using an Effective Meta-Heuristic BAT Algorithm

Authors: Anuradha Chug, Sunali Gandhi

Abstract:

Regression testing is a very expensive and time-consuming process carried out to ensure the validity of modified software. Because resources are insufficient to re-execute all test cases in a time-constrained environment, efforts are ongoing to generate test data automatically without human effort. Many search-based techniques have been proposed to generate efficient, effective, and optimized test data so that the overall cost of software testing can be minimized. The generated test data should be able to uncover all potential lapses that exist in the software or product. Inspired by the natural foraging behavior of bats, the current study employs a meta-heuristic, search-based bat algorithm for optimizing the test data on the basis of certain parameters without compromising their effectiveness. Mathematical functions are also applied that can effectively filter out redundant test data. Fifty Java programs are used to check the effectiveness of the proposed test data generation, and it is found that an 86% saving in testing effort can be achieved using the bat algorithm while covering 100% of the software code for testing. The bat algorithm was found to be more efficient in terms of simplicity and flexibility when the results were compared with other nature-inspired algorithms such as the Firefly Algorithm (FA), Hill Climbing (HC), and Ant Colony Optimization (ACO). The output of this study would be useful to testers, as they can achieve 100% path coverage with a minimum number of test cases.
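
For readers unfamiliar with the metaheuristic, a minimal continuous bat algorithm is sketched below on a toy objective. The fitness function, bounds, and parameter values are placeholders; in the study the fitness would instead score a candidate set of test data (e.g., by coverage and redundancy).

```python
import numpy as np

def bat_algorithm(fitness, dim=5, n_bats=20, n_iter=200,
                  fmin=0.0, fmax=2.0, loudness=0.9, pulse_rate=0.5, seed=0):
    """Minimize `fitness` over [-5, 5]^dim with a basic bat algorithm."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_bats, dim))           # bat positions (candidate solutions)
    v = np.zeros((n_bats, dim))                      # velocities
    fit = np.apply_along_axis(fitness, 1, x)
    best = x[fit.argmin()].copy()

    for _ in range(n_iter):
        freq = fmin + (fmax - fmin) * rng.random(n_bats)       # frequency tuning
        v += (x - best) * freq[:, None]
        candidates = x + v
        # Local random walk around the current best with probability (1 - pulse_rate)
        walk = rng.random(n_bats) > pulse_rate
        candidates[walk] = best + 0.01 * rng.standard_normal((walk.sum(), dim))
        candidates = np.clip(candidates, -5, 5)

        cand_fit = np.apply_along_axis(fitness, 1, candidates)
        accept = (cand_fit < fit) & (rng.random(n_bats) < loudness)
        x[accept], fit[accept] = candidates[accept], cand_fit[accept]
        best = x[fit.argmin()].copy()
    return best, fit.min()

# Toy objective: sphere function (stand-in for a test-data cost function)
print(bat_algorithm(lambda z: float(np.sum(z ** 2))))
```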

Keywords: regression testing, test case selection, test case prioritization, genetic algorithm, bat algorithm

Procedia PDF Downloads 348
780 Aerodynamic Modelling of Unmanned Aerial System through Computational Fluid Dynamics: Application to the UAS-S45 Balaam

Authors: Maxime A. J. Kuitche, Ruxandra M. Botez, Arthur Guillemin

Abstract:

As Unmanned Aerial Systems have found diverse uses in both military and civil aviation, interest in obtaining accurate aerodynamic models has grown enormously. Recent modeling techniques are procedures using optimization algorithms and statistics that require many flight tests and are therefore extremely demanding in terms of cost. This paper presents a procedure to estimate the aerodynamic behavior of an unmanned aerial system numerically using computational fluid dynamics analysis. The study was performed using an unstructured mesh obtained from a grid convergence analysis at a Mach number of 0.14 and an angle of attack of 0°. The flow around the aircraft was described using a standard k-ω turbulence model, and the Reynolds-Averaged Navier-Stokes (RANS) equations were solved using the ANSYS FLUENT software. The method was applied to the UAS-S45, designed and manufactured by Hydra Technologies in Mexico. The lift, drag, and pitching moment coefficients were obtained at different angles of attack for several flight conditions defined in terms of altitudes and Mach numbers. The results obtained from the Computational Fluid Dynamics analysis were compared with the results obtained using the DATCOM semi-empirical procedure. This comparison indicated that our approach is highly accurate and that the aerodynamic model obtained could be useful for estimating the flight dynamics of the UAS-S45.
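
The coefficients mentioned above are obtained by non-dimensionalizing the forces and moments integrated from the CFD solution; a short sketch of that post-processing step follows. The reference values and loads are placeholders, not the UAS-S45 data.

```python
# Placeholder flight condition and reference geometry (not the UAS-S45 values)
rho, V = 1.112, 47.6          # air density [kg/m^3], airspeed [m/s] (roughly Mach 0.14 at altitude)
S, c = 2.2, 0.55              # reference wing area [m^2] and mean aerodynamic chord [m]
q = 0.5 * rho * V ** 2        # dynamic pressure [Pa]

# Placeholder integrated loads from the RANS solution at one angle of attack
lift, drag, pitching_moment = 310.0, 21.5, -12.4   # [N], [N], [N*m]

CL = lift / (q * S)
CD = drag / (q * S)
Cm = pitching_moment / (q * S * c)
print(f"CL={CL:.3f}, CD={CD:.4f}, Cm={Cm:.4f}")
```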

Keywords: aerodynamic modelling, CFD Analysis, ANSYS FLUENT, UAS-S45

Procedia PDF Downloads 355
779 Effect of Temperature and Deformation Mode on Texture Evolution of AA6061

Authors: M. Ghosh, A. Miroux, L. A. I. Kestens

Abstract:

At the molecular or micrometre scale, practically all materials are neither homogeneous nor isotropic. The concept of texture is used to identify the structural features that cause the properties of a material to be anisotropic. For metallic materials, the anisotropy of the mechanical behaviour originates from the crystallographic nature of plastic deformation and is therefore controlled by the crystallographic texture. Anisotropy in mechanical properties often constitutes a disadvantage in the application of materials, as illustrated by the earing phenomenon during drawing. However, advantages may also be attained for other properties (e.g. optimization of magnetic behaviour in a specific direction) by controlling texture through thermo-mechanical processing. Nevertheless, in order to have better control over the final properties, it is essential to relate texture to the materials processing route and subsequently optimise performance. To date, however, few studies have reported on the evolution of texture in 6061 aluminium alloy during warm processing (from room temperature to 250ºC). In the present investigation, recrystallized 6061 aluminium alloy samples were subjected to tensile and plane strain compression (PSC) tests at room and warm temperatures. The gradual change of texture following both deformation modes was measured and discussed. Tensile tests demonstrate the mechanism at low strain, while PSC does the same at high strain and eventually simulates the condition of rolling. The Cube-dominated texture of the initial rolled and recrystallized AA6061 sheets was replaced by dominant S and R components after PSC at room temperature; PSC at warm temperature (250ºC), though, did not show any noticeable deviation from the room-temperature observation. It was also noticed that temperature has no significant effect on the evolution of grain morphology during PSC. The band contrast map revealed that after 30% deformation the substructure inside the grains is mainly made of series of parallel bands. A tendency for Cube to decrease and Goss to increase was noticed after tensile deformation compared to the as-received material. As with PSC, the texture does not change after tensile deformation at warm temperature. An η-fibre from Goss to Cube was noticed for all three textures.

Keywords: AA 6061, deformation, temperature, tensile, PSC, texture

Procedia PDF Downloads 470
778 Constructions of Linear and Robust Codes Based on Wavelet Decompositions

Authors: Alla Levina, Sergey Taranov

Abstract:

The classical approach to providing noise immunity and integrity for information processed in computing devices and communication channels is to use linear codes. Linear codes have fast and efficient algorithms for encoding and decoding information, but these codes concentrate their detection and correction abilities on certain error configurations. Robust codes, by contrast, can protect against any configuration of errors with a predetermined probability. This is accomplished by the use of perfect nonlinear and almost perfect nonlinear functions to calculate the code redundancy. The paper presents an error-correcting coding scheme using the biorthogonal wavelet transform. The wavelet transform is applied in various fields of science; some of its applications are cleaning signals from noise, data compression, and spectral analysis of signal components. The article suggests methods for constructing linear codes based on wavelet decomposition. For the developed constructions, we build generator and check matrices that contain the scaling function coefficients of the wavelet. Based on the linear wavelet codes, we develop robust codes that provide uniform protection against all errors. Two constructions of robust codes are proposed: the first class is based on the multiplicative inverse in a finite field, and in the second construction the redundancy part is the cube of the information part. This paper also investigates the characteristics of the proposed robust and linear codes.
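
The two robust-code redundancy rules can be illustrated over a small prime field. This is a toy sketch for intuition only; the paper's constructions operate on wavelet-based codes over finite fields, and the field size and message values below are arbitrary.

```python
# Toy illustration over GF(p), p prime: redundancy from the multiplicative inverse
# (construction 1) and from the cube of the information part (construction 2).
P = 251  # small prime standing in for the field used in the paper

def encode_inverse(x: int) -> tuple[int, int]:
    """Codeword (x, x^-1 mod P); 0 maps to 0 so every message has a redundancy digit."""
    r = pow(x, P - 2, P) if x % P else 0   # Fermat inverse
    return (x % P, r)

def encode_cube(x: int) -> tuple[int, int]:
    """Codeword (x, x^3 mod P)."""
    return (x % P, pow(x, 3, P))

def check(codeword: tuple[int, int], encoder) -> bool:
    """Recompute the redundancy from the received information part (error masking check)."""
    info, redundancy = codeword
    return encoder(info)[1] == redundancy

msg = 42
cw = encode_cube(msg)
corrupted = ((cw[0] + 7) % P, cw[1])                # additive error on the information part
print(check(cw, encode_cube), check(corrupted, encode_cube))          # True False
print(check(encode_inverse(msg), encode_inverse))                     # True
```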

Keywords: robust code, linear code, wavelet decomposition, scaling function, error masking probability

Procedia PDF Downloads 466
777 Acupuncture in the Treatment of Parkinson's Disease-Related Fatigue: A Pilot Randomized, Controlled Study

Authors: Keng H. Kong, Louis C. Tan, Wing L. Aw, Kay Y. Tay

Abstract:

Background: Fatigue is a common problem in patients with Parkinson's disease, with a reported prevalence of up to 70%. Fatigue can be disabling and has adverse effects on patients' quality of life. There is currently no satisfactory treatment for fatigue. Acupuncture is effective in the treatment of fatigue, especially that related to cancer; its role in Parkinson's disease-related fatigue is uncertain. Aims: To evaluate the clinical efficacy of acupuncture treatment in Parkinson's disease-related fatigue. Hypothesis: We hypothesize that acupuncture is effective in alleviating Parkinson's disease-related fatigue. Design: A single-center, randomized, controlled study with two parallel arms. Participants: Forty participants with idiopathic Parkinson's disease will be enrolled. Interventions: Participants will be randomized to receive verum (real) acupuncture or placebo acupuncture. A retractable non-invasive sham needle will be used in the placebo group. The intervention will be administered twice a week for five weeks. Main outcome measures: The primary outcome will be the change in the general fatigue score of the multidimensional fatigue inventory at week 5. Secondary outcome measures include the other subscales of the multidimensional fatigue inventory, the movement disorders society-unified Parkinson's disease rating scale, the Parkinson's disease questionnaire-39 and the geriatric depression scale. All outcome measures will be assessed at baseline (week 0), completion of intervention (week 5) and 4 weeks after completion of intervention (week 9). Results: To date, 23 participants have been recruited and nine have completed the study. The mean age is 63.5±14.2 years, the mean duration of Parkinson's disease is 6.4±1.8 years and the mean MDS-UPDRS score is 8.3±2.8. The mean general fatigue score of the multidimensional fatigue inventory is 13.5±4.6. No significant adverse event related to acupuncture has been noted. Potential significance: If the results are as expected, this study will provide preliminary scientific evidence for the efficacy of acupuncture in Parkinson's disease-related fatigue and open the door for a larger multicentre trial. In the longer term, it may lead to the integration of acupuncture into the care of patients with Parkinson's disease.

Keywords: acupuncture, fatigue, Parkinson's disease, trial

Procedia PDF Downloads 283
776 Development of a PM2.5 Forecasting System in Seoul, South Korea, Using Chemical Transport Modeling and ConvLSTM-DNN

Authors: Ji-Seok Koo, Hee‑Yong Kwon, Hui-Young Yun, Kyung-Hui Wang, Youn-Seo Koo

Abstract:

This paper presents a forecasting system for PM2.5 levels in Seoul, South Korea, leveraging a combination of chemical transport modeling and ConvLSTM-DNN machine learning technology. Exposure to PM2.5 has known detrimental impacts on public health, making its prediction crucial for establishing preventive measures. Existing forecasting models, like the Community Multiscale Air Quality (CMAQ) and Weather Research and Forecasting (WRF), are hindered by their reliance on uncertain input data, such as anthropogenic emissions and meteorological patterns, as well as certain intrinsic model limitations. The system we've developed specifically addresses these issues by integrating machine learning and using carefully selected input features that account for local and distant sources of PM2.5. In South Korea, the PM2.5 concentration is greatly influenced by both local emissions and long-range transport from China, and our model effectively captures these spatial and temporal dynamics. Our PM2.5 prediction system combines the strengths of advanced hybrid machine learning algorithms, convLSTM and DNN, to improve upon the limitations of the traditional CMAQ model. Data used in the system include forecasted information from CMAQ and WRF models, along with actual PM2.5 concentration and weather variable data from monitoring stations in China and South Korea. The system was implemented specifically for Seoul's PM2.5 forecasting.
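
A minimal Keras sketch of a ConvLSTM-DNN hybrid of the general kind described above: a ConvLSTM encoder over gridded CMAQ/weather fields feeds a dense head that outputs station-level PM2.5. The grid size, time window, channel list, and layer sizes are illustrative assumptions, not the configuration of the deployed system.

```python
from tensorflow import keras
from tensorflow.keras import layers

# Assumed inputs: 24 hourly frames of a 32x32 grid with 6 channels
# (e.g., CMAQ PM2.5 forecast, wind components, temperature, humidity, pressure)
T, H, W, C = 24, 32, 32, 6
N_STATIONS = 25            # number of PM2.5 monitoring stations (illustrative)

inputs = keras.Input(shape=(T, H, W, C))
x = layers.ConvLSTM2D(16, kernel_size=3, padding="same", return_sequences=True)(inputs)
x = layers.ConvLSTM2D(32, kernel_size=3, padding="same", return_sequences=False)(x)
x = layers.GlobalAveragePooling2D()(x)

# DNN head mapping the spatio-temporal encoding to next-day station concentrations
x = layers.Dense(128, activation="relu")(x)
x = layers.Dense(64, activation="relu")(x)
outputs = layers.Dense(N_STATIONS)(x)      # predicted PM2.5 (ug/m^3) per station

model = keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="mae")
model.summary()
```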

Keywords: PM2.5 forecast, machine learning, convLSTM, DNN

Procedia PDF Downloads 40
775 Increased Expression Levels of Soluble Epoxide Hydrolase in Obese and Its Modulation by Physical Exercise

Authors: Abdelkrim Khadir, Sina Kavalakatt, Preethi Cherian, Ali Tiss

Abstract:

Soluble epoxide hydrolase (sEH) is an emerging therapeutic target in several chronic states that have inflammation as a common underlying cause, such as immunometabolic diseases. Indeed, sEH is known to play a pro-inflammatory role by metabolizing anti-inflammatory epoxyeicosatrienoic acids (EETs) to pro-inflammatory diols. Recently, sEH was shown to be linked to diet and microbiota interaction in rat models of obesity. Nevertheless, the functional contribution of sEH and its anti-inflammatory substrates EETs in obesity remains poorly understood. In the current study, we compared the expression pattern of sEH between lean and obese nondiabetic human subjects using subcutaneous adipose tissue (SAT) and peripheral blood mononuclear cells (PBMCs). Using RT-PCR, western blot and immunofluorescence confocal microscopy, we show here that sEH mRNA and protein levels are significantly increased in obese subjects, with a concomitant increase in endoplasmic reticulum (ER) stress components (GRP78 and ATF6α) and inflammatory markers (TNF-α, IL-6) when compared to lean controls. The observation that sEH was overexpressed in obese subjects prompted us to investigate whether physical exercise could reduce its expression. We report here that a 3-month supervised physical exercise programme significantly attenuated the expression of sEH in both the SAT and PBMCs, with a parallel decrease in the expression of ER stress markers along with an attenuated inflammatory response. On the other hand, homocysteine, a sulfur-containing amino acid derived from the essential amino acid methionine, has been shown to be directly associated with insulin resistance. When 3T3-L1 preadipocyte cells were treated with homocysteine, our results show increased sEH levels along with ER stress markers. Collectively, our data suggest that sEH upregulation is strongly linked to ER stress in adiposity and that physical exercise modulates its expression. This gives further evidence that exercise might be useful as a strategy for managing obesity and preventing its associated complications.

Keywords: obesity, adipose tissue, epoxide hydrolase, ER stress

Procedia PDF Downloads 119
774 Computer-Aided Ship Design Approach for Non-Uniform Rational Basis Spline Based Ship Hull Surface Geometry

Authors: Anu S. Nair, V. Anantha Subramanian

Abstract:

This paper presents a surface development and fairing technique combining the features of a modern computer-aided design tool, namely the Non-Uniform Rational Basis Spline (NURBS), with an algorithm to obtain a rapidly faired hull form. Some of the older series-based designs, such as the Wageningen-Lap series, give the sectional area distribution. Others, such as FORMDATA, give more comprehensive offset data points. Nevertheless, this basic data still requires fairing to obtain an acceptable faired hull form. This method uses the input of a sectional area distribution as an example and arrives at the faired form. Characteristic section shapes define any general ship hull form in the entrance, parallel mid-body and run regions. The method defines a minimum number of control points at each section and, using the golden-section search or the bisection method, converges each section shape to the one with the prescribed sectional area with minimized error in the area fit. The section shapes are then combined into the evolving faired surface by NURBS, which typically takes 20 iterations. The advantage of the method is that it is fast and robust and evolves the faired hull form through minimal iterations. The curvature criterion check for the hull lines shows the evolution of the smooth faired surface. The method is applicable to hull forms from any parent series, and the evolved form can be evaluated for hydrodynamic performance as is done in more modern design practice. The method can handle complex shapes such as that of the bulbous bow. The surface patches developed fit together at their common boundaries with curvature continuity and a fairness check. The development is coded in MATLAB, and the example illustrates the development of the method. The most important advantage is speed: the rapid iterative fairing of the hull form.
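
The area-matching step can be illustrated in a short Python sketch (the paper's implementation is in MATLAB): a golden-section search adjusts a single shape parameter of a hypothetical section until its area matches the prescribed sectional area. The section parameterization and target value are assumptions for illustration.

```python
import numpy as np

def section_area(fullness, beam=6.0, draft=4.0, n=200):
    """Area of a hypothetical section y(z) = (beam/2) * (1 - (z/draft)^fullness)."""
    z = np.linspace(0.0, draft, n)
    y = 0.5 * beam * (1.0 - (z / draft) ** fullness)
    return 2.0 * np.trapz(y, z)                     # both sides of the section

def golden_section_search(err, a, b, tol=1e-6):
    """Minimize err on [a, b] by golden-section search."""
    inv_phi = (np.sqrt(5.0) - 1.0) / 2.0
    c, d = b - inv_phi * (b - a), a + inv_phi * (b - a)
    while abs(b - a) > tol:
        if err(c) < err(d):
            b, d = d, c
            c = b - inv_phi * (b - a)
        else:
            a, c = c, d
            d = a + inv_phi * (b - a)
    return 0.5 * (a + b)

prescribed_area = 18.5                              # target sectional area [m^2]
k = golden_section_search(lambda f: abs(section_area(f) - prescribed_area), 0.5, 10.0)
print(f"fullness={k:.4f}, fitted area={section_area(k):.4f} m^2")
```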

Keywords: computer-aided design, methodical series, NURBS, ship design

Procedia PDF Downloads 149
773 Hybrid GNN-Based Machine Learning Forecasting Model for Industrial IoT Applications

Authors: Atish Bagchi, Siva Chandrasekaran

Abstract:

Background: According to World Bank national accounts data, the estimated global manufacturing value-added output in 2020 was 13.74 trillion USD. These manufacturing processes are monitored, modelled, and controlled by advanced, real-time, computer-based systems, e.g., Industrial IoT, PLC, SCADA, etc. These systems measure and manipulate a set of physical variables, e.g., temperature, pressure, etc. Despite the use of IoT, SCADA, etc., in manufacturing, studies suggest that unplanned downtime leads to economic losses of approximately 864 billion USD each year. Therefore, real-time, accurate detection, classification and prediction of machine behaviour are needed to minimise financial losses. Although vast literature exists on time-series data processing using machine learning, the challenges faced by the industries that lead to unplanned downtimes are: the current algorithms do not efficiently handle the high-volume streaming data from industrial IoT sensors and were tested on static and simulated datasets; while the existing algorithms can detect significant 'point' outliers, most do not handle contextual outliers (e.g., values within the normal range but happening at an unexpected time of day) or subtle changes in machine behaviour; and machines are revamped periodically as part of planned maintenance programmes, which changes the assumptions on which the original AI models were created and trained. Aim: This research study aims to deliver a Graph Neural Network (GNN)-based hybrid forecasting model that interfaces with the real-time machine control system and can detect and predict machine behaviour and behavioural changes (anomalies) in real time. This research will help manufacturing industries and utilities, e.g., water, electricity, etc., reduce unplanned downtimes and the consequential financial losses. Method: The data stored within a process control system, e.g., Industrial IoT, Data Historian, is generally sampled during data acquisition from the sensor (source) and when persisting in the Data Historian to optimise storage and query performance. The sampling may inadvertently discard values that contain subtle aspects of behavioural changes in machines. This research proposes a hybrid forecasting and classification model which combines the expressive and extrapolative capability of a GNN, enhanced with estimates of entropy and spectral changes in the sampled data and additional temporal context, to reconstruct the likely temporal trajectory of machine behavioural changes. The proposed real-time model belongs to the deep learning category of machine learning and interfaces with the sensors directly or through a 'Process Data Historian', SCADA, etc., to perform forecasting and classification tasks. Results: The model was interfaced with a Data Historian holding time-series data from four flow sensors within a water treatment plant for 45 days. The recorded sampling interval for a sensor varied from 10 sec to 30 min. Approximately 65% of the available data was used for training the model, 20% for validation, and the rest for testing. The model identified the anomalies within the water treatment plant and predicted the plant's performance. These results were compared with the data reported by the plant SCADA-Historian system and the official data reported by the plant authorities. The model's accuracy was much higher (by 20%) than that reported by the SCADA-Historian system and matched the validated results declared by the plant auditors.
Conclusions: The research demonstrates that a hybrid GNN-based approach enhanced with entropy calculation and spectral information can effectively detect and predict a machine's behavioural changes. The model can interface with a plant's process control system in real time to perform forecasting and classification tasks to aid asset management engineers in operating their machines more efficiently and reducing unplanned downtimes. A series of trials is planned for this model in other manufacturing industries.
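
The entropy and spectral-change estimates that augment the GNN can be sketched as simple per-window features. The snippet below is an illustrative computation on a synthetic flow-sensor stream, not the proposed model; the window length, sampling rate, and histogram size are assumptions.

```python
import numpy as np
from scipy.stats import entropy
from scipy.signal import welch

def window_features(x, fs=1.0, bins=32):
    """Entropy and spectral descriptors for one window of a sensor stream."""
    # Shannon entropy of the amplitude histogram ('disorder' of the window values)
    hist, _ = np.histogram(x, bins=bins, density=True)
    amp_entropy = entropy(hist + 1e-12)

    # Power spectral density via Welch's method, its entropy, and the dominant frequency
    freqs, psd = welch(x, fs=fs, nperseg=min(256, len(x)))
    spectral_entropy = entropy(psd / psd.sum() + 1e-12)
    dominant_freq = freqs[np.argmax(psd)]
    return amp_entropy, spectral_entropy, dominant_freq

# Synthetic flow-sensor stream: slow oscillation plus noise, sampled every 10 s for one day
fs = 0.1
t = np.arange(0, 86400, 1 / fs)
flow = 50 + 5 * np.sin(2 * np.pi * t / 3600) + np.random.default_rng(0).normal(0, 0.5, t.size)

# Features over consecutive 1-hour windows feed the forecasting model as extra context
win = int(3600 * fs)
features = [window_features(flow[i:i + win], fs) for i in range(0, len(flow) - win, win)]
print(features[0])
```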

Keywords: GNN, entropy, anomaly detection, industrial time-series, AI, IoT, Industry 4.0, machine learning

Procedia PDF Downloads 126
772 Defence Industry in the Political Economy of State and Business Relations

Authors: Hatice Idil Gorgen

Abstract:

Turkey has been investing in its national defence industrial base since the 1980s. The state's role in the defence industry has varied in Turkey, and in parallel, the ruling group's attitude toward companies in the defence sector has also varied. These changes in the policies and behaviour of the state have occurred across such milestones as political and economic turmoil at the domestic and international levels. Hence, it is argued that the state's role, its relations with private companies in the defence sector, and its policies towards the defence industry have shown differences due to the international system, political institutions, ideas and political coalitions in Turkey since the 1980s. Therefore, in order to trace changes in the role of the state in the defence sector, this paper first outlines the history of the state's role in production and the defence industry in the post-1980s era. Second, to comprehend the changes in the state's role in the defence industry, Stephan Haggard's sources of policy change provide the theoretical ground. Third, state-cooperated and joint-venture defence firms and the state's actions toward them are observed. The remaining part explores the underlying reasons for the changes in the role of the state in the defence industry and their implicit or explicit impact on state-business relations. Major findings illustrate that, through the targeted idea of a self-sufficient or autarkic Turkey, used to attract a domestic audience and to raise prestige through the defence system, ruling elites can regard the defence industry and the business groups involved in it as a means to their ends. The state-dominant values and the sensitive perception that have persisted since the Ottoman Empire prioritize business groups in the defence industry over others and push the ruling elites to pursue hard power in the defence sector. Through the global structural transformation of the defence industry, Turkey's integration into the liberal bloc deepened and widened interdependence among states. Although this is a qualitative study, it involves enumerated data and descriptive statistics. Data will be collected by searching secondary sources from the literature and examining official documents of the Ministry of Defence and other appropriate ministries.

Keywords: defense industry, state and business relations, public private relations, arms industry

Procedia PDF Downloads 294
771 A Hierarchical Method for Multi-Class Probabilistic Classification Vector Machines

Authors: P. Byrnes, F. A. DiazDelaO

Abstract:

The Support Vector Machine (SVM) has become widely recognised as one of the leading algorithms in machine learning for both regression and binary classification. It expresses predictions in terms of a linear combination of kernel functions, referred to as support vectors. Despite its popularity amongst practitioners, SVM has some limitations, the most significant being that it generates point predictions as opposed to predictive distributions. Stemming from this issue, a probabilistic model, namely Probabilistic Classification Vector Machines (PCVM), has been proposed which respects the original functional form of SVM whilst also providing a predictive distribution. As physical system designs become more complex, an increasing number of classification tasks involving industrial applications consist of more than two classes. Consequently, this research proposes a framework which allows for the extension of PCVM to a multi-class setting. Additionally, the original PCVM framework relies on the use of type II maximum likelihood to provide estimates for both the kernel hyperparameters and the model evidence. In a high-dimensional multi-class setting, however, this approach has been shown to be ineffective due to bad scaling as the number of classes increases. Accordingly, we propose the application of Markov Chain Monte Carlo (MCMC) based methods to provide a posterior distribution over both parameters and hyperparameters. The proposed framework will be validated against current multi-class classifiers through synthetic and real-life implementations.
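
To illustrate the MCMC alternative to type II maximum likelihood in a multi-class setting, the sketch below runs a random-walk Metropolis sampler over the weights of a small softmax classifier, standing in for the PCVM kernel model, which is not reproduced here. The prior, step size, and data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 3-class data (a stand-in for kernel features of a PCVM-style model)
n, d, k = 300, 2, 3
X = rng.standard_normal((n, d))
y = rng.integers(0, k, n)

def log_posterior(W):
    """Log posterior: softmax likelihood plus a standard normal prior on the weights."""
    logits = X @ W                                        # (n, k)
    logits -= logits.max(axis=1, keepdims=True)           # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    log_lik = log_probs[np.arange(n), y].sum()
    log_prior = -0.5 * np.sum(W ** 2)
    return log_lik + log_prior

# Random-walk Metropolis over the weight matrix
W = np.zeros((d, k))
current = log_posterior(W)
samples, step = [], 0.05
for it in range(5000):
    proposal = W + step * rng.standard_normal(W.shape)
    cand = log_posterior(proposal)
    if np.log(rng.random()) < cand - current:              # accept/reject
        W, current = proposal, cand
    if it >= 2500 and it % 10 == 0:                        # thinned post-burn-in draws
        samples.append(W.copy())

posterior_mean_W = np.mean(samples, axis=0)
print(posterior_mean_W.shape)                              # (2, 3): posterior mean weights
```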

Keywords: probabilistic classification vector machines, multi class classification, MCMC, support vector machines

Procedia PDF Downloads 206
770 Analogy in Microclimatic Parameters, Chemometric and Phytonutrient Profiles of Cultivated and Wild Ecotypes of Origanum vulgare L., across Kashmir Himalaya

Authors: Sumira Jan, Javid Iqbal Mir, Desh Beer Singh, Anil Sharma, Shafia Zaffar Faktoo

Abstract:

Background and Aims: Climatic and edaphic factors immensely influence crop quality and proper development. Despite its economic potential, Himalayan oregano has not been subjected to phytonutrient and chemometric evaluation, and studies of its relationship with environmental conditions are scarce. The central objective of this research was to investigate microclimatic variation among wild and cultivated populations located along a microclimatic gradient in the north-western Himalaya, Kashmir, and to analyse whether such disparity was related to diverse climatic and edaphic conditions. Methods: Micrometeorological measurements were taken, atomic absorption spectroscopy was used for micro-elemental analysis of the soil, and HPLC was carried out to estimate variation in phytonutrients and phytochemicals. Results: Geographic variation in phytonutrients was observed between cultivated and wild populations and among populations within regions. Cultivated populations exhibited comparatively lower phytonutrient values than wild populations. Moreover, higher vegetative growth of O. vulgare L. was observed at higher pH (6-7), elevated organic carbon (2.42%), high nitrogen (97.41 kg/ha), manganese (10-12 ppm) and zinc (0.39-0.50) contents, which produce higher phytonutrients. HPLC data for phytonutrients such as quercetin, beta-carotene, ascorbic acid, arbutin and catechin revealed a direct relationship with UV-B flux (r²=0.82) and potassium (r²=0.97), displaying a parallel relationship with phytonutrient value. Conclusions: Catechin was found to be the predominant phytonutrient among all populations, with a maximum accumulation of 163.8 ppm, whereas quercetin exhibited a lower value. Maximum arbutin (53.42 ppm) and quercetin (2.87 ppm) accumulated in plants thriving under intense, high UV-B flux. Minimum variation was demonstrated by beta-carotene and ascorbic acid.

Keywords: phytonutrient, ascorbic acid, beta carotene, quercetin, catechin

Procedia PDF Downloads 246
769 Effectiveness of Office-Based Occupational Therapy for Office Workers with Low Back Pain: A Public Health Approach

Authors: Dina Jalalvand, Joshua A. Cleland

Abstract:

This double-blind, randomized controlled trial with parallel groups aimed to examine the effectiveness of office-based occupational therapy for office workers with low back pain in terms of pain intensity and range of motion. Seventy-two male office workers (age: 20-50 years) with chronic low back pain (more than three months with at least two symptoms of chronic low back pain) satisfied the eligibility criteria and agreed to participate in this study. The absence of joint burst on magnetic resonance imaging (MRI) was also considered an important inclusion criterion. Subjects were randomly assigned to a control or experimental group. The experimental group received the modified package of exercise-based occupational therapy, which included 11 simple exercise movements (derived from Williams and McKenzie), and the control group received conventional therapy, which included their routine physiotherapy sessions. The subjects completed the exercises three times a week for a duration of six weeks. Each exercise session was 10-15 minutes. Pain intensity and range of motion were the primary outcomes and were measured at baseline, 6 weeks, and 12 weeks after the end of the intervention using the numerical rating scale (NRS) and a goniometer, respectively. Repeated measures ANOVA was used for analyzing the data. The results of this study showed significant decreases in pain intensity (p ≤ 0.05) and an increase in range of motion (p ≤ 0.001) in the experimental group in comparison with the control group after 6 and 12 weeks of intervention (between-group comparisons). In addition, there was a significant decrease in pain intensity (p ≤ 0.05) and an increase (p ≤ 0.001) in range of motion in the intervention group in comparison with baseline after 6 and 12 weeks (within-group comparison). This shows a positive effect of exercise-based occupational therapy that could potentially be used at low cost among office workers who suffer from low back pain. In addition, it should be noted that the introduced package of exercise training is easy to perform and does not require specific instruction.

Keywords: public health, office workers, low back pain, occupational therapy

Procedia PDF Downloads 198
768 Bridge Health Monitoring: A Review

Authors: Mohammad Bakhshandeh

Abstract:

Structural Health Monitoring (SHM) is a crucial and necessary practice that plays a vital role in ensuring the safety and integrity of critical structures, and in particular, bridges. The continuous monitoring of bridges for signs of damage or degradation through Bridge Health Monitoring (BHM) enables early detection of potential problems, allowing for prompt corrective action to be taken before significant damage occurs. Although all monitoring techniques aim to provide accurate and decisive information regarding the remaining useful life, safety, integrity, and serviceability of bridges, understanding the development and propagation of damage is vital for maintaining uninterrupted bridge operation. Over the years, extensive research has been conducted on BHM methods, and experts in the field have increasingly adopted new methodologies. In this article, we provide a comprehensive exploration of the various BHM approaches, including sensor-based, non-destructive testing (NDT), model-based, and artificial intelligence (AI)-based methods. We also discuss the challenges associated with BHM, including sensor placement and data acquisition, data analysis and interpretation, cost and complexity, and environmental effects, through an extensive review of relevant literature and research studies. Additionally, we examine potential solutions to these challenges and propose future research ideas to address critical gaps in BHM.

Keywords: structural health monitoring (SHM), bridge health monitoring (BHM), sensor-based methods, machine-learning algorithms, model-based techniques, sensor placement, data acquisition, data analysis

Procedia PDF Downloads 70
767 A Deforestation Dilemma: An Integrated Approach to Conservation and Development in Madagascar

Authors: Tara Moore

Abstract:

Madagascar is one of the regions of the world with the highest biodiversity, with more than 600 new species discovered in just the last decade. In parallel with its record-breaking biodiversity, Madagascar is also the tenth poorest country in the world. The resultant socio-economic pressures are leading to a highly threatened environment. In particular, deforestation is at the core of biodiversity and ecosystem loss, primarily from slash-and-burn agriculture and illegal rosewood tree harvesting. Effective policy response is imperative for improved conservation in Madagascar. However, these changes cannot come from the current, unstable government institutions. After a violent and politically turbulent coup in 2009, any effort to defend Madagascar's biodiversity has been eclipsed by the high corruption of government bodies. This paper presents three policy options designed for a private donor to invest in conservation in Madagascar. The first proposed policy consists of a payments-for-ecosystem-services model, which involves paying local Malagasy women to reforest nearby territories. The second option is a micro-irrigation system proposal involving relocating local Malagasy out of the threatened forest region. The final proposition is captive breeding funding for the Madagascar Fauna and Flora Group, which could then lead to new reintroductions in the threatened northeastern rainforests. In the end, all three are feasible, impactful options for a conservation-minded major donor. Ideally, the policy change would involve a combination of all three options, as each provides necessary development and conservation restructuring goals. Option one, payments for ecosystem services, would be the preferred choice if there were only enough funding for one project. The payments-for-ecosystem-services project both supports local populations and promotes sustainable development while reforesting the threatened Marojejy National Park. Regardless of the chosen policy solution, any support from a donor will make a huge impact if it supports both sustainable development and biodiversity conservation.

Keywords: captive breeding, conservation policy, lemur conservation, Madagascar conservation, payments for ecosystem services

Procedia PDF Downloads 113
766 Design and Optimization of Open Loop Supply Chain Distribution Network Using Hybrid K-Means Cluster Based Heuristic Algorithm

Authors: P. Suresh, K. Gunasekaran, R. Thanigaivelan

Abstract:

Radio frequency identification (RFID) technology has been attracting considerable attention with the expectation of improved supply chain visibility for consumer goods, apparel, and pharmaceutical manufacturers, as well as retailers and government procurement agencies. It is also expected to improve the consumer shopping experience by making it more likely that the products they want to purchase are available. Recent announcements from some key retailers have brought interest in RFID to the forefront. A modified K-Means cluster-based heuristic approach, a hybrid Genetic Algorithm (GA)-Simulated Annealing (SA) approach, a hybrid K-Means cluster-based heuristic-GA, and a hybrid K-Means cluster-based heuristic-GA-SA are proposed for the open loop supply chain network problem. The study incorporates a uniform crossover operator and a combined crossover operator in the GAs for solving the open loop supply chain distribution network problem. The algorithms are tested on 50 randomly generated data sets and compared with each other. The results of the numerical experiments show that the hybrid K-Means cluster-based heuristic-GA-SA, when tested on the 50 randomly generated data sets, shows superior performance to the other methods for solving the open loop supply chain distribution network problem.
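
A rough sketch of the cluster-then-optimize idea follows: customers are first grouped with K-Means, and simulated annealing then searches over which cluster centres to open as distribution centres. The cost terms, data, and parameters are placeholders, not the paper's formulation (which also embeds GA crossover operators).

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
customers = rng.uniform(0, 100, (200, 2))       # customer coordinates (toy data)

# Step 1: K-Means clustering to generate candidate distribution-centre locations
k = 8
centres = KMeans(n_clusters=k, n_init=10, random_state=0).fit(customers).cluster_centers_

def cost(open_mask, fixed_cost=500.0):
    """Fixed cost per open centre + distance of each customer to its nearest open centre."""
    open_centres = centres[open_mask]
    if len(open_centres) == 0:
        return np.inf
    dists = np.linalg.norm(customers[:, None, :] - open_centres[None, :, :], axis=2)
    return fixed_cost * open_mask.sum() + dists.min(axis=1).sum()

# Step 2: simulated annealing over the open/closed status of each candidate centre
state = np.ones(k, dtype=bool)
best, best_cost, temp = state.copy(), cost(state), 100.0
for _ in range(2000):
    neighbour = state.copy()
    neighbour[rng.integers(k)] ^= True          # flip one centre open/closed
    delta = cost(neighbour) - cost(state)
    if delta < 0 or rng.random() < np.exp(-delta / temp):
        state = neighbour
        if cost(state) < best_cost:
            best, best_cost = state.copy(), cost(state)
    temp *= 0.995                                # geometric cooling schedule
print(best, round(best_cost, 1))
```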

Keywords: RFID, supply chain distribution network, open loop supply chain, genetic algorithm, simulated annealing

Procedia PDF Downloads 142
765 Effect of Helical Flow on Separation Delay in the Aortic Arch for Different Mechanical Heart Valve Prostheses by Time-Resolved Particle Image Velocimetry

Authors: Qianhui Li, Christoph H. Bruecker

Abstract:

Atherosclerotic plaques are typically found where flow separation and variations of shear stress occur. Although helical flow patterns and flow separations have been recorded in the aorta, their relation has not been clearly clarified, especially in the presence of artificial heart valve prostheses. Therefore, an experimental study is performed to investigate the hemodynamic performance of different mechanical heart valves (MHVs), i.e. the SJM Regent bileaflet mechanical heart valve (BMHV) and the Lapeyre-Triflo FURTIVA trileaflet mechanical heart valve (TMHV), in a transparent model of the human aorta under a physiological pulsatile right-hand helical flow condition. A typical systolic flow profile is applied in the pulse duplicator to generate a physiological pulsatile flow, which thereafter flows past an axial turbine blade structure to imitate the right-hand helical flow induced in the left ventricle. High-speed particle image velocimetry (PIV) measurements are used to map the flow evolution. A circular open orifice nozzle inserted in the valve plane initially replaces the valve under investigation, as the reference configuration, to understand the hemodynamic effects of the imposed helical flow structure on the flow evolution in the aortic arch. Flow field analysis of the open orifice nozzle configuration shows that the helical flow effectively delays flow separation at the inner radius wall of the aortic arch. The comparison of the flow evolution for the different MHVs shows that the BMHV works like a flow straightener, reconfiguring the helical flow pattern into three parallel jets (two side-orifice jets and the central orifice jet), while the TMHV preserves the helical flow structure and therefore prevents flow separation at the inner radius wall of the aortic arch. The TMHV therefore has better hemodynamic performance and reduces the pressure loss.

Keywords: flow separation, helical aortic flow, mechanical heart valve, particle image velocimetry

Procedia PDF Downloads 155
764 Predicting the Compressive Strength of Geopolymer Concrete Using Machine Learning Algorithms: Impact of Chemical Composition and Curing Conditions

Authors: Aya Belal, Ahmed Maher Eltair, Maggie Ahmed Mashaly

Abstract:

Geopolymer concrete is gaining recognition as a sustainable alternative to conventional Portland cement concrete due to its environmentally friendly nature, which is a key goal for smart city initiatives. It has demonstrated its potential as a reliable material for the design of structural elements. However, the production of geopolymer concrete is hindered by batch-to-batch variations, which presents a significant challenge to its widespread adoption. To date, machine learning has had a profound impact on various fields by enabling models to learn from large datasets and predict outputs accurately. This paper proposes integrating the current shift toward artificial intelligence with the composition of geopolymer mixtures to predict their mechanical properties. This study employs Python to develop a machine learning model, specifically decision trees. The research uses the oxide percentages and the chemical composition of the alkali solution, along with the curing conditions, as the independent input parameters, irrespective of the waste products used in the mixture, and the compressive strength of the mix as the output parameter. The results showed 90% agreement between the predicted and actual values, with the ratio of the sodium silicate to the sodium hydroxide solution being the dominant parameter in the mixture.
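
A minimal scikit-learn sketch in the spirit of the study's decision-tree approach is shown below; the feature names and synthetic values are illustrative assumptions, not the paper's dataset.

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
n = 400

# Hypothetical mix records: oxide percentages, alkali solution chemistry, curing conditions
X = pd.DataFrame({
    "SiO2_pct": rng.uniform(30, 60, n),
    "Al2O3_pct": rng.uniform(10, 30, n),
    "CaO_pct": rng.uniform(2, 20, n),
    "NaOH_molarity": rng.uniform(8, 16, n),
    "Na2SiO3_to_NaOH": rng.uniform(1.0, 3.0, n),
    "curing_temp_C": rng.uniform(25, 90, n),
    "curing_time_h": rng.uniform(24, 96, n),
})
# Synthetic target dominated by the silicate-to-hydroxide ratio, echoing the reported finding
y = 20 + 12 * X["Na2SiO3_to_NaOH"] + 0.15 * X["curing_temp_C"] + rng.normal(0, 2, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
tree = DecisionTreeRegressor(max_depth=5, random_state=0).fit(X_train, y_train)

print("R^2 on held-out mixes:", round(tree.score(X_test, y_test), 3))
print(dict(zip(X.columns, tree.feature_importances_.round(3))))
```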

Keywords: decision trees, geopolymer concrete, machine learning, smart cities, sustainability

Procedia PDF Downloads 58
763 Client Hacked Server

Authors: Bagul Abhijeet

Abstract:

Background: Client-Server model is the backbone of today’s internet communication. In which normal user can not have control over particular website or server? By using the same processing model one can have unauthorized access to particular server. In this paper, we discussed about application scenario of hacking for simple website or server consist of unauthorized way to access the server database. This application emerges to autonomously take direct access of simple website or server and retrieve all essential information maintain by administrator. In this system, IP address of server given as input to retrieve user-id and password of server. This leads to breaking administrative security of server and acquires the control of server database. Whereas virus helps to escape from server security by crashing the whole server. Objective: To control malicious attack and preventing all government website, and also find out illegal work to do hackers activity. Results: After implementing different hacking as well as non-hacking techniques, this system hacks simple web sites with normal security credentials. It provides access to server database and allow attacker to perform database operations from client machine. Above Figure shows the experimental result of this application upon different servers and provides satisfactory results as required. Conclusion: In this paper, we have presented a to view to hack the server which include some hacking as well as non-hacking methods. These algorithms and methods provide efficient way to hack server database. By breaking the network security allow to introduce new and better security framework. The terms “Hacking” not only consider for its illegal activities but also it should be use for strengthen our global network.

Keywords: Hacking, Vulnerabilities, Dummy request, Virus, Server monitoring

Procedia PDF Downloads 232
762 A Kierkegaardian Reading of Iqbal's Poetry as a Communicative Act

Authors: Sevcan Ozturk

Abstract:

The overall aim of this paper is to present a Kierkegaardian approach to Iqbal's use of literature as a form of communication. Despite belonging to different historical, cultural, and religious backgrounds, the philosophical approaches of Soren Kierkegaard, 'the father of existentialism', and Muhammad Iqbal, 'the spiritual father of Pakistan', present certain parallels. Both Kierkegaard and Iqbal take human existence as the starting point for their reflections, emphasise the subject of becoming genuine religious personalities, and develop a notion of the self. While doing so, they both adopt parallel methods, employ literary techniques and poetical forms, and use their literary works as a form of communication. The problem is that Iqbal does not provide a clear account of his method as Kierkegaard does in his works. As a result, Iqbal's literary approach appears to be a collection of contradictions. This is mainly because, although he writes most of his works in poetical form, he condemns all kinds of art, including poetry. Moreover, while attacking Islamic mysticism, he at the same time uses classical literary forms and a number of traditional mystical poetic symbols. This paper will argue that the contradictions found in Iqbal's approach are actually a significant part of Iqbal's way of communicating with his reader. It is the contention of this paper that, with the help of the parallels between the literary and philosophical theories of Kierkegaard and Iqbal, the application of Kierkegaard's method to Iqbal's use of poetry as a communicative act will make it possible to dispel the seeming ambiguities in Iqbal's literary approach. The application of Kierkegaard's theory to Iqbal's literary method will include, first, an analysis of the main principles of Kierkegaard's own literary technique of 'indirect communication', which is a crucial term of his existentialist philosophy. Second, the clash between what Iqbal says about art and poetry and what he does will be highlighted in the light of the Kierkegaardian theory of indirect communication. It will be argued that Iqbal's literary technique can be considered a form of 'indirect communication', and that reading his technique in this way helps dispel the contradictions in his approach. It is hoped that this paper will cultivate a dialogue between those who work in the fields of comparative philosophy, Kierkegaard studies, existentialism, contemporary Islamic thought, Iqbal studies, and literary criticism.

Keywords: comparative philosophy, existentialism, indirect communication, intercultural philosophy, literary communication, Muhammad Iqbal, Soren Kierkegaard

Procedia PDF Downloads 302
761 Tropical Squall Lines in Brazil: A Methodology for Identification and Analysis Based on ISCCP Tracking Database

Authors: W. A. Gonçalves, E. P. Souza, C. R. Alcântara

Abstract:

The ISCCP-Tracking database offers an opportunity to study the physical and morphological characteristics of Convective Systems based on geostationary meteorological satellites. This database contains 26 years of tracking of Convective Systems for the entire globe, so the Tropical Squall Lines which occur in Brazil are certainly within the database. In this study, we propose a methodology for the identification of these systems based on the ISCCP-Tracking database, and a physical and morphological characterization of these systems is also shown. The proposed methodology is first developed for the year 2007. Squall Lines were subjectively identified by visually analyzing infrared images from GOES-12, and based on this identification, the same systems were located within the ISCCP-Tracking database. It is known, and it was also observed here, that the Squall Lines which occur on the north coast of Brazil develop parallel to the coast, influenced by the sea breeze. In addition, it was observed that the eccentricity of the identified systems was greater than 0.7. A methodology based on the inclination (relative to the coast) and eccentricity (greater than 0.7) of the Convective Systems was therefore applied in order to identify and characterize Tropical Squall Lines in Brazil. These thresholds were applied back to the ISCCP-Tracking database for the year 2007. It was observed that other systems, which were not Squall Lines, were also identified; we therefore refer to all systems identified by the inclination and eccentricity thresholds as Linear Convective Systems, instead of Squall Lines. After this step, the Linear Convective Systems were identified and characterized for the entire database, from 1983 to 2008. The physical and morphological characteristics of these systems were compared to those of systems which did not have the required inclination and eccentricity to be called Linear Convective Systems. The results showed that the convection associated with the Linear Convective Systems appears to be more intense and organized than in the other systems; this affirmation is based on all ISCCP-Tracking variables analyzed. This type of methodology, which explores 26 years of satellite data through an objective analysis, had not previously been explored in the literature. The physical and morphological characterization of the Linear Convective Systems based on 26 years of data is of great importance and should be useful in many branches of atmospheric sciences.
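
The identification step reduces to a simple filter over the tracking records; a hypothetical pandas sketch is given below. The column names and the coast-parallel tolerance are assumptions, since the actual ISCCP-Tracking field names are not given in the abstract; only the eccentricity threshold (0.7) comes from the text.

```python
import numpy as np
import pandas as pd

# Hypothetical tracking records: one row per Convective System snapshot
tracks = pd.DataFrame({
    "system_id": [1, 2, 3, 4],
    "eccentricity": [0.82, 0.55, 0.91, 0.74],
    "orientation_deg": [78, 30, 85, 10],      # major-axis orientation of the system
})

COAST_ORIENTATION_DEG = 80        # assumed local orientation of the north coast
TOLERANCE_DEG = 20                # assumed tolerance for "parallel to the coast"

angle_diff = np.abs(tracks["orientation_deg"] - COAST_ORIENTATION_DEG)
is_linear = (tracks["eccentricity"] > 0.7) & (angle_diff <= TOLERANCE_DEG)

linear_convective_systems = tracks[is_linear]
print(linear_convective_systems)
```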

Keywords: squall lines, convective systems, linear convective systems, ISCCP-Tracking

Procedia PDF Downloads 280
760 Improving Activity Recognition Classification of Repetitious Beginner Swimming Using a 2-Step Peak/Valley Segmentation Method with Smoothing and Resampling for Machine Learning

Authors: Larry Powell, Seth Polsley, Drew Casey, Tracy Hammond

Abstract:

Human activity recognition (HAR) systems have shown good performance when recognizing repetitive activities such as walking, running, and sleeping. Water-based activities are a relatively new area for activity recognition; however, work in this area has largely focused on supporting elite and competitive swimmers, who already have strong coordination and proper form. Beginner swimmers do not, and an activity recognition system must capture their individual motions in order to help them improve. Activity recognition algorithms are traditionally built around short segments of timed sensor data, but a fixed time-window input can cause performance issues in the machine learning model: the window can be too small or too large, requiring careful tuning and precise data segmentation. In this work, we present a method that uses a time window for the initial segmentation and then splits the data further based on changes in the sensor values. Our system uses a multi-phase segmentation method that extracts all peaks and valleys for each axis of an accelerometer placed on the swimmer’s lower back. This yields high recognition performance under leave-one-subject-out validation in our study with 20 beginner swimmers; the model optimized on our final dataset achieves an F-score of 0.95.
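The following is a minimal sketch of the two-step idea described above: a coarse time window is applied first, and each window is then split at the peaks and valleys of a smoothed, resampled accelerometer axis. The window length, smoothing width, resampling rate, and peak prominence below are illustrative assumptions, not the study's actual parameters.

```python
# Sketch of a two-step time-window plus peak/valley segmentation for one
# accelerometer axis, with simple smoothing and resampling. All parameter
# values are illustrative assumptions.

import numpy as np
from scipy.signal import find_peaks

def smooth_and_resample(signal, src_hz, dst_hz, kernel=5):
    """Moving-average smoothing followed by linear resampling to dst_hz."""
    smoothed = np.convolve(signal, np.ones(kernel) / kernel, mode="same")
    t_src = np.arange(len(signal)) / src_hz
    t_dst = np.arange(0, t_src[-1], 1.0 / dst_hz)
    return np.interp(t_dst, t_src, smoothed)

def peak_valley_segments(axis, window_len, prominence=0.1):
    """Step 1: coarse time windows. Step 2: split each window at peaks/valleys."""
    segments = []
    for start in range(0, len(axis), window_len):
        window = axis[start:start + window_len]
        peaks, _ = find_peaks(window, prominence=prominence)
        valleys, _ = find_peaks(-window, prominence=prominence)
        cuts = np.sort(np.concatenate(([0], peaks, valleys, [len(window)])))
        for a, b in zip(cuts[:-1], cuts[1:]):
            if b > a:
                segments.append(window[a:b])
    return segments

# Example usage on a synthetic repetitive, stroke-like signal.
rng = np.random.default_rng(0)
raw = np.sin(np.linspace(0, 20 * np.pi, 2000)) + 0.05 * rng.standard_normal(2000)
axis = smooth_and_resample(raw, src_hz=100, dst_hz=50, kernel=7)
segments = peak_valley_segments(axis, window_len=200)
print(f"{len(segments)} variable-length segments available for feature extraction")
```

Because each segment now corresponds to a single rise or fall of the signal rather than an arbitrary slice of time, downstream feature extraction is less sensitive to the choice of the initial window length.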

Keywords: time window, peak/valley segmentation, feature extraction, beginner swimming, activity recognition

Procedia PDF Downloads 101
759 Strong Ground Motion Characteristics Revealed by Accelerograms in Ms8.0 Wenchuan Earthquake

Authors: Jie Su, Zhenghua Zhou, Yushi Wang, Yongyi Li

Abstract:

The ground motion characteristics revealed by the analysis of acceleration records underlie the formulation and revision of seismic design codes in structural engineering. During the Ms8.0 Wenchuan earthquake on 12 May 2008, the China Digital Strong Motion Network recorded mainshock accelerograms at 478 permanent seismic stations, providing a large body of essential data for analyzing the ground motion characteristics of the event. The spatial distribution characteristics, the rupture directivity effect, and the hanging-wall and footwall effect were studied on the basis of these acceleration records. The results show that the contours of horizontal peak ground acceleration and peak ground velocity were approximately parallel to the seismogenic fault, which demonstrates that the distribution of ground motion intensity was clearly controlled by the spatial extension direction of the seismogenic fault. Compared with sites from which the rupture front propagated away, sites toward which the rupture front propagated recorded peak ground accelerations (PGA) of larger amplitude and shorter duration, indicating a significant rupture directivity effect. At similar fault distances, the PGA on the hanging wall is apparently greater than that on the footwall, whereas peak ground velocity does not follow this rule. Taking the seismic intensity distribution of the Wenchuan Ms8.0 earthquake into account, the shape of the strong ground motion contours was significantly affected by the directivity effect in regions of Chinese seismic intensity VI to VIII, whereas in regions of intensity VIII or greater, the positional relationship between the strong ground motion contours and the surface outcrop trace of the fault was evidently influenced by the hanging-wall and footwall effect.
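As a small illustration of the quantities discussed above, the sketch below extracts peak ground acceleration and peak ground velocity from a single accelerogram component. The sampling interval and the synthetic record are assumptions, and real strong-motion processing would additionally involve baseline correction and filtering beyond the crude step shown here.

```python
# Sketch: peak ground acceleration (PGA) and peak ground velocity (PGV)
# from one accelerogram component. Synthetic record and sampling interval
# are illustrative assumptions.

import numpy as np

def pga_and_pgv(acc, dt):
    """PGA is the peak absolute acceleration; PGV is the peak absolute value
    of the velocity obtained by trapezoidal integration of the record."""
    acc = acc - np.mean(acc)                                        # crude baseline correction
    vel = np.concatenate(([0.0], np.cumsum(0.5 * (acc[1:] + acc[:-1]) * dt)))
    return np.max(np.abs(acc)), np.max(np.abs(vel))

# Example usage with a synthetic 20 s record sampled at 200 Hz.
dt = 0.005
t = np.arange(0, 20, dt)
acc = 1.5 * np.exp(-0.3 * t) * np.sin(2 * np.pi * 2.0 * t)          # m/s^2
pga, pgv = pga_and_pgv(acc, dt)
print(f"PGA = {pga:.2f} m/s^2, PGV = {pgv:.3f} m/s")
```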

Keywords: hanging-wall and foot-wall effect, peak ground acceleration, rupture directivity effect, strong ground motion

Procedia PDF Downloads 329
758 Leading Virtual Project Teams in the Post Pandemic Era: Trust and Conflict Management Strategies

Authors: Vidya Badrinarayanan, Appa Iyer Sivakumar

Abstract:

The coronavirus pandemic has sent an important message: future project teams need to be trained to work under virtual conditions, which have already become the new norm in organizations across the world. As organizations increasingly rely on virtual teams to achieve project objectives, it is essential to understand how leadership functions in virtual project teams. The purpose of this research is to analyze the leadership behaviors exhibited by project managers for building trust and managing conflicts effectively in virtual project teams. This convergent parallel mixed-method study was conducted by surveying 185 virtual leaders and holding semi-structured interviews with 13 senior virtual leaders involved in managing projects across industry sectors. The findings indicate that establishing trust and managing conflicts were ranked as significant challenges in leading virtual project teams in the post-pandemic era. In contrast to earlier findings, our results suggest that productivity was not ranked as a significant challenge, which is a positive outcome for organizations considering the adoption of virtual project teams in the long run. The findings further recommend that virtual leaders strive to build a high-trust environment and develop effective conflict resolution skills to improve the effectiveness of virtual project teams. As the project management profession struggles with low project success rates, this mixed-method research aims to contribute to knowledge in the growing research area of virtual project leadership. It does so by offering first-person accounts from senior virtual leaders of the innovative strategies they implemented for building trust and resolving conflicts effectively in virtual projects when opportunities for face-to-face interaction were limited by the pandemic. In addition, the leadership framework created as part of this research for trust development and conflict management in virtual project teams will guide project managers in improving virtual project team effectiveness.

Keywords: conflict management, trust building, virtual leadership, virtual teams

Procedia PDF Downloads 157