Search results for: measurement and analysis
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 28924

28144 Identification of Flooding Attack (Zero Day Attack) at Application Layer Using Mathematical Model and Detection Using Correlations

Authors: Hamsini Pulugurtha, V.S. Lakshmi Jagadmaba Paluri

Abstract:

Distributed denial of service (DDoS) attack is currently one of the top-rated cyber threats. It exhausts victim server resources such as bandwidth and buffer size, preventing the server from supplying resources to legitimate clients. In this article, we propose a mathematical model of a DDoS attack and discuss its relevance to features such as the inter-arrival time or rate of arrival of the attacking clients accessing the server. We further analyze the attack model in the context of exhausting the bandwidth and buffer size of the victim server. The proposed technique uses an unsupervised learning technique, the self-organizing map, to build clusters of similar features. Finally, it applies mathematical correlation and the normal probability distribution to the clusters and analyzes their behavior to detect a DDoS attack. Such systems not only interconnect small devices exchanging personal data but also critical infrastructures reporting the status of nuclear facilities. Although this interconnection brings many benefits and advantages, it also creates new vulnerabilities and threats which can be used to mount attacks. In such sophisticated interconnected systems, the ability to detect attacks as early as possible is of paramount importance.
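As a rough illustration of the detection flow described above, the sketch below clusters per-client traffic features and then tests each cluster against a normal baseline of inter-arrival times. It is only a hedged approximation of the abstract's method: scikit-learn's KMeans stands in for the self-organizing map, and the features, baseline parameters and alarm threshold are invented for the example.

```python
# A minimal, illustrative sketch (not the authors' implementation): cluster per-client
# traffic features, then test each cluster's inter-arrival behaviour against a normal
# baseline. KMeans stands in for the self-organizing map; all values are assumptions.
import numpy as np
from scipy import stats
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Synthetic per-client features: mean inter-arrival time (s) and request rate (req/s)
legit  = np.column_stack([rng.normal(0.50, 0.05, 200), rng.normal(2.0, 0.3, 200)])
attack = np.column_stack([rng.normal(0.02, 0.005, 50), rng.normal(50.0, 5.0, 50)])
X = np.vstack([legit, attack])

# Group clients with similar behaviour (a SOM in the paper; KMeans used here)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Baseline model of legitimate inter-arrival times (assumed known from calm traffic)
mu, sigma = 0.50, 0.05

for k in np.unique(labels):
    inter_arrival = X[labels == k, 0]
    # Probability that the cluster mean comes from the legitimate normal baseline
    z = (inter_arrival.mean() - mu) / (sigma / np.sqrt(len(inter_arrival)))
    p = 2 * stats.norm.sf(abs(z))
    flag = "possible DDoS flood" if p < 1e-3 else "looks legitimate"
    print(f"cluster {k}: mean IAT = {inter_arrival.mean():.3f} s, p = {p:.2e} -> {flag}")
```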

Keywords: application layer attack, bandwidth, buffer size, correlation, DDoS, flooding, intrusion prevention, normal probability distribution

Procedia PDF Downloads 209
28143 Miniaturizing the Volumetric Titration of Free Nitric Acid in U(VI) Solutions: On the Lookout for a More Sustainable Process Radioanalytical Chemistry through Titration-On-A-Chip

Authors: Jose Neri, Fabrice Canto, Alastair Magnaldo, Laurent Guillerme, Vincent Dugas

Abstract:

A miniaturized and automated approach for the volumetric titration of free nitric acid in U(VI) solutions is presented. Free acidity measurement refers to the acidity quantification in solutions containing hydrolysable heavy metal ions such as U(VI), U(IV) or Pu(IV) without taking into account the acidity contribution from the hydrolysis of such metal ions. It is, in fact, an operation having an essential role for the control of the nuclear fuel recycling process. The main objective behind the technical optimization of the actual ‘beaker’ method was to reduce the amount of radioactive substance to be handled by the laboratory personnel, to ease the instrumentation adjustability within a glove-box environment and to allow a high-throughput analysis for conducting more cost-effective operations. The measurement technique is based on the concept of the Taylor-Aris dispersion in order to create inside of a 200 μm x 5cm circular cylindrical micro-channel a linear concentration gradient in less than a second. The proposed analytical methodology relies on the actinide complexation using pH 5.6 sodium oxalate solution and subsequent alkalimetric titration of nitric acid with sodium hydroxide. The titration process is followed with a CCD camera for fluorescence detection; the neutralization boundary can be visualized in a detection range of 500nm- 600nm thanks to the addition of a pH sensitive fluorophore. The operating principle of the developed device allows the active generation of linear concentration gradients using a single cylindrical micro channel. This feature simplifies the fabrication and ease of use of the micro device, as it does not need a complex micro channel network or passive mixers to generate the chemical gradient. Moreover, since the linear gradient is determined by the liquid reagents input pressure, its generation can be fully achieved in faster intervals than one second, being a more timely-efficient gradient generation process compared to other source-sink passive diffusion devices. The resulting linear gradient generator device was therefore adapted to perform for the first time, a volumetric titration on a chip where the amount of reagents used is fixed to the total volume of the micro channel, avoiding an important waste generation like in other flow-based titration techniques. The associated analytical method is automated and its linearity has been proven for the free acidity determination of U(VI) samples containing up to 0.5M of actinide ion and nitric acid in a concentration range of 0.5M to 3M. In addition to automation, the developed analytical methodology and technique greatly improves the standard off-line oxalate complexation and alkalimetric titration method by reducing a thousand fold the required sample volume, forty times the nuclear waste per analysis as well as the analysis time by eight-fold. The developed device represents, therefore, a great step towards an easy-to-handle nuclear-related application, which in the short term could be used to improve laboratory safety as much as to reduce the environmental impact of the radioanalytical chain.
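To give a sense of the dispersion regime behind the gradient generation described above, the sketch below evaluates the classical Taylor-Aris effective dispersion coefficient for a circular channel with the stated 200 μm diameter and 5 cm length. The mean velocity and molecular diffusivity are assumed illustration values, not parameters reported in the paper.

```python
# Back-of-the-envelope sketch of Taylor-Aris dispersion in the 200 um x 5 cm channel
# described above. Velocity U and molecular diffusivity Dm are assumed values.
import math

a  = 100e-6        # channel radius (m), from the 200 um diameter
L  = 5e-2          # channel length (m)
U  = 1e-3          # mean flow velocity (m/s) -- assumption
Dm = 1e-9          # molecular diffusivity (m^2/s), typical for a small ion -- assumption

# Taylor-Aris effective axial dispersion coefficient for a circular tube
D_eff = Dm + (a**2 * U**2) / (48 * Dm)

residence_time = L / U                                   # time to traverse the channel
axial_spread   = math.sqrt(2 * D_eff * residence_time)   # ~1-sigma axial spreading

print(f"D_eff = {D_eff:.3e} m^2/s, residence time = {residence_time:.1f} s, "
      f"axial spread ~ {axial_spread*1e3:.2f} mm")
```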

Keywords: free acidity, lab-on-a-chip, linear concentration gradient, Taylor-Aris dispersion, volumetric titration

Procedia PDF Downloads 380
28142 The Influence of Strategic Networks and Logistics Integration on Company Performance among Small and Medium Enterprises

Authors: Jeremiah Madzimure

Abstract:

In order to stay competitive in business and improve performance, Small and Medium Enterprises (SMEs) need to make use of business networking and logistics integration. Strategic networking and logistics integration have become critical as they allow supplier partnering, exchange of vital information, access to valuable resources enabling innovation, and sharing of risks and costs, all of which are required for enhancing company performance. The purpose of this study was to examine the influence of strategic networks and logistics integration on company performance: the case of small and medium enterprises in South Africa. A quantitative research design was adopted, and 137 SME owners and managers completed and returned the survey questionnaire. Confirmatory Factor Analysis (CFA) was conducted using the Analysis of Moment Structures (AMOS) software, version 24.0, to assess the psychometric properties of the measurement scales. Three research hypotheses were postulated, and path modelling techniques were used to test them. The results indicate that strategic networks had a positive and significant influence on logistics integration and company performance. Likewise, logistics integration had a strong, positive and significant influence on company performance. This study provides a useful model for analysing the influence of strategic networks and logistics integration on company performance. Moreover, the findings provide useful insights into how SMEs can benefit from business networking and logistics integration so as to improve their performance. The implications of the study are discussed, and finally, limitations and recommendations are indicated.
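The sketch below is a simplified stand-in for the path model sketched in the abstract: two ordinary least squares regressions test the three hypothesised paths (strategic networks to logistics integration, and both to performance). It is not the authors' AMOS analysis; variable names, effect sizes and the synthetic data are assumptions for illustration only.

```python
# Illustrative stand-in for the AMOS path model: two OLS regressions on synthetic
# survey scores. Coefficients and noise levels below are invented for the example.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 137  # sample size reported in the abstract

strategic_networks    = rng.normal(4.0, 0.8, n)                       # Likert-scale means
logistics_integration = 0.5 * strategic_networks + rng.normal(0, 0.6, n)
performance           = (0.3 * strategic_networks
                         + 0.4 * logistics_integration + rng.normal(0, 0.5, n))
df = pd.DataFrame({"SN": strategic_networks, "LI": logistics_integration,
                   "PERF": performance})

# H1: strategic networks -> logistics integration
m1 = sm.OLS(df["LI"], sm.add_constant(df[["SN"]])).fit()
# H2 and H3: strategic networks and logistics integration -> company performance
m2 = sm.OLS(df["PERF"], sm.add_constant(df[["SN", "LI"]])).fit()

print(m1.params, m1.pvalues, sep="\n")
print(m2.params, m2.pvalues, sep="\n")
```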

Keywords: strategic networking, logistics integration, company performance, SMEs

Procedia PDF Downloads 284
28141 Psychological Capital: Convergent and Discriminant Validity of a Reconfigured Measure

Authors: Anton Grobler

Abstract:

Background: Psychological capital (PsyCap), consisting of Hope, Optimism, Resilience, and Self-efficacy, is a popular positive organisational behaviour construct utilised in studying employee work and behavioural attitudes. Various scholars believe, however, that further validity research should be conducted on the PsyCap questionnaire (PCQ) outside of the founding research team and in more diverse settings; for the purpose of this paper, this means the diverse South African (SA) context. Aim: The purpose of this study was to investigate the construct validity of the PCQ with specific reference to its psychometric properties within the diverse SA context. Setting: The sample includes a total of 1 749 respondents, ± 60 each from 30 organisations in South Africa. Method: This study utilised a cross-sectional design and quantitative analysis. The sample is relatively representative (in terms of race and gender) of the South African workforce. A multi-factorial model was statistically explored and confirmed (with exploratory factor analysis [EFA] and confirmatory factor analysis [CFA], respectively). Results: The study yielded a three-factor solution, with Hope and Optimism combined into a single factor, and Resilience and Self-efficacy made up of reconfigured sets of substantively justifiable items. Three of the original 24 items were found not to be suitable. The three factors showed good psychometric properties, good fit (in support of construct validity) and acceptable levels of convergent and discriminant validity. Conclusion: The results support the original conceptualisation of PsyCap, although with a unique structural configuration. This resonates with the notion of scholars that further research should be conducted within diverse settings. This is necessary to ensure the valid measurement of the construct, which is considered to be one of the four criteria for a construct to be categorised as a positive organisational behaviour construct.

Keywords: positive organisational behaviour, psychological capital, hope, optimism, resilience, self-efficacy, construct validity

Procedia PDF Downloads 179
28140 Workflow Based Inspection of Geometrical Adaptability from 3D CAD Models Considering Production Requirements

Authors: Tobias Huwer, Thomas Bobek, Gunter Spöcker

Abstract:

Driving forces for enhancements in production are trends like digitalization and individualized production. Currently, such developments are restricted to assembly parts. Thus, complex freeform surfaces are not addressed in this context. The need for efficient use of resources and near-net-shape production will require individualized production of complex shaped workpieces. Due to variations between nominal model and actual geometry, this can lead to changes in operations in Computer-aided process planning (CAPP) to make CAPP manageable for an adaptive serial production. In this context, 3D CAD data can be a key to realizing that objective. Along with developments in the geometrical adaptation, a preceding inspection method based on CAD data is required to support the process planner by finding objective criteria to make decisions about the adaptive manufacturability of workpieces. Nowadays, this kind of decisions is depending on the experience-based knowledge of humans (e.g. process planners) and results in subjective decisions – leading to a variability of workpiece quality and potential failure in production. In this paper, we present an automatic part inspection method, based on design and measurement data, which evaluates actual geometries of single workpiece preforms. The aim is to automatically determine the suitability of the current shape for further machining, and to provide a basis for an objective decision about subsequent adaptive manufacturability. The proposed method is realized by a workflow-based approach, keeping in mind the requirements of industrial applications. Workflows are a well-known design method of standardized processes. Especially in applications like aerospace industry standardization and certification of processes are an important aspect. Function blocks, providing a standardized, event-driven abstraction to algorithms and data exchange, will be used for modeling and execution of inspection workflows. Each analysis step of the inspection, such as positioning of measurement data or checking of geometrical criteria, will be carried out by function blocks. One advantage of this approach is its flexibility to design workflows and to adapt algorithms specific to the application domain. In general, within the specified tolerance range it will be checked if a geometrical adaption is possible. The development of particular function blocks is predicated on workpiece specific information e.g. design data. Furthermore, for different product lifecycle phases, appropriate logics and decision criteria have to be considered. For example, tolerances for geometric deviations are different in type and size for new-part production compared to repair processes. In addition to function blocks, appropriate referencing systems are important. They need to support exact determination of position and orientation of the actual geometries to provide a basis for precise analysis. The presented approach provides an inspection methodology for adaptive and part-individual process chains. The analysis of each workpiece results in an inspection protocol and an objective decision about further manufacturability. A representative application domain is the product lifecycle of turbine blades containing a new-part production and a maintenance process. In both cases, a geometrical adaptation is required to calculate individual production data. 
In contrast to existing approaches, the proposed initial inspection method provides information to decide between different potential adaptive machining processes.
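The sketch below illustrates the function-block idea described above in miniature: blocks with a common interface (data alignment, geometric check) are chained into an inspection workflow that ends in an objective adaptability decision. Block names, data fields and the tolerance value are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of a function-block style inspection workflow. Block names,
# data fields and the tolerance are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Context:
    """Data passed between function blocks."""
    measured_points: list
    nominal_points: list
    results: dict = field(default_factory=dict)

class FunctionBlock:
    def run(self, ctx: Context) -> Context:
        raise NotImplementedError

class AlignMeasurementData(FunctionBlock):
    """Stand-in for registering measured data to the design coordinate system."""
    def run(self, ctx):
        offset = sum(m - n for m, n in zip(ctx.measured_points, ctx.nominal_points)) / len(ctx.nominal_points)
        ctx.measured_points = [m - offset for m in ctx.measured_points]
        ctx.results["alignment_offset"] = offset
        return ctx

class CheckGeometricDeviation(FunctionBlock):
    """Check whether deviations stay inside an (assumed) adaptation tolerance."""
    def __init__(self, tolerance=0.2):
        self.tolerance = tolerance
    def run(self, ctx):
        deviations = [abs(m - n) for m, n in zip(ctx.measured_points, ctx.nominal_points)]
        ctx.results["max_deviation"] = max(deviations)
        ctx.results["adaptable"] = max(deviations) <= self.tolerance
        return ctx

def run_workflow(blocks, ctx):
    for block in blocks:          # event-driven execution reduced to a simple chain
        ctx = block.run(ctx)
    return ctx

ctx = Context(measured_points=[10.12, 10.35, 9.95], nominal_points=[10.0, 10.3, 10.0])
report = run_workflow([AlignMeasurementData(), CheckGeometricDeviation(tolerance=0.2)], ctx).results
print(report)   # e.g. {'alignment_offset': ..., 'max_deviation': ..., 'adaptable': True}
```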

Keywords: adaptive, CAx, function blocks, turbomachinery

Procedia PDF Downloads 291
28139 Effects of Wind Load on the Tank Structures with Various Shapes and Aspect Ratios

Authors: Doo Byong Bae, Jae Jun Yoo, Il Gyu Park, Choi Seowon, Oh Chang Kook

Abstract:

There are several wind load provisions to evaluate the wind response on tank structures, such as API, Eurocode, etc. The assessment of wind action applying these provisions is made by performing finite element analysis using both linear bifurcation analysis and geometrically nonlinear analysis. By comparing the pressure patterns obtained from the analysis with the results of the wind tunnel test, the most appropriate wind load criteria will be recommended.

Keywords: wind load, finite element analysis, linear bifurcation analysis, geometrically nonlinear analysis

Procedia PDF Downloads 619
28138 Prototype for Measuring Blue Light Protection in Sunglasses

Authors: A. D. Loureiro, L. Ventura

Abstract:

Exposure to high-energy blue light has been strongly linked to the development of some eye diseases, such as age-related macular degeneration. Over the past few years, people have become more and more concerned about eye damage from blue light and how it can be prevented. We developed a prototype that allows users to self-check the blue light protection of their sunglasses and determines if the protection is adequate. Weighting functions approximating those defined in ISO 12312-1 were used to measure the luminous transmittance and blue light transmittance of sunglasses. The blue light transmittance value must be less than 1.2 times the luminous transmittance to be considered adequate. The prototype consists of a Golden Dragon Ultra White LED from OSRAM and a TCS3472 photodetector from AMS TAOS. Together, they provide four transmittance values weighted with different functions. These four transmittance values were then linearly combined to produce transmittance values with weighting functions close to those defined in ISO 12312-1 for luminous transmittance and for blue light transmittance. To evaluate our prototype, we used a VARIAN Cary 5000 spectrophotometer, a gold standard in the field, to measure the luminous transmittance and the blue light transmittance of 60 sunglass lenses. Bland-Altman analysis was performed and showed non-significant bias and narrow 95% limits of agreement within predefined tolerances for both luminous transmittance and blue light transmittance. The results show that the prototype is a viable means of providing blue light protection information to the general public and a quick and easy way for industry and retailers to test their products. In addition, our prototype plays an important role in educating the public about a feature to look for in sunglasses before purchasing.
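The sketch below illustrates the adequacy rule described above: weighted luminous and blue-light transmittances are computed from a lens spectral transmittance and compared against the 1.2 criterion. The Gaussian weighting curves and the lens spectrum are crude placeholders, not the ISO 12312-1 tables or measured data.

```python
# Sketch of the adequacy check: solar-blue-light transmittance must not exceed
# 1.2 x luminous transmittance. Weighting curves and lens spectrum are placeholders.
import numpy as np

wl = np.arange(380, 781, 5)                                   # wavelength grid (nm)

def gaussian(x, mu, sig):
    return np.exp(-0.5 * ((x - mu) / sig) ** 2)

V_lambda = gaussian(wl, 555, 50)                   # placeholder luminous efficiency weighting
W_blue   = gaussian(wl, 450, 30) * (wl <= 500)     # placeholder blue-light hazard weighting
tau_lens = 0.10 + 0.15 * gaussian(wl, 600, 120)    # assumed lens spectral transmittance

def weighted_transmittance(tau, weight):
    return np.trapz(tau * weight, wl) / np.trapz(weight, wl)

tau_v    = weighted_transmittance(tau_lens, V_lambda)   # luminous transmittance
tau_blue = weighted_transmittance(tau_lens, W_blue)     # blue-light transmittance

adequate = tau_blue < 1.2 * tau_v
print(f"tau_v = {tau_v:.3f}, tau_blue = {tau_blue:.3f}, adequate protection: {adequate}")
```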

Keywords: blue light, sunglasses, eye protective devices, transmittance measurement, standards, ISO 12312-1

Procedia PDF Downloads 139
28137 Cobb Angle Measurement from Coronal X-Rays Using Artificial Neural Networks

Authors: Andrew N. Saylor, James R. Peters

Abstract:

Scoliosis is a complex 3D deformity of the thoracic and lumbar spines, clinically diagnosed by measurement of a Cobb angle of 10 degrees or more on a coronal X-ray. The Cobb angle is the angle made by the lines drawn along the proximal and distal endplates of the respective proximal and distal vertebrae comprising the curve. Traditionally, Cobb angles are measured manually using either a marker, straight edge, and protractor or image measurement software. The task of measuring the Cobb angle can also be represented by a function taking the spine geometry rendered using X-ray imaging as input and returning the approximate angle. Although the form of such a function may be unknown, it can be approximated using artificial neural networks (ANNs). The performance of ANNs is affected by many factors, including the choice of activation function and network architecture; however, the effects of these parameters on the accuracy of scoliotic deformity measurements are poorly understood. Therefore, the objective of this study was to systematically investigate the effect of ANN architecture and activation function on Cobb angle measurement from the coronal X-rays of scoliotic subjects. The data set for this study consisted of 609 coronal chest X-rays of scoliotic subjects divided into 481 training images and 128 test images. These data, which included labeled Cobb angle measurements, were obtained from the SpineWeb online database. In order to normalize the input data, each image was resized using bi-linear interpolation to a size of 500 × 187 pixels, and the pixel intensities were scaled to be between 0 and 1. A fully connected (dense) ANN with a fixed cost function (mean squared error), batch size (10), and learning rate (0.01) was developed using Python Version 3.7.3 and TensorFlow 1.13.1. The activation functions (sigmoid, hyperbolic tangent [tanh], or rectified linear units [ReLU]), number of hidden layers (1, 3, 5, or 10), and number of neurons per layer (10, 100, or 1000) were varied systematically to generate a total of 36 network conditions. Stochastic gradient descent with early stopping was used to train each network. Three trials were run per condition, and the final mean squared errors and mean absolute errors were averaged to quantify the network response for each condition. The network that performed the best used ReLU neurons had three hidden layers, and 100 neurons per layer. The average mean squared error of this network was 222.28 ± 30 degrees2, and the average mean absolute error was 11.96 ± 0.64 degrees. It is also notable that while most of the networks performed similarly, the networks using ReLU neurons, 10 hidden layers, and 1000 neurons per layer, and those using Tanh neurons, one hidden layer, and 10 neurons per layer performed markedly worse with average mean squared errors greater than 400 degrees2 and average mean absolute errors greater than 16 degrees. From the results of this study, it can be seen that the choice of ANN architecture and activation function has a clear impact on Cobb angle inference from coronal X-rays of scoliotic subjects.
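As a hedged sketch of the best-performing configuration reported above (three hidden layers of 100 ReLU neurons, MSE loss, SGD with learning rate 0.01, batch size 10, early stopping), the snippet below builds such a network with modern TensorFlow 2 Keras rather than the TensorFlow 1.13 API used in the study. Random arrays stand in for the resized, normalised SpineWeb X-rays and their labeled Cobb angles; data loading is not reproduced.

```python
# Hedged sketch of the reported best network; synthetic data stand in for SpineWeb X-rays.
import numpy as np
import tensorflow as tf

n_train, n_test = 481, 128
img_h, img_w = 500, 187

x_train = np.random.rand(n_train, img_h * img_w).astype("float32")   # flattened images in [0, 1]
y_train = np.random.uniform(10, 60, n_train).astype("float32")       # Cobb angles (degrees)
x_test  = np.random.rand(n_test, img_h * img_w).astype("float32")
y_test  = np.random.uniform(10, 60, n_test).astype("float32")

model = tf.keras.Sequential([
    tf.keras.Input(shape=(img_h * img_w,)),
    tf.keras.layers.Dense(100, activation="relu"),
    tf.keras.layers.Dense(100, activation="relu"),
    tf.keras.layers.Dense(100, activation="relu"),
    tf.keras.layers.Dense(1),                       # regression output: Cobb angle
])
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01),
              loss="mse", metrics=["mae"])

early_stop = tf.keras.callbacks.EarlyStopping(patience=5, restore_best_weights=True)
model.fit(x_train, y_train, batch_size=10, epochs=100,
          validation_split=0.1, callbacks=[early_stop], verbose=0)

mse, mae = model.evaluate(x_test, y_test, verbose=0)
print(f"test MSE = {mse:.1f} deg^2, test MAE = {mae:.2f} deg")
```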

Keywords: scoliosis, artificial neural networks, cobb angle, medical imaging

Procedia PDF Downloads 118
28136 Analysis of Interleaving Scheme for Narrowband VoIP System under Pervasive Environment

Authors: Monica Sharma, Harjit Pal Singh, Jasbinder Singh, Manju Bala

Abstract:

In a Voice over Internet Protocol (VoIP) system, the speech signal is degraded when passed through the network layers. The speech signal is processed through a best-effort IP network, which leads to network degradations including delay, packet loss and jitter. Packet loss is the major cause of degradation in VoIP signal quality; even a single lost packet may generate audible distortion in the decoded speech signal. In addition to these network degradations, the quality of the speech signal is also affected by environmental noise and coder distortions. The signal quality of the VoIP system is improved through the interleaving technique. The performance of the system is evaluated for various types of noise under different network conditions. The performance of the enhanced VoIP signal is evaluated using the perceptual evaluation of speech quality (PESQ) measurement for the narrowband signal.
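The sketch below shows the basic idea of a block interleaver, the mechanism behind the improvement described above: packets are written row-wise and transmitted column-wise, so a burst of consecutive network losses becomes isolated losses after deinterleaving, which the decoder can conceal more easily. The matrix dimensions and the "packets" are illustrative, not the authors' configuration.

```python
# Minimal block-interleaver sketch: write row-wise, send column-wise, so a burst loss
# on the network turns into isolated losses after deinterleaving.
def interleave(packets, rows, cols):
    assert len(packets) == rows * cols
    return [packets[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(packets, rows, cols):
    assert len(packets) == rows * cols
    return [packets[c * rows + r] for r in range(rows) for c in range(cols)]

packets = list(range(12))                    # 12 speech packets (payload indices)
sent = interleave(packets, rows=3, cols=4)

# Simulate a burst loss of 3 consecutive packets on the network
received = [p if i not in (4, 5, 6) else None for i, p in enumerate(sent)]

restored = deinterleave(received, rows=3, cols=4)
print("original :", packets)
print("restored :", restored)   # lost packets are now spread out, easier to conceal
```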

Keywords: VoIP, interleaving, packet loss, packet size, background noise

Procedia PDF Downloads 468
28135 Effect of Common Yoga Protocol on Reaction Time of Football Players

Authors: Vikram Singh

Abstract:

The objective of the study was to examine the effectiveness of the common yoga protocol on reaction time (simple visual reaction time, SVRT, measured in milliseconds/seconds) of male football players in the age group of 15 to 21 years. The 40 boys were randomly assigned to two groups, i.e., control and experimental. SVRT for both groups was measured on day 1 and again after 45 days of the intervention (the common yoga protocol), which was administered to the experimental group only. One-way ANOVA (univariate analysis) and an independent t-test using the SPSS 23 statistical package were applied to analyze the results. There was a significant difference in the simple visual reaction time of the experimental group after 45 days of the yoga protocol (p = .032), t(33.05) = 3.881, p = .000 (two-tailed). The null hypothesis (that there would be no post-measurement differences in reaction times of the control and experimental groups) was rejected at p < .05, and therefore the alternate hypothesis was accepted.

Keywords: footballers, t-test, yoga protocol, reaction time

Procedia PDF Downloads 246
28134 A Case Study on the Condition Monitoring of a Critical Machine in a Tyre Manufacturing Plant

Authors: Ramachandra C. G., Amarnath. M., Prashanth Pai M., Nagesh S. N.

Abstract:

The machine's performance level drops over a period of time due to the wear and tear of its components. The early detection of an emergent fault becomes very vital in order to obtain uninterrupted production in a plant. Maintenance is an activity that helps to keep the machine's performance at an anticipated level, thereby ensuring the availability of the machine to perform its intended function. At present, a number of modern maintenance techniques are available, such as preventive maintenance, predictive maintenance, condition-based maintenance, total productive maintenance, etc. Condition-based maintenance or condition monitoring is one such modern maintenance technique in which the machine's condition or health is checked by the measurement of certain parameters such as sound level, temperature, velocity, displacement, vibration, etc. It can recognize most of the factors restraining the usefulness and efficacy of the total manufacturing unit. This research work is conducted on a batch-off mill in a tyre production unit located in the southern Karnataka region. The health of the mill is assessed using the amplitude of vibration as the measurement parameter. Most commonly, the vibration level is assessed at various points on the machine bearings. The normal or standard level is fixed using reference materials such as manuals or catalogs supplied by the manufacturers and also by referring to vibration standards. The Rio-Vibro meter is placed at different locations on the batch-off mill to record the vibration data. The data collected are analyzed to identify the malfunctioning components in the batch-off mill, and corrective measures are suggested.

Keywords: availability, displacement, vibration, rio-vibro, condition monitoring

Procedia PDF Downloads 66
28133 Effects of Diabetic Duration on Platelet and Platelet Indices in Streptozotocin-Induced Diabetic Rats

Authors: Sahar Oudeh, Abbas Javaheri Vayeghan, Mahmood Ahmadi-Hamedani

Abstract:

This study aimed to investigate the effect of diabetic duration on platelets and platelet indices in streptozotocin-induced diabetic male and female rats. Thirty-two healthy adult Wistar rats (16 females and 16 males) were randomly divided into 4 groups of eight, including 1) a control group (4 females and 4 males who did not undergo any treatment until the end of 28 days), 2) a 7-day diabetic group (4 females and 4 males who were diabetic for 7 days and were euthanized after 7 days), 3) a 14-day diabetic group (4 females and 4 males who were diabetic for 14 days and were euthanized after 14 days), and 4) a 28-day diabetic group (4 females and 4 males who were diabetic for 28 days and were euthanized after 28 days). Diabetes was induced by intraperitoneal injection of streptozotocin (65 mg/kg). After induction of diabetes in the groups, blood samples were taken from their hearts after anesthesia, and platelet counts (PLT) and platelet indices were measured by an automatic blood cell counter (Nihon Kohden, Celltac Alpha VET MEK-6550, Japan). Statistical differences among groups were analyzed using one-way analysis of variance (ANOVA) followed by Tukey's multiple comparison test. The results of this study showed that PLT and mean platelet volume (MPV) significantly increased in the 7- and 14-day diabetic groups compared to the control group, whereas plateletcrit (PCT) and platelet distribution width (PDW) significantly increased in the 14- and 28-day diabetic groups, respectively. Significant differences were observed between female and male rats in PCT and PLT in the 14-day diabetic group and in PDW in the 28-day diabetic group. According to the results of this study, measurement and analysis of platelet indices can be used as a method for the early diagnosis of diabetes and its complications.
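The sketch below reproduces the reported statistical workflow in miniature: a one-way ANOVA across the four duration groups followed by Tukey's multiple-comparison test. The platelet counts are synthetic illustration values, not the study data.

```python
# Sketch of the reported statistics: one-way ANOVA across four duration groups,
# followed by Tukey's test. The platelet counts below are illustrative only.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(7)
groups = {
    "control": rng.normal(700, 60, 8),   # PLT (x10^3/uL), 8 rats per group
    "day7":    rng.normal(850, 60, 8),
    "day14":   rng.normal(880, 60, 8),
    "day28":   rng.normal(760, 60, 8),
}

f_stat, p_val = stats.f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_val:.4f}")

values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```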

Keywords: diabetic duration, streptozotocin, female and male rats, platelet indices

Procedia PDF Downloads 159
28132 Permanent Magnet Synchronous Generator: Unsymmetrical Point Operation

Authors: P. Pistelok

Abstract:

The article presents the concept of the electromagnetic circuit of a generator with permanent magnets mounted on the surface of the rotor core, designed for single-phase operation. A computational field-circuit model is shown. The spectrum of the time course of voltages at no load is presented. A cross-section with a graphical presentation of the magnetic induction in particular parts of the electromagnetic circuit is presented. The distribution of magnetic induction at the rated load point for each phase is shown. The time courses of voltages and currents for each phase at rated power are displayed. An analysis of laboratory results and measurement of the load characteristics of the generator is discussed. The work deals with three electromagnetic circuits of generators with permanent magnets, for which output voltage characteristics versus rated power are presented.

Keywords: permanent magnet generator, permanent magnets, vibration, course of torque, single phase work, asymmetrical three phase work

Procedia PDF Downloads 272
28131 A Proposal of Advanced Key Performance Indicators for Assessing Six Performances of Construction Projects

Authors: Wi Sung Yoo, Seung Woo Lee, Youn Kyoung Hur, Sung Hwan Kim

Abstract:

Large-scale construction projects are continuously increasing, and the need for tools to monitor and evaluate project success is emphasized. At the construction industry level, there are limitations in deriving performance evaluation factors that reflect the diversity of construction sites, and in systems that can objectively evaluate and manage performance. Additionally, there are difficulties in integrating the structured and unstructured data generated at construction sites and deriving improvements. In this study, we propose Key Performance Indicators (KPIs) to enable performance evaluation that reflects the increased diversity of construction sites and the unstructured data generated, and present a model for measuring performance with the derived indicators. The comprehensive performance of a unit construction site is assessed based on 6 areas (Time, Cost, Quality, Safety, Environment, Productivity) and 26 indicators. We collect performance indicator information from 30 construction sites that meet legal standards and have been successfully completed, and we apply data augmentation and optimization techniques to establish measurement standards for each indicator. In other words, the KPIs for construction site performance evaluation presented in this study provide standards for evaluating performance in six areas using institutional requirement data and document data. This can be expanded to establish a performance evaluation system considering the scale and type of construction project. The KPIs are also expected to be used as a comprehensive indicator for the construction industry and as basic data for tracking competitiveness at the national level and establishing policies.

Keywords: key performance indicator, performance measurement, structured and unstructured data, data augmentation

Procedia PDF Downloads 20
28130 Effect of the Orifice Plate Specifications on Coefficient of Discharge

Authors: Abulbasit G. Abdulsayid, Zinab F. Abdulla, Asma A. Omer

Abstract:

On the grounds that the orifice plate is relatively inexpensive, requires very little maintenance and is only calibrated on the occasion of a plant turnaround, the orifice plate has come into very prevalent use in the gas industry. Inaccuracy of measurement in fiscal metering stations may well be the most important factor behind mischarges in the natural gas industry in Libya. Even a trivial error in measurement can add up to a rapidly escalating financial burden in custody transfer transactions. The unaccounted gas quantity transferred annually via orifice plates in Libya could be estimated to the extent of multi-million dollars. As oil and gas wealth is the sole source of income for Libya, every effort is now being exerted to improve the accuracy of existing orifice metering facilities. The discharge coefficient has become pivotal in current research undertaken in this regard. Hence, increasing the knowledge of the flow field in a typical orifice meter is indispensable. Recently, and at a drastic pace, CFD has become the most time- and cost-efficient versatile tool for in-depth analysis of fluid mechanics and heat and mass transfer in various industrial applications. Getting deeper into the underlying physical phenomena, and predicting all relevant parameters and variables with high spatial and temporal resolution, are the greatest advantages counting for CFD. In this paper, flow phenomena for air passing through an orifice meter were numerically analyzed with CFD-code-based modeling, giving important information about the effect of orifice plate specifications on the discharge coefficient for three different tapping locations, i.e., flange tappings and D and D/2 tappings, compared with vena contracta tappings. Discharge coefficients were compared with the discharge coefficients estimated by ISO 5167. The influences of orifice plate bore thickness, orifice plate thickness, bevel angle, perpendicularity and buckling of the orifice plate were all duly investigated. A case of an orifice meter with a pipe diameter of 2 in, a beta ratio of 0.5 and a Reynolds number of 91100 was taken as a model. The results highlighted that the discharge coefficients were highly responsive to the variation of plate specifications and that, in all cases, the discharge coefficients for D and D/2 tappings were very close to those of vena contracta tappings, which are believed to be an ideal arrangement. Also, in a general sense, it was appreciated that the standard equation in ISO 5167, by which the discharge coefficient is calculated, cannot capture the variation of the plate specifications, and thus further thorough considerations would still be needed.
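For orientation, the sketch below evaluates the ISO-5167-style orifice mass-flow relation for the modelled case (2 in pipe, beta ratio 0.5) and shows how the discharge coefficient can be recovered by inverting the same relation. The pressure drop, density, expansibility factor and discharge coefficient value are assumed illustration values, not results from the paper.

```python
# Hedged sketch of the ISO 5167-style orifice equation for the modelled case
# (2 in pipe, beta = 0.5). dp, rho, eps and Cd are assumed illustration values.
import math

D    = 2 * 0.0254          # pipe inside diameter (m)
beta = 0.5
d    = beta * D            # orifice bore diameter (m)
dp   = 5.0e3               # differential pressure across the tappings (Pa) -- assumption
rho  = 1.2                 # air density (kg/m^3) -- assumption
eps  = 1.0                 # expansibility factor, ~1 for small dp -- assumption
Cd   = 0.605               # discharge coefficient -- assumed value

# Mass flow through the orifice
qm = (Cd / math.sqrt(1 - beta**4)) * eps * (math.pi / 4) * d**2 * math.sqrt(2 * dp * rho)

# Inverting the same relation recovers Cd from a "measured" mass flow
Cd_back = qm * math.sqrt(1 - beta**4) / (eps * (math.pi / 4) * d**2 * math.sqrt(2 * dp * rho))

print(f"d = {d*1e3:.1f} mm, qm = {qm:.4f} kg/s, recovered Cd = {Cd_back:.3f}")
```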

Keywords: CFD, discharge coefficients, orifice meter, orifice plate specifications

Procedia PDF Downloads 110
28129 W-WING: Aeroelastic Demonstrator for Experimental Investigation into Whirl Flutter

Authors: Jiri Cecrdle

Abstract:

This paper describes the concept of the W-WING whirl flutter aeroelastic demonstrator. Whirl flutter is the specific case of flutter that accounts for the additional dynamic and aerodynamic influences of the engine rotating parts. The instability is driven by motion-induced unsteady aerodynamic propeller forces and moments acting in the propeller plane. Whirl flutter instability is a serious problem that may cause the unstable vibration of a propeller mounting, leading to the failure of an engine installation or an entire wing. The complicated physical principle of whirl flutter required the experimental validation of the analytically gained results. W-WING aeroelastic demonstrator has been designed and developed at Czech Aerospace Research Centre (VZLU) Prague, Czechia. The demonstrator represents the wing and engine of the twin turboprop commuter aircraft. Contrary to the most of past demonstrators, it includes a powered motor and thrusting propeller. It allows the changes of the main structural parameters influencing the whirl flutter stability characteristics. Propeller blades are adjustable at standstill. The demonstrator is instrumented by strain gauges, accelerometers, revolution-counting impulse sensor, sensor of airflow velocity, and the thrust measurement unit. Measurement is supported by the in house program providing the data storage and real-time depiction in the time domain as well as pre-processing into the form of the power spectral densities. The engine is linked with a servo-drive unit, which enables maintaining of the propeller revolutions (constant or controlled rate ramp) and monitoring of immediate revolutions and power. Furthermore, the program manages the aerodynamic excitation of the demonstrator by the aileron flapping (constant, sweep, impulse). Finally, it provides the safety guard to prevent any structural failure of the demonstrator hardware. In addition, LMS TestLab system is used for the measurement of the structure response and for the data assessment by means of the FFT- and OMA-based methods. The demonstrator is intended for the experimental investigations in the VZLU 3m-diameter low-speed wind tunnel. The measurement variant of the model is defined by the structural parameters: pitch and yaw attachment stiffness, pitch and yaw hinge stations, balance weight station, propeller type (duralumin or steel blades), and finally, angle of attack of the propeller blade 75% section (). The excitation is provided either by the airflow turbulence or by means of the aerodynamic excitation by the aileron flapping using a frequency harmonic sweep. The experimental results are planned to be utilized for validation of analytical methods and software tools in the frame of development of the new complex multi-blade twin-rotor propulsion system for the new generation regional aircraft. Experimental campaigns will include measurements of aerodynamic derivatives and measurements of stability boundaries for various configurations of the demonstrator.
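The snippet below illustrates the power-spectral-density pre-processing step mentioned above: an accelerometer time record is reduced to a PSD with Welch's method, from which dominant modal frequencies can be read. The sampling rate, modal frequencies and the synthetic signal are assumptions for illustration, not W-WING measurement data.

```python
# Sketch of PSD pre-processing: accelerometer time record -> Welch power spectral
# density. The signal, sampling rate and modal frequencies are synthetic stand-ins.
import numpy as np
from scipy.signal import welch

fs = 2048                                   # sampling rate (Hz) -- assumption
t = np.arange(0, 10, 1 / fs)
# Synthetic response: two structural modes plus broadband turbulence excitation
signal = (0.8 * np.sin(2 * np.pi * 12.5 * t)        # pitch-like mode
          + 0.5 * np.sin(2 * np.pi * 18.0 * t)      # yaw-like mode
          + 0.2 * np.random.default_rng(0).normal(size=t.size))

freqs, psd = welch(signal, fs=fs, nperseg=4096)
dominant = freqs[np.argmax(psd)]
print(f"dominant frequency ~ {dominant:.1f} Hz")
```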

Keywords: aeroelasticity, flutter, whirl flutter, W WING demonstrator

Procedia PDF Downloads 83
28128 Experimental Characterization of Anisotropic Mechanical Properties of Textile Woven Fabric

Authors: Rym Zouari, Sami Ben Amar, Abdelwaheb Dogui

Abstract:

This paper presents an experimental characterization of the anisotropic mechanical behavior of 4 textile woven fabrics with different weaves (Twill 3, Plain, Twill 4 and Satin 4) by off-axis tensile testing. These tests are applied along seven directions oriented in 15° increments with respect to the warp direction. Fixed and articulated jaws are used. Analysis of the experimental results is done at global (effort/elongation curves) and local scales. Global anisotropy was studied from the effort/elongation curves: shape, breaking load (Frup), tensile elongation (EMT), tensile energy (WT) and linearity index (LT). Local anisotropy was studied from the measurement of strain tensor components in the central area of the specimen as a function of testing orientation and effort: longitudinal strain ɛL, transverse strain ɛT and shearing ɛLT. The effect of the jaws used is also analyzed.

Keywords: anisotropy, off-axis tensile test, strain fields, textile woven fabric

Procedia PDF Downloads 347
28127 Causes of Deteriorations of Flexible Pavement, Its Condition Rating and Maintenance

Authors: Pooja Kherudkar, Namdeo Hedaoo

Abstract:

There are various causes of asphalt pavement distresses, which can develop prematurely or with aging in service. These causes are not limited to the aging of the bitumen binder but include poor-quality materials and construction, inadequate mix design, inadequate pavement structural design for the traffic, and a lack of preventive maintenance. There is physical evidence available for each type of pavement distress. Distress in asphalt pavements can be categorized into different distress modes like fracture (cracking and spalling), distortion (permanent deformation and slippage), and disintegration (raveling and potholes). This study shows the importance of determining the severity of distresses for the selection of appropriate preventive maintenance treatment. Distress analysis of the deteriorated roads was carried out. Four urban flexible pavement roads from Pune city were selected as a case study. The roads were surveyed to detect the types of distress and to measure their severity and extent. Causes of distresses were investigated. The pavement condition rating values of the roads were calculated. The rating ranges were as follows: 1 for a road in poor condition, 1.1 to 2 for fair condition, and 2.1 to 3 for good condition. Out of the four roads, two were found to be in fair condition and the other two in good condition. From the various preventive maintenance treatments, such as crack seal, fog seal, slurry seal, microsurfacing, surface dressing and thin hot-mix/cold-mix bituminous overlays, the effective maintenance treatments with respect to the surface condition and severity levels of the existing pavement were recommended.

Keywords: distress analysis, pavement condition rating, preventive maintenance treatments, surface distress measurement

Procedia PDF Downloads 181
28126 An Attempt on Antimicrobial Studies of Lanthanide Schiff Base Complexes

Authors: Lekha Logu

Abstract:

The coordination behavior of the newly synthesized Schiff base ligands, 4-bromo-2-((p-tolyl imino) methyl) phenol, obtained by condensing para-toluidine with 5-bromo salicylaldehyde, and N-(3,4-dichloro benzylidene)-4-methylbenzenamine, obtained by condensing para-toluidine with 3,4-dichloro benzaldehyde in an ethanolic medium, has been explored in the current study. The synthesized Schiff base ligands were complexed with lanthanide nitrate salts, yielding [LnL(NO3)2(H2O)2]NO3 (Ln = Pr, Sm). Elemental analysis, conductance measurement, and spectral techniques like Nuclear Magnetic Resonance (NMR), Ultraviolet-visible (UV-Vis) and Fourier Transform Infrared (FTIR) spectroscopy have been used to characterize the Schiff base ligands and their lanthanide metal complexes. An attempt has been made to assess the antimicrobial activity of these complexes against gram-positive and gram-negative bacterial species such as Escherichia coli, Staphylococcus aureus, Bacillus subtilis and Klebsiella pneumoniae, and fungal species such as Candida and Aspergillus.

Keywords: lanthanide complexes, Schiff's base, antimicrobial assay, synthesis, characterization

Procedia PDF Downloads 53
28125 Low Cost Technique for Measuring Luminance in Biological Systems

Authors: N. Chetty, K. Singh

Abstract:

In this work, the relationship between the melanin content in a tissue and subsequent absorption of light through that tissue was determined using a digital camera. This technique proved to be simple, cost effective, efficient and reliable. Tissue phantom samples were created using milk and soy sauce to simulate the optical properties of melanin content in human tissue. Increasing the concentration of soy sauce in the milk correlated to an increase in melanin content of an individual. Two methods were employed to measure the light transmitted through the sample. The first was direct measurement of the transmitted intensity using a conventional lux meter. The second method involved correctly calibrating an ordinary digital camera and using image analysis software to calculate the transmitted intensity through the phantom. The results from these methods were then graphically compared to the theoretical relationship between the intensity of transmitted light and the concentration of absorbers in the sample. Conclusions were then drawn about the effectiveness and efficiency of these low cost methods.

Keywords: tissue phantoms, scattering coefficient, albedo, low-cost method

Procedia PDF Downloads 263
28124 Linkage Disequilibrium and Haplotype Blocks Study from Two High-Density Panels and a Combined Panel in Nelore Beef Cattle

Authors: Priscila A. Bernardes, Marcos E. Buzanskas, Luciana C. A. Regitano, Ricardo V. Ventura, Danisio P. Munari

Abstract:

Genotype imputation has been used to reduce genomic selections costs. In order to increase haplotype detection accuracy in methods that considers the linkage disequilibrium, another approach could be used, such as combined genotype data from different panels. Therefore, this study aimed to evaluate the linkage disequilibrium and haplotype blocks in two high-density panels before and after the imputation to a combined panel in Nelore beef cattle. A total of 814 animals were genotyped with the Illumina BovineHD BeadChip (IHD), wherein 93 animals (23 bulls and 70 progenies) were also genotyped with the Affymetrix Axion Genome-Wide BOS 1 Array Plate (AHD). After the quality control, 809 IHD animals (509,107 SNPs) and 93 AHD (427,875 SNPs) remained for analyses. The combined genotype panel (CP) was constructed by merging both panels after quality control, resulting in 880,336 SNPs. Imputation analysis was conducted using software FImpute v.2.2b. The reference (CP) and target (IHD) populations consisted of 23 bulls and 786 animals, respectively. The linkage disequilibrium and haplotype blocks studies were carried out for IHD, AHD, and imputed CP. Two linkage disequilibrium measures were considered; the correlation coefficient between alleles from two loci (r²) and the |D’|. Both measures were calculated using the software PLINK. The haplotypes' blocks were estimated using the software Haploview. The r² measurement presented different decay when compared to |D’|, wherein AHD and IHD had almost the same decay. For r², even with possible overestimation by the sample size for AHD (93 animals), the IHD presented higher values when compared to AHD for shorter distances, but with the increase of distance, both panels presented similar values. The r² measurement is influenced by the minor allele frequency of the pair of SNPs, which can cause the observed difference comparing the r² decay and |D’| decay. As a sum of the combinations between Illumina and Affymetrix panels, the CP presented a decay equivalent to a mean of these combinations. The estimated haplotype blocks detected for IHD, AHD, and CP were 84,529, 63,967, and 140,336, respectively. The IHD were composed by haplotype blocks with mean of 137.70 ± 219.05kb, the AHD with mean of 102.10kb ± 155.47, and the CP with mean of 107.10kb ± 169.14. The majority of the haplotype blocks of these three panels were composed by less than 10 SNPs, with only 3,882 (IHD), 193 (AHD) and 8,462 (CP) haplotype blocks composed by 10 SNPs or more. There was an increase in the number of chromosomes covered with long haplotypes when CP was used as well as an increase in haplotype coverage for short chromosomes (23-29), which can contribute for studies that explore haplotype blocks. In general, using CP could be an alternative to increase density and number of haplotype blocks, increasing the probability to obtain a marker close to a quantitative trait loci of interest.
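As a small aid to the two linkage disequilibrium measures discussed above, the sketch below computes r² and |D'| from haplotype counts at a pair of biallelic loci. The counts are illustrative values, not frequencies from the Nelore data.

```python
# Minimal sketch of the two LD measures used above, computed from haplotype counts.
def ld_measures(n_AB, n_Ab, n_aB, n_ab):
    n = n_AB + n_Ab + n_aB + n_ab
    p_AB = n_AB / n
    p_A = (n_AB + n_Ab) / n          # frequency of allele A at locus 1
    p_B = (n_AB + n_aB) / n          # frequency of allele B at locus 2
    p_a, p_b = 1 - p_A, 1 - p_B

    D = p_AB - p_A * p_B
    d_max = min(p_A * p_b, p_a * p_B) if D >= 0 else min(p_A * p_B, p_a * p_b)
    d_prime = abs(D) / d_max if d_max > 0 else 0.0
    r2 = D**2 / (p_A * p_a * p_B * p_b)
    return r2, d_prime

r2, d_prime = ld_measures(n_AB=420, n_Ab=80, n_aB=90, n_ab=410)   # illustrative counts
print(f"r^2 = {r2:.3f}, |D'| = {d_prime:.3f}")
```

Because r² depends on the minor allele frequencies of the SNP pair while |D'| does not, the two measures decay differently with distance, as noted in the abstract.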

Keywords: Bos taurus indicus, decay, genotype imputation, single nucleotide polymorphism

Procedia PDF Downloads 264
28123 Design and Performance Improvement of Three-Dimensional Optical Code Division Multiple Access Networks with NAND Detection Technique

Authors: Satyasen Panda, Urmila Bhanja

Abstract:

In this paper, we have presented and analyzed three-dimensional (3-D) matrices of wavelength/time/space code for optical code division multiple access (OCDMA) networks with the NAND subtraction detection technique. The 3-D codes are constructed by integrating a two-dimensional modified quadratic congruence (MQC) code with a one-dimensional modified prime (MP) code. The respective encoders and decoders were designed using fiber Bragg gratings and optical delay lines to minimize the bit error rate (BER). The performance analysis of the 3-D OCDMA system is based on measurement of the signal-to-noise ratio (SNR), BER and eye diagram for different numbers of simultaneous users. Also, in the analysis, various types of noise and multiple access interference (MAI) effects were considered. The results obtained with the NAND detection technique were compared with those obtained with the OR and AND subtraction techniques. The comparison results proved that the NAND detection technique with the 3-D MQC/MP code can accommodate a greater number of simultaneous users over longer fiber distances with minimum BER compared to the OR and AND subtraction techniques. The received optical power was also measured at various levels of BER to analyze the effect of attenuation.

Keywords: Cross Correlation (CC), Three dimensional Optical Code Division Multiple Access (3-D OCDMA), Spectral Amplitude Coding Optical Code Division Multiple Access (SAC-OCDMA), Multiple Access Interference (MAI), Phase Induced Intensity Noise (PIIN), Three Dimensional Modified Quadratic Congruence/Modified Prime (3-D MQC/MP) code

Procedia PDF Downloads 403
28122 The Effect of Core Training on Physical Fitness Characteristics in Male Volleyball Players

Authors: Sibel Karacaoglu, Fatma Ç. Kayapinar

Abstract:

The aim of the study is to investigate the effect of a core training program on physical fitness characteristics and body composition in male volleyball players. 26 male university volleyball team players aged between 19 and 24 years who had no health problems or injuries participated in the study. Subjects were randomly divided into training (TG) and control (CG) groups. Data from twenty-one players who completed all training sessions were used for statistical analysis (TG, n=11; CG, n=10). A core training program was applied to the training group three days a week for 10 weeks. On the other hand, the control group did not receive any training. Before and after the 10-week training program, pre- and post-testing comprising body composition measurements (weight, BMI, bioelectrical impedance analysis) and physical fitness measurements, including flexibility (sit-and-reach test), muscle strength (back, leg and grip strength by dynamometer), muscle endurance (sit-up and push-up tests), power (one-legged jump and vertical jump tests), speed (20 m sprint, 30 m sprint) and balance (one-legged standing test), was performed. Changes between the pre- and post-test values of the groups were determined using the dependent t-test. According to the statistical analysis of the data, no significant difference was found in terms of body composition in either group between pre- and post-test values. In the training group, all physical fitness measurements improved significantly after the core training program (p<0.05) except the 30 m speed and handgrip strength (p>0.05). On the other hand, in the control group only the 20 m speed test values improved in the post-test period (p<0.05), while the other physical fitness test values did not differ (p>0.05) between pre- and post-test measurements. The results of the study suggest that the core training program has a positive effect on physical fitness characteristics in male volleyball players.

Keywords: body composition, core training, physical fitness, volleyball

Procedia PDF Downloads 340
28121 Determinants of Corporate Social Responsibility Adoption: Evidence from China

Authors: Jing (Claire) LI

Abstract:

More than two decades from 2000 to 2020 of economic reforms have brought China unprecedented economic growth. There is an urgent call of research towards corporate social responsibility (CSR) in the context of China because while China continues to develop into a global trading market, it suffers from various serious problems relating to CSR. This study analyses the factors affecting the adoption of CSR practices by Chinese listed companies. The author proposes a new framework of factors of CSR adoption. Following common organisational factors and external factors in the literature (including organisational support, company size, shareholder pressures, and government support), this study introduces two additional factors, dynamic capability and regional culture. A survey questionnaire was conducted on the CSR adoption of Chinese listed companies in Shen Zhen and Shang Hai index from December 2019 to March 2020. The survey was conducted to collect data on the factors that affect the adoption of CSR. After collection of data, this study performed factor analysis to reduce the number of measurement items to several main factors. This procedure is to confirm the proposed framework and ensure the significant factors. Through analysis, this study identifies four grouped factors as determinants of the CSR adoption. The first factor loading includes dynamic capability and organisational support. The study finds that they are positively related to the first factor, so the first factor mainly reflects the capabilities of companies, which is one component in internal factors. In the second factor, measurement items of stakeholder pressures mainly are from regulatory bodies, customer and supplier, employees and community, and shareholders. In sum, they are positively related to the second factor and they reflect stakeholder pressures, which is one component of external factors. The third factor reflects organisational characteristics. Variables include company size and cultural score. Among these variables, company size is negatively related to the third factor. The resulted factor loading of the third factor implies that organisational factor is an important determinant of CSR adoption. Cultural consistency, the variable in the fourth factor, is positively related to the factor. It represents the difference between perception of managers and actual culture of the organisations in terms of cultural dimensions, which is one component in internal factors. It implies that regional culture is an important factor of CSR adoption. Overall, the results are consistent with previous literature. This study is of significance from both theoretical and empirical perspectives. First, from the significance of theoretical perspective, this research combines stakeholder theory, dynamic capability view of a firm, and neo-institutional theory in CSR research. Based on association of these three theories, this study introduces two new factors (dynamic capability and regional culture) to have a better framework for CSR adoption. Second, this study contributes to empirical literature of CSR in the context of China. Extant Chinese companies lack recognition of the importance of CSR practices adoption. This study built a framework and may help companies to design resource allocation strategies and evaluate future CSR and management practices in an early stage.

Keywords: China, corporate social responsibility, CSR adoption, dynamic capability, regional culture

Procedia PDF Downloads 122
28120 Assessing the Social Impacts of Regional Services: The Case of a Portuguese Municipality

Authors: A. Camões, M. Ferreira Dias, M. Amorim

Abstract:

In recent years, the social economy is increasingly seen as a viable means to address social problems. Social enterprises, as well as public projects and initiatives targeted to meet social purposes, offer organizational models that assume heterogeneity, flexibility and adaptability to the 'real world and real problems'. Despite the growing popularity of social initiatives, decision makers still face a paucity of available models and tools to adequately assess their sustainability and impacts, notably the nature of their contribution to economic growth. This study was carried out at the local level, by analyzing the social impact initiatives and projects promoted by the Municipality of Albergaria-a-Velha (Câmara Municipal de Albergaria-a-Velha, CMA), a municipality of 25,000 inhabitants in the central region of Portugal. This work focuses on the challenges related to the qualifications and employability of citizens, which stand out as key concerns in the Portuguese economy, particularly expressive in the context of small-scale cities and inland territories. The study offers a characterization of the Municipality, its socio-economic structure and challenges, followed by an exploratory analysis of multi-sourced data collected from the CMA's documental sources as well as from privileged informants. The purpose is to conduct a detailed analysis of the CMA's social projects, aimed at characterizing their potential impact on the qualifications and employability of the citizens of the Municipality. The study encompasses a discussion of the socio-economic profile of the municipality, notably its asymmetries, the analysis of the social projects and initiatives, as well as of data derived from inquiries to the actors involved in the implementation of the social projects and to their beneficiaries. Finally, the results obtained with the Better Life Index will be included. This study makes it possible to ascertain whether what is implicit in the literature corresponds to what is experienced in reality.

Keywords: measurement, municipalities, social economy, social impact

Procedia PDF Downloads 123
28119 On-Line Data-Driven Multivariate Statistical Prediction Approach to Production Monitoring

Authors: Hyun-Woo Cho

Abstract:

Detection of incipient abnormal events in production processes is important to improve the safety and reliability of manufacturing operations and to reduce losses caused by failures. The construction of calibration models for predicting faulty conditions is quite essential in making decisions on when to perform preventive maintenance. This paper presents a multivariate calibration monitoring approach based on the statistical analysis of process measurement data. The calibration model is used to predict faulty conditions from historical reference data. This approach utilizes variable selection techniques, and the predictive performance of several prediction methods is evaluated using real data. The results show that the calibration model based on a supervised probabilistic model yielded the best performance in this work. By adopting a proper variable selection scheme in calibration models, the prediction performance can be improved by excluding non-informative variables from the model building steps.
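The sketch below illustrates the calibration-model idea described above in a minimal form: a variable-selection step keeps the most informative process variables, a partial least squares regression is fitted on historical reference data, and new measurements are scored against an alarm threshold. The data, the PLS component count and the threshold are synthetic assumptions, and PLS is used here as a generic stand-in for the paper's supervised probabilistic model.

```python
# Minimal sketch: variable selection + PLS calibration model on historical data,
# then scoring of new process measurements. All values are synthetic assumptions.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.feature_selection import SelectKBest, f_regression

rng = np.random.default_rng(3)
n_samples, n_vars = 200, 30
X = rng.normal(size=(n_samples, n_vars))                 # historical process measurements
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(0, 0.3, n_samples)   # fault indicator

# Simple variable selection: keep variables most correlated with the fault indicator
selector = SelectKBest(score_func=f_regression, k=8).fit(X, y)
X_sel = selector.transform(X)

pls = PLSRegression(n_components=3).fit(X_sel, y)

# Score a new batch of measurements and flag predicted faulty conditions
X_new = rng.normal(size=(5, n_vars))
y_hat = pls.predict(selector.transform(X_new)).ravel()
print(["faulty" if v > 1.0 else "normal" for v in y_hat])   # assumed alarm threshold
```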

Keywords: calibration model, monitoring, quality improvement, feature selection

Procedia PDF Downloads 346
28118 Raman Spectral Fingerprints of Healthy and Cancerous Human Colorectal Tissues

Authors: Maria Karnachoriti, Ellas Spyratou, Dimitrios Lykidis, Maria Lambropoulou, Yiannis S. Raptis, Ioannis Seimenis, Efstathios P. Efstathopoulos, Athanassios G. Kontos

Abstract:

Colorectal cancer is the third most common cancer diagnosed in Europe, according to the latest incidence data provided by the World Health Organization (WHO), and early diagnosis has proved to be the key in reducing cancer-related mortality. In cases where surgical interventions are required for cancer treatment, the accurate discrimination between healthy and cancerous tissues is critical for the postoperative care of the patient. The current study focuses on the ex vivo handling of surgically excised colorectal specimens and the acquisition of their spectral fingerprints using Raman spectroscopy. Acquired data were analyzed in an effort to discriminate, in microscopic scale, between healthy and malignant margins. Raman spectroscopy is a spectroscopic technique with high detection sensitivity and spatial resolution of few micrometers. The spectral fingerprint which is produced during laser-tissue interaction is unique and characterizes the biostructure and its inflammatory or cancer state. Numerous published studies have demonstrated the potential of the technique as a tool for the discrimination between healthy and malignant tissues/cells either ex vivo or in vivo. However, the handling of the excised human specimens and the Raman measurement conditions remain challenging, unavoidably affecting measurement reliability and repeatability, as well as the technique’s overall accuracy and sensitivity. Therefore, tissue handling has to be optimized and standardized to ensure preservation of cell integrity and hydration level. Various strategies have been implemented in the past, including the use of balanced salt solutions, small humidifiers or pump-reservoir-pipette systems. In the current study, human colorectal specimens of 10X5 mm were collected from 5 patients up to now who underwent open surgery for colorectal cancer. A novel, non-toxic zinc-based fixative (Z7) was used for tissue preservation. Z7 demonstrates excellent protein preservation and protection against tissue autolysis. Micro-Raman spectra were recorded with a Renishaw Invia spectrometer from successive random 2 micrometers spots upon excitation at 785 nm to decrease fluorescent background and secure avoidance of tissue photodegradation. A temperature-controlled approach was adopted to stabilize the tissue at 2 °C, thus minimizing dehydration effects and consequent focus drift during measurement. A broad spectral range, 500-3200 cm-1,was covered with five consecutive full scans that lasted for 20 minutes in total. The average spectra were used for least square fitting analysis of the Raman modes.Subtle Raman differences were observed between normal and cancerous colorectal tissues mainly in the intensities of the 1556 cm-1 and 1628 cm-1 Raman modes which correspond to v(C=C) vibrations in porphyrins, as well as in the range of 2800-3000 cm-1 due to CH2 stretching of lipids and CH3 stretching of proteins. Raman spectra evaluation was supported by histological findings from twin specimens. This study demonstrates that Raman spectroscopy may constitute a promising tool for real-time verification of clear margins in colorectal cancer open surgery.

Keywords: colorectal cancer, Raman spectroscopy, malignant margins, spectral fingerprints

Procedia PDF Downloads 82
28117 Linking Soil Spectral Behavior and Moisture Content for Soil Moisture Content Retrieval at Field Scale

Authors: Yonwaba Atyosi, Moses Cho, Abel Ramoelo, Nobuhle Majozi, Cecilia Masemola, Yoliswa Mkhize

Abstract:

Spectroscopy has been widely used to understand the hyperspectral remote sensing of soils. Accurate and efficient measurement of soil moisture is essential for precision agriculture. The aim of this study was to understand the spectral behavior of soil at different soil water content levels and to identify the significant spectral bands for soil moisture content retrieval at field scale. The study used 60 soil samples from a maize farm, divided into four treatments representing different moisture levels. Spectral signatures were measured for each sample in the laboratory under artificial light using an Analytical Spectral Device (ASD) spectrometer, covering a wavelength range from 350 nm to 2500 nm with a spectral resolution of 1 nm. The results showed that the absorption features at 1450 nm, 1900 nm, and 2200 nm were particularly sensitive to soil moisture content and exhibited strong correlations with the water content levels. A continuum removal procedure was implemented in the R programming language to enhance the absorption features of soil moisture and to characterize its spectral behavior at different water content levels. Partial least squares regression (PLSR) models were then used to quantify the correlation between the spectral bands and soil moisture content. This study provides insights into the spectral behavior of soil at different water content levels and identifies the significant spectral bands for soil moisture content retrieval. The findings highlight the potential of spectroscopy for non-destructive and rapid soil moisture measurement, which can be applied in fields such as precision agriculture, hydrology, and environmental monitoring. However, the spectral behavior of soil can be influenced by factors such as soil type, texture, and organic matter content, and caution should be taken when applying the results to other soil systems. The results showed good agreement between measured and predicted values of soil moisture content, with high R2 and low root mean square error (RMSE) values, and model validation using independent data was satisfactory for all the studied soil samples. The results have significant implications for developing high-resolution and precise field-scale soil moisture retrieval models, which can be used to understand the spatial and temporal variation of soil moisture content in agricultural fields, and are therefore essential for managing irrigation and optimizing crop yield.
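As an illustration of the processing chain described above (the authors implemented continuum removal in R), the following Python sketch applies convex-hull continuum removal to simulated reflectance spectra and fits a PLSR model relating them to moisture content; all spectra and moisture values are placeholders, not the study's measurements.

```python
# Minimal sketch: continuum removal (upper convex hull) + PLSR, simulated data.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def continuum_removed(wavelength, reflectance):
    """Divide a spectrum by its upper convex hull (the continuum)."""
    hull = []
    for p in zip(wavelength, reflectance):  # wavelengths assumed ascending
        while len(hull) >= 2:
            (ox, oy), (ax, ay) = hull[-2], hull[-1]
            # pop last hull point if it lies below the chord to the new point
            if (ax - ox) * (p[1] - oy) - (ay - oy) * (p[0] - ox) >= 0:
                hull.pop()
            else:
                break
        hull.append(p)
    hx, hy = zip(*hull)
    continuum = np.interp(wavelength, hx, hy)
    return reflectance / continuum

rng = np.random.default_rng(0)
wl = np.arange(350, 2501, 10.0)                   # wavelengths (nm)
moisture = rng.uniform(5, 35, 60)                 # simulated moisture (%), 60 samples
# Toy spectra: deeper 1450 / 1900 nm absorptions for wetter samples
spectra = np.array([0.5 - 0.004 * m * (np.exp(-((wl - 1450) / 60) ** 2)
                                       + np.exp(-((wl - 1900) / 80) ** 2))
                    + rng.normal(0, 0.002, wl.size) for m in moisture])
X = np.array([continuum_removed(wl, s) for s in spectra])

pls = PLSRegression(n_components=5).fit(X, moisture)
pred = pls.predict(X).ravel()
print("RMSE: %.2f" % np.sqrt(np.mean((pred - moisture) ** 2)))
```

Dividing each spectrum by its continuum isolates the depth of the water absorption features, which is what makes the subsequent PLSR fit sensitive to moisture rather than to the overall albedo.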

Keywords: soil moisture content retrieval, precision agriculture, continuum removal, remote sensing, machine learning, spectroscopy

Procedia PDF Downloads 82
28116 Enhancing Cybersecurity Protective Behaviour: Role of Information Security Competencies and Procedural Information Security Countermeasure Awareness

Authors: Norshima Humaidi, Saif Hussein Abdallah Alghazo

Abstract:

Cybersecurity threats have become a serious issue recently, and one of the causes is human error, usually constituted by carelessness, ignorance, and failure to practice cybersecurity behaviour adequately. Using data from a quantitative survey, Partial Least Squares-Structural Equation Modelling (PLS-SEM) analysis was used to determine the factors that affect cybersecurity protective behaviour (CPB). This study adapts a cybersecurity protective behaviour model by focusing on two constructs that can enhance CPB: manager’s information security competencies (MISI) and procedural information security countermeasure (PCM) awareness. The theory of leadership competencies was adapted to measure users’ perceptions of the competencies of security managers/leaders in the organization. Confirmatory factor analysis (CFA) showed that the measurement items of each construct were individually valid based on their factor loading values, and each construct was valid based on its parameter estimates and statistical significance. The quantitative findings show that PCM awareness influences CPB more strongly than MISI, while MISI was significantly related to PCM awareness. The research findings can contribute to the study of human behaviour in IS and are particularly beneficial to policy makers in improving organizations’ strategic plans for information security, especially in this new era. Most organizations spend time and resources to provide and establish strategic plans for information security; however, if employees are not willing to comply and practice information security behaviour appropriately, these efforts are in vain.
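For readers unfamiliar with the measurement-model checks mentioned above, the sketch below shows how indicator factor loadings are typically summarised into composite reliability (CR) and average variance extracted (AVE) in a PLS-SEM/CFA workflow; the loadings and thresholds are illustrative conventions, not estimates from this study.

```python
# Illustrative only: construct-level reliability/validity summaries from
# standardized indicator loadings (hypothetical values, not the study's data).
import numpy as np

def composite_reliability(loadings):
    loadings = np.asarray(loadings, dtype=float)
    num = loadings.sum() ** 2
    return num / (num + (1 - loadings ** 2).sum())

def average_variance_extracted(loadings):
    loadings = np.asarray(loadings, dtype=float)
    return (loadings ** 2).mean()

pcm_loadings = [0.82, 0.79, 0.85, 0.77]   # hypothetical indicator loadings
print("CR  = %.3f" % composite_reliability(pcm_loadings))       # commonly judged acceptable above 0.7
print("AVE = %.3f" % average_variance_extracted(pcm_loadings))  # commonly judged acceptable above 0.5
```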

Keywords: cybersecurity, protection behaviour, information security, information security competencies, countermeasure awareness

Procedia PDF Downloads 83
28115 Offline High Voltage Diagnostic Test Findings on 15MVA Generator of Basochhu Hydropower Plant

Authors: Suprit Pradhan, Tshering Yangzom

Abstract:

Even with the availability of modern online insulation diagnostic technologies such as partial discharge monitoring, measurements like Dissipation Factor (tanδ), DC High Voltage Insulation Currents, Polarization Index (PI) and Insulation Resistance are still widely used as diagnostic tools to assess the condition of stator insulation in hydropower plants. To evaluate the condition of the stator winding insulation in one of the generators that has been in operation since 1999, diagnostic tests were performed on the stator bars of the 15 MVA generator of Basochhu Hydropower Plant. This paper presents a diagnostic study of the data gathered from measurements performed in 2015 and 2016 as part of regular maintenance, since no proper ageing data had been maintained since commissioning. Measurement results of Dissipation Factor, DC High Potential tests and Polarization Index are discussed with regard to their effectiveness in assessing the ageing condition of the stator insulation. After a brief review of the theoretical background, the strengths of each diagnostic method in detecting symptoms of insulation deterioration are identified. The results observed at Basochhu Hydropower Plant lead to the conclusion that Polarization Index and DC High Voltage Insulation Current measurements are best suited for detecting humidity and contamination problems, while Dissipation Factor measurement is a robust indicator of long-term ageing caused by oxidative degradation.
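As a hedged illustration of the indices behind these diagnostics, the snippet below computes the polarization index, dielectric absorption ratio and tan-delta tip-up from hypothetical readings (not the plant's data); the acceptance threshold shown is a commonly cited rule of thumb rather than a value from this study.

```python
# Illustrative index calculations (hypothetical readings):
#   polarization index      PI  = IR(10 min) / IR(1 min)
#   dielectric absorption   DAR = IR(60 s)   / IR(30 s)
#   tan-delta tip-up        tanδ at higher test voltage minus tanδ at low voltage
ir_30s, ir_60s, ir_10min = 1.8e9, 2.4e9, 6.0e9      # insulation resistance (ohm)
tan_delta_low, tan_delta_high = 0.012, 0.021        # dissipation factor values

pi = ir_10min / ir_60s
dar = ir_60s / ir_30s
tip_up = tan_delta_high - tan_delta_low

print(f"PI     = {pi:.2f}   (values of about 2.0 or more are generally considered acceptable)")
print(f"DAR    = {dar:.2f}")
print(f"Tip-up = {tip_up:.4f}")
```

A rising tip-up or a falling PI over successive maintenance campaigns is the kind of trend such offline tests are meant to reveal.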

Keywords: dissipation factor (tanδ), polarization index (PI), DC high voltage insulation current, insulation resistance (IR), tan delta tip-up, dielectric absorption ratio

Procedia PDF Downloads 299