Search results for: geometrical attack

55 Study on the Geometric Similarity in Computational Fluid Dynamics Calculation and the Requirement of Surface Mesh Quality

Authors: Qian Yi Ooi

Abstract:

At present, airfoil parameters are still designed and optimized according to the scale of conventional aircraft, which introduces slight deviations arising from scale differences. These small deviations can, however, lead to insufficient parameterization or poor surface mesh quality when carried over to a future civil aircraft whose size differs greatly from conventional designs, such as a blended-wing-body (BWB) aircraft, resulting in large deviations in geometric similarity in computational fluid dynamics (CFD) simulations. To avoid this situation, a study of the geometric similarity of airfoil parameters and surface mesh quality in CFD calculations is conducted to assess how well different parameterization methods perform at different airfoil scales. Three airfoil scales are investigated, corresponding to the wing root and wingtip of a conventional civil aircraft and the wing root of a giant blended wing, each parameterized with three methods to compare the calculation differences between airfoil sizes. The fixed conditions are the NACA 0012 profile, a Reynolds number of 10 million, an angle of attack of zero, a C-grid for meshing, and the k-epsilon (k-ε) turbulence model. The experimental variables are three airfoil parameterization methods: the point cloud method, the B-spline curve method, and the class function/shape function transformation (CST) method. The airfoil dimensions are set to 3.98 meters, 17.67 meters, and 48 meters, respectively. In addition, different numbers of edge-mesh divisions with the same bias factor are used in the CFD simulations. The results show that as the airfoil scale changes, the parameterization method, the number of control points, and the number of mesh divisions should be adapted to improve the accuracy of the predicted aerodynamic performance of the wing. As the airfoil scale increases, the basic point cloud parameterization method requires ever more and larger data sets to maintain the accuracy of the airfoil's aerodynamic performance, which severely tests available computer capacity. With the B-spline curve method, the number of control points and the number of mesh divisions must be balanced appropriately to obtain higher accuracy; this balance cannot be defined directly and has to be found iteratively by adding and removing points. Lastly, with the CST method, a limited number of control points is enough to parameterize the larger wing accurately, and a high degree of accuracy and stability can be obtained even on a lower-performance computer.
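
As a concrete illustration of the third parameterization approach, below is a minimal Python sketch of a CST surface built from a class function and a Bernstein shape function; the polynomial order and weights are illustrative assumptions, not values from the study.

```python
import numpy as np
from math import comb

def cst_surface(x, weights, dz_te=0.0, n1=0.5, n2=1.0):
    """CST half-surface: class function * shape function + trailing-edge term.
    x: chordwise coordinates normalized to [0, 1]; weights: Bernstein coefficients."""
    psi = np.asarray(x, dtype=float)
    n = len(weights) - 1
    # Class function psi^N1 * (1 - psi)^N2 fixes the round-nose / sharp-TE topology.
    c = psi**n1 * (1.0 - psi)**n2
    # Shape function: Bernstein polynomial weighted by the design variables.
    s = sum(w * comb(n, i) * psi**i * (1.0 - psi)**(n - i)
            for i, w in enumerate(weights))
    return c * s + psi * dz_te

x = np.linspace(0.0, 1.0, 201)
w = [0.17, 0.16, 0.14, 0.14]            # illustrative weights, roughly NACA-0012-like
upper, lower = cst_surface(x, w), -cst_surface(x, w)  # symmetric airfoil
```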

Keywords: airfoil, computational fluid dynamics, geometric similarity, surface mesh quality

Procedia PDF Downloads 212
54 Influence of Strain on the Corrosion Behavior of Dual Phase 590 Steel

Authors: Amit Sarkar, Jayanta K. Mahato, Tushar Bhattacharya, Amrita Kundu, P. C. Chakraborti

Abstract:

With the increasing demand for safety and fuel efficiency in automobiles, automotive manufacturers are looking for lightweight, high-strength steel with excellent formability and corrosion resistance. Dual-phase steel is finding applications in the automotive sector because of its high strength, good formability, and high corrosion resistance. During service, automotive components suffer environmental attack, and the resulting gradual degradation reduces their service life. The objective of the present investigation is to assess the effect of deformation on the corrosion behaviour of DP590 grade dual-phase steel used in the automotive industry. The material was received from TATA Steel Jamshedpur, India in the form of 1 mm thick sheet. Tensile properties of the steel at a strain rate of 10⁻³ s⁻¹: 0.2% yield stress 382 MPa, ultimate tensile strength 629 MPa, uniform strain 16.30%, and ductility 29%. Rectangular strips of 100 × 10 × 1 mm were machined with the long axis of the strips parallel to the rolling direction of the sheet. These strips were longitudinally deformed at a strain rate of 10⁻³ s⁻¹ to different percentages of strain (2.5, 5, 7.5, 10 and 12.5%) and then slowly unloaded. Small specimens were extracted from the mid region of the unclamped portion of these deformed strips. These specimens were metallographically polished, and the corrosion behaviour was studied by potentiodynamic polarization, electrochemical impedance spectroscopy, cyclic polarization, and potentiostatic tests. The present results show that, among three different environments, the 3.5% NaCl solution is the most aggressive towards DP590 dual-phase steel. The corrosion rate increases with the amount of deformation: deformation raises the stored energy, which enhances the corrosion rate. Cyclic polarization revealed that highly deformed specimens are more prone to pitting corrosion than less deformed ones. The stability of the passive layer also decreases with deformation: with increasing deformation, the current density in the passive zone increases and the passive zone shrinks. The electrochemical impedance spectroscopy study shows that the polarization resistance (Rp) decreases with increasing deformation. EBSD results showed that the average geometrically necessary dislocation density increases with increasing strain, which in turn increases galvanic corrosion, as dislocation-rich areas act as the less noble metal.
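
The polarization resistance reported from EIS is commonly converted to a corrosion rate through the Stern-Geary relation; the following sketch assumes that standard conversion with invented Tafel slopes and Rp values, since the abstract does not report them.

```python
# Corrosion current density from polarization resistance (Stern-Geary), then
# corrosion rate in mm/year for steel. All numerical inputs are illustrative.
beta_a, beta_c = 0.060, 0.120   # anodic/cathodic Tafel slopes, V/decade (assumed)
rp = 850.0                      # polarization resistance, ohm*cm^2 (assumed)

b = (beta_a * beta_c) / (2.303 * (beta_a + beta_c))  # Stern-Geary constant, V
i_corr = b / rp                                      # corrosion current density, A/cm^2

# ASTM G102-style conversion for iron: EW = 27.92 g/eq, density = 7.87 g/cm^3
rate_mm_y = 3.27e-3 * (i_corr * 1e6) * 27.92 / 7.87
print(f"i_corr = {i_corr * 1e6:.1f} uA/cm^2, corrosion rate = {rate_mm_y:.3f} mm/y")
```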

Keywords: dual phase 590 steel, prestrain, potentiodynamic polarization, cyclic polarization, electrochemical impedance spectra

Procedia PDF Downloads 421
53 Force Sensor for Robotic Graspers in Minimally Invasive Surgery

Authors: Naghmeh M. Bandari, Javad Dargahi, Muthukumaran Packirisamy

Abstract:

Robot-assisted minimally invasive surgery (RMIS) has been widely performed around the world during the last two decades. RMIS demonstrates significant advantages over conventional surgery, e.g., improving the accuracy and dexterity of a surgeon, providing 3D vision, motion scaling and hand-eye coordination, decreasing tremor, and reducing x-ray exposure for surgeons. Despite these benefits, surgeons cannot touch the surgical site and perceive tactile information, because the robots are remotely controlled. The literature survey identified the lack of force feedback as the riskiest limitation of the existing technology. Without the perception of tool-tissue contact force, the surgeon might apply an excessive force causing tissue laceration, or an insufficient force causing tissue slippage. The primary use of force sensors has been to measure the tool-tissue interaction force in real time in situ. The design of a tactile sensor is subject to a set of requirements, e.g., biocompatibility, electrical passivity, MRI compatibility, miniaturization, and the ability to measure static and dynamic force. In this study, a planar optical fiber-based sensor is proposed for mounting on the surgical grasper. It was developed based on the light intensity modulation principle. The deflectable part of the sensor was a beam modeled as an Euler-Bernoulli cantilever beam on rigid substrates. A semi-cylindrical indenter was attached to the bottom surface of the beam at mid-span. An optical fiber was secured at both ends on the same rigid substrates, with the indenter in contact with the fiber. External force on the sensor caused simultaneous deflection of the beam and the optical fiber. The micro-bending of the optical fiber consequently results in light power loss. The sensor was simulated and studied using finite element methods. A laser light beam with 800 nm wavelength and 5 mW power was used as the input to the optical fiber, and the output power was measured using a photodetector. The voltage from the photodetector was calibrated to the external force for a chirp input (0.1-5 Hz). The range, resolution, and hysteresis of the sensor were studied under monotonic and harmonic external forces of 0-2.0 N at 0 and 5 Hz, respectively. The results confirmed the validity of the proposed sensing principle, and the sensor demonstrated acceptable linearity (R² > 0.9). A minimum external force was observed below which no power loss was detectable; this is postulated to arise from the critical angle below which the optical fiber maintains total internal reflection. The experimental results showed negligible hysteresis (R² > 0.9) and were in fair agreement with the simulations. In conclusion, the suggested planar sensor is assessed to be a cost-effective, feasible, and easy-to-use solution that can be miniaturized and integrated at the tip of robotic graspers. Geometrical and optical factors affecting the minimum sensible force and the working range of the sensor should be studied and optimized. This design is intrinsically scalable and meets all the design requirements; therefore, it has significant potential for industrialization and mass production.
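
A minimal sketch of the voltage-to-force calibration step described above, using synthetic data: a linear least-squares fit from photodetector voltage to force, with the R² linearity metric. The sensor law and noise level are invented for illustration.

```python
import numpy as np

# Synthetic loading/unloading cycle: force ramp 0 -> 2 N -> 0 (illustrative).
force = np.concatenate([np.linspace(0, 2.0, 50), np.linspace(2.0, 0, 50)])
rng = np.random.default_rng(0)
voltage = 0.8 * force + 0.05 + rng.normal(0, 0.01, force.size)  # assumed sensor law

# Linear calibration: force_hat = a * voltage + b (least squares).
a, b = np.polyfit(voltage, force, 1)
force_hat = a * voltage + b

ss_res = np.sum((force - force_hat) ** 2)
ss_tot = np.sum((force - force.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot   # linearity metric reported in the abstract
print(f"calibration: F = {a:.3f} V + {b:.3f}, R^2 = {r2:.4f}")
```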

Keywords: force sensor, minimally invasive surgery, optical sensor, robotic surgery, tactile sensor

Procedia PDF Downloads 218
52 Deep Learning for SAR Images Restoration

Authors: Hossein Aghababaei, Sergio Vitale, Giampaolo Ferraioli

Abstract:

In the context of Synthetic Aperture Radar (SAR) data, polarization is an important source of information for Earth's surface monitoring. SAR systems often transmit only one polarization. This constraint leads to either single or dual polarimetric SAR imaging modalities. Single polarimetric systems operate with a fixed single polarization of both transmitted and received electromagnetic (EM) waves, resulting in a single acquisition channel. Dual polarimetric systems, on the other hand, transmit in one fixed polarization and receive in two orthogonal polarizations, resulting in two acquisition channels. Dual polarimetric systems are obviously more informative than single polarimetric systems and are increasingly being used for a variety of remote sensing applications. In dual polarimetric systems, the choice of polarizations for the transmitter and the receiver is open. The choice of circular transmit polarization and coherent dual linear receive polarizations forms a special dual polarimetric system called hybrid polarimetry, which brings the properties of rotational invariance to geometrical orientations of features in the scene and optimizes the design of the radar in terms of reliability, mass, and power constraints. The complete characterization of target scattering, however, requires fully polarimetric data, which can be acquired with systems that transmit two orthogonal polarizations. This adds further complexity to data acquisition and shortens the coverage area or swath of fully polarimetric images compared to the swath of dual or hybrid polarimetric images. The search for solutions to augment dual polarimetric data to full polarimetric data will therefore take advantage of full characterization and exploitation of the backscattered field over a wider coverage with less system complexity. Several methods for reconstructing fully polarimetric images using hybrid polarimetric data can be found in the literature. Although the improvements achieved by the newly investigated reconstruction techniques are undeniable, the existing methods are mostly based upon model assumptions (especially the assumption of reflectance symmetry), which may limit their reliability and applicability to vegetation and forest scenarios. To overcome the problems of these techniques, this paper proposes a new framework for reconstructing fully polarimetric information from hybrid polarimetric data. The framework uses Deep Learning solutions to augment hybrid polarimetric data without relying on model assumptions. A convolutional neural network (CNN) with a specific architecture and loss function is defined for this augmentation problem by focusing on different scattering properties of the polarimetric data. In particular, the method controls the CNN training process with respect to several characteristic features of polarimetric images defined by the combination of different terms in the cost or loss function. The proposed method is experimentally validated with real data sets and compared with a well-known and standard approach from the literature. From the experiments, the reconstruction performance of the proposed framework is superior to that of conventional reconstruction methods. The pseudo fully polarimetric data reconstructed by the proposed method also agree well with the actual fully polarimetric images acquired by radar systems, confirming the reliability and efficiency of the proposed method.
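
A minimal sketch of the kind of augmentation network and composite loss described; the architecture, channel counts, and loss terms are illustrative assumptions, not the authors' design.

```python
# Map hybrid-polarimetric channels to pseudo fully polarimetric channels with a
# small CNN trained under a composite loss (pixel fidelity + total-power term).
import torch
import torch.nn as nn

class PolAugmentCNN(nn.Module):
    def __init__(self, in_ch=2, out_ch=9):  # e.g. 2 hybrid channels -> 9 covariance terms (assumed)
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, out_ch, 3, padding=1),
        )
    def forward(self, x):
        return self.net(x)

def composite_loss(pred, target, w_pix=1.0, w_span=0.1):
    # Pixel-wise fidelity plus a term on total backscattered power (span).
    pix = nn.functional.mse_loss(pred, target)
    span = nn.functional.mse_loss(pred.sum(dim=1), target.sum(dim=1))
    return w_pix * pix + w_span * span

model = PolAugmentCNN()
x = torch.randn(4, 2, 64, 64)   # dummy hybrid-pol batch
y = torch.randn(4, 9, 64, 64)   # dummy full-pol reference
loss = composite_loss(model(x), y)
loss.backward()
```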

Keywords: SAR image, polarimetric SAR image, convolutional neural network, deep learning, deep neural network

Procedia PDF Downloads 63
51 A Public Health Perspective on Deradicalisation: Re-Conceptualising Deradicalisation Approaches

Authors: Erin Lawlor

Abstract:

In 2008, Time magazine named terrorist rehabilitation as one of the best ideas of the year. The term deradicalisation has become synonymous with rehabilitation within security discourse. The allure of a "quick fix" for managing terrorist populations (particularly within prisons) has led to a focus on prescriptive programmes, with a distinct lack of exploration into the drivers for a person to disengage or deradicalise from violence. It has been argued that, in tackling a snowballing issue, interventions have moved too quickly for both theory development and methodological structure. This overly quick acceptance of a term that lacks rigorous testing, measuring, and monitoring means that there is a distinct lack of evidence that deradicalisation is a genuine process/phenomenon, leading academics to retrospectively attempt to design frameworks and interventions around a concept that is not truly understood. The UK Home Office has openly acknowledged the lack of empirical data on this subject. This lack of evidence has a direct impact on policy and intervention development. Extremism and deradicalisation are issues that affect public health outcomes on a global scale, to the point that terrorism has now been added to the list of causes of trauma, both in the direct form of being a victim of an attack and in the indirect context of witnesses, children, and ordinary citizens who live in daily fear. This study critiques current deradicalisation discourses to establish whether public health approaches offer opportunities for development. The research begins by exploring the theoretical constructs of what deradicalisation and public health issues are, asking: What does deradicalisation involve? Is there an evidential base on which deradicalisation theory has established itself? From what theory are public health interventions devised? What does success look like in both fields? From this base, current deradicalisation practices will then be explored through examples of work already being carried out. Critiques can be broken into discussion points of language, the difficulties of conducting empirical studies, and the issues around outcome measurement that deradicalisation interventions face. This study argues that a public health approach towards deradicalisation offers the opportunity to bring clarity to the definitions of radicalisation, identify what could be modified through intervention, and offer insights into the evaluation of interventions. As opposed to simply focusing on one element of deradicalisation and analysing it in isolation, a public health approach allows for what the literature has pointed out is missing: a comprehensive analysis of current interventions and information on creating efficacy monitoring systems. Interventions, policies, guidance, and practices in both the UK and Australia will be compared and contrasted, owing to the joint nature of this research between Sheffield Hallam University and La Trobe University, Melbourne.

Keywords: radicalisation, deradicalisation, violent extremism, public health

Procedia PDF Downloads 60
50 Deep Learning Based Polarimetric SAR Images Restoration

Authors: Hossein Aghababaei, Sergio Vitale, Giampaolo Ferraioli

Abstract:

In the context of Synthetic Aperture Radar (SAR) data, polarization is an important source of information for Earth's surface monitoring. SAR systems often transmit only one polarization. This constraint leads to either single or dual polarimetric SAR imaging modalities. Single polarimetric systems operate with a fixed single polarization of both transmitted and received electromagnetic (EM) waves, resulting in a single acquisition channel. Dual polarimetric systems, on the other hand, transmit in one fixed polarization and receive in two orthogonal polarizations, resulting in two acquisition channels. Dual polarimetric systems are obviously more informative than single polarimetric systems and are increasingly being used for a variety of remote sensing applications. In dual polarimetric systems, the choice of polarizations for the transmitter and the receiver is open. The choice of circular transmit polarization and coherent dual linear receive polarizations forms a special dual polarimetric system called hybrid polarimetry, which brings the properties of rotational invariance to geometrical orientations of features in the scene and optimizes the design of the radar in terms of reliability, mass, and power constraints. The complete characterization of target scattering, however, requires fully polarimetric data, which can be acquired with systems that transmit two orthogonal polarizations. This adds further complexity to data acquisition and shortens the coverage area or swath of fully polarimetric images compared to the swath of dual or hybrid polarimetric images. The search for solutions to augment dual polarimetric data to full polarimetric data will therefore take advantage of full characterization and exploitation of the backscattered field over a wider coverage with less system complexity. Several methods for reconstructing fully polarimetric images using hybrid polarimetric data can be found in the literature. Although the improvements achieved by the newly investigated reconstruction techniques are undeniable, the existing methods are mostly based upon model assumptions (especially the assumption of reflectance symmetry), which may limit their reliability and applicability to vegetation and forest scenarios. To overcome the problems of these techniques, this paper proposes a new framework for reconstructing fully polarimetric information from hybrid polarimetric data. The framework uses Deep Learning solutions to augment hybrid polarimetric data without relying on model assumptions. A convolutional neural network (CNN) with a specific architecture and loss function is defined for this augmentation problem by focusing on different scattering properties of the polarimetric data. In particular, the method controls the CNN training process with respect to several characteristic features of polarimetric images defined by the combination of different terms in the cost or loss function. The proposed method is experimentally validated with real data sets and compared with a well-known and standard approach from the literature. From the experiments, the reconstruction performance of the proposed framework is superior to that of conventional reconstruction methods. The pseudo fully polarimetric data reconstructed by the proposed method also agree well with the actual fully polarimetric images acquired by radar systems, confirming the reliability and efficiency of the proposed method.

Keywords: SAR image, deep learning, convolutional neural network, deep neural network, SAR polarimetry

Procedia PDF Downloads 78
49 Saco Sweet Cherry: Phenolic Profile and Biological Activity of Coloured and Non-Coloured Fractions

Authors: Catarina Bento, Ana Carolina Gonçalves, Fábio Jesus, Luís Rodrigues Silva

Abstract:

Increasing evidence suggests that a diet rich in fruits and vegetables plays an important role in the prevention of chronic diseases, such as heart disease, cancer, stroke, diabetes, and Alzheimer's disease, among others. Fruits and vegetables have gained prominence due to their richness in bioactive compounds and are the focus of many studies owing to their biological properties as health promoters. Prunus avium Linnaeus (L.), commonly known as sweet cherry, has attracted attention due to its health benefits and has been highly studied. In Portugal, most cherry production comes from the Fundão region. Saco is one of the most important cultivars produced in this region and carries geographical protection. In this work, we prepared three extracts through solid-phase extraction (SPE): a whole extract, fraction I (non-coloured phenolics), and fraction II (coloured phenolics). The three extracts were used to determine the phenolic profile of the Saco cultivar by liquid chromatography with diode array detection (LC-DAD). This was followed by an evaluation of their biological potential, testing the extracts' capacity to scavenge free radicals (DPPH•, nitric oxide (•NO) and superoxide radical (O2•−)) and to inhibit the α-glucosidase enzyme. Additionally, we evaluated, for the first time, the protective effects against peroxyl radical (ROO•)-induced hemoglobin oxidation and hemolysis in human erythrocytes. A total of 16 non-coloured phenolics were detected, 3-O-caffeoylquinic and p-coumaroylquinic acids being the main ones, and 6 anthocyanins were found, among which cyanidin-3-O-rutinoside represented the majority. With respect to antioxidant activity, Saco showed great antioxidant potential in a concentration-dependent manner, demonstrated against the DPPH•, •NO and O2•− radicals, and a greater ability to inhibit the α-glucosidase enzyme than acarbose, the regular drug used to treat diabetes. Additionally, Saco proved effective in protecting erythrocytes against oxidative damage in a concentration-dependent manner, with respect to both hemoglobin oxidation and hemolysis. Our work demonstrated that the Saco cultivar is an excellent source of phenolic compounds, natural antioxidants that easily capture reactive species, such as ROO•, before they can attack the erythrocyte membrane. In general, the whole extract showed the best efficiency, most likely due to a synergistic interaction between the different compounds. Finally, comparing the two separate fractions, the coloured fraction showed the most activity in all the assays, proving to be the biggest contributor to the biological activity of Saco cherries.

Keywords: biological potential, coloured phenolics, non-coloured phenolics, sweet cherry

Procedia PDF Downloads 241
48 Mitochondrial DNA Defect and Mitochondrial Dysfunction in Diabetic Nephropathy: The Role of Hyperglycemia-Induced Reactive Oxygen Species

Authors: Ghada Al-Kafaji, Mohamed Sabry

Abstract:

Mitochondria are the site of cellular respiration and produce energy in the form of adenosine triphosphate (ATP) via oxidative phosphorylation. They are the major source of intracellular reactive oxygen species (ROS) and are also a direct target of ROS attack. Oxidative stress and ROS-mediated disruptions of mitochondrial function are major components involved in the pathogenicity of diabetic complications. In this work, the changes in mitochondrial DNA (mtDNA) copy number, biogenesis, gene expression of mtDNA-encoded subunits of electron transport chain (ETC) complexes, and mitochondrial function in response to hyperglycemia-induced ROS, as well as the effect of direct inhibition of ROS on mitochondria, were investigated in an in vitro model of diabetic nephropathy using human renal mesangial cells. The cells were exposed to normoglycemic and hyperglycemic conditions in the presence and absence of Mn(III)tetrakis(4-benzoic acid)porphyrin chloride (MnTBAP) or catalase for 1, 4 and 7 days. ROS production was assessed by confocal microscopy and flow cytometry. mtDNA copy number and PGC-1α, NRF-1, and TFAM, as well as ND2, CYTB, COI, and ATPase 6 transcripts, were analyzed by real-time PCR. PGC-1α, NRF-1, and TFAM, as well as ND2, CYTB, COI, and ATPase 6 proteins, were analyzed by Western blotting. Mitochondrial function was determined by assessing mitochondrial membrane potential and ATP levels. Hyperglycemia induced a significant increase in the production of mitochondrial superoxide and hydrogen peroxide at day 1 (P < 0.05), and this increase remained significantly elevated at days 4 and 7 (P < 0.05). The copy number of mtDNA and the expression of PGC-1α, NRF-1, and TFAM, as well as ND2, CYTB, COI and ATPase 6, increased after one day of hyperglycemia (P < 0.05), with a significant reduction in all those parameters at 4 and 7 days (P < 0.05). The mitochondrial membrane potential decreased progressively from 1 to 7 days of hyperglycemia, with a parallel progressive reduction in ATP levels over time (P < 0.05). MnTBAP and catalase treatment of cells cultured under hyperglycemic conditions attenuated ROS production, reversed renal mitochondrial oxidative stress, and improved mtDNA, mitochondrial biogenesis, and function. These results show that hyperglycemia-induced ROS cause an early increase in mtDNA copy number, mitochondrial biogenesis and mtDNA-encoded gene expression of the ETC subunits in human mesangial cells as a compensatory response to the decline in mitochondrial function, which precedes the mtDNA defect and mitochondrial dysfunction as the oxidative response progresses. Protection from ROS-mediated damage to renal mitochondria induced by hyperglycemia may be a novel therapeutic approach for the prevention/treatment of diabetic nephropathy (DN).
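
Relative mtDNA copy number from real-time PCR is typically computed with the 2^-ΔΔCt method against a nuclear reference gene; the sketch below assumes that method with invented threshold-cycle values, since the abstract does not detail the calculation.

```python
# Relative mtDNA copy number from qPCR threshold cycles via the 2^-ddCt method.
# Gene choice and Ct values are illustrative assumptions.
def relative_mtdna_copy_number(ct_mt, ct_nuc, ct_mt_ctrl, ct_nuc_ctrl):
    d_ct_sample = ct_mt - ct_nuc              # mitochondrial vs nuclear gene, sample
    d_ct_control = ct_mt_ctrl - ct_nuc_ctrl   # same difference in the control condition
    dd_ct = d_ct_sample - d_ct_control
    return 2.0 ** (-dd_ct)                    # <1 indicates reduced copy number

# Hyperglycemic day-7 cells vs normoglycemic control (invented Ct values):
print(relative_mtdna_copy_number(ct_mt=16.8, ct_nuc=22.1,
                                 ct_mt_ctrl=15.9, ct_nuc_ctrl=22.0))
```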

Keywords: diabetic nephropathy, hyperglycemia, reactive oxygen species, oxidative stress, mtDNA, mitochondrial dysfunction, manganese superoxide dismutase, catalase

Procedia PDF Downloads 240
47 Structural and Morphological Characterization of the Biomass of Aquatics Macrophyte (Egeria densa) Submitted to Thermal Pretreatment

Authors: Joyce Cruz Ferraz Dutra, Marcele Fonseca Passos, Rubens Maciel Filho, Douglas Fernandes Barbin, Gustavo Mockaitis

Abstract:

The search for alternatives to control hunger in the world has generated a major environmental problem. Intensive fish production systems can cause an imbalance in the aquatic environment, triggering the phenomenon of eutrophication. Currently, there are many ways of controlling the growth of aquatic plants, such as mechanical removal; however, difficulties arise over their final destination. Egeria densa is a species of submerged aquatic macrophyte rich in cellulose and low in lignin. Applying the concept of second-generation energy, which uses lignocellulose for energy production, the reuse of these aquatic macrophytes (Egeria densa) in biofuel production can become an interesting alternative. In order to make lignocellulosic sugars available for effective fermentation, it is important to use pretreatments to separate the components and modify the structure of the cellulose, thereby facilitating the attack of the microorganisms responsible for fermentation. Therefore, the objective of this work was to evaluate the structural and morphological transformations occurring in the biomass of aquatic macrophytes (E. densa) submitted to a thermal pretreatment. The samples were collected at an intensive fish farm on the lower São Francisco dam, in the northeastern region of Brazil. After collection, the samples were dried in a ventilation oven at 65 °C and milled in a knife mill to 5 mm. A duplicate assay was carried out, comparing the in natura biomass with the heat-pretreated biomass (MT). The MT sample was autoclaved at a temperature of 121 °C and a pressure of 1.1 atm for 30 minutes. After this procedure, the biomass was characterized in terms of degree of crystallinity and morphology, using X-ray diffraction (XRD) and scanning electron microscopy (SEM), respectively. The results showed a decrease of 11% in the crystallinity index (CI%) of the pretreated biomass, indicating structural modification of the cellulose and a greater presence of amorphous structures. Increases in porosity and surface roughness of the samples were also observed. These results suggest that the biomass may become more accessible to the hydrolytic enzymes of fermenting microorganisms. Therefore, the morphological transformations caused by the thermal pretreatment may be favorable for subsequent fermentation and, consequently, a higher yield of biofuels. Thus, the use of thermally pretreated aquatic macrophytes (E. densa) can be an environmentally, financially and socially sustainable alternative. In addition, it represents a control measure for the aquatic environment that can generate income (biogas production) and sustain fish farming activities in local communities.
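
A common estimator for the crystallinity index from XRD is the Segal peak-height method; the sketch below assumes that estimator with illustrative intensities, as the abstract does not state which method was used.

```python
# Segal crystallinity index for cellulose from XRD intensities (illustrative values).
def segal_ci(i_002, i_am):
    """CI% = (I002 - Iam) / I002 * 100, with I002 the (002) peak (~22.5 deg 2theta)
    and Iam the amorphous minimum (~18 deg 2theta) for cellulose I."""
    return (i_002 - i_am) / i_002 * 100.0

ci_raw = segal_ci(i_002=1450.0, i_am=610.0)   # in natura biomass (assumed counts)
ci_pre = segal_ci(i_002=1380.0, i_am=690.0)   # heat-pretreated biomass (assumed counts)
print(f"CI raw = {ci_raw:.1f}%, CI pretreated = {ci_pre:.1f}%, "
      f"drop = {ci_raw - ci_pre:.1f} points")
```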

Keywords: aquatic macrophytes, biofuels, crystallinity, morphology, thermal pretreatment

Procedia PDF Downloads 321
46 Prosecution as Persecution: Exploring the Enduring Legacy of Judicial Harassment of Human Rights Defenders and Political Opponents in Zimbabwe, Cases from 2013-2016

Authors: Bellinda R. Chinowawa

Abstract:

As part of a wider strategy to stifle civil society, governments routinely resort to judicial harassment, using civil and criminal proceedings to impugn the integrity of human rights defenders (HRDs) and of perceived political opponents. This phenomenon is rife in militarised or autocratic regimes where there is no tolerance for dissenting voices. Zimbabwe, ostensibly a presidential republic founded on the values of transparency, equality, and freedom, is characterised by brutal suppression of perceived political opponents and of those who assert their basic human rights. This is done through a wide range of tactics, including unlawful arrests and detention, torture and other cruel, inhuman and degrading treatment, and enforced disappearances. Professionals, including journalists and doctors, are similarly not spared from state attack. For human rights defenders, the most widely used tool of repression is judicial harassment, in which the judicial system is used to persecute them. This can include the levying of criminal charges, civil lawsuits, and unnecessary administrative proceedings. Charges preferred against them range from petty offences such as criminal nuisance to more serious charges of terrorism and subverting a constitutional government. Additionally, government-sponsored individuals and organisations file strategic lawsuits with pecuniary implications in order to intimidate and silence critics and engender self-censorship. Some HRDs are convicted and sentenced to prison terms, despite not being criminals in a true sense; even where others are acquitted, judicial harassment diverts energy and resources away from their human rights work. Through a consideration of statistical data reported by human rights organisations and face-to-face interviews with a cross-section of human rights defenders, the article will map the incidence of judicial harassment in Zimbabwe. The article will consider the multi-level sociological and contextual factors which lead the Government of Zimbabwe to have easy recourse to criminal law, and the debilitating effect of these actions on HRDs. These factors include the breakdown of the rule of law resulting in state capture of the judiciary, the proven efficacy of judicial harassment from colonial times to date, and the lack of an adequate redress mechanism at the international level. By mapping the use of the judiciary as a tool of repression, from the inception of modern-day Zimbabwe to date, it is hoped that HRDs will realise that they are part of a greater community of activists throughout the ages and will be emboldened by the realisation that this is an age-old tactic used by fallen regimes, which should not deter them from calling for accountability.

Keywords: autocratic regime, colonial legacy, judicial harassment, human rights defenders

Procedia PDF Downloads 229
45 A Computer-Aided System for Tooth Shade Matching

Authors: Zuhal Kurt, Meral Kurt, Bilge T. Bal, Kemal Ozkan

Abstract:

Shade matching and reproduction is the most important element of success in prosthetic dentistry. Until recently, the shade matching procedure was implemented through dentists' visual perception with the help of shade guides. Since many factors influence visual perception, tooth shade matching using visual devices (shade guides) is highly subjective and inconsistent. The subjective nature of this process has led to the development of instrumental devices. Nowadays, colorimeters, spectrophotometers, spectroradiometers, and digital image analysis systems are used for instrumental shade selection. Instrumental devices have the advantage that readings are quantifiable and can be obtained more rapidly, simply, objectively, and precisely. However, these devices have noticeable drawbacks. For example, the translucent structure and irregular surfaces of teeth lead to measurement defects with these devices, and results acquired by devices with different measurement principles may be inconsistent. It is therefore necessary to search for new methods for the dental shade matching process. The digital camera, a computer-aided system device, has developed rapidly, and advances in image processing and computing have resulted in the extensive use of digital cameras for color imaging. This procedure is much cheaper than the use of traditional contact-type color measurement devices. Digital cameras can take the place of contact-type instruments for shade selection and overcome their disadvantages. Images taken of teeth show the morphology and color texture of the teeth. In recent decades, a method was proposed to compare the color of shade tabs captured by a digital camera using color features; this work showed that visual and computer-aided shade matching systems should be used in combination. Recently used feature extraction techniques are based on shape description and do not use color information. However, color is mostly experienced as an essential property in depicting and extracting features from objects in the world around us. When local feature descriptors are extended by concatenating a color descriptor with the shape descriptor, the resulting descriptor is effective for visual object recognition and classification tasks. Since the color descriptor used in combination with a shape descriptor does not need to contain any spatial information, local histograms can be used. This local color histogram method remains reliable under photometric changes, geometrical changes, and variations in image quality. Therefore, color-based local feature extraction methods are used to extract features, and the Scale-Invariant Feature Transform (SIFT) descriptor is used for shape description in the proposed method. The combination of these descriptors yields the state-of-the-art descriptor known as Color-SIFT, which is used in this study. Finally, the image feature vectors obtained from a quantization algorithm are fed to classifiers such as k-Nearest Neighbor (KNN), Naive Bayes, or Support Vector Machines (SVM) to determine the label(s) of the visual object category or matching; in this study, SVMs are used as classifiers for color determination and shade matching. Experimental results of this method will be compared with other recent studies. It is concluded that the proposed method is a remarkable development in computer-aided tooth shade determination systems.
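
A minimal sketch of the descriptor pipeline described: per-channel SIFT descriptors combined with a local color histogram and fed to an SVM. The mean-pooling step stands in for a bag-of-words quantization, and all parameters, shapes, and inputs are illustrative assumptions.

```python
import cv2
import numpy as np
from sklearn.svm import SVC

def color_sift_histogram(image_bgr, bins=8):
    """Concatenate per-channel SIFT statistics ('Color-SIFT' style) with a
    normalized local color histogram for one tooth image (uint8 BGR)."""
    sift = cv2.SIFT_create()
    per_channel = []
    for ch in cv2.split(image_bgr):                 # B, G, R channels
        _, desc = sift.detectAndCompute(ch, None)
        if desc is None:
            desc = np.zeros((1, 128), np.float32)
        per_channel.append(desc.mean(axis=0))       # crude pooling instead of BoW quantization
    hist = cv2.calcHist([image_bgr], [0, 1, 2], None,
                        [bins] * 3, [0, 256] * 3).flatten()
    hist /= hist.sum() + 1e-9                       # local color histogram
    return np.concatenate(per_channel + [hist])

def train_shade_classifier(images, labels):
    """images: list of BGR tooth images; labels: shade-tab indices (placeholders)."""
    X = np.stack([color_sift_histogram(im) for im in images])
    clf = SVC(kernel="rbf", C=10.0, gamma="scale")
    return clf.fit(X, labels)
```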

Keywords: classifiers, color determination, computer-aided system, tooth shade matching, feature extraction

Procedia PDF Downloads 419
44 Multifield Problems in 3D Structural Analysis of Advanced Composite Plates and Shells

Authors: Salvatore Brischetto, Domenico Cesare

Abstract:

Major improvements in future aircraft and spacecraft could depend on an increasing use of conventional and unconventional multilayered structures embedding composite materials, functionally graded materials, piezoelectric or piezomagnetic materials, and soft foam or honeycomb cores. Layers made of such materials can be combined in different ways to obtain structures that are able to fulfill several structural requirements. The next generation of aircraft and spacecraft will be manufactured as multilayered structures under the action of a combination of two or more physical fields. In multifield problems for multilayered structures, several physical fields (thermal, hygroscopic, electric and magnetic ones) interact with each other with different levels of influence and importance. An exact 3D shell model is proposed here for these types of analyses. This model is based on a coupled system including the 3D equilibrium equations, the 3D Fourier heat conduction equation, the 3D Fick diffusion equation, and the electric and magnetic divergence equations. The set of partial differential equations of second order in z is written using a mixed curvilinear orthogonal reference system valid for spherical and cylindrical shell panels, cylinders and plates. The order of the partial differential equations is reduced from second to first by doubling the number of variables. The solution in the thickness direction z is obtained by means of the exponential matrix method and the correct imposition of interlaminar continuity conditions in terms of displacements, transverse stresses, electric and magnetic potentials, temperature, moisture content and transverse normal multifield fluxes. The investigated structures have simply supported sides in order to obtain a closed-form solution in the in-plane directions. Moreover, a layerwise approach is proposed which allows a correct 3D description of multilayered anisotropic structures subjected to field loads. Several results will be proposed in tabular and graphical form to evaluate displacements, stresses and strains when mechanical loads, temperature gradients, moisture content gradients, electric potentials and magnetic potentials are applied at the external surfaces of the structures in steady-state conditions. When piezoelectric and piezomagnetic layers are included in the multilayered structures, so-called smart structures are obtained. In this case, a free vibration analysis in open and closed circuit configurations and a static analysis for sensor and actuator applications will be proposed. The proposed results will be useful to better understand the physical and structural behaviour of multilayered advanced composite structures in the case of multifield interactions. Moreover, these analytical results could be used as reference solutions for scientists interested in the development of 3D and 2D numerical shell/plate models based, for example, on the finite element approach or on the differential quadrature methodology. The correct imposition of geometrical boundary and load conditions, interlaminar continuity conditions, and the description of the zigzag behaviour due to transverse anisotropy will also be discussed and verified.
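
A minimal sketch of the exponential matrix propagation at the heart of the method: each layer's first-order system dY/dz = A_k Y is advanced with a matrix exponential, and the state is handed across interfaces. The state size and layer matrices here are random placeholders, not the model's actual operators.

```python
import numpy as np
from scipy.linalg import expm

def propagate_through_thickness(layer_matrices, thicknesses, y_bottom):
    """layer_matrices[k]: constant system matrix A_k of layer k (n x n);
    thicknesses[k]: layer thickness h_k; y_bottom: state at the bottom surface."""
    y = y_bottom
    for a_k, h_k in zip(layer_matrices, thicknesses):
        y = expm(a_k * h_k) @ y   # exact solution of dY/dz = A_k Y across layer k
        # Interlaminar continuity: with a displacement/transverse-stress state
        # vector the transfer is the identity; otherwise apply a continuity matrix.
    return y

n = 6                                   # placeholder state-vector size
rng = np.random.default_rng(1)
layers = [0.1 * rng.normal(size=(n, n)) for _ in range(3)]
y_top = propagate_through_thickness(layers, [0.4, 0.2, 0.4], rng.normal(size=n))
```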

Keywords: composite structures, 3D shell model, stress analysis, multifield loads, exponential matrix method, layerwise approach

Procedia PDF Downloads 59
43 Inhibition of Mild Steel Corrosion in Hydrochloric Acid Medium Using an Aromatic Hydrazide Derivative

Authors: Preethi Kumari P., Shetty Prakasha, Rao Suma A.

Abstract:

Mild steel is widely employed as a construction material for pipework in oil and gas production, such as downhole tubulars, flow lines and transmission pipelines, and in chemical and allied industries for handling acids, alkalis and salt solutions, owing to its excellent mechanical properties and low cost. Acid solutions are widely used for the removal of undesirable scale and rust in many industrial processes. Among the commercially available acids, hydrochloric acid is widely used for pickling, cleaning, de-scaling, and the acidization of oil wells. Mild steel exhibits poor corrosion resistance in the presence of hydrochloric acid. Its high reactivity in hydrochloric acid is due to the soluble nature of the ferrous chloride formed; the cementite phase (Fe3C) normally present in the steel is also readily soluble in hydrochloric acid. Pitting attack is also reported to be a major form of corrosion of mild steel in the presence of high acid concentrations, causing the complete destruction of the metal. Hydrogen from the acid reacts with the metal surface, making it brittle and causing cracks, which leads to pitting-type corrosion. The use of chemical inhibitors to minimize the rate of corrosion is considered the first line of defense against corrosion. In spite of the long history of corrosion inhibition, a highly efficient and durable inhibitor that can completely protect mild steel in aggressive environments is yet to be realized. It is clear from the literature that there is ample scope for the development of new organic inhibitors which can be conveniently synthesized from relatively cheap raw materials and provide good inhibition efficiency with the least risk of environmental pollution. The aim of the present work is to evaluate the electrochemical parameters of the corrosion inhibition behavior of an aromatic hydrazide derivative, 4-hydroxy-N'-[(E)-1H-indol-2-ylmethylidene]benzohydrazide (HIBH), on mild steel in 2 M hydrochloric acid using Tafel polarization and electrochemical impedance spectroscopy (EIS) techniques at 30-60 °C. The results showed that the inhibition efficiency increased with increasing inhibitor concentration and decreased marginally with increasing temperature. HIBH showed a maximum inhibition efficiency of 95% at a concentration of 8×10⁻⁴ M at 30 °C. Polarization curves showed that HIBH acts as a mixed-type inhibitor. The adsorption of HIBH on the mild steel surface obeys the Langmuir adsorption isotherm. The adsorption of HIBH at the mild steel/hydrochloric acid solution interface followed mixed adsorption, predominantly physisorption at lower temperatures and chemisorption at higher temperatures. Thermodynamic parameters for the adsorption process and kinetic parameters for the metal dissolution reaction were determined.
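
Langmuir behaviour is usually verified by checking that C/θ versus C is linear with near-unit slope, and the intercept gives the adsorption equilibrium constant; the sketch below assumes that standard test with invented concentration/efficiency data, not the measured HIBH values.

```python
import numpy as np

# Langmuir isotherm test: C/theta = 1/K_ads + C, theta = inhibition efficiency / 100.
conc = np.array([1e-4, 2e-4, 4e-4, 6e-4, 8e-4])   # inhibitor concentration, mol/L (assumed)
eff = np.array([62.0, 75.0, 86.0, 92.0, 95.0])    # inhibition efficiency, % (assumed)
theta = eff / 100.0                               # surface coverage

slope, intercept = np.polyfit(conc, conc / theta, 1)
k_ads = 1.0 / intercept                           # adsorption equilibrium constant, L/mol

# Standard free energy of adsorption (55.5 mol/L water): dG = -RT ln(55.5 * K_ads)
r, t = 8.314, 303.15                              # J/(mol K), 30 degC
dg_ads = -r * t * np.log(55.5 * k_ads) / 1000.0   # kJ/mol
print(f"slope = {slope:.2f} (close to 1 for Langmuir), "
      f"K = {k_ads:.0f} L/mol, dG_ads = {dg_ads:.1f} kJ/mol")
```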

Keywords: electrochemical parameters, EIS, mild steel, Tafel polarization

Procedia PDF Downloads 328
42 Analysis of Short Counter-Flow Heat Exchanger (SCFHE) Using Non-Circular Micro-Tubes Operated on Water-CuO Nanofluid

Authors: Avdhesh K. Sharma

Abstract:

Key to the development of energy-efficient micro-scale heat exchanger devices is selecting a large heat-transfer surface-to-volume ratio without much expense on recirculation pumps. The increased interest in short heat exchangers (SHE) is due to the accessibility of advanced technologies for manufacturing micro-tubes in the range of 1 µm to 1 mm. Such SHEs using micro-tubes are highly effective for high-flux heat transfer technologies. Nanofluids are used to enhance the thermal conductivity of the recirculated coolant and thus further enhance the heat transfer rate. The higher viscosity associated with a nanofluid, however, demands more pumping power; thus, there is a trade-off between heat transfer rate and pressure drop that depends on the geometry of the micro-tubes. Herein, a novel design of a short counter-flow heat exchanger (SCFHE) using non-circular micro-tubes flooded with CuO-water nanofluid is conceptualized by varying the ratio of surface area to cross-sectional area of the micro-tubes. A framework for the comparative analysis of an SCFHE using micro-tubes of non-circular shape flooded by CuO-water nanofluid is presented. In the SCFHE concept, micro-tubes of various geometrical shapes (viz., triangular, rectangular and trapezoidal) are arranged row-wise to facilitate two aspects: (1) allowing easy flow distribution for the cold and hot streams, and (2) maximizing the thermal interactions with neighboring channels. Adequate distribution of rows for the cold and hot flow streams enables both aspects. For the comparative analysis, a specific volume or cross-sectional area, comprising the flow area and the area corresponding to half the wall thickness, is assigned to each elemental cell and held constant, while variation in surface area is allowed by selecting different micro-tube geometries in the SCFHE. An effective thermal conductivity model for the CuO-water nanofluid has been adopted, while the viscosity values for water-based nanofluids are obtained empirically. Correlations for the Nusselt number (Nu) and Poiseuille number (Po) for micro-tubes have been derived or adopted, and entrance effects are accounted for. The thermal and hydrodynamic performances of the SCFHE are expressed in terms of effectiveness and pressure drop or pumping power, respectively. For defining the overall performance index of the SCFHE, two links are employed: the first relates the heat transfer between the fluid streams q and the pumping power PP as (q_j/PP_j), while the second relates the effectiveness eff and the pressure drop dP as (eff_j/dP_j). For the analysis, the inlet temperatures of the hot and cold streams are varied in the usual range of 20-65 °C. A fully turbulent regime is seldom encountered in micro-tubes, and the flow regime transition occurs much earlier (i.e., around Re = 1000). Thus, Re is fixed at 900; however, the uncertainty in Re due to the addition of nanoparticles to the base fluid is quantified by averaging Re. Moreover, to minimize error, the volumetric concentration is limited to the range 0-4%. Such a framework may be helpful in utilizing the maximum peripheral surface area of an SCFHE without a serious penalty on pumping power, and towards developing advanced short heat exchangers.
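
The effective thermal conductivity model is not named in the abstract; the sketch below assumes the common Maxwell model for a dilute suspension, together with the q/PP performance index, using illustrative property values.

```python
# Maxwell effective thermal conductivity of a nanofluid and the q/PP performance
# index. Property values and flow numbers are illustrative assumptions.
def maxwell_k_eff(k_bf, k_p, phi):
    """Maxwell model: effective conductivity for particle volume fraction phi."""
    return k_bf * (k_p + 2 * k_bf + 2 * phi * (k_p - k_bf)) / \
                  (k_p + 2 * k_bf - phi * (k_p - k_bf))

k_water, k_cuo = 0.613, 33.0       # W/(m K), typical literature values
for phi in (0.0, 0.01, 0.02, 0.04):
    print(f"phi = {phi:.2f}: k_eff = {maxwell_k_eff(k_water, k_cuo, phi):.3f} W/(m K)")

def performance_index(q_j, pp_j):
    """Overall index linking heat duty q to pumping power PP for geometry j."""
    return q_j / pp_j              # higher is better at fixed Re

print(performance_index(q_j=150.0, pp_j=0.8))   # illustrative W per W of pumping
```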

Keywords: CuO-water nanofluid, non-circular micro-tubes, performance index, short counter flow heat exchanger

Procedia PDF Downloads 205
41 A Cloud-Based Federated Identity Management in Europe

Authors: Jesus Carretero, Mario Vasile, Guillermo Izquierdo, Javier Garcia-Blas

Abstract:

Currently, there is a so-called 'identity crisis' in cybersecurity, caused by the substantial security, privacy and usability shortcomings of existing systems for identity management. Federated Identity Management (FIM) could be a solution to this crisis, as it facilitates the management of identity processes and policies among collaborating entities without enforcing global consistency, which is difficult to achieve when legacy ID systems exist. To cope with this problem, the Connecting Europe Facility (CEF) initiative proposed in 2014 a federated solution in anticipation of the adoption of Regulation (EU) N°910/2014, the so-called eIDAS Regulation. At present, a network of eIDAS Nodes is being deployed at the European level so that every citizen recognized by a member state is also recognized within the trust network at the European level, enabling the consumption of services in other member states that until now were not available or whose granting was tedious. This is a very ambitious approach, since it enables cross-border authentication of member states' citizens without the need to unify the authentication method (eID Scheme) of the member state in question. However, this federation is currently managed by member states, and it is initially applied only to citizens and public organizations. The goal of this paper is to present the results of a European project, named eID@Cloud, that focuses on the integration of eID in 5 cloud platforms belonging to authentication service providers of different EU member states, to act as Service Providers (SP) for private entities. We propose an initiative based on a private eID Scheme for both natural and legal persons. The methodology followed in the eID@Cloud project is that each Identity Provider (IdP) subscribes to an eIDAS Node Connector, which requests authentication and is in turn subscribed to an eIDAS Node Proxy Service issuing authentication assertions. To cope with high loads, load balancing is supported in the eIDAS Node. The eID@Cloud project is still ongoing, but we already have some important outcomes. First, we have deployed the identity federation nodes and tested them from the security and performance points of view. The pilot prototype has shown the feasibility of deploying this kind of system, ensuring good performance due to the replication of the eIDAS nodes and the load balance mechanism. Second, our solution avoids the propagation of identity data out of the native domain of the user or entity being identified, which avoids well-known cybersecurity problems due to network interception, man-in-the-middle attacks, etc. Last but not least, this system allows any country or collectivity to be connected easily, providing incremental development of the network and avoiding difficult political negotiations to agree on a single authentication format (which would be a major stopper).

Keywords: cybersecurity, identity federation, trust, user authentication

Procedia PDF Downloads 160
40 A Comparative Human Rights Analysis of Expulsion as a Counterterrorism Instrument: An Evaluation of Belgium

Authors: Louise Reyntjens

Abstract:

Where criminal law used to be the traditional response to the terrorist threat, European governments are increasingly relying on administrative paths. The reliance on immigration law fits into this trend: terrorism is seen as a civilizational menace emanating from abroad. In this context, the expulsion of dangerous aliens, immigration law's core task, is put forward as a key security tool. Governments all over Europe are focusing on removing dangerous individuals from their territory rather than bringing them to justice. This research reflects on the consequences for the expelled individuals' fundamental rights. For this, the author selected four European countries for a comparative study: Belgium, France, the United Kingdom and Sweden. All these countries face similar social and security issues, prompting recourse to immigration law as a counterterrorism tool. Yet they adopt very different approaches to it: the United Kingdom positions itself on the repressive side of the spectrum, while Sweden, which also 'securitized' its immigration policy after the recent terrorist attack in Stockholm, remains on the tolerant side; Belgium and France are situated in between. This paper addresses the situation in Belgium. In 2017, the Belgian parliament introduced several legislative changes by which it considerably expanded and facilitated the possibility of expelling unwanted aliens. First, the expulsion measure was subjected to new and questionable definitions: a serious attack on the nation's safety used to be required to expel certain categories of aliens, whereas presently mere suspicions suffice to fulfil the new definition of a 'serious threat to national security'. This definition fails to satisfy the principle of legality: neither the law nor the preparatory works clarify what is meant by 'a threat to national security', which risks leaving the interpretation of this concept almost entirely to the discretion of the immigration authorities. Secondly, in the name of intervening more quickly and efficiently, the automatically suspensive appeal against expulsions was abolished. The European Court of Human Rights nonetheless requires such an automatically suspensive appeal under Articles 13 and 3 of the Convention; whether this procedural reform will endure is thus questionable. This contribution also raises questions regarding the efficacy of expulsion as a key security tool. In a globalized and mobilized world, particularly in a European Union with no internal boundaries, questions can be raised about the usefulness of this measure. Even more so, by simply expelling a dangerous individual, states avoid their responsibility and shift the risk to another state; criminal law might in these instances be more capable of providing a conclusive and long-term response. This contribution explores the human rights consequences of expulsion as a security tool in Belgium and offers a critical view of its efficacy for protecting national security.

Keywords: Belgium, counter-terrorism and human rights, expulsion, immigration law

Procedia PDF Downloads 117
39 Improvements and Implementation Solutions to Reduce the Computational Load for Traffic Situational Awareness with Alerts (TSAA)

Authors: Salvatore Luongo, Carlo Luongo

Abstract:

This paper discusses implementation solutions to reduce the computational load of the Traffic Situational Awareness with Alerts (TSAA) application, based on Automatic Dependent Surveillance-Broadcast (ADS-B) technology. In 2008, there were 23 mid-air collisions involving general aviation fixed-wing aircraft, 6 of which were fatal, leading to 21 fatalities. These collisions occurred during visual meteorological conditions, indicating the limitations of the see-and-avoid concept for mid-air collision avoidance as defined by the Federal Aviation Administration (FAA). Commercial air transport aircraft are already equipped with a collision avoidance system called TCAS, based on classic transponder technology, which has dramatically reduced the number of mid-air collisions involving air transport aircraft. In general aviation, the same reduction in mid-air collisions has not occurred, so this reduction is the main objective of the TSAA application. The major difference between the original conflict detection application and TSAA is that conflict detection is focused on preventing loss of separation in en-route environments, whereas TSAA is devoted to reducing the probability of mid-air collision in all phases of flight. The TSAA application increases the flight crew's traffic situational awareness by providing alerts for traffic detected in conflict with ownship, in support of the see-and-avoid responsibility. Considerable effort has been spent in the design process and the code generation in order to maximize efficiency and performance in terms of computational load and memory consumption. The TSAA architecture is divided into two high-level systems: the 'threats database' and the 'conflict detector'. The first receives the traffic data from the ADS-B device and stores each target's data history. The conflict detector module estimates the ownship and target trajectories in order to detect possible future losses of separation between ownship and each target. Finally, the alerts are checked by additional conflict verification logic, in order to prevent possible undesirable behaviors of the alert flag. In order to reduce the computational load, a pre-check evaluation module is used. This pre-check is only a computational optimization, so the performance of the conflict detector is not modified in terms of the number of alerts detected. The pre-check module uses analytical trajectory propagation for both target and ownship. This provides greater accuracy and avoids step-by-step propagation, which requires a larger computational load. Furthermore, the pre-check makes it possible to exclude targets that are certainly not threats, using an analytical and efficient geometrical approach, thereby decreasing the computational load of the following modules. This software improvement is not suggested by FAA documents, and so it is the main innovation of this work. The efficiency and efficacy of this enhancement are verified using fast-time and real-time simulations and by execution on a real device in several FAA scenarios. The final implementation also permits FAA software certification in compliance with the DO-178B standard. The computational load reduction allows the installation of the TSAA application also on devices hosting multiple applications and/or with low capacity in terms of available memory and computational capabilities.
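
A minimal sketch of an analytical pre-check of the kind described: a closed-form closest-point-of-approach test under constant-velocity assumptions that discards targets which cannot violate a protection volume. The thresholds and aircraft states are illustrative, not TSAA's actual parameters.

```python
import numpy as np

def cpa(own_pos, own_vel, tgt_pos, tgt_vel):
    """Return (t_cpa, d_cpa) for two constant-velocity trajectories (3D, SI units)."""
    dp = tgt_pos - own_pos
    dv = tgt_vel - own_vel
    dv2 = dv @ dv
    t = 0.0 if dv2 < 1e-9 else max(0.0, -(dp @ dv) / dv2)  # clamp to the future
    return t, np.linalg.norm(dp + dv * t)

def is_potential_threat(own_pos, own_vel, tgt_pos, tgt_vel,
                        horizon_s=60.0, protect_m=500.0):
    """Only targets passing this cheap test go to the full conflict detector."""
    t, d = cpa(own_pos, own_vel, tgt_pos, tgt_vel)
    return t <= horizon_s and d <= protect_m

own_p, own_v = np.array([0.0, 0.0, 1000.0]), np.array([60.0, 0.0, 0.0])
tgt_p, tgt_v = np.array([4000.0, 300.0, 1000.0]), np.array([-55.0, 0.0, 0.0])
print(is_potential_threat(own_p, own_v, tgt_p, tgt_v))   # True: CPA ~300 m in ~35 s
```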

Keywords: traffic situation awareness, general aviation, aircraft conflict detection, computational load reduction, implementation solutions, software certification

Procedia PDF Downloads 270
38 Computer-Integrated Surgery of the Human Brain, New Possibilities

Authors: Ugo Galvanetto, Pirto G. Pavan, Mirco Zaccariotto

Abstract:

The discipline of computer-integrated surgery (CIS) will provide equipment able to improve the efficiency of healthcare systems and, more importantly, clinical results. Surgeons and machines will cooperate in new ways that will extend surgeons' ability to train, plan and carry out surgery. Patient-specific CIS of the brain requires several steps: 1 - Fast generation of brain models. Based on recognition of MR images and equipped with artificial intelligence, image recognition techniques should differentiate among all brain tissues and segment them. After that, automatic mesh generation should create the mathematical model of the brain in which the various tissues (white matter, grey matter, cerebrospinal fluid ...) are clearly located in the correct positions. 2 - Reliable and fast simulation of the surgical process. Computational mechanics will be the crucial aspect of the entire procedure. New algorithms will be used to simulate the mechanical behaviour of cutting through cerebral tissues. 3 - Real-time provision of visual and haptic feedback. A sophisticated human-machine interface based on ergonomics and psychology will provide the feedback to the surgeon. The present work addresses point 2 in particular. Modelling the cutting of soft tissue in a structure as complex as the human brain is an extremely challenging problem in computational mechanics. The finite element method (FEM), which accurately represents complex geometries and accounts for material and geometrical nonlinearities, is the most used computational tool to simulate the mechanical response of soft tissues. However, the main drawback of FEM lies in the mechanics theory on which it is based, classical continuum mechanics, which assumes matter is a continuum with no discontinuities. To include discontinuities, FEM must resort to complex tools such as pre-defined cohesive zones, external phase-field variables, and demanding remeshing techniques. All approaches that equip FEM with the capability to describe material separation, such as interface elements with cohesive zone models, X-FEM, element erosion, and phase-field, have drawbacks that make them unsuitable for surgery simulation: interface elements require a-priori knowledge of crack paths; the use of X-FEM in 3D is cumbersome; element erosion does not conserve mass; and the phase-field approach adopts a diffusive crack model instead of describing the true tissue separation typical of surgical procedures. Modelling discontinuities, so difficult for computational approaches based on classical continuum mechanics, is instead easy for novel computational methods based on peridynamics (PD). PD is a non-local theory of mechanics formulated without the use of spatial derivatives. Its governing equations are valid at points or surfaces of discontinuity, and it is therefore especially suited to describe crack propagation and fragmentation problems. Moreover, PD does not require any criterion to decide the direction of crack propagation or the conditions for crack branching or coalescence; in PD-based computational methods, cracks develop spontaneously in the way that is most convenient from an energy point of view. Therefore, in PD computational methods, crack propagation in 3D is as easy as in 2D, a remarkable advantage with respect to all other computational techniques.
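
To make the bond-breaking idea concrete, here is a minimal 1D bond-based peridynamics sketch with a critical-stretch failure criterion. All parameter values and names are illustrative assumptions, not the authors' formulation; a real brain-cutting simulation would be 3D, state-based, and far richer.

```python
import numpy as np

n, dx = 200, 1e-3          # particle count and grid spacing [m]
horizon = 3.015 * dx       # nonlocal interaction radius
c = 1.0e12                 # bond micro-modulus (illustrative value)
rho = 1.0e3                # mass density [kg/m^3]
s_crit = 0.01              # critical stretch: bond fails beyond this
dt = 1.0e-8                # explicit time step [s]

x = np.arange(n) * dx      # reference positions
u = np.zeros(n)            # displacements
v = np.zeros(n)            # velocities

# Enumerate bonds once; a broken bond never carries force again.
bonds = [(i, j) for i in range(n) for j in range(i + 1, n)
         if x[j] - x[i] <= horizon]
alive = np.ones(len(bonds), dtype=bool)

def step(u, v, alive):
    """One explicit integration step; cracks emerge where bonds break."""
    f = np.zeros(n)
    for k, (i, j) in enumerate(bonds):
        if not alive[k]:
            continue                    # a broken bond is a crack surface
        xi = x[j] - x[i]                # reference bond length
        s = (u[j] - u[i]) / xi          # bond stretch
        if abs(s) > s_crit:
            alive[k] = False            # irreversible bond failure
            continue
        fk = c * s * dx                 # pairwise force (1D, lumped volume)
        f[i] += fk
        f[j] -= fk
    v = v + dt * f / rho
    u = u + dt * v
    return u, v, alive

# Pull both ends apart; the overstretched interface bonds fail on their own
u[:10], u[-10:] = -2e-5, 2e-5
for _ in range(100):
    u, v, alive = step(u, v, alive)
print("broken bonds:", int((~alive).sum()))  # 2: one at each loaded end
```

The point of the sketch is the failure rule: once a bond's stretch exceeds the critical value it is removed, and crack surfaces emerge wherever broken bonds accumulate, with no crack-path criterion supplied by the analyst.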

Keywords: computational mechanics, peridynamics, finite element, biomechanics

Procedia PDF Downloads 65
37 Risks beyond Cyber in IoT Infrastructure and Services

Authors: Mattias Bergstrom

Abstract:

Significance of the study: This research provides new insights into the risks associated with digital embedded infrastructure. We analyze each risk and its potential mitigation strategies, especially for AI and autonomous automation. The analysis presented in this paper also conveys valuable information for future research aimed at creating more stable, secure, and efficient autonomous systems. To learn and understand the risks, a large IoT system was envisioned, and risks relating to hardware, tampering, and cyberattacks were collected, researched, and evaluated to create a comprehensive understanding of the potential risks. Potential solutions were then evaluated on an open-source IoT hardware setup. The following list shows the identified passive and active risks evaluated in the research. Passive risks: (1) Hardware failures - critical systems relying on high-rate data and data quality are growing; SCADA systems for infrastructure are good examples of such systems. (2) Hardware delivering erroneous data - sensors break, and when they do, they don't always go silent; they can keep going, except that the data they deliver is garbage, and if that data is not filtered out, it becomes disruptive noise in the system. (3) Bad hardware injection - erroneous sensor data can be pumped into a system by malicious actors with the intent to create disruptive noise in critical systems. (4) Data gravity - the weight of the data collected will affect data mobility. (5) Cost inhibitors - running services that need huge centralized computing is cost-inhibiting; large, complex AI can be extremely expensive to run. Active risks: Denial of service - one of the simplest attacks, where an attacker just overloads the system with bogus requests so that valid requests disappear in the noise. Malware - malware can be anything from simple viruses to complex botnets created with specific goals, where the creator steals computing power and bandwidth from you to attack someone else. Ransomware - a kind of malware, but so different in its implementation that it is worth its own mention; the goal of these pieces of software is to encrypt your system so that it can only be unlocked with a key that is held for ransom. DNS spoofing - by spoofing DNS calls, valid requests and data dumps can be sent to bad destinations, where the data can be extracted for extortion or corrupted and re-injected into a running system, creating a data-echo noise loop. After testing multiple potential solutions, we found that the most prominent solution to these risks was to use a peer-to-peer consensus algorithm over a blockchain to validate the data and behavior of the devices (sensors, storage, and computing) in the system. With the devices autonomously policing themselves for deviant behavior, all the risks listed above can be negated. In conclusion, an Internet middleware that provides these features would be an easy and secure solution for any future autonomous IoT deployment, as it provides separation from the open Internet while remaining accessible via blockchain keys.
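
As a toy illustration of devices policing each other for deviant behavior, the sketch below flags sensors whose readings deviate strongly from the robust consensus of their peers. It is a hypothetical stand-in: the abstract's actual mechanism is a peer-to-peer consensus algorithm anchored on a blockchain, which this simple median check does not implement.

```python
import statistics

def flag_deviant_devices(readings, tolerance=3.0):
    """Compare each device's reading against the robust consensus of its
    peers (median / median absolute deviation); flag strong outliers.
    A real deployment would anchor the verdicts on a blockchain."""
    values = list(readings.values())
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values) or 1e-9
    return {dev: abs(val - med) / mad > tolerance
            for dev, val in readings.items()}

# Example: one sensor injecting erroneous data into the group
readings = {"s1": 21.0, "s2": 20.8, "s3": 21.2, "s4": 98.7}
print(flag_deviant_devices(readings))   # only s4 is flagged as deviant
```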

Keywords: IoT, security, infrastructure, SCADA, blockchain, AI

Procedia PDF Downloads 96
36 Treatment with Triton-X 100: An Enhancement Approach for Cardboard Bioprocessing

Authors: Ahlam Said Al Azkawi, Nallusamy Sivakumar, Saif Nasser Al Bahri

Abstract:

Diverse approaches and pathways are under development with the aim of eventually producing cellulosic biofuels and other bio-products at commercial scale in "bio-refineries"; however, the key challenge is the high complexity of processing the feedstock, which is complicated and energy-consuming. To overcome the complications of utilizing naturally occurring lignocellulosic biomass, using waste paper as a feedstock for bio-production may solve the problem. Besides being abundant and cheap, the bioprocessing of waste paper has evolved in response to public concern over rising landfill costs driven by shrinking landfill capacity. Cardboard (CB) is one of the major components of municipal solid waste and one of the most important items to recycle. Although 50-70% of cardboard is known to consist of cellulose and hemicellulose, the lignin surrounding them causes hydrophobic cross-linking, which physically obstructs hydrolysis by rendering the carbohydrates resistant to enzymatic cleavage. Therefore, pretreatment is required to disrupt this resistance and to enhance the exposure of the targeted carbohydrates to the hydrolytic enzymes. Several pretreatment approaches have been explored, and the best ones are those that can improve cellulose conversion rates and hydrolytic enzyme performance with minimal cost and fewer downstream processes. One of the promising strategies in this field is the application of surfactants, especially non-ionic surfactants. In this study, Triton-X 100 was used as a surfactant to treat cardboard prior to enzymatic hydrolysis, and the result was compared with acid treatment using 0.1% H2SO4. The effect of the surfactant enhancement was evaluated through the hydrolysis rate with respect to time, in addition to evaluating structural changes and modification by scanning electron microscopy (SEM) and X-ray diffraction (XRD) and through compositional analysis. Further work was performed to produce ethanol from CB treated with Triton-X 100 via separate hydrolysis and fermentation (SHF) and simultaneous saccharification and fermentation (SSF). The hydrolysis studies demonstrated a 35% enhancement in saccharification. After 72 h of hydrolysis, a saccharification rate of 98% was achieved from CB enhanced with Triton-X 100, while only 89% was achieved from acid-pretreated CB. At 120 h, the saccharification percentage exceeded 100% as reducing sugars continued to increase with time. This enhancement was not accompanied by any significant change in cardboard composition, as the cellulose, hemicellulose and lignin contents remained the same after treatment, but obvious structural changes were observed in SEM images: the cellulose fibers were clearly exposed, with much less debris and fewer deposits compared to cardboard without Triton-X 100. The XRD pattern also revealed the ability of the surfactant to remove calcium carbonate, a filler found in waste paper known to have a negative effect on enzymatic hydrolysis. The cellulose crystallinity was reduced from 73.18% without surfactant to 66.68%, rendering it more amorphous and susceptible to enzymatic attack. Triton-X 100 proved to effectively enhance CB hydrolysis and ultimately had a positive effect on the ethanol yield via SSF. Treatment with Triton-X 100 alone was sufficient to enhance enzymatic hydrolysis and ethanol production.
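
For context, saccharification percentages like those above are commonly computed as the ratio of released reducing sugars (corrected by the 0.9 anhydro factor) to the carbohydrate loaded; this definition and the numbers below are assumptions for illustration, not figures taken from the paper. A value above 100% can arise when the substrate's carbohydrate fraction is underestimated.

```python
def saccharification_pct(reducing_sugar_g_l, biomass_g_l, carb_fraction):
    """Common saccharification-yield definition (assumed here): reducing
    sugars released, corrected by the 0.9 anhydro factor, divided by the
    carbohydrate content of the loaded biomass."""
    return (reducing_sugar_g_l * 0.9) / (biomass_g_l * carb_fraction) * 100.0

# Illustrative numbers only: 20 g/L cardboard at ~60% carbohydrate
print(round(saccharification_pct(13.1, 20.0, 0.60)))  # about 98%
```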

Keywords: cardboard, enhancement, ethanol, hydrolysis, treatment, Triton-X 100

Procedia PDF Downloads 143
35 Molecular Identification of Camel Tick and Investigation of Its Natural Infection by Rickettsia and Borrelia in Saudi Arabia

Authors: Reem Alajmi, Hind Al Harbi, Tahany Ayaad, Zainab Al Musawi

Abstract:

Hard ticks of Hyalomma spp. (family Ixodidae) are obligate ectoparasites in all their life stages on some domestic animals, mainly camels and cattle. Ticks may cause many economic and public health problems because of their blood-feeding behavior. They also act as vectors for many bacterial, viral and protozoan agents which may cause serious diseases such as tick-borne encephalitis, Rocky Mountain spotted fever, Q-fever and Lyme disease, affecting humans and/or animals. In the present study, molecular identification of ticks that attack camels in the Riyadh region, Saudi Arabia, based on the partial sequence of the mitochondrial 16S rRNA gene, was applied. The present study also aims to detect natural infections of the collected camel ticks with Rickettsia spp. and Borrelia spp. using PCR/hybridization of the citrate synthase-encoding gene present in bacterial cells. Hard ticks infesting camels were collected from different camels on a farm in the Riyadh region, Saudi Arabia. Results showed that the collected specimens belong to two species: Hyalomma dromedarii, representing 99% of the identified specimens, and Hyalomma marginatum, accounting for 1% of the identified ticks. The molecular identification was made by BLASTing the sequences obtained in this study against sequences already present and identified in GenBank. All obtained sequences of H. dromedarii specimens showed 97-100% identity with the same gene sequence of the same species (accession # L34306.1), which was used as a reference. Meanwhile, no intraspecific variation could be measured for H. marginatum because only one specimen was collected. Results also showed that the intraspecific variability between individuals of H. dromedarii ranged from 0.2-6.6% in 92% of samples, while the remaining 7% of the total H. dromedarii samples showed about 10.3% individual differences. The interspecific variability between H. dromedarii and H. marginatum was approximately 18.3%. On the other hand, using the PCR/hybridization technique, we could detect natural infection of camel ticks with Rickettsia spp. and Borrelia spp. Results revealed the natural presence of both bacteria in the collected ticks: Rickettsia spp. infection was present in 29% of the collected ticks, while 35% of the collected specimens were infected with Borrelia spp. The results obtained from the present study are a new record for the molecular identification of camel ticks in Riyadh, Saudi Arabia, and their natural infection with both Rickettsia spp. and Borrelia spp. These results may help scientists provide a sound and direct control strategy for ticks, in order to protect camels, which are among the most economically important animals. The results also spotlight the diseases that might be transmitted by ticks, supporting a direct protective plan to prevent the spread of these dangerous agents. Further molecular studies using other mitochondrial and nuclear genes for tick identification are needed to confirm the results of the present study.
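
The 97-100% identity figures come from comparing aligned 16S rRNA sequences against a GenBank reference. As a toy illustration only (the study's workflow used BLAST against GenBank, not this function), percent identity over the non-gap columns of an alignment can be computed as:

```python
def percent_identity(aligned_a, aligned_b):
    """Percent identity over the aligned, non-gap columns of two sequences
    (a toy stand-in for the BLAST comparisons used in the study)."""
    pairs = [(a, b) for a, b in zip(aligned_a, aligned_b)
             if a != '-' and b != '-']
    matches = sum(a == b for a, b in pairs)
    return 100.0 * matches / len(pairs)

# Two short, made-up rRNA fragments with one substitution and one gap
seq1 = "ACGTTAGCTAGGCT-ACG"
seq2 = "ACGTTAGCTAAGCTGACG"
print(round(percent_identity(seq1, seq2), 1))  # 94.1: 16 of 17 columns match
```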

Keywords: camel ticks, Rickettsia spp., Borrelia spp., mitochondrial 16S rRNA gene

Procedia PDF Downloads 266
34 Howard Mold Count of Tomato Pulp Commercialized in the State of São Paulo, Brazil

Authors: M. B. Atui, A. M. Silva, M. A. M. Marciano, M. I. Fioravanti, V. A. Franco, L. B. Chasin, A. R. Ferreira, M. D. Nogueira

Abstract:

Fungi attack large amounts of fruit, and fruits that have suffered a surface injury are more susceptible to growth, since fungi possess pectinolytic enzymes that destroy the edible portion, forming an amorphous, soft mass. Spores can reach the plant via wind, rain and insects, and fruit may carry on its surface, besides contaminants from the fruit trees, soil and water, a flora composed mainly of yeasts and molds. Further contamination can occur through the equipment used for harvesting, the use of contaminated boxes and washing water, and storage in dirty places. The presence of hyphae in tomato products indicates the use of contaminated raw materials or unsuitable hygiene conditions during processing. Although fungi are inactivated in the heat-processing step, their hyphae remain in the final product, and their detection and quantification is an indicator of raw-material quality. The Howard method for counting fungal mycelia in industrialized pulps evaluates the amount of decayed fruit in the raw material. The Brazilian legislation governing processed and packaged products sets a limit of 40% positive fields in tomato pulps. The aim of this study was to evaluate the quality of the tomato pulp sold in greater São Paulo through monitoring across the four seasons of the year. Throughout 2010, 110 samples were examined: 21 taken in spring, 31 in summer, 31 in fall and 27 in winter, all from different lots and trademarks. Samples were purchased from several stores located in the city of São Paulo. The Howard method was used, as recommended by the AOAC, 19th ed., 2011 (technique 16:19:02 - method 965.41). One hundred percent of the samples contained fungal mycelia. The average mycelial count per season was 23%, 28%, 8.2% and 9.9% in spring, summer, fall and winter, respectively. Of the 21 spring samples analyzed, 14.3% exceeded the limit proposed by the legislation. As for the fall and winter samples, all were in accordance with the legislation, and the average mycelial filament count did not exceed 20%, which can be explained by the low temperatures during this time of the year. The samples acquired in summer and spring showed a high percentage of fungal mycelium in the final product, related to the high temperatures in these seasons. Considering the limit of 40% positive fields accepted by the Brazilian legislation (RDC nº 14/2014), 3 spring samples (14%) and 6 summer samples (19%) were over this limit and subject to legal penalties. According to the gathered data, 82% of manufacturers of this product manage to keep acceptable levels of fungal mycelia in their product. In conclusion, only 9.2% of the samples exceeded the limits established by Resolution RDC 14/2014, showing that the 40% limit is feasible and can be used by the industries in this segment. The mycelial filament count by the Howard method is an important microscopic-analysis tool, since it measures the quality of the raw material used in the production of tomato products.
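
The Howard count itself is a simple proportion: the percentage of microscope fields that contain mold hyphae. A small sketch of the pass/fail check against the 40% limit follows; the 25-field default is an assumption from common practice, not a figure stated in the abstract.

```python
def howard_mold_count(fields_with_hyphae, total_fields=25):
    """Howard mold count: percent of examined microscope fields containing
    fungal hyphae, checked against the Brazilian RDC 14/2014 limit of 40%."""
    pct = 100.0 * fields_with_hyphae / total_fields
    return pct, pct <= 40.0

print(howard_mold_count(7))   # (28.0, True): compliant sample
print(howard_mold_count(12))  # (48.0, False): over the 40% limit
```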

Keywords: fungi, Howard method, tomato, pulps

Procedia PDF Downloads 370
33 A Supply Chain Risk Management Model Based on Both Qualitative and Quantitative Approaches

Authors: Henry Lau, Dilupa Nakandala, Li Zhao

Abstract:

In today's business environment, it is well recognized that risk is an important factor that needs to be taken into consideration before a decision is made. Studies indicate that both the number of risks faced by organizations and their potential consequences are growing. Supply chain risk management has become one of the major concerns for practitioners and researchers, and supply chain leaders and scholars are now focusing on its importance. In order to meet the challenge of managing and mitigating supply chain risk (SCR), we must first identify the different dimensions of SCR and assess its probability and severity. SCR has been classified in many different ways; there are no consistently accepted dimensions of SCRs, and several different classifications are reported in the literature. Basically, supply chain risks can be classified into two dimensions, namely disruption risk and operational risk. Disruption risks are those caused by events such as bankruptcy, natural disasters and terrorist attacks. Operational risks are related to supply and demand coordination and uncertainty, such as uncertain demand and uncertain supply. Disruption risks are rare but severe and hard to manage, while operational risks can be reduced through effective SCM activities. Other SCRs include supply risk, process risk, demand risk and technology risk. In fact, the disorganized classification of SCR has created confusion for SCR scholars; moreover, practitioners need to identify and assess SCR. As such, it is important to have an overarching framework tying all these SCR dimensions together, for two reasons. First, it helps researchers use these terms to communicate ideas based on the same concepts. Second, a shared understanding of the SCR dimensions supports researchers in focusing on the more important research objective: the operationalization of SCR, which is essential for assessing it. In general, the fresh food supply chain is subject to a certain level of risk, such as supply risk (low quality, delivery failure, hot weather, etc.) and demand risk (seasonal food imbalance, new competitors). Effective strategies to mitigate fresh food supply chain risk are required to enhance operations. Before implementing effective mitigation strategies, we need to identify the risk sources and evaluate the risk level. However, assessing supply chain risk is not an easy matter, and existing research mainly uses qualitative methods, such as the risk assessment matrix. To address the relevant issues, this paper analyzes the risk factors of the fresh food supply chain using an approach that combines fuzzy logic and hierarchical holographic modeling techniques. This novel approach is able to exploit the benefits of both of these well-known techniques while offsetting their drawbacks in certain aspects. In order to develop this integrated approach, substantial research work is needed to combine the two techniques effectively and seamlessly. To validate the proposed integrated approach, a case study in a fresh food supply chain company was conducted to verify the feasibility of its functionality in a real environment.
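
As a rough sketch of the fuzzy-logic half of the proposed approach, the example below scores a risk from fuzzy "likelihood" and "severity" inputs using Mamdani-style rules and centroid defuzzification. The membership functions, rule base, and 0-10 scales are invented for illustration; the paper's actual integration with hierarchical holographic modeling is substantially more involved.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function; a == b or b == c gives a shoulder."""
    x = np.asarray(x, dtype=float)
    rise = np.ones_like(x) if b == a else np.clip((x - a) / (b - a), 0, 1)
    fall = np.ones_like(x) if c == b else np.clip((c - x) / (c - b), 0, 1)
    return np.minimum(rise, fall)

def fuzzy_risk(likelihood, severity):
    """Mamdani-style scoring on a 0-10 scale: risk is high if either
    input is high, low if both are low, medium otherwise."""
    lo_l, hi_l = tri(likelihood, 0, 0, 6), tri(likelihood, 4, 10, 10)
    lo_s, hi_s = tri(severity, 0, 0, 6), tri(severity, 4, 10, 10)
    u = np.linspace(0.0, 10.0, 101)              # output universe
    low = np.minimum(min(lo_l, lo_s), tri(u, 0, 0, 4))
    med = np.minimum(max(min(lo_l, hi_s), min(hi_l, lo_s)), tri(u, 3, 5, 7))
    high = np.minimum(max(hi_l, hi_s), tri(u, 6, 10, 10))
    agg = np.maximum.reduce([low, med, high])    # aggregate fired rules
    return float((u * agg).sum() / (agg.sum() + 1e-12))  # centroid

# Example: likely hot-weather disruption (7/10) with severe spoilage (8/10)
print(round(fuzzy_risk(7.0, 8.0), 1))            # a high score, around 8.6
```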

Keywords: fresh food supply chain, fuzzy logic, hierarchical holographic modelling, operationalization, supply chain risk

Procedia PDF Downloads 233
32 Large Scale Method to Assess the Seismic Vulnerability of Heritage Buildings: Modal Updating of Numerical Models and Vulnerability Curves

Authors: Claire Limoge Schraen, Philippe Gueguen, Cedric Giry, Cedric Desprez, Frédéric Ragueneau

Abstract:

The Mediterranean area is characterized by numerous monumental or vernacular masonry structures illustrating old ways of building and living. These precious buildings are often poorly documented, present complex shapes and loadings, and are protected by the States, leading to legal constraints. The area also presents moderate to high seismic activity, and even moderate earthquakes can be magnified by local site effects and cause collapse or significant damage. Moreover, the structural resistance of masonry buildings, especially the less famous ones or those located in rural zones, has generally been lowered by many factors: poor maintenance, unsuitable restoration, ambient pollution, and previous earthquakes. Recent earthquakes prove that any damage to these architectural witnesses to our past is irreversible, which makes preventive action necessary. This means providing preventive assessments for hundreds of structures with no or few documents. In this context, we propose a general method, based on hierarchized numerical models, to provide preliminary structural diagnoses at a regional scale, indicating whether more precise investigations and models are necessary for each building. To this end, we adapt different tools, some under development, such as photogrammetry, and some still to be created, such as a preprocessor that builds meshes for FEM software starting from pictures, in order to allow dynamic studies of the buildings in the panel. We made an inventory of 198 baroque chapels and churches situated in the French Alps. Their structural characteristics were then determined through field surveys and the MicMac photogrammetric software. Using structural criteria, we defined eight types of churches and seven types of chapels. We studied their dynamic behavior with CAST3M, using the EC8 spectrum and accelerograms of the studied zone. This allowed us to quantify the effect of the necessary simplifications in the most sensitive zones and to choose the most effective ones. We also proposed threshold criteria based on the damage observed in the in situ surveys, old pictures, and the Italian code; these criteria are relevant in linear models. To validate the structural types, we conducted a vibratory measurement campaign using ambient vibration noise and velocimeters. This also allowed us to validate the method on old masonry and to identify the modal characteristics of 20 churches. We then performed a dynamic identification between numerical and experimental modes and updated the linear models through material and geometrical parameters, which are often unknown because of the complexity of the structures and materials. The numerically optimized values were verified against the measurements we made on the masonry components in situ and in the laboratory. We are now working on non-linear models that redistribute the strains, in order to validate the damage threshold criteria used to compute the vulnerability curves of each defined structural type. Our current results show a good correlation between experimental and numerical data, validating the final modeling simplifications and the global method. We now plan to use non-linear analysis in the critical zones in order to test reinforcement solutions.
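
For the dynamic identification step, a standard way to pair numerical and experimental modes is the Modal Assurance Criterion (MAC); the abstract does not name its pairing metric, so MAC is an assumption here.

```python
import numpy as np

def mac(phi_num, phi_exp):
    """Modal Assurance Criterion between numerical and experimental mode
    shapes (one mode per column). Values near 1 indicate well-paired
    modes; values near 0 indicate orthogonal, unpaired shapes."""
    num = np.abs(phi_num.conj().T @ phi_exp) ** 2
    den = np.outer(np.sum(np.abs(phi_num) ** 2, axis=0),
                   np.sum(np.abs(phi_exp) ** 2, axis=0))
    return num / den

# Illustrative 4-DOF example: two numerical vs two measured mode shapes
phi_n = np.array([[1.0, 1.0], [0.8, -0.5], [0.5, -1.0], [0.2, 0.9]])
phi_e = phi_n + 0.05 * np.random.default_rng(0).standard_normal(phi_n.shape)
print(mac(phi_n, phi_e).round(2))   # near-identity matrix expected
```

Mode pairs with MAC values near 1 are treated as matched, and the material and geometrical parameters of the model are then tuned until the paired frequencies and shapes agree.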

Keywords: heritage structures, masonry numerical modeling, seismic vulnerability assessment, vibratory measure

Procedia PDF Downloads 485
31 Religious Discourses and Their Impact on Regional and Global Geopolitics: A Study of Deobandi in India, Pakistan and Afghanistan

Authors: Soumya Awasthi

Abstract:

The spread of radical ideology is possible not merely through public meetings, protests, and mosques but also in schools, seminaries, and madrasas. The rhetoric created around the relationship between religion and conflict has been a primary factor in instigating global conflicts when religion is used to achieve broader objectives. There have been numerous cases of religion-driven conflict around the world, be it the Jewish revolts between 66 AD and 628 AD, the Crusades from 1119 AD, the Cold War period, or the rise of right-wing politics in India. Some of the major developments that reiterate the significance of religion in contemporary times include: (1) the emergence of theocracy in Iran in 1979; (2) the resurgence of worldwide religious beliefs in the post-Soviet space; (3) the emergence of transnational terrorism shaped by a twisted depiction of Islam by self-proclaimed protectors of the religion. This paper is therefore premised on the argument that religion has always found itself on the periphery of the discipline of International Relations (IR) and has received less attention than it deserves. The topic focuses on the discourses of 'Deobandi' and their impact both on the geopolitics of the region, particularly in India, Pakistan, and Afghanistan, and at the global level. Discourse is a mechanism in use since time immemorial and has been a key tool to mobilise the masses against ruling authority. With the help of field surveys and qualitative and analytical methods of research in religion and international relations, it has been found that numerous madrasas are running illegally and unregistered. These seminaries operate in Khyber Pakhtunkhwa and the Federally Administered Tribal Areas (FATA). During the Soviet invasion of Afghanistan in 1979, the relation between religion and geopolitics was highlighted by a sudden spread of radical ideas, which found support from countries like Saudi Arabia (which funded the campaign) and Pakistan (which organised the Saudi funds and set up training camps, both educational and military). During this period there was a strong influence of Wahabi theology on the madrasas, which started with Deobandi philosophy and later became a mix of Wahabi (influenced by Ahmad Ibn Hanbal and Ibn Taymiyyah) and Deobandi philosophy, tending towards fundamentalism. Later, the impact of regional geopolitics extended to global geopolitics through incidents such as the attack on the US in 2001 and the bomb blasts in the UK, Indonesia, Turkey, and Israel in the 2000s. In the midst of all this, several scholars pointed to Deobandi philosophy as one of the drivers in the creation of armed Islamic groups in Pakistan and Afghanistan. Hence, this paper attempts to understand how Deobandi religious discourses originating from India have changed over the decades, and who the agents of such changes are. It throws light on Deoband from pre-independence to the present to create a narrative around the religious discourses and Deobandi philosophy and their spillover impact on the map of global and regional security.

Keywords: Deobandi School of Thought, radicalization, regional and global geopolitics, religious discourses, Wahabi movement

Procedia PDF Downloads 207
30 Electronic Raman Scattering Calibration for Quantitative Surface-Enhanced Raman Spectroscopy and Improved Biostatistical Analysis

Authors: Wonil Nam, Xiang Ren, Inyoung Kim, Masoud Agah, Wei Zhou

Abstract:

Despite its ultrasensitive detection capability, surface-enhanced Raman spectroscopy (SERS) faces challenges as a quantitative biochemical analysis tool due to the significant dependence of local field intensity in hotspots on nanoscale geometric variations of plasmonic nanostructures. Therefore, despite enormous progress in plasmonic nanoengineering of high-performance SERS devices, it is still challenging to quantitatively correlate measured SERS signals with the actual molecule concentrations at hotspots. Significant effort has been devoted to developing SERS calibration methods by introducing internal standards, achieved by placing Raman tags at plasmonic hotspots. Raman tags undergo similar SERS enhancement at the same hotspots, and ratiometric SERS signals for analytes of interest can be generated with reduced dependence on geometrical variations. However, using Raman tags still faces challenges for real-world applications, including spatial competition between the analyte and the tags in hotspots, spectral interference, and laser-induced degradation/desorption due to plasmon-enhanced photochemical/photothermal effects. We show that electronic Raman scattering (ERS) signals from metallic nanostructures at hotspots can serve as an internal calibration standard to enable quantitative SERS analysis and improve biostatistical analysis. We perform SERS with Au-SiO₂ multilayered metal-insulator-metal nano-laminated plasmonic nanostructures. Since the ERS signal is proportional to the volume density of electron-hole occupation in hotspots, ERS signals increase exponentially as the wavenumber approaches zero. Using a long-pass filter, generally present in backscattered SERS configurations, to chop the ERS background continuum, we can observe an ERS pseudo-peak, IERS. Both the ERS and SERS processes experience the same |E|⁴ local enhancements during the excitation and inelastic scattering transitions. We calibrated the IMRS of 10 μM Rhodamine 6G in solution by IERS. The results show that ERS calibration generates a new analytical value, ISERS/IERS, which is insensitive to variations between hotspots and can thus quantitatively reflect molecular concentration information. Given the calibration capability of ERS signals, we performed label-free SERS analysis of living biological systems using four different breast normal and cancer cell lines cultured on nano-laminated SERS devices. 2D Raman mapping over 100 μm × 100 μm, containing several cells, was conducted. The SERS spectra were subsequently analyzed by multivariate analysis using partial least squares discriminant analysis. Remarkably, after ERS calibration, MCF-10A and MCF-7 cells are further separated, while the two triple-negative breast cancer cell lines (MDA-MB-231 and HCC-1806) overlap more, in good agreement with the well-known cancer categorization regarding the degree of malignancy. To assess the strength of ERS calibration, we further carried out a drug efficacy study using MDA-MB-231 and different concentrations of the anti-cancer drug paclitaxel (PTX). After ERS calibration, we can more clearly segregate the control/low-dosage groups (0 and 1.5 nM), the middle-dosage group (5 nM), and the group treated with the half-maximal inhibitory concentration (IC50, 15 nM). We therefore envision that ERS-calibrated SERS can find crucial opportunities in the label-free molecular profiling of complicated biological systems.
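
A schematic of the ratiometric calibration idea, on synthetic data: because both the ERS pseudo-peak and the molecular SERS bands scale with the same |E|⁴ hotspot enhancement, dividing each spectrum by its ERS intensity removes the hotspot-to-hotspot gain spread before classification. Everything below (data, band positions, the two-class PLS-DA setup) is invented for illustration and is not the paper's measurement pipeline.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)
n_spectra, n_wn = 60, 500
labels = np.repeat([0, 1], n_spectra // 2)

base = 1.0 + 0.05 * rng.standard_normal((n_spectra, n_wn))
base[labels == 1, 200:210] += 0.5          # class-specific Raman band
gain = rng.lognormal(0.0, 0.3, (n_spectra, 1))  # hotspot-to-hotspot gain
spectra = gain * base                      # ERS and SERS share the gain

i_ers = spectra[:, :20].mean(axis=1, keepdims=True)  # ERS pseudo-peak region
calibrated = spectra / i_ers               # ratio cancels the gain spread

# PLS-DA: regress class labels on calibrated spectra, threshold at 0.5
pls = PLSRegression(n_components=2).fit(calibrated, labels.astype(float))
pred = (pls.predict(calibrated).ravel() > 0.5).astype(int)
print("training accuracy:", (pred == labels).mean())
```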

Keywords: cancer cell drug efficacy, plasmonics, surface-enhanced Raman spectroscopy (SERS), SERS calibration

Procedia PDF Downloads 128
29 Identification of Hub Genes in the Development of Atherosclerosis

Authors: Jie Lin, Yiwen Pan, Li Zhang, Zhangyong Xia

Abstract:

Atherosclerosis is a chronic inflammatory disease characterized by the accumulation of lipids, immune cells, and extracellular matrix in the arterial walls. This pathological process can lead to the formation of plaques that obstruct blood flow and trigger cardiovascular diseases such as heart attack and stroke. The underlying molecular mechanisms remain unclear, although many studies have revealed the dysfunction of endothelial cells, the recruitment and activation of monocytes and macrophages, and the production of pro-inflammatory cytokines and chemokines in atherosclerosis. This study aimed to identify hub genes involved in the progression of atherosclerosis and to analyze their biological function in silico, thereby enhancing our understanding of the disease's molecular mechanisms. Through the analysis of microarray data, we examined gene expression in the media and neo-intima from plaques, as well as in distant macroscopically intact tissue, across a cohort of 32 hypertensive patients. Initially, 112 differentially expressed genes (DEGs) were identified. Subsequent immune infiltration analysis indicated a predominant presence of 27 immune cell types in the atherosclerosis group, particularly an increase in monocytes and macrophages. In the weighted gene co-expression network analysis (WGCNA), 10 modules with a minimum of 30 genes were defined as key modules, with the blue, dark olive green, and sky-blue modules being the most significant. These modules corresponded respectively to monocyte, activated B cell, and activated CD4 T cell gene patterns, revealing a strong morphological-genetic correlation. From these three gene patterns, a total of 2509 key genes (gene significance > 0.2, module membership > 0.8) were extracted. Six hub genes (CD36, DPP4, HMOX1, PLA2G7, PLN2, and ACADL) were then identified by intersecting the 2509 key genes and the 102 DEGs with lipid-related genes from the GeneCards database. The discriminative power of the six hub genes was estimated with a robust classifier, which achieved an area under the curve (AUC) of 0.873 in the ROC plot, indicating excellent efficacy in differentiating the disease group from the control group. Moreover, PCA visualization demonstrated clear separation between the groups based on these six hub genes, suggesting their potential utility as classification features in predictive models. Protein-protein interaction (PPI) analysis highlighted DPP4 as the most interconnected gene. Within the constructed key gene-drug network, 462 drugs were predicted, with ursodeoxycholic acid (UDCA) identified as a potential therapeutic agent for modulating DPP4 expression. In summary, our study identified critical hub genes implicated in the progression of atherosclerosis through comprehensive bioinformatic analyses. These findings not only advance our understanding of the disease but also pave the way for applying similar analytical frameworks and predictive models to other diseases, thereby broadening the potential for clinical applications and therapeutic discoveries.
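
The hub-gene selection step is, at its core, a three-way set intersection; the sketch below reproduces that logic with tiny illustrative stand-in gene lists (only the six reported hub genes are real, the rest are placeholders).

```python
# Hub genes must be differentially expressed, belong to a key WGCNA
# module, and appear among lipid-related genes. Illustrative lists only.
degs = {"CD36", "DPP4", "HMOX1", "PLA2G7", "PLN2", "ACADL", "IL6", "TNF"}
wgcna_key_genes = {"CD36", "DPP4", "HMOX1", "PLA2G7", "PLN2", "ACADL", "APOE"}
lipid_genes = {"CD36", "DPP4", "HMOX1", "PLA2G7", "PLN2", "ACADL", "LDLR"}

hub_genes = degs & wgcna_key_genes & lipid_genes
print(sorted(hub_genes))   # the six hub genes reported in the abstract
```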

Keywords: atherosclerosis, hub genes, drug prediction, bioinformatics

Procedia PDF Downloads 55
28 Golden Dawn's Rhetoric on Social Networks: Populism, Xenophobia and Antisemitism

Authors: Georgios Samaras

Abstract:

New media such as Facebook, YouTube and Twitter introduced the world to a new era of instant communication, an era in which online interactions could replace many offline actions. Technology can create a mediated environment in which participants can communicate (one-to-one, one-to-many, and many-to-many) both synchronously and asynchronously and participate in reciprocal message exchanges. Currently, social networks are attracting academic attention similar to that given to the internet after its mainstream implementation in public life, and websites and platforms are seen as being at the forefront of a new political change. There is a significant backdrop of previous methodologies employed to research the effects of social networks, and new approaches are being developed to adapt to the growth of social networks and the invention of new platforms. Golden Dawn was the first openly neo-Nazi party after World War II to win seats in the parliament of a European country. Its racist rhetoric and violent tactics on social networks were rewarded by its supporters, who saw in Golden Dawn's leaders a 'new dawn' in Greek politics. Mainstream media banned the party's leaders and members indefinitely after Ilias Kasidiaris attacked Liana Kanelli, a member of the Greek Communist Party, on live television. This media ban was seen as a treasonous move by a significant percentage of voters, who believed that the system was desperately trying to censor Golden Dawn to favor the mainstream parties. The shocking attack on live television received international coverage, and while European countries were condemning this newly emerged neo-Nazi rhetoric, almost 7 percent of the Greek population rewarded Golden Dawn with 18 seats in the Greek parliament. Many believe that Golden Dawn mobilised its voters online and that this approach played a significant role in spreading its message and appealing to wider audiences. No strict online censorship existed back in 2012, and although Golden Dawn openly used neo-Nazi symbolism, it was allowed to use social networks without serious restrictions until 2017. This paper used qualitative methods to investigate Golden Dawn's rise on social networks from 2012 to 2019. The focus of the content analysis was set on three social networking platforms: Facebook, Twitter and YouTube, while the existence of Golden Dawn's website, which was used as a news-sharing hub, was also taken into account. The content analysis included text and visual analyses that sampled content from the party's social networking pages to interpret its political messaging through an ideological lens focused on extreme-right populism. The absence of hate speech regulations on social network platforms in 2012 allowed the free expression of the heavily ultranationalist and populist views employed by Golden Dawn in the Greek political scene. On YouTube, Facebook and Twitter, the influence of this rhetoric was particularly strong. Official channels and MPs' profiles were investigated to explore the messaging in depth and understand its ideological elements.

Keywords: populism, far-right, social media, Greece, golden dawn

Procedia PDF Downloads 137
27 From Shelf to Shell - The Corporate Form in the Era of Over-Regulation

Authors: Chrysthia Papacleovoulou

Abstract:

The era of deregulation, offshore and tax-haven jurisdictions, and shelf companies has come to an end, while the usage of complex corporate structures involving trust instruments, special purpose vehicles, holding-subsidiary arrangements in offshore-haven jurisdictions, and the exploitation of tax treaties is soaring. States which raced to introduce corporate-friendly legislation, tax incentives, and creative international trust law in order to attract greater FDI are now faced with regulatory challenges and are forced to revisit the corporate form and its tax treatment. The fiduciary services industry, which dominated over the last three decades, is now striving to keep up with the new regulatory framework resulting from a number of European and international legislative measures. This article considers the challenges to the company and the corporate form posed by legislative measures on tax planning and tax avoidance, CRS reporting, FATCA, CFC rules, the OECD's BEPS, the EU Commission's new transparency rules for intermediaries, which extend to the tax advisors, accountants, banks and lawyers who design and promote tax planning schemes for their clients, the new EU rules to block artificial tax arrangements and the new transparency requirements for financial accounts, tax rulings and multinationals' activities (DAC 6), the G20's decision on a global 15% minimum corporate tax, and banking regulation. As a result, states find themselves in a race of over-regulation and compliance. These legislative measures amount to a global upside-down tax harmonisation. Through the adoption of the OECD's BEPS, states agreed to international collaboration to end tax avoidance and reform international taxation rules. While the idea was to ensure that multinationals pay their fair share of tax everywhere they operate, an indirect result of the aforementioned regulatory measures was to attack private clients: individuals who, over the past three decades, used the international tax system and jurisdictions such as the Marshall Islands, the Cayman Islands, the British Virgin Islands, Bermuda, the Seychelles, St. Vincent, Jersey, Guernsey, Liechtenstein, Monaco, Cyprus, and Malta, to name but a few, to engage in legitimate tax planning and tax avoidance. Companies can no longer maintain bank accounts without satisfying a real-substance test. States override the incorporation doctrine and apply a real-seat or real-substance test in taxing companies and their activities, even targeting the beneficial owners personally with tax liability. Tax authorities in civil law jurisdictions lift the corporate veil through public UBO and trust registries. As a result, the corporate form and the doctrine of limited liability are challenged at their core. Lastly, this article identifies the development of new instruments, such as funds and private placement insurance policies, and the trend of digital nomad workers. The baffling question is whether industry and states can meet somewhere in the middle and exit this over-regulation frenzy.

Keywords: company, regulation, tax, corporate structure, trust vehicles, real seat

Procedia PDF Downloads 129
26 Antimicrobial Properties of SEBS Compounds with Zinc Oxide and Zinc Ions

Authors: Douglas N. Simões, Michele Pittol, Vanda F. Ribeiro, Daiane Tomacheski, Ruth M. C. Santana

Abstract:

The increasing demand for thermoplastic elastomers is related to their wide range of applications, such as the automotive, footwear, and wire and cable industries, adhesives and medical devices, cell phones, sporting goods, toys and others. These materials are susceptible to microbial attack. Moisture and organic matter present in some areas (such as shower areas and sinks) provide favorable conditions for microbial proliferation, which contributes to the spread of diseases and shortens the product life cycle. Compounds based on SEBS copolymers, poly(styrene-b-(ethylene-co-butylene)-b-styrene), are a class of thermoplastic elastomers (TPE), fully recyclable and largely used in domestic applications such as bath mats and toothbrushes (soft touch). Zinc oxide and zinc ions loaded into personal and home care products have become common in recent years due to their biocidal effect. In that sense, the aim of this study was to evaluate the effect of zinc as an antimicrobial agent in compounds based on SEBS/polypropylene/oil/calcite for use as refrigerator seals (gaskets), bath mats and sink squeegees. Two zinc oxides from different suppliers (ZnO-Pe and ZnO-WR) and one masterbatch of zinc ions (M-Zn-ion) were used in proportions of 0%, 1%, 3% and 5%. The compounds were prepared using a co-rotating twin-screw extruder (L/D ratio of 40/1 and 16 mm screw diameter), with the extrusion parameters kept constant for all materials. Test specimens were prepared by injection molding. A compound with no antimicrobial additive (the standard) was also tested. The compounds were characterized by physical (density), mechanical (hardness and tensile) and rheological (melt flow rate, MFR) properties. The Japanese Industrial Standard (JIS) Z 2801:2010 was applied to evaluate antibacterial properties against Staphylococcus aureus (S. aureus) and Escherichia coli (E. coli). The Brazilian Association of Technical Standards (ABNT) NBR 15275:2014 was used to evaluate antifungal properties against Aspergillus niger (A. niger), Aureobasidium pullulans (A. pullulans), Candida albicans (C. albicans), and Penicillium chrysogenum (P. chrysogenum). The microbiological assay showed a reduction of over 42% in the E. coli population and over 49% in the S. aureus population. The tests with fungi were inconclusive, because the sample without zinc also inhibited fungal development when tested against A. pullulans, C. albicans and P. chrysogenum, and the zinc-loaded samples showed worse results than the standard sample when tested against A. niger. Zinc addition did not significantly change the mechanical properties. However, the density values increased with rising ZnO concentration and decreased slightly in the M-Zn-ion samples. There were also differences in the MFR results of all compounds compared to the standard.
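
For reference, the JIS Z 2801 assay summarized above reports viable counts after 24 h of contact; a common way to express the outcome is the antibacterial activity value R = log10(B/C) alongside the percent reduction. The counts below are illustrative, not the study's data.

```python
import math

def antibacterial_activity(untreated_cfu_24h, treated_cfu_24h):
    """JIS Z 2801 antibacterial activity value R = log10(B/C), where B and
    C are viable counts after 24 h on the untreated and treated specimens;
    R >= 2 is the usual pass criterion. Percent reduction is returned too."""
    r = math.log10(untreated_cfu_24h / treated_cfu_24h)
    pct = (1.0 - treated_cfu_24h / untreated_cfu_24h) * 100.0
    return r, pct

# Illustrative counts only (not from the paper): a 49% reduction
r, pct = antibacterial_activity(1.0e5, 5.1e4)
print(f"R = {r:.2f}, reduction = {pct:.0f}%")
```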

Keywords: antimicrobial, home device, SEBS, zinc

Procedia PDF Downloads 315