Search results for: file tampering attack
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 883

73 Effects of Probiotic Pseudomonas fluorescens on the Growth Performance, Immune Modulation, and Histopathology of African Catfish (Clarias gariepinus)

Authors: Nelson R. Osungbemiro, O. A. Bello-Olusoji, M. Oladipupo

Abstract:

This study was carried out to determine the effects of the probiotic Pseudomonas fluorescens on the growth performance, histology, and immune modulation of African catfish (Clarias gariepinus) challenged with Clostridium botulinum. P. fluorescens and C. botulinum isolates were obtained from the gut, gills, and skin of adult C. gariepinus procured from commercial fish farms in Akure, Ondo State, Nigeria. Physical and biochemical tests were performed on the bacterial isolates using standard microbiological techniques for their identification. Antibacterial activity tests on P. fluorescens showed an inhibition zone with a mean value of 3.7 mm, indicating a high level of antagonism. The experimental diets were prepared at different probiotic bacterial concentrations, comprising five treatments of different bacterial suspensions: the control (T1), T2 (10³), T3 (10⁵), T4 (10⁷), and T5 (10⁹). Three replicates were prepared for each treatment. Growth performance and nutrient utilization indices were calculated. Proximate analysis of the fish carcass and experimental diet was carried out using standard methods. After feeding for 70 days, haematological values and histological tests were obtained following standard methods; a subgroup from each experimental treatment was also challenged by intraperitoneal (I/P) inoculation with different concentrations of pathogenic C. botulinum. Statistically, there were significant differences (P < 0.05) in the growth performance and nutrient utilization of C. gariepinus. The best weight gain and feed conversion ratio were recorded in fish fed T4 (10⁷), and the poorest values were obtained in the control. Haematological analyses of C. gariepinus fed the experimental diets indicated that all fish fed diets with P. fluorescens had significantly (p < 0.05) higher white blood cell counts than those fed the control diet. The results of the challenge test showed that fish fed the control diet had the highest mortality rate. Histological examination of the gills, intestine, and liver of fish in this study showed several histopathological alterations in fish fed the control diet compared with those fed the P. fluorescens diets. The study indicated that the optimum level of P. fluorescens required for C. gariepinus growth and white blood cell formation is 10⁷ CFU g⁻¹, while carcass protein deposition required a concentration of 10⁵ CFU g⁻¹. The study also confirmed P. fluorescens as an effective probiotic capable of improving the immune response of C. gariepinus against attack by a virulent fish pathogen, C. botulinum.
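The growth performance and nutrient utilization indices mentioned in the abstract are standard aquaculture metrics. As a hedged illustration (the formulas below are the conventional definitions of weight gain, specific growth rate, and feed conversion ratio; the numbers are invented, not the study's data):

```python
# Conventional aquaculture growth indices; illustrative values only.
import math

def weight_gain(initial_g, final_g):
    """Absolute weight gain over the feeding trial (g)."""
    return final_g - initial_g

def specific_growth_rate(initial_g, final_g, days):
    """SGR (% per day) over the trial period."""
    return 100.0 * (math.log(final_g) - math.log(initial_g)) / days

def feed_conversion_ratio(feed_intake_g, initial_g, final_g):
    """FCR: dry feed consumed per unit of wet weight gained (lower is better)."""
    return feed_intake_g / (final_g - initial_g)

# Hypothetical fish fed for the 70-day trial length used in the study.
wg = weight_gain(12.0, 48.0)                      # 36.0 g gained
sgr = specific_growth_rate(12.0, 48.0, 70)        # ~1.98 % per day
fcr = feed_conversion_ratio(54.0, 12.0, 48.0)     # 1.5 g feed per g gain
```

Comparing such indices across the five treatment groups is how the "best weight gain and feed conversion ratio" result for T4 would be established.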

Keywords: Clarias gariepinus, Clostridium botulinum, probiotics, Pseudomonas fluorescens

Procedia PDF Downloads 165
72 Structural Health Assessment of a Masonry Bridge Using Wireless Accelerometer Sensors

Authors: Nalluri Lakshmi Ramu, C. Venkat Nihit, Narayana Kumar, Dillep

Abstract:

Masonry bridges are iconic heritage transportation infrastructure throughout the world. Continuous increases in traffic loads and speeds have kept engineers in a dilemma about their structural performance and capacity. Hence, the research community urgently needs to propose an effective methodology and validate it on real bridges. The presented research aims to assess the structural health of an eighty-year-old masonry railway bridge in India using wireless accelerometer sensors. The bridge consists of 44 spans of 24.2 m each, with individual piers 13 m tall laid on well foundations. To calculate the dynamic characteristics of the bridge, ambient vibrations were recorded from moving traffic at various speeds, and the results were compared with a three-dimensional numerical model developed in finite element-based software. Conclusions about weaker or deteriorated piers are drawn from the comparison of frequencies obtained from the experimental tests conducted on alternate spans. Masonry is a heterogeneous, anisotropic material made up of incoherent constituents (such as bricks, stones, and blocks). It is most likely the earliest widely used construction material. Masonry bridges, typically constructed of brick and stone, are still a key feature of the world's highway and railway networks. There are 147,523 railway bridges across India, and about 15% of these are built of masonry and are around 80 to 100 years old. The cultural significance of masonry bridges cannot be overstated. These bridges are considered complicated because of the presence of arches, spandrel walls, piers, foundations, and soils. Traffic loads and vibrations, wind, rain, frost attack, high/low temperature cycles, moisture, earthquakes, river overflows, floods, scour, and the soil under their foundations may cause material deterioration, opening of joints and ring separation in arch barrels, cracks in piers, loss of brick-stones and mortar joints, and distortion of the arch profile. A few NDT tests, such as the flat jack test, are employed to assess the homogeneity and durability of masonry structures; however, they have many drawbacks. A modern approach to the structural health assessment of masonry structures through vibration analysis, frequencies, and stiffness properties is explored in this paper.
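The frequency-based assessment described above rests on extracting dominant modal frequencies from ambient acceleration records. A minimal sketch of that step, assuming a single-channel record and using a plain discrete Fourier transform as a stand-in for the operational modal analysis a real study would perform:

```python
# Recover the dominant frequency of an ambient vibration record by
# scanning DFT magnitudes. Pure-Python DFT for illustration only; real
# modal analysis would use FFT-based processing of the sensor records.
import cmath, math

def dominant_frequency(signal, sample_rate_hz):
    """Return the frequency (Hz) with the largest DFT magnitude (DC excluded)."""
    n = len(signal)
    best_k, best_mag = 1, 0.0
    for k in range(1, n // 2):            # one-sided spectrum, skip DC
        s = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n))
        if abs(s) > best_mag:
            best_k, best_mag = k, abs(s)
    return best_k * sample_rate_hz / n

# Synthetic record: a hypothetical 5 Hz pier mode sampled at 100 Hz for 2 s.
fs = 100.0
record = [math.sin(2 * math.pi * 5.0 * t / fs) for t in range(200)]
f_dominant = dominant_frequency(record, fs)
```

A shift of such an identified frequency between nominally identical spans is the kind of evidence the paper uses to flag weaker or deteriorated piers.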

Keywords: masonry bridges, condition assessment, wireless sensors, numerical analysis, modal frequencies

Procedia PDF Downloads 171
71 Implementation of Smart Card Automatic Fare Collection Technology in Small Transit Agencies for Standards Development

Authors: Walter E. Allen, Robert D. Murray

Abstract:

Many large transit agencies have adopted RFID technology and electronic automatic fare collection (AFC) or smart card systems, but small and rural agencies remain tied to obsolete manual, cash-based fare collection. Small countries and transit agencies can benefit from the implementation of smart card AFC technology, with the promise of increased passenger convenience, added passenger satisfaction, and improved agency efficiency. For transit agencies, it reduces revenue loss and improves passenger flow and bus stop data. For countries, further extension into security, distribution of social services, or currency transactions can provide greater benefits. However, small countries and transit agencies cannot afford the expensive proprietary smart card solutions typically offered by the major system suppliers. Deployment of the Contactless Fare Media System (CFMS) Standard eliminates the proprietary solution, ultimately lowering the cost of implementation. Acumen Building Enterprise, Inc. chose the Yuma County Intergovernmental Public Transportation Authority's (YCIPTA) existing proprietary YCAT smart card system to implement CFMS. The revised system enables the purchase of fare product online with prepaid, debit, or credit cards using the Payment Gateway Processor. Open and interoperable smart card standards for transit have been developed. During the 90-day pilot operation, the transit agency gathered the data from the bus AcuFare 200 Card Reader, loaded (copied) the data to a USB thumb drive, and uploaded the data to the Acumen Host Processing Center for consolidation into the transit agency master data file. The transition from the existing proprietary smart card data format to the new CFMS smart card data format was transparent to the transit agency cardholders. It was proven that open standards and interoperable design can work and reduce both implementation and operational costs for small transit agencies or countries looking to expand smart card technology. Acumen was able to avoid implementing the Payment Card Industry (PCI) Data Security Standards (DSS), which are expensive to develop and costly to operate on a continuing basis. Because of the substantial additional complexity of implementation and the variety of options presented to the transit agency cardholder, Acumen chose to implement only the Directed Autoload. To improve the implementation efficiency and the results of a similar undertaking, it should be considered that some passengers lack credit cards and are averse to technology. There are more than 1,300 small and rural agencies in the United States; this number grows tenfold when considering small countries and rural locations throughout Latin America and the world. Acumen is evaluating additional countries, sites, and transit agencies that can benefit from smart card systems. Frequently, payment card systems require extensive security procedures for implementation. The project demonstrated the ability to purchase fare value, rides, and passes with credit cards on the internet at a reasonable cost without highly complex security requirements.
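The pilot's data flow (card reader to thumb drive to host processing center to master file) is essentially a batch consolidation step. A hypothetical sketch, with an invented record format, since the actual CFMS data layout is defined by the standard rather than shown in the abstract:

```python
# Merge transaction batches gathered from bus card readers into one
# net-amount entry per cardholder. Record format is invented for
# illustration; the real CFMS layout is defined by the standard.
from collections import defaultdict

def consolidate(batches):
    """Merge per-reader transaction batches into card_id -> net amount."""
    per_card = defaultdict(list)
    for batch in batches:                 # one batch per uploaded thumb drive
        for txn in batch:
            per_card[txn["card_id"]].append(txn["amount"])
    return {card: sum(amounts) for card, amounts in per_card.items()}

bus_1 = [{"card_id": "A100", "amount": -2.00},   # fare deducted on board
         {"card_id": "B200", "amount": -2.00}]
bus_2 = [{"card_id": "A100", "amount": 20.00}]   # online autoload credited
totals = consolidate([bus_1, bus_2])
```

The point of the open-standard format is that records from any compliant reader can be merged this way without proprietary translation.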

Keywords: automatic fare collection, near field communication, small transit agencies, smart cards

Procedia PDF Downloads 284
70 Endocrine Therapy Resistance and Epithelial to Mesenchymal Transition Inhibited by INT3 and Quercetin in MCF7 Cell Lines

Authors: D. Pradhan, G. Tripathy, S. Pradhan

Abstract:

Objectives: Resistance to estrogen-targeted therapies is a major cause of disease relapse and mortality in estrogen receptor alpha (ERα)-positive breast cancers. Tamoxifen treatment or estrogen withdrawal increases the dependence of breast cancer cells on INT3 signaling. Here, we investigated the contribution of Quercetin and INT3 signaling in endocrine therapy-resistant breast cancer cells. Methods: We used two models of endocrine therapy-resistant (ETR) breast cancer: Tamoxifen-resistant (TamR) and long-term estrogen-deprived (LTED) MCF7 cells. We evaluated the migratory and invasive capacity of these cells in Transwell assays. Expression of epithelial-to-mesenchymal transition (EMT) regulators, as well as INT3 receptors and targets, was evaluated by real-time PCR and western blot analysis. In addition, we tested in vitro anti-Quercetin monoclonal antibodies (mAbs) and gamma-secretase inhibitors (GSIs) as potential EMT-reversing therapeutic agents. Finally, we generated stable Quercetin-overexpressing MCF7 cells and evaluated their EMT features and response to Tamoxifen. Results: We found that ETR cells acquired an EMT phenotype and displayed increased levels of Quercetin and INT3 targets. Interestingly, we detected higher levels of INT3 but lower levels of INT1, suggesting a switch to signaling through different INT receptors after acquisition of resistance. Anti-Quercetin monoclonal antibodies and the GSI PF03084014 were effective in blocking the Quercetin/INT3 axis and partially inhibiting the EMT process. As a consequence, cell migration and invasion were attenuated, and the stem cell-like population was significantly reduced. Genetic silencing of Quercetin and INT3 produced equivalent effects. Finally, stable overexpression of Quercetin was sufficient to make MCF7 cells unresponsive to Tamoxifen through INT3 activation. Conclusions: ETR cells express high levels of Quercetin and INT3, whose activation ultimately drives invasive behaviour. Anti-Quercetin mAbs and the GSI PF03084014 reduce the expression of EMT molecules, decreasing cell invasiveness. Quercetin overexpression induces Tamoxifen resistance linked to acquisition of an EMT phenotype. Our findings suggest that targeting Quercetin and INT3 warrants further clinical evaluation as a valid therapeutic strategy in endocrine-resistant breast cancer.

Keywords: endocrine, epithelial, mesenchymal, INT3, quercetin, MCF7

Procedia PDF Downloads 306
69 Integrating System-Level Infrastructure Resilience and Sustainability Based on Fractal: Perspectives and Review

Authors: Qiyao Han, Xianhai Meng

Abstract:

Urban infrastructures refer to the fundamental facilities and systems that serve cities. Due to global climate change and human activities in recent years, many urban areas around the world are facing enormous challenges from natural and man-made disasters, such as floods, earthquakes, and terrorist attacks. For this reason, urban resilience to disasters has attracted increasing attention from researchers and practitioners. Given the complexity of infrastructure systems and the uncertainty of disasters, this paper suggests that studies of resilience could focus on urban functional sustainability (in social, economic, and environmental dimensions) supported by infrastructure systems under disturbance. It is supposed that urban infrastructure systems with high resilience should be able to reconfigure themselves without significant declines in critical functions (services), such as primary productivity, hydrological cycles, social relations, and economic prosperity. Although some methods have been developed to integrate the resilience and sustainability of individual infrastructure components, more work is needed to enable system-level integration. This research presents a conceptual analysis framework for integrating resilience and sustainability based on fractal theory. It is believed that the ability of an ecological system to maintain structure and function in the face of disturbance and to reorganize following disturbance-driven change is largely dependent on its self-similar and hierarchical fractal structure, in which cross-scale resilience is produced by the replication of ecosystem processes dominating at different levels. Urban infrastructure systems are analogous to ecological systems because they are complex and adaptive, are comprised of interconnected components, and exhibit characteristic scaling properties. Therefore, analyzing the resilience of ecological systems provides a better understanding of the dynamics and interactions of infrastructure systems. This paper discusses the fractal characteristics of ecosystem resilience, reviews literature related to system-level infrastructure resilience, identifies resilience criteria associated with sustainability dimensions, and develops a conceptual analysis framework. Exploration of the relevance of the identified criteria to fractal characteristics reveals a great potential to analyze infrastructure systems on a fractal basis. In the conceptual analysis framework, it is proposed that, in order to be resilient, an urban infrastructure system needs to be capable of "maintaining" and "reorganizing" multi-scale critical functions under disasters. Finally, the paper identifies areas where further research efforts are needed.

Keywords: fractal, urban infrastructure, sustainability, system-level resilience

Procedia PDF Downloads 275
68 Deployment of Armed Soldiers in European Cities as a Source of Insecurity among Czech Population

Authors: Blanka Havlickova

Abstract:

In the last ten years, growing numbers of troops with machine guns have been serving on the streets of European cities. We can see them around government buildings, major transport hubs, synagogues, galleries, and main tourist landmarks. Authorities declare the prevention of terrorist attacks and psychological support for tourists and the domestic population to be the main purpose of the armed soldiers' presence in European cities. The main objective of the following study is to find out whether the deployment of armed soldiers in European cities has a calming and reassuring effect on Czech citizens (whether the presence of armed soldiers makes the Czech population feel more secure) or rather becomes a stress factor (the presence of soldiers standing guard in full military fatigues recalls serious criminality and terrorist attacks, which is reflected in the fears and insecurity of the Czech population). The initial hypothesis of this study is connected with priming theory: the idea that when we are exposed to an image (an armed soldier), it makes us unconsciously focus on a topic connected with this image (terrorism). This paper is based on a quantitative public survey carried out in the form of electronic questioning among citizens of the Czech Republic. Respondents answered 14 questions about two European cities, London and Paris. Besides general questions investigating the respondents' awareness of these cities, some of the questions focused on the fear respondents felt when picturing themselves leaving next Monday for the given city (London or Paris). The questions about respondents' travel fears and concerns were accompanied by different photos. When answering the question about fear, some respondents were presented with a photo of the Westminster Palace or the Eiffel Tower with ordinary citizens, while other respondents were presented with a picture of the Westminster Palace or the Eiffel Tower not only with ordinary citizens but also with one soldier holding a machine gun. The main goal of this paper is to analyse and compare the data on concerns for these two groups of respondents (presented with different pictures) and to find out if, and how, an armed soldier with a machine gun in front of the Westminster Palace or the Eiffel Tower affects the public's concerns about visiting the site. In other words, the aim of this paper is to confirm or rebut the hypothesis that the sight of a soldier with a machine gun in front of the Eiffel Tower or the Westminster Palace automatically triggers an association with a terrorist attack, leading to increased fear and insecurity among the Czech population.

Keywords: terrorism, security measures, priming, risk perception

Procedia PDF Downloads 252
67 A Battle of Identity(ies): Deconstructing Spaces of Belonging in Saleem Haddad’s Guapa and Hasan Namir’s God in Pink

Authors: Nour Aladdin

Abstract:

This paper explores the interconnectedness of belonging, space, and identity in Anglo-Arab literature, particularly Saleem Haddad's Guapa and Hasan Namir's God in Pink. This paper suggests that Rasa and Ramy, the respective queer Arab characters, do not belong in either the Middle East or the West. Using Amin Maalouf's analysis of Arab identity, specifically his argument that an individual identifies strongly with the aspect of their identity that is under attack, this paper argues that all of Rasa and Ramy's spaces are politically charged, a term denoting that all values and beliefs instilled in Arabs and their spaces are heavily influenced by Arab politics, culture, and, oftentimes, religion. Therefore, the politically charged environments Rasa and Ramy inhabit will always be against one part of their identity, which is why they cannot identify as queer and Arab simultaneously. For Rasa, the unnamed Middle Eastern country, his home environment, as well as the so-called safe space of the nightclub, condemn his queerness, leading him to connect more with his sexual orientation. However, Rasa associates himself with his Arab roots when he migrates to America, a different form of politically charged space that minoritizes his ethnicity. Similarly, Ramy's spaces became naturally religiopolitical after Islam's role heightened in Iraq during the Iraq War; as a result, Ramy's home environment, Sheikh Ammar's house, the mosque, and the nightclub are influenced by religiopolitics and obstruct his ability to identify not only as a queer Arab but as a queer Arab Muslim. Ultimately, because Rasa and Ramy are constantly in movement, their identity attributes are also in movement. This paper is divided into three sections. The first section focuses on Guapa and the politics of the Arab Spring, mainly its influence on queer Arabs in and around the Middle East. Drawing from a number of queer and Arab gender theories, I analyze all of Rasa's spaces as politically charged spaces that deny him the means to be queer and Arab. The second section examines God in Pink in close connection to the 2003 invasion of Iraq. Ramy's spaces are religiopolitically charged, preventing him from embracing all of his identity attributes (nationality, ethnicity, sexual orientation, and religious affiliation) concomitantly. The last section considers the rapid adoption of technology and social media in the Middle East as a means of providing deviant heterotopic spaces for queer Arabs. With the rise of subtle and covert queer heterotopias, there is a slow and steady shift toward queer tolerance in the Arab world.

Keywords: belonging, identity, spaces, queer, arabness, middle east, orientalism

Procedia PDF Downloads 115
66 Security Issues in Long Term Evolution-Based Vehicle-To-Everything Communication Networks

Authors: Mujahid Muhammad, Paul Kearney, Adel Aneiba

Abstract:

The ability of vehicles to communicate with other vehicles (V2V), the physical (V2I) and network (V2N) infrastructures, pedestrians (V2P), etc., collectively known as V2X (Vehicle-to-Everything), will enable a broad and growing set of applications and services within the intelligent transport domain for improving road safety, alleviating traffic congestion, and supporting autonomous driving. The telecommunication research and industry communities and standardization bodies (notably 3GPP) have approved, in Release 14, cellular communications connectivity to support V2X communication (known as LTE-V2X). The LTE-V2X system will combine simultaneous connectivity across existing LTE network infrastructures via the LTE-Uu interface with direct device-to-device (D2D) communications. For V2X services to function effectively, a robust security mechanism is needed to ensure legal and safe interaction among authenticated V2X entities in the LTE-based V2X architecture. The characteristics of vehicular networks, and the nature of most V2X applications, which involve human safety, make it critical to protect V2X messages from attacks that can result in catastrophically wrong decisions/actions, including ones affecting road safety. Attack vectors include impersonation, modification, masquerading, replay, man-in-the-middle (MitM), and Sybil attacks. In this paper, we focus our attention on LTE-based V2X security and access control mechanisms. The current LTE-A security framework provides its own access authentication scheme, the AKA protocol, for mutual authentication and other essential cryptographic operations between UEs and the network. V2N systems can leverage this protocol to achieve mutual authentication between vehicles and the mobile core network. However, this protocol faces technical challenges, such as high signaling overhead, lack of synchronization, handover delay, and potential control-plane signaling overloads, as well as privacy preservation issues, and so cannot satisfy the security requirements of the majority of LTE-based V2X services. This paper examines these challenges and points to possible ways in which they can be addressed. One possible solution is the implementation of a distributed peer-to-peer LTE security mechanism based on the Bitcoin/Namecoin framework, allowing security operations with minimal overhead cost, which is desirable for V2X services. The proposed architecture can ensure fast, secure, and robust V2X services over an LTE network while meeting V2X security requirements.
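As a simplified illustration of the message-protection goal discussed above, the sketch below authenticates a V2X payload with HMAC-SHA256 from the Python standard library. This is a stand-in for the actual LTE/V2X cryptographic primitives, and the key and message format are invented; note that a keyed tag alone detects modification but not replay, which needs counters or timestamps on top:

```python
# Detect modification of a V2X message with a keyed tag. HMAC-SHA256 is
# used here purely for illustration; key derivation would come from the
# network's authentication procedure (e.g. AKA), not a literal byte string.
import hmac, hashlib

def sign_v2x_message(session_key: bytes, payload: bytes) -> bytes:
    """Append a 32-byte HMAC tag so receivers can detect tampering."""
    tag = hmac.new(session_key, payload, hashlib.sha256).digest()
    return payload + tag

def verify_v2x_message(session_key: bytes, message: bytes):
    """Return the payload if the tag verifies, else None (possible tampering)."""
    payload, tag = message[:-32], message[-32:]
    expected = hmac.new(session_key, payload, hashlib.sha256).digest()
    return payload if hmac.compare_digest(tag, expected) else None

key = b"session-key-from-network-auth"   # hypothetical derived session key
msg = sign_v2x_message(key, b"BRAKE_WARNING;lat=52.48;lon=-1.90")
ok = verify_v2x_message(key, msg)
tampered = msg[:-1] + bytes([msg[-1] ^ 1])
bad = verify_v2x_message(key, tampered)  # None: flipped bit detected
```

The constant-time comparison (`hmac.compare_digest`) matters in practice: naive byte comparison can leak timing information to an attacker probing tags.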

Keywords: authentication, long term evolution, security, vehicle-to-everything

Procedia PDF Downloads 168
65 Accurate Calculation of the Penetration Depth of a Bullet Using ANSYS

Authors: Eunsu Jang, Kang Park

Abstract:

In developing an armored ground combat vehicle (AGCV), it is a very important step to analyze the vulnerability (or survivability) of the AGCV against an enemy's attack. In vulnerability analysis, penetration equations are usually used to obtain the penetration depth and check whether a bullet can penetrate the armor of the AGCV, which would damage internal components or crew. The penetration equations are derived from penetration experiments, which require a long time and great effort. However, they usually hold only for the specific target material and the specific type of bullet used in the experiments. Thus, penetration simulation using ANSYS can be another option for calculating penetration depth. However, it is very important to model the targets and select the input parameters appropriately in order to obtain an accurate penetration depth. This paper performed a sensitivity analysis of the ANSYS input parameters with respect to the accuracy of the calculated penetration depth. Two conflicting objectives need to be achieved in adopting ANSYS for penetration analysis: maximizing the accuracy of the calculation and minimizing the calculation time. To maximize the calculation accuracy, a sensitivity analysis of the input parameters for ANSYS was performed and the RMS error against the experimental data was calculated. The input parameters tested include mesh size, boundary conditions, material properties, and target diameter, selected to minimize the error between the simulation results and the experimental data from the papers on the penetration equation. To minimize the calculation time, the parameter values obtained from the accuracy analysis are adjusted to obtain optimized overall performance. The analysis found the following: 1) As the mesh size gradually decreases from 0.9 mm to 0.5 mm, both the penetration depth and the calculation time increase. 2) As the diameter of the target decreases from 250 mm to 60 mm, both the penetration depth and the calculation time decrease. 3) As the yield stress of the target material decreases, the penetration depth increases. 4) The boundary condition with only the side surface of the target fixed gives a greater penetration depth than that with both the side and rear surfaces fixed. Using the above findings, the input parameters can be tuned to minimize the error between simulation and experiment. Using the simulation tool ANSYS with delicately tuned input parameters, penetration analysis can be done on a computer without actual experiments. Penetration experiment data are usually hard to obtain for security reasons, and only published papers provide them, for a limited set of target materials. The next step of this research is to generalize this approach to anticipate penetration depth by interpolating the known penetration experiments. The result may not be accurate enough to replace penetration experiments, but such simulations can be used in the early modelling-and-simulation stage of the AGCV design process.
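The accuracy objective described above amounts to computing an RMS error between simulated and experimental penetration depths for each candidate parameter setting and keeping the setting that minimizes it. A sketch with illustrative numbers (not the study's data; the mesh sizes echo the range reported above):

```python
# RMS error between simulated and experimental penetration depths,
# evaluated per candidate input-parameter setting. All values invented.
import math

def rms_error(simulated_mm, experimental_mm):
    """Root-mean-square error between paired depth series (mm)."""
    diffs = [(s - e) ** 2 for s, e in zip(simulated_mm, experimental_mm)]
    return math.sqrt(sum(diffs) / len(diffs))

experiment = [10.2, 14.8, 21.5]                # published depths (mm)
candidates = {                                 # hypothetical simulation runs
    "mesh=0.9mm": [9.0, 13.1, 19.6],
    "mesh=0.5mm": [10.0, 14.6, 21.9],
}
errors = {name: rms_error(sim, experiment) for name, sim in candidates.items()}
best = min(errors, key=errors.get)             # setting with lowest RMS error
```

The calculation-time side of the trade-off would be handled the same way: record wall-clock time per run and pick the cheapest setting whose error stays within tolerance.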

Keywords: ANSYS, input parameters, penetration depth, sensitivity analysis

Procedia PDF Downloads 402
64 Understanding the Role of Nitric Oxide Synthase 1 in Low-Density Lipoprotein Uptake by Macrophages and Implication in Atherosclerosis Progression

Authors: Anjali Roy, Mirza S. Baig

Abstract:

Atherosclerosis is a chronic inflammatory disease characterized by the formation of lipid-rich plaque enriched with a necrotic core, modified lipid accumulation, smooth muscle cells, endothelial cells, leucocytes, and macrophages. Macrophage foam cells play a critical role in the occurrence and development of inflammatory atherosclerotic plaque. Foam cells are the fat-laden macrophages of the initial stage of atherosclerotic lesion formation. Foam cells are an indication of plaque build-up, or atherosclerosis, which is commonly associated with an increased risk of heart attack and stroke as a result of arterial narrowing and hardening. The mechanisms that drive atherosclerotic plaque progression remain largely unknown. Dissecting the molecular mechanism of macrophage foam cell formation will help to develop therapeutic interventions for atherosclerosis. To investigate the mechanism, we studied the role of nitric oxide synthase 1 (NOS1)-mediated nitric oxide (NO) in low-density lipoprotein (LDL) uptake by bone marrow-derived macrophages (BMDM). Using confocal microscopy, we found that incubation of macrophages with the NOS1 inhibitor TRIM (1-(2-trifluoromethylphenyl) imidazole) or L-NAME (N omega-nitro-L-arginine methyl ester) prior to LDL treatment significantly reduces LDL uptake by BMDM. Further, addition of an NO donor (DEA NONOate) to NOS1 inhibitor-treated macrophages recovers LDL uptake. Our data strongly suggest that NOS1-derived NO regulates LDL uptake by macrophages and foam cell formation. Moreover, we also checked proinflammatory cytokine mRNA expression by real-time PCR in BMDM treated with LDL and copper-oxidized LDL (OxLDL) in the presence and absence of the inhibitor. Normal LDL does not evoke cytokine expression, whereas OxLDL induced proinflammatory cytokine expression, which was significantly reduced in the presence of the NOS1 inhibitor. Rapid formation of NOS1-derived NO and its stable derivatives acts as a signaling agent for inducible NOS2 expression in endothelial cells, leading to disruption and dysfunction of the endothelial lining of the vascular wall. This study highlights the role of NOS1 as a critical player in foam cell formation and would reveal much about the key molecular proteins involved in atherosclerosis. Thus, targeting NOS1 would be a useful strategy for reducing LDL uptake by macrophages at an early stage of disease and hence dampening atherosclerosis progression.

Keywords: atherosclerosis, NOS1, inflammation, oxidized LDL

Procedia PDF Downloads 127
63 Gluten Intolerance, Celiac Disease, and Neuropsychiatric Disorders: A Translational Perspective

Authors: Jessica A. Hellings, Piyushkumar Jani

Abstract:

Background: Systemic autoimmune disorders are increasingly implicated in neuropsychiatric illness, especially in the setting of treatment resistance in individuals of all ages. Gluten allergy in its fullest extent results in celiac disease, affecting multiple organs including the central nervous system (CNS). Clinicians often lack awareness of the association between neuropsychiatric illness and gluten allergy, partly because many such research studies are published in immunology and gastroenterology journals. Methods: Following a PubMed literature search and online searches of celiac disease websites, 40 articles are critically reviewed in detail. This work reviews celiac disease and gluten intolerance and the current evidence of their relationship to neuropsychiatric and systemic illnesses. The review also covers current work-up and diagnosis, as well as dietary interventions, gluten restriction outcomes, and future research directions. Results: Gluten allergy in susceptible individuals damages the small intestine, producing a leaky gut and a malabsorption state, and allowing antibodies into the bloodstream, which attack major organs. Lack of amino acid precursors for neurotransmitter synthesis, together with antibody-associated brain changes and hypoperfusion, may result in neuropsychiatric illness. This is well documented; however, studies in neuropsychiatry are often small. In the large CATIE trial, subjects with schizophrenia had significantly increased antibodies to tissue transglutaminase (TTG) and antigliadin antibodies, both significantly greater than in control subjects. On later follow-up, TTG-6 antibodies were identified in these subjects' brains but not in their intestines. Significant evidence, mostly from small studies, also exists for gluten allergy and celiac-related depression, anxiety disorders, attention-deficit/hyperactivity disorder, autism spectrum disorders, ataxia, and epilepsy. Dietary restriction of gluten resulted in remission in several published cases, including of treatment-resistant schizophrenia. Conclusions: Ongoing and larger studies are needed of the diagnosis and treatment efficacy of the gluten-free diet in neuropsychiatric illness. Clinicians should ask about a patient history of anemia, hypothyroidism, and irritable bowel syndrome and a family history of benefit from the gluten-free diet, especially, but not only, in cases of treatment resistance. Testing for gluten antibodies with a simple blood test, and referring positive cases for gastrointestinal work-up, should be considered.

Keywords: celiac, gluten, neuropsychiatric, translational

Procedia PDF Downloads 162
62 Effects of Gender on Kinematics Kicking in Soccer

Authors: Abdolrasoul Daneshjoo

Abstract:

Soccer is a game that draws attention in many countries, especially Brazil. Among the different skills in soccer, kicking plays an essential role in the success and standing of a team. Points are gained in this game by sending the ball over the goal line, which is achieved by shooting during attacking play or during penalty kicks. Accordingly, identifying the factors that affect the instep kick, whether shooting from different distances with maximum force and high accuracy, passing, or taking a penalty kick, may assist coaches and players in raising the quality of skill execution. The aim of the present study was to examine several kinematic parameters of the instep kick from distances of 5 and 7 meters among male and female elite soccer players. Twenty-four subjects with a dominant right lower limb (12 males and 12 females) from among Tehran elite soccer players participated in this study, with mean ± standard deviation age of (22.5 ± 1.5) and (22.08 ± 1.31) years, height of (179.5 ± 5.81) and (164.3 ± 4.09) cm, weight of (69.66 ± 4.09) and (53.16 ± 3.51) kg, BMI of (21.06 ± 0.731) and (19.67 ± 0.709), and playing history of (4 ± 0.73) and (3.08 ± 0.66) years, respectively. All had at least two years of continuous playing experience in the Tehran soccer league. To capture the players' kicks, a Kinemetrix motion analysis system with three cameras sampling at 1000 Hz was used. Five reflective markers were placed laterally on the kicking leg over anatomical landmarks (the iliac crest, greater trochanter, lateral epicondyle of the femur, lateral malleolus, and lateral aspect of the distal head of the fifth metatarsus). The instep kick was filmed from a stationary ball, with a one-step approach at an angle of 30 to 45 degrees. Three kicks were filmed, and one kick was selected for further analysis. Using Kinemetrix 3D motion analysis software, the positions of the markers were analyzed. 
Descriptive statistics were used to report means and standard deviations, while analysis of variance and the independent t-test (P < 0.05) were used to compare the kinematic parameters between the two genders. Among the evaluated parameters, knee acceleration, thigh angular velocity, and knee angle showed a significant relationship with the outcome of the kick. When comparing performance at 5 m between the two genders, significant differences were observed in the internal-external displacement of the toe, ankle, and hip; the velocity of the toe and ankle; the acceleration of the toe; and the angular velocity of the pelvis and thigh before ball contact. At 7 m, significant differences were found in the internal-external displacement of the toe, ankle, knee, hip, and iliac crest; the velocity of the toe and ankle; the acceleration of the ankle; and the angular velocity of the pelvis and knee.
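As a rough illustration of the gender comparison described above, an independent two-sample t statistic can be computed directly. This sketch uses the Welch variant (which does not assume equal variances) and invented knee-angle values, not the study's data:

```python
import math

def welch_t(sample_a, sample_b):
    """Welch's two-sample t statistic and approximate degrees of freedom
    (does not assume equal group variances)."""
    na, nb = len(sample_a), len(sample_b)
    ma = sum(sample_a) / na
    mb = sum(sample_b) / nb
    va = sum((x - ma) ** 2 for x in sample_a) / (na - 1)  # sample variances
    vb = sum((x - mb) ** 2 for x in sample_b) / (nb - 1)
    se2 = va / na + vb / nb
    t = (ma - mb) / math.sqrt(se2)
    # Welch-Satterthwaite degrees of freedom
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Illustrative knee-angle samples (degrees); NOT the study's data
male = [52.1, 49.8, 53.4, 51.0, 50.2]
female = [47.3, 45.9, 48.8, 46.5, 47.0]
t, df = welch_t(male, female)
```

The resulting t value would then be compared against the t distribution with df degrees of freedom at the chosen significance level (P < 0.05 in the study).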

Keywords: biomechanics, kinematics, instep kicking, soccer

Procedia PDF Downloads 504
61 Analysis and Design Modeling for Next Generation Network Intrusion Detection and Prevention System

Authors: Nareshkumar Harale, B. B. Meshram

Abstract:

The continued exponential growth of successful cyber intrusions against today’s businesses has made it abundantly clear that traditional perimeter security measures are no longer adequate or effective. Network trust architecture has evolved from trust-untrust to Zero Trust, in which essential security capabilities are deployed to provide policy enforcement and protection for all users, devices, applications, data resources, and the communication traffic between them, regardless of location. Information exchange over the Internet, despite the inclusion of advanced security controls, remains prone to innovative and inventive cyberattacks. The TCP/IP protocol stack, the adopted standard for network communication, suffers from inherent design vulnerabilities: its communication and session management protocols, routing protocols, and security protocols are the cause of major attacks. With the explosion of cybersecurity threats such as viruses, worms, rootkits, malware, and Denial of Service attacks, accomplishing efficient and effective intrusion detection and prevention has become both crucial and challenging. In this paper, we propose a design and analysis model for a next-generation network intrusion detection and protection system as part of a layered security strategy. The proposed system design provides intrusion detection for a wide range of attacks with a layered architecture and framework. The proposed network intrusion classification framework deals with cyberattacks on standard TCP/IP, routing, and security protocols; it thereby forms the basis for the detection of attack classes and applies signature-based matching for known cyberattacks and data-mining-based machine learning approaches for unknown cyberattacks. Our implemented software can effectively detect attacks even when malicious connections are hidden within normal events. 
The unsupervised learning algorithm applied to network audit data trails detects unknown intrusions. Association rule mining algorithms generate new rules from the collected audit trail data, resulting in increased intrusion prevention through integrated firewall systems. Intrusion response mechanisms can be initiated in real time, thereby minimizing the impact of network intrusions. Finally, we show how our approach can be validated and how the analysis results can be used for detecting and protecting against new network anomalies.
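The rule-generation step described above can be illustrated with a tiny Apriori-style miner over connection audit records. This is a minimal sketch, not the authors' implementation; the event names and thresholds are hypothetical:

```python
from itertools import combinations
from collections import Counter

def mine_rules(transactions, min_support=0.4, min_confidence=0.8):
    """Tiny Apriori-style miner: frequent item pairs -> rules lhs -> rhs."""
    n = len(transactions)
    item_counts = Counter()
    pair_counts = Counter()
    # one pass over the audit records, counting single events and event pairs
    for t in transactions:
        items = sorted(set(t))
        item_counts.update(items)
        pair_counts.update(combinations(items, 2))
    rules = []
    for (a, b), c in pair_counts.items():
        if c / n < min_support:          # prune infrequent pairs
            continue
        for lhs, rhs in ((a, b), (b, a)):
            conf = c / item_counts[lhs]  # confidence of lhs -> rhs
            if conf >= min_confidence:
                rules.append((lhs, rhs, c / n, conf))
    return rules

# Hypothetical audit-trail events, one set per observed connection
audit = [
    {"port_scan", "many_syn", "short_session"},
    {"port_scan", "many_syn"},
    {"normal_login", "long_session"},
    {"port_scan", "many_syn", "short_session"},
]
rules = mine_rules(audit)
```

Rules that reach the support and confidence thresholds could then be pushed to an integrated firewall as candidate blocking policies, in the spirit of the approach outlined in the abstract.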

Keywords: network intrusion detection, network intrusion prevention, association rule mining, system analysis and design

Procedia PDF Downloads 228
60 The Incoherence of the Philosophers as a Defense of Philosophy against Theology

Authors: Edward R. Moad

Abstract:

Al-Ghazali’s Tahāfut al-Falāsifa is widely construed as an attack on philosophy in favor of theological fideism. Consequently, he has been blamed for the ‘death of philosophy’ in the Muslim world. ‘Falsafa’, however, is not philosophy itself, but rather a range of philosophical doctrines mainly influenced by or inherited from Greek thought. In these terms, this work represents a defense of philosophy against what we could call ‘falsafical’ fideism. In the introduction, Ghazali describes his target audience as, not the falasifa, but a group of pretenders engaged in taqlid to a misconceived understanding of falsafa, including the belief that they were capable of demonstrative certainty in the field of metaphysics. He promises to use the falasifa’s standards of logic (with which he independently agrees) to show that the falasifa failed to demonstratively prove many of their positions. Whether or not he succeeds in that, the exercise of subjecting alleged proofs to critical scrutiny is quintessentially philosophical, while uncritical adherence to a doctrine, in the name of its being ‘philosophical’, is decidedly unphilosophical. If we are to blame the intellectual decline of the Muslim world on someone’s ‘bad’ way of thinking, rather than on more material historical circumstances (which is already a mistake), then the blame more appropriately rests with modernist Muslim thinkers who, under the influence of orientalism (and like Ghazali’s philosophical pretenders), mistook taqlid to the falasifa for philosophy itself. The discussion of the Tahāfut takes place in the context of an epistemic (and related social) hierarchy envisioned by the falasifa, corresponding to the faculties of sense, the ‘estimative imagination’ (wahm), and the pure intellect, along with the respective forms of discourse – rhetoric, dialectic, and demonstration – appropriate to each category of that order. 
Al-Farabi in his Book of Letters describes a relation between dialectic and demonstration on the one hand, and theology and philosophy on the other. The latter two are distinguished by method rather than subject matter: theology is that which proceeds dialectically, while philosophy is (or aims to be?) demonstrative. Yet, Al-Farabi tells us, dialectic precedes philosophy like ‘nourishment for the tree precedes its fruit.’ That is, dialectic is part of the process by which we interrogate common and imaginative notions in the pursuit of clearly understood first principles that we can then deploy in demonstrative argument. Philosophy is, therefore, something we aspire to through, and from a discursive condition of, dialectic. This stands in apparent contrast to the understanding of Ibn Sina, for whom one arrives at knowledge of first principles through contact with the Active Intellect. It also stands in contrast to that of Ibn Rushd, who seems to think our knowledge of first principles can only come through reading Aristotle. In conclusion, on Al-Farabi’s framework, Ghazali’s Tahāfut is truly an exercise in philosophy, and an effort to keep the door open for true philosophy in the Muslim mind against the threat of a kind of developing theology going by the name of falsafa.

Keywords: philosophy, incoherence, theology, Tahafut

Procedia PDF Downloads 162
59 Automatic Aggregation and Embedding of Microservices for Optimized Deployments

Authors: Pablo Chico De Guzman, Cesar Sanchez

Abstract:

Microservices are a software development methodology in which applications are built by composing a set of independently deployable, small, modular services. Each service runs as a unique process and is instantiated and deployed on one or more machines (we assume that different microservices are deployed onto different machines). Microservices are becoming the de facto standard for developing distributed cloud applications due to their reduced release cycles. In principle, the responsibility of a microservice can be as simple as implementing a single function, which can lead to the following issues: resource fragmentation due to the virtual machine boundary, and poor communication performance between microservices. Two composition techniques can be used to optimize resource fragmentation and communication performance: aggregation and embedding of microservices. Aggregation allows the deployment of a set of microservices on the same machine using a proxy server. Aggregation helps to reduce resource fragmentation, and is particularly useful when the aggregated services have similar scalability behavior. Embedding addresses communication performance by deploying on the same virtual machine those microservices that require a communication channel (localhost bandwidth is reported to be about 40 times faster than cloud vendor local networks, and it offers better reliability). Embedding can also reduce dependencies on load balancer services, since the communication takes place on a single virtual machine. For example, assume that microservice A has two instances, a1 and a2, and it communicates with microservice B, which also has two instances, b1 and b2. One embedding can deploy a1 and b1 on machine m1, and a2 and b2 on a different machine m2. This deployment configuration allows each pair (a1-b1), (a2-b2) to communicate using the localhost interface without the need of a load balancer between microservices A and B. 
Aggregation and embedding techniques are complex, since different microservices might have incompatible runtime dependencies which forbid them from being installed on the same machine. There is also a security concern, since the attack surface between microservices can be larger. Luckily, container technology allows several processes to run on the same machine in an isolated manner, solving the incompatibility of runtime dependencies and the aforementioned security concern, and thus greatly simplifying aggregation/embedding implementations: a microservice container is simply deployed on the same machine as the aggregated/embedded microservice container. Therefore, a wide variety of deployment configurations can be described by combining aggregation and embedding to create an efficient and robust microservice architecture. This paper presents a formal method that receives a declarative definition of a microservice architecture and proposes different optimized deployment configurations by aggregating/embedding microservices. The first prototype is based on i2kit, a deployment tool also submitted to ICWS 2018. The proposed prototype optimizes the following parameters: network/system performance, resource usage, resource costs, and failure tolerance.
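The a1-b1 / a2-b2 pairing in the example above can be sketched as a trivial placement function. This is only an illustration of the embedding idea, not the paper's formal method; the service and machine names are the ones used in the example:

```python
def embed(service_a, service_b, replicas):
    """Place instance i of two communicating microservices on machine i,
    so each pair talks over localhost instead of through a load balancer."""
    placement = {}
    for i in range(1, replicas + 1):
        machine = f"m{i}"
        placement.setdefault(machine, []).append(f"{service_a}{i}")
        placement.setdefault(machine, []).append(f"{service_b}{i}")
    return placement

# The abstract's example: microservices A and B, two instances each
print(embed("a", "b", 2))  # {'m1': ['a1', 'b1'], 'm2': ['a2', 'b2']}
```

A real optimizer would additionally have to respect runtime-dependency compatibility and the resource limits of each machine, which is where the paper's formal method comes in.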

Keywords: aggregation, deployment, embedding, resource allocation

Procedia PDF Downloads 204
58 Poisoning in Morocco: Evolution and Risk Factors

Authors: El Khaddam Safaa, Soulaymani Abdelmajid, Mokhtari Abdelghani, Ouammi Lahcen, Rachida Soulaymani-Beincheikh

Abstract:

Poisonings are a health problem worldwide and in Morocco, and the exact dimensions of the phenomenon remain poorly recorded, as shown by the lack of exhaustive statistical data. The objective of this retrospective study of a series of poisoning cases, declared in the Tadla-Azilal region and collected by the Moroccan Poison Control and Pharmacovigilance Center, was to establish an epidemiological profile of the poisonings, to determine the risk factors influencing the vital prognosis of the poisoned, and to follow the evolution of the incidence, lethality, and mortality. During the study period, we collected and analyzed 9303 cases of poisoning by different incriminated toxic products, with the exception of scorpion stings. These poisonings led to 99 deaths. The epidemiological profile showed that the poisoned were of all ages, with an average of 24.62 ± 16.61 years. The sex ratio (women/men) was 1.36, in favor of women, and the difference between the sexes was highly significant (χ2 = 210.5; p < 0.001). Most of the poisoned were declared to be of urban origin (60.5%) (χ2 = 210.5; p < 0.001). Carbon monoxide was the product most incriminated in the poisoning cases (24.15%), followed by pesticides and agricultural products (21.44%) and food (19.95%). The analysis of risk factors showed that adult patients aged between 20 and 74 years had an increased risk of death (RR = 1.57; 95% CI = 1.03-2.38) compared with the other age brackets, that males were at greater risk of death than females (RR = 1.59; 95% CI = 1.07-2.38), and that patients of rural origin presented about 5 times more risk (RR = 4.713; 95% CI = 2.543-8.742). Poisoning by mineral products presented the maximum risk to the vital prognosis (RR = 23.19; 95% CI = 2.39-224.1), and poisoning by pesticides carried a risk of about 9 (RR = 9.31; 95% CI = 6.10-14.18). 
The incidence was 3.3 cases per 10,000 inhabitants, and the mortality was 0.004 cases per 1,000 inhabitants (that is, 4 cases per 1,000,000 inhabitants). The annual lethality rate was 10.6%. The evolution of these health indicators over the years showed that the reporting rate, measured by the incidence, increased significantly. We also noted an improvement in case management, which resulted in a decrease in the lethality and mortality rates in recent years. The fight against poisoning is a long-term effort requiring work at various levels, and it is necessary to make up the delay accumulated by our country on the various legal, institutional, and technical aspects. The ideal solution is to develop and implement a national strategy.
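The relative-risk figures quoted above follow the standard 2x2-table formula, RR = [a/(a+b)] / [c/(c+d)], with a Wald confidence interval on the log scale. A minimal sketch (the counts below are invented for illustration, not the study's raw data):

```python
import math

def relative_risk(a, b, c, d, z=1.96):
    """Relative risk of death for an exposed vs an unexposed group,
    with a 95% Wald confidence interval.
    a: exposed deaths, b: exposed survivors,
    c: unexposed deaths, d: unexposed survivors."""
    risk_exposed = a / (a + b)
    risk_unexposed = c / (c + d)
    rr = risk_exposed / risk_unexposed
    # standard error of log(RR)
    se = math.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, (lo, hi)

# Illustrative counts only: 40/2000 deaths exposed, 20/2000 unexposed
rr, ci = relative_risk(40, 1960, 20, 1980)
```

An RR whose 95% interval excludes 1 (as for the rural-origin and pesticide groups above) indicates a statistically significant excess risk.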

Keywords: epidemiology, poisoning, risk factors, indicators of health, Tadla-Azilal

Procedia PDF Downloads 365
57 Variation of Carbon Isotope Ratio (δ13C) and Leaf-Productivity Traits in Aquilaria Species (Thymelaeaceae)

Authors: Arlene López-Sampson, Tony Page, Betsy Jackes

Abstract:

The genus Aquilaria produces a highly valuable fragrant oleoresin known as agarwood. Agarwood forms in a few trees in the wild as a response to injury or pathogen attack. The resin is used in the perfume and incense industries and in medicine. Cultivation of Aquilaria species as a sustainable source of the resin is now a common strategy. Physiological traits are frequently used as a proxy for crop and tree productivity. Aquilaria species growing in Queensland, Australia were studied to investigate the relationship between leaf productivity traits and tree growth. Specifically, 28 trees, representing 12 plus trees and 16 trees from yield plots, were selected for carbon isotope analysis (δ13C) and the monitoring of six leaf attributes. Trees were grouped into four diametric classes (diameter at 150 mm above ground level), ensuring that the variability in growth of the whole population was sampled. A model averaging technique based on the Akaike information criterion (AIC) was applied to identify whether leaf traits could assist in diameter prediction. Carbon isotope values were correlated with height classes and leaf traits to determine any relationship. On average, four leaves per shoot were recorded, and approximately one new leaf per week is produced by a shoot. The rate of leaf expansion was estimated at 1.45 mm day⁻¹. There were no statistical differences between diametric classes in leaf expansion rate or the number of new leaves per week (p > 0.05). The range of δ13C values in leaves of Aquilaria species was from -25.5‰ to -31‰, with an average of -28.4‰ (± 1.5‰). Only 39% of the variability in height could be explained by leaf δ13C. Leaf δ13C and nitrogen content values were positively correlated; this relationship implies that leaves with higher photosynthetic capacities also had lower intercellular carbon dioxide concentrations (ci/ca) and less depleted 13C values. Most of the predictor variables had a weak correlation with diameter (D). 
However, analysis of the 95% confidence set of best-ranked regression models indicated that the predictors most likely to explain growth in Aquilaria species are petiole length (PeLen), δ13C (true13C), δ15N (true15N), leaf area (LA), specific leaf area (SLA), and the number of new leaves produced per week (NL.week). The model constructed with PeLen, true13C, true15N, LA, SLA, and NL.week explained 45% (R² = 0.4573) of the variability in D. The leaf traits studied give a better understanding of the leaf attributes that could assist in the selection of high-productivity Aquilaria trees.
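Model averaging with AIC, as used above, weighs candidate models by their Akaike weights, exp(-Δᵢ/2) normalized over the candidate set. A minimal sketch (the AIC values below are invented, not the study's):

```python
import math

def akaike_weights(aic_values):
    """Akaike weights: the relative support for each candidate model,
    computed from AIC differences to the best (lowest-AIC) model."""
    best = min(aic_values)
    deltas = [a - best for a in aic_values]      # delta_i = AIC_i - AIC_min
    rel = [math.exp(-d / 2) for d in deltas]     # relative likelihoods
    total = sum(rel)
    return [r / total for r in rel]              # normalize to sum to 1

# Hypothetical AICs for three candidate diameter-prediction models
weights = akaike_weights([120.3, 121.1, 127.8])
```

A model-averaged prediction is then a weight-sum of the candidate models' predictions, and a 95% confidence set is formed by accumulating models in decreasing weight order until the cumulative weight reaches 0.95.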

Keywords: 13C, petiole length, specific leaf area, tree growth

Procedia PDF Downloads 512
56 Performance Estimation of Small Scale Wind Turbine Rotor for Very Low Wind Regime Condition

Authors: Vilas Warudkar, Dinkar Janghel, Siraj Ahmed

Abstract:

The rapid development experienced by India requires a huge amount of energy, yet actual supply capacity additions have been consistently lower than the targets set by the government; according to the World Bank, 40% of residences are without electricity. In the 12th five-year plan, 30 GW of grid-interactive renewable capacity is planned, of which 17 GW is wind, 10 GW solar, and 2.1 GW small hydro projects, with the rest made up by biogas. Renewable energy (RE) and energy efficiency (EE) not only meet environmental and energy security objectives, but can also play a crucial role in reducing chronic power shortages. In remote areas or areas with a weak grid, wind energy can be used for charging batteries or can be combined with a diesel engine to save fuel whenever wind is available. Since India, according to IEC 61400-1, belongs to class IV wind conditions, it is not possible to set up large-scale wind turbines everywhere; the best choice is a small-scale wind turbine at lower height with good annual energy production (AEP). Based on the wind characteristics available at MANIT Bhopal, a rotor for a small-scale wind turbine was designed. Various airfoil data were reviewed for the selection of the airfoil for the blade profile, and an airfoil suited to low wind conditions, i.e., low Reynolds number, was selected based on the lift coefficient, drag coefficient, and angle of attack. For the design of the rotor blade, standard Blade Element Momentum (BEM) theory was implemented. The performance of the blade is estimated using BEM theory, in which the axial and angular induction factors are optimized using an iterative technique. Rotor performance was estimated for the designed blade specifically for low wind conditions, and the power production of the rotor was determined at different wind speeds for a given blade pitch angle. At a pitch of 15° and a velocity of 5 m/s, the rotor gives a good cut-in speed of 2 m/s and produces around 350 W. 
A blade tip speed ratio of 6.5 was considered, for which the coefficient of performance of the rotor was calculated as 0.35, an acceptable value for a small-scale wind turbine. The Simple Load Model (SLM, IEC 61400-2) is also discussed to improve the structural strength of the rotor. In the SLM, the edgewise and flapwise moments, which cause bending stress at the root of the blade, are considered. The various load cases mentioned in IEC 61400-2 were calculated and checked against the partial safety factor of the wind turbine blade.
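The iterative optimization of the axial (a) and angular (a') induction factors in BEM theory can be sketched as a relaxed fixed-point loop at a single blade element. This is a simplified illustration (constant lift/drag coefficients, no tip-loss or high-induction corrections); the element parameters below are invented, not the paper's design values:

```python
import math

def bem_induction(tsr_local, solidity, cl, cd, relax=0.5, tol=1e-8, iters=500):
    """Iterate the classical BEM relations for the axial (a) and angular
    (ap) induction factors at one blade element."""
    a, ap = 0.0, 0.0
    for _ in range(iters):
        # inflow angle from the velocity triangle
        phi = math.atan2(1 - a, (1 + ap) * tsr_local)
        sp, cp = math.sin(phi), math.cos(phi)
        cn = cl * cp + cd * sp          # normal (thrust-wise) coefficient
        ct = cl * sp - cd * cp          # tangential (torque-wise) coefficient
        a_new = 1.0 / (4 * sp * sp / (solidity * cn) + 1)
        ap_new = 1.0 / (4 * sp * cp / (solidity * ct) - 1)
        a_next = a + relax * (a_new - a)        # under-relax for stability
        ap_next = ap + relax * (ap_new - ap)
        if abs(a_next - a) < tol and abs(ap_next - ap) < tol:
            return a_next, ap_next
        a, ap = a_next, ap_next
    return a, ap

# Illustrative element: local tip speed ratio 4, solidity 0.05, Cl=1.0, Cd=0.01
a, ap = bem_induction(4.0, 0.05, 1.0, 0.01)
```

Repeating this per element along the blade span and integrating the tangential loads yields the rotor torque, power, and hence the coefficient of performance quoted above.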

Keywords: annual energy production, Blade Element Momentum Theory, low wind conditions, selection of airfoil

Procedia PDF Downloads 338
55 The Relationship between Incidental Emotions, Risk Perceptions and Type of Army Service

Authors: Sharon Garyn-Tal, Shoshana Shahrabani

Abstract:

Military service in general, and in combat units in particular, can be physically and psychologically stressful. The type of service may therefore have significant implications for soldiers during and after their military service, including for emotions, judgments, and risk perceptions. Previous studies have focused on risk propensity and risky behavior among soldiers; however, there is still a lack of knowledge about the impact of the type of army service on risk perceptions. The current study examines the effect of the type of army service (combat versus non-combat) and of negative incidental emotions on risk perceptions. In 2014, a survey was conducted among 153 combat and non-combat Israeli soldiers. The survey was distributed in train stations and central bus stations in various places in Israel among soldiers waiting for the train or bus. Participants answered questions on the levels of incidental negative emotions they felt and on their risk perceptions (the chances of being hurt by a terror attack, by violent crime, or by a car accident), as well as personal details including type of army service. The data in this study are unique because military service in Israel is compulsory, so the Israeli population serving in the army is broad and diversified. The results indicate that currently serving combat participants were more pessimistic in their risk perceptions (for all types of risk) than currently serving non-combat participants. Since combat participants have probably experienced severe and distressing situations during their service, they became more pessimistic regarding their probability of being hurt in different situations in life. This result supports availability heuristic theory and the findings of previous studies indicating that those who directly experience distressing events tend to overestimate danger. The findings also indicate that soldiers who feel higher levels of incidental fear and anger have more pessimistic risk perceptions. 
In addition, respondents who experienced combat army service held more pessimistic risk perceptions when they felt higher levels of fear. These results can be explained by the compulsory army service in Israel, which constitutes a focused threat to soldiers' safety during their period of service; in this stressful environment, negative incidental emotions even during routine times correlate with higher risk perceptions. In conclusion, the current study results suggest that combat army service shapes risk perceptions and the way young people control their negative incidental emotions in everyday life. Recognizing the factors affecting risk perceptions among soldiers is important for better understanding the impact of army service on young people.

Keywords: army service, combat soldiers, incidental emotions, risk perceptions

Procedia PDF Downloads 235
54 Cessna Citation X Business Aircraft Stability Analysis Using Linear Fractional Representation LFRs Model

Authors: Yamina Boughari, Ruxandra Mihaela Botez, Florian Theel, Georges Ghazi

Abstract:

Clearance of the flight control laws of a civil aircraft is a long and expensive process in the aerospace industry. Thousands of flight combinations in terms of speeds, altitudes, gross weights, centers of gravity, and angles of attack have to be investigated and proved safe. Nonetheless, with this method a worst-case flight condition can easily be missed, and missing it could lead to a critical situation. Indeed, it is impossible to analyze a model over the infinite number of cases contained within its flight envelope, which would require more time and therefore more design cost. In industry, therefore, the technique of meshing the flight envelope is commonly used: for each point of the flight envelope, simulation of the associated model establishes whether or not the specifications are satisfied. In order to perform fast, comprehensive and effective analysis, varying-parameter models were developed by incorporating variations, or uncertainties, into the nominal models; these are known as Linear Fractional Representation (LFR) models, and they describe the aircraft dynamics while taking uncertainties over the flight envelope into account. In this paper, the LFR models are built using several flight conditions expressed in terms of speeds and altitudes, which serve as the varying parameters. Such methods have attracted great interest from aeronautical companies, which see a promising future in this type of modeling, particularly for the design and certification of control laws. This research paper focuses on the open-loop stability analysis of the Cessna Citation X. The data are provided by a Level D Research Aircraft Flight Simulator, corresponding to the highest level of flight dynamics certification; this simulator was developed by CAE Inc., and its development was based on the requirements of research at the LARCASE laboratory. 
These data were used to develop a linear model of the airplane in its longitudinal and lateral motions, and further to create the LFR models for 12 XCG/weight conditions, and thus the whole flight envelope, using a friendly Graphical User Interface developed during this study. The LFR models are then analyzed using an interval analysis method based on Lyapunov functions, as well as the ‘stability and robustness analysis’ toolbox. The results are presented as graphs, offering good readability and easy exploitation. The weakness of this method lies in a relatively long calculation time, about four hours for the entire flight envelope.
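The envelope-gridding idea underlying the analysis can be illustrated in a much-simplified form: scan a grid of speed/altitude points and apply a stability test to a linearized model at each point. The sketch below uses a toy 2x2 short-period-style model with a Routh-Hurwitz test, not the paper's LFR/Lyapunov machinery, and every matrix entry and coefficient is invented for illustration:

```python
def short_period_stable(a11, a12, a21, a22):
    """Routh-Hurwitz test for a 2x2 linear model xdot = A x:
    stable iff trace(A) < 0 and det(A) > 0."""
    tr = a11 + a22
    det = a11 * a22 - a12 * a21
    return tr < 0 and det > 0

def a_matrix(v, h):
    """Hypothetical dependence of the state-matrix entries on
    speed v (m/s) and altitude h (m); coefficients are invented."""
    rho = 1.225 * (1 - 2.256e-5 * h) ** 4.256   # ISA-like density model
    q = 0.5 * rho * v * v                       # dynamic pressure
    return (-0.002 * q / v, 1.0,
            -0.0004 * q, -0.003 * q / v)

# Mesh of the (speed, altitude) envelope, as done point-by-point in industry
envelope = [(v, h) for v in (100, 150, 200, 250) for h in (0, 5000, 10000)]
results = {(v, h): short_period_stable(*a_matrix(v, h)) for v, h in envelope}
```

The LFR approach replaces exactly this kind of point-by-point scan with a single parameter-dependent model covering the whole envelope, which is what makes the guaranteed (interval/Lyapunov) analysis possible.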

Keywords: flight control clearance, LFR, stability analysis, robustness analysis

Procedia PDF Downloads 352
53 Effect of Accelerated Aging on Antibacterial and Mechanical Properties of SEBS Compounds

Authors: Douglas N. Simoes, Michele Pittol, Vanda F. Ribeiro, Daiane Tomacheski, Ruth M. C. Santana

Abstract:

Thermoplastic elastomer (TPE) compounds are used in a wide range of applications, such as home appliances, automotive components, medical devices, and footwear. These materials are susceptible to microbial attack, which causes cracking of the polymer chains. Compounds based on SEBS copolymers, poly(styrene-b-(ethylene-co-butylene)-b-styrene), are a class of TPE largely used in domestic appliances such as refrigerator seals (gaskets), bath mats, and sink squeegees. Moisture present in some areas (such as the shower area and sink), in addition to organic matter, provides favorable conditions for microbial survival and proliferation, contributing to the spread of diseases as well as to the reduction of product life cycle through biodegradation. Zinc oxide (ZnO) has been studied as an alternative antibacterial additive due to its biocidal effect. It is important to know the influence of such additives on the properties of the compounds, both at the beginning of and during the life cycle. In that sense, the aim of this study was to evaluate the effect of accelerated aging in an oven on the antibacterial and mechanical properties of ZnO-loaded SEBS-based TPE compounds. Two different commercial zinc oxides, designated WR and Pe, were used at a proportion of 1%. A compound with no antimicrobial additive (standard) was also tested. The compounds were prepared using a co-rotating twin-screw extruder (L/D ratio of 40:1 and 16 mm screw diameter). The extrusion parameters were kept constant for all materials: the screw rotation rate was set at 226 rpm, with a temperature profile from 150 to 190 °C. Test specimens were prepared using an injection molding machine at 190 °C. The Standard Test Method for Rubber Property - Effect of Liquids was applied in order to simulate the exposure of TPE samples to detergent ingredients during service: ZnO-loaded TPE samples were immersed in a 3.0% w/v (neutral) detergent solution and aged in an oven at 70 °C for 7 days. 
The compounds were characterized by changes in mechanical properties (hardness and tensile properties) and in mass. The Japanese Industrial Standard (JIS) Z 2801:2010 was applied to evaluate antibacterial properties against Staphylococcus aureus (S. aureus) and Escherichia coli (E. coli). The microbiological tests showed a reduction of up to 42% in the E. coli population and up to 49% in the S. aureus population in non-aged samples. Variations in elongation and hardness values were observed with the addition of zinc oxide, while the changes in tensile strength at rupture and in mass were not significant between non-aged and aged samples.

Keywords: antimicrobial, domestic appliance, SEBS, zinc oxide

Procedia PDF Downloads 247
52 Structural Invertibility and Optimal Sensor Node Placement for Error and Input Reconstruction in Dynamic Systems

Authors: Maik Kschischo, Dominik Kahl, Philipp Wendland, Andreas Weber

Abstract:

Understanding and modelling real-world complex dynamic systems in biology, engineering, and other fields is often made difficult by incomplete knowledge about the interactions between system states and by unknown disturbances to the system. In fact, most real-world dynamic networks are open systems receiving unknown inputs from their environment. To understand a system and to estimate the state dynamics, these inputs need to be reconstructed from output measurements. Reconstructing the input of a dynamic system from its measured outputs is an ill-posed problem if only a limited number of states is directly measurable. A first requirement for solving this problem is the invertibility of the input-output map. In our work, we exploit the fact that invertibility of a dynamic system is a structural property, which depends only on the network topology. It is therefore possible to check for invertibility using a structural invertibility algorithm, which counts the number of node-disjoint paths linking inputs and outputs. The algorithm is efficient even for large networks of up to a million nodes. To understand the structural features influencing the invertibility of a complex dynamic network, we analyze synthetic and real networks using the structural invertibility algorithm. We find that invertibility largely depends on the degree distribution and that dense random networks are easier to invert than sparse inhomogeneous networks. We show that real networks are often very difficult to invert unless the sensor nodes are carefully chosen. To overcome this problem, we present a sensor node placement algorithm that achieves invertibility with a minimum set of measured states. This greedy algorithm is very fast and is guaranteed to find an optimal sensor node set if one exists. Our results provide a practical approach to experimental design for open dynamic systems. 
Since invertibility is a necessary condition for unknown input observers and data assimilation filters to work, it can be used as a preprocessing step to check whether these input reconstruction algorithms can succeed. If not, we can suggest additional measurements providing sufficient information for input reconstruction. Invertibility is also important for systems design and model building. Dynamic models are always incomplete, and synthetic systems act in an environment where they receive inputs or even attack signals from their exterior. Being able to monitor these inputs is an important design requirement, which can be achieved by our algorithms for invertibility analysis and sensor node placement.
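The path-counting criterion described above can be sketched in a few lines. The following is a minimal illustration (not the authors' implementation) using `networkx`: by Menger's theorem, the maximum number of node-disjoint paths from an input set to an output set equals a maximum flow on a node-split graph, and the network is structurally invertible when this count equals the number of inputs.

```python
import networkx as nx

def count_node_disjoint_paths(G, inputs, outputs):
    """Maximum number of node-disjoint directed paths from `inputs` to `outputs`.

    Standard Menger construction: split every node v into v_in -> v_out with
    unit capacity, so the max flow equals the number of vertex-disjoint paths.
    """
    H = nx.DiGraph()
    for v in G.nodes:
        H.add_edge((v, "in"), (v, "out"), capacity=1)
    for u, v in G.edges:
        H.add_edge((u, "out"), (v, "in"), capacity=1)
    for s in inputs:
        H.add_edge("source", (s, "in"), capacity=1)
    for t in outputs:
        H.add_edge((t, "out"), "sink", capacity=1)
    return nx.maximum_flow_value(H, "source", "sink")

# Toy network: two unknown inputs (1, 2), two sensor nodes (5, 6),
# two disjoint paths -> structurally invertible.
G = nx.DiGraph([(1, 3), (3, 5), (2, 4), (4, 6)])
assert count_node_disjoint_paths(G, inputs=[1, 2], outputs=[5, 6]) == 2
```

Because the check depends only on the network topology, it runs on the graph alone, before any dynamics are specified.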

Keywords: data-driven dynamic systems, inversion of dynamic systems, observability, experimental design, sensor node placement

Procedia PDF Downloads 152
51 Impact of Alkaline Activator Composition and Precursor Types on Properties and Durability of Alkali-Activated Cement Mortars

Authors: Sebastiano Candamano, Antonio Iorfida, Patrizia Frontera, Anastasia Macario, Fortunato Crea

Abstract:

Alkali-activated materials are promising binders obtained by alkaline attack on fly ash, metakaolin, or blast furnace slag, among other precursors. In order to guarantee the highest ecological and cost efficiency, a proper selection of precursors and alkaline activators has to be carried out. These choices deeply affect the microstructure, chemistry and performance of this class of materials. Even if, in recent years, much research has focused on mix designs and curing conditions, the lack of exhaustive activation models and standardized mix designs and curing conditions, together with insufficient investigation of shrinkage behavior, efflorescence, additives and durability, prevents these materials from being perceived as an effective and reliable alternative to Portland cement. The aim of this study is to develop alkali-activated cement mortars containing high amounts of industrial by-products and waste, such as ground granulated blast furnace slag (GGBFS) and ashes obtained from the combustion of forest biomass in thermal power plants. In particular, the experimental campaign was performed in two steps. In the first step, research focused on elucidating how the workability, mechanical properties and shrinkage behavior of the produced mortars are affected by the type and fraction of each precursor as well as by the composition of the activator solutions. In order to investigate the microstructures and reaction products, SEM and diffractometric analyses were carried out. In the second step, durability in harsh environments was evaluated. Mortars obtained using only GGBFS as binder showed mechanical property development and shrinkage behavior strictly dependent on the SiO2/Na2O molar ratio of the activator solutions. Compressive strengths were in the range of 40-60 MPa after 28 days of curing at ambient temperature. 
Mortars obtained by partial replacement of GGBFS with metakaolin and forest biomass ash showed lower compressive strengths (≈35 MPa) and lower shrinkage values when higher amounts of ash were used. By varying the activator solutions and binder composition, compressive strengths of up to 70 MPa associated with shrinkage values of about 4200 microstrain were measured. Durability tests were conducted to assess the acid and thermal resistance of the different mortars. All mortars showed good resistance in a 5 wt% H2SO4 solution even after 60 days of immersion, while they showed a decrease in mechanical properties in the range of 60-90% when exposed to thermal cycles up to 700°C.

Keywords: alkali activated cement, biomass ash, durability, shrinkage, slag

Procedia PDF Downloads 326
50 BIM Modeling of Site and Existing Buildings: Case Study of ESTP Paris Campus

Authors: Rita Sassine, Yassine Hassani, Mohamad Al Omari, Stéphanie Guibert

Abstract:

Building Information Modelling (BIM) is the process of creating, managing, and centralizing information during the building lifecycle. BIM can be used all over a construction project, from the initiation phase to the planning and execution phases to the maintenance and lifecycle management phase. For existing buildings, BIM can be used for specific applications such as lifecycle management. However, most of the existing buildings don’t have a BIM model. Creating a compatible BIM for existing buildings is very challenging. It requires special equipment for data capturing and efforts to convert these data into a BIM model. The main difficulties for such projects are to define the data needed, the level of development (LOD), and the methodology to be adopted. In addition to managing information for an existing building, studying the impact of the built environment is a challenging topic. So, integrating the existing terrain that surrounds buildings into the digital model is essential to be able to make several simulations as flood simulation, energy simulation, etc. Making a replication of the physical model and updating its information in real-time to make its Digital Twin (DT) is very important. The Digital Terrain Model (DTM) represents the ground surface of the terrain by a set of discrete points with unique height values over 2D points based on reference surface (e.g., mean sea level, geoid, and ellipsoid). In addition, information related to the type of pavement materials, types of vegetation and heights and damaged surfaces can be integrated. Our aim in this study is to define the methodology to be used in order to provide a 3D BIM model for the site and the existing building based on the case study of “Ecole Spéciale des Travaux Publiques (ESTP Paris)” school of engineering campus. The property is located on a hilly site of 5 hectares and is composed of more than 20 buildings with a total area of 32 000 square meters and a height between 50 and 68 meters. 
In this work, the campus precise levelling grid is computed according to the NGF-IGN69 altimetric system, and the grid control points are computed according to the RGF93 (Réseau Géodésique Français) – Lambert 93 French system, using different methods: (i) land topographic surveying with a robotic total station, (ii) a GNSS (Global Navigation Satellite System) levelling grid in NRTK (Network Real Time Kinematic) mode, (iii) point clouds generated by laser scanning. These technologies allow the computation of multiple building parameters such as boundary limits, the number of floors, the georeferencing of the floors, and the georeferencing of the four base corners of each building. Once the entry data are identified, the digital model of each building is created. The DTM is also modeled. The process of altimetric determination is complex and requires effort to collect and analyze multiple data formats. Since many technologies can be used to produce digital models, different file formats such as DraWinG (DWG), LASer (LAS), Comma-Separated Values (CSV), Industry Foundation Classes (IFC) and ReViT (RVT) are generated. Checking the interoperability between BIM models is very important. In this work, all models are linked together and shared on the 3DEXPERIENCE collaborative platform.

Keywords: building information modeling, digital terrain model, existing buildings, interoperability

Procedia PDF Downloads 114
49 Forensic Investigation: The Impact of Biometric-Based Solution in Combatting Mobile Fraud

Authors: Mokopane Charles Marakalala

Abstract:

Research shows that mobile fraud grew exponentially in South Africa during the lockdown caused by the COVID-19 pandemic. According to the South African Banking Risk Information Centre (SABRIC), fraudulent online banking and transactions rose sharply since the beginning of the lockdown, resulting in huge losses to the banking industry in South Africa. While the Financial Intelligence Centre Act, 38 of 2001, regulates financial transactions, it is evident that criminals are using technology to their advantage. Money-laundering ranks among the major crimes, not only in South Africa but worldwide. This paper focuses on the impact of biometric-based solutions in combatting mobile fraud, drawing on data from SABRIC. Mobile fraud poses a serious challenge: cybercriminals can hijack a mobile device and use it to gain access to sensitive personal data and accounts. Cybercriminals constantly scour the depths of cyberspace in search of victims to attack. Millions of people worldwide use online banking to do their regular bank-related transactions quickly and conveniently, yet, as SABRIC has regularly highlighted in its reports on incidents of mobile fraud, corruption, and maladministration, many fail to secure their banking online and are vulnerable to falling prey to scams such as mobile fraud. Criminals have made use of digital platforms since the development of technology. In 2017, 13 438 incidents involving banking apps, internet banking, and mobile banking caused the sector gross losses of more than R250,000,000, with the affected parties forced to point fingers at one another while the fraudster makes off with the money. Participants were selected using non-probability (purposive) sampling, and data were collected through telephone calls and virtual interviews. 
The results indicate that there is a relationship between remote online banking and the increase in money-laundering, as the system allows transactions to take place with limited verification processes. This paper highlights the significance of developing prevention mechanisms, capacity development, and strategies for both financial institutions and law enforcement agencies in South Africa to reduce crimes such as money-laundering. The researcher recommends that awareness among bank staff be increased through the provision of requisite and adequate training.

Keywords: biometric-based solution, investigation, cybercrime, forensic investigation, fraud, combatting

Procedia PDF Downloads 104
48 Effect of Tooth Bleaching Agents on Enamel Demineralisation

Authors: Najlaa Yousef Qusti, Steven J. Brookes, Paul A. Brunton

Abstract:

Background: Tooth discoloration can be an aesthetic problem, and tooth whitening using carbamide peroxide bleaching agents is a popular treatment option. However, there are concerns about possible adverse effects such as demineralisation of the bleached enamel, and the cause of this demineralisation is unclear. Introduction: Teeth can become stained or discoloured over time. Tooth whitening is an aesthetic solution for tooth discoloration. Bleaching solutions of 10% carbamide peroxide (CP) have become the standard agent used in dentist-prescribed and home-applied ’vital bleaching’ techniques. These materials release hydrogen peroxide (H₂O₂), the active whitening agent. However, there is controversy in the literature regarding the effect of bleaching agents on enamel integrity and enamel mineral content. The purpose of this study was to establish whether carbamide peroxide bleaching agents affect the acid solubility of enamel (i.e., make teeth more prone to demineralisation). Materials and Methods: Twelve human premolar teeth were sectioned longitudinally along the midline and varnished to leave the natural enamel surface exposed. The baseline behaviour of each tooth half in relation to its demineralisation in acid was established by sequential exposure to 4 vials containing 1 ml of 10 mM acetic acid (1 minute/vial). This was followed by exposure to 10% CP for 8 hours. After washing in distilled water, the tooth half was sequentially exposed to 4 further vials containing acid to test whether the acid susceptibility of the enamel had been affected. The corresponding tooth half acted as a control and was exposed to distilled water instead of CP. The mineral loss was determined by measuring [Ca²⁺] and [PO₄³⁻] released in each vial using a calcium ion-selective electrode and the phosphomolybdenum blue method, respectively. The effect of bleaching on the tooth surfaces was also examined using SEM. 
Results: Exposure to carbamide peroxide did not significantly alter the susceptibility of enamel to acid attack, and SEM of the enamel surface revealed a slight alteration in surface appearance. SEM images of the control enamel surface showed a flat enamel surface with some shallow pits, whereas the bleached enamel appeared with an increase in surface porosity and some areas of mild erosion. Conclusions: Exposure to H₂O₂ equivalent to 10% CP does not significantly increase subsequent acid susceptibility of enamel as determined by Ca²⁺ release from the enamel surface. The effects of bleaching on mineral loss were indistinguishable from distilled water in the experimental system used. However, some surface differences were observed by SEM. The phosphomolybdenum blue method for phosphate is compromised by peroxide bleaching agents due to their oxidising properties. However, the Ca²⁺ electrode is unaffected by oxidising agents and can be used to determine the mineral loss in the presence of peroxides.

Keywords: bleaching, carbamide peroxide, demineralisation, teeth whitening

Procedia PDF Downloads 127
47 A Study of Kinematical Parameters in Instep Kicking in Soccer

Authors: Abdolrasoul Daneshjoo

Abstract:

Introduction: Soccer is a game which draws attention in many countries, especially in Brazil. Among the different skills in soccer, kicking plays an essential role in the success of a team. Points are gained in this game by sending the ball over the goal line, achieved by shooting during attacking play or during penalty kicks. Accordingly, identifying the factors that affect instep kicking at different distances, whether shooting with maximum force and high accuracy, passing, or taking a penalty kick, may assist coaches and players in raising the quality of skill execution. Purpose: The aim of the present study was to examine a few kinematical parameters of instep kicking from distances of 3 and 5 meters among male and female elite soccer players. Methods: 24 subjects with a dominant right lower limb (12 males and 12 females) among Tehran elite soccer players, with mean and standard deviation age of (22.5 ± 1.5) and (22.08 ± 1.31) years, height of (179.5 ± 5.81) and (164.3 ± 4.09) cm, weight of (69.66 ± 4.09) and (53.16 ± 3.51) kg, BMI of (21.06 ± 0.731) and (19.67 ± 0.709), and playing history of (4 ± 0.73) and (3.08 ± 0.66) years, respectively, participated in this study. They had at least two years of continuous playing experience in the Tehran soccer league. Kicks were captured with a Kinemetrix motion analysis system using three cameras at 500 Hz. Five reflective markers were placed laterally on the kicking leg over anatomical points (the iliac crest, greater trochanter, lateral epicondyle of the femur, lateral malleolus, and lateral aspect of the distal head of the fifth metatarsus). The instep kick was filmed with a one-step approach at a 30 to 45 degree angle from a stationary ball. Three kicks were filmed, and one kick was selected for further analysis. Using Kinemetrix 3D motion analysis software, the positions of the markers were analyzed. 
Descriptive statistics were used to report means and standard deviations, while analysis of variance and the independent t-test (P < 0.05) were used to compare the kinematic parameters between the two genders. Results and Discussion: Among the evaluated parameters, the knee acceleration, the thigh angular velocity, and the knee angle showed a significant relationship with the outcome of the kick. When comparing performance at 5 m between the two genders, significant differences were observed in the internal-external displacement of the toe, ankle, and hip, the velocity of the toe and ankle, the acceleration of the toe, and the angular velocity of the pelvis and thigh before ball contact. Significant differences were also found in the internal-external displacement of the toe, ankle, knee, hip, and iliac crest, the velocity of the toe and ankle, the acceleration of the ankle, and the angular velocity of the pelvis and knee.

Keywords: biomechanics, kinematics, soccer, instep kick, male, female

Procedia PDF Downloads 415
46 Definition of Aerodynamic Coefficients for Microgravity Unmanned Aerial System

Authors: Gamaliel Salazar, Adriana Chazaro, Oscar Madrigal

Abstract:

The evolution of Unmanned Aerial Systems (UAS) has made it possible to develop new vehicles capable of performing microgravity experiments which, due to their cost and complexity, were beyond the reach of many institutions. In this study, the aerodynamic behavior of a UAS is studied through its deceleration stage after an initial free-fall phase (where the microgravity effect is generated) using Computational Fluid Dynamics (CFD). Because the payload is analyzed under a microgravity environment, and given the nature of the payload itself, the speed of the UAS must be reduced smoothly. Moreover, the terminal speed of the vehicle should be low enough to preserve the integrity of the payload and vehicle during the landing stage. The UAS model consists of a study pod, control surfaces with fixed and mobile sections, landing gear and two semicircular wing sections. The speed of the vehicle is decreased by increasing the angle of attack (AoA) of each wing section from 2° (where the S1091 airfoil has its greatest aerodynamic efficiency) to 80°, creating a circular wing geometry. Drag coefficients (Cd) and drag forces (Fd) are obtained employing CFD analysis. A simplified 3D model of the vehicle is analyzed using Ansys Workbench 16. The distance between the object of study and the walls of the control volume is eight times the length of the vehicle. The domain is discretized using an unstructured mesh based on tetrahedral elements. The mesh is refined by defining an element size of 0.004 m on the wing and control surfaces in order to resolve the fluid behavior in the most important zones and to obtain accurate approximations of the Cd. The k-epsilon turbulence model is selected to solve the governing equations of the fluid, while monitors are placed on both the wing and the whole vehicle body to visualize the variation of the coefficients along the simulation process. 
Employing a response surface methodology for statistical approximation, the case study is parametrized with the AoA of the wing as the input parameter and Cd and Fd as output parameters. Based on a Central Composite Design (CCD), the Design Points (DP) are generated so that the Cd and Fd for each DP can be estimated. Applying a 2nd-degree polynomial approximation, the drag coefficients for every AoA were determined. Using these values, the terminal speed at each position is calculated considering a specific Cd. Additionally, the distance required to reach the terminal velocity at each AoA is calculated, so the minimum distance for the entire deceleration stage can be determined without compromising the payload. The maximum Cd of the vehicle is 1.18, so its maximum drag will be almost comparable to the drag generated by a parachute. This guarantees that the vehicle can be braked aerodynamically, so it could be reused for several missions, allowing repeatability of microgravity experiments.
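The terminal-speed calculation mentioned above follows from equating weight with drag, m·g = ½·ρ·v²·Cd·A. A short sketch (the mass and reference area below are placeholder values for illustration, not figures from the study):

```python
import math

RHO_AIR = 1.225  # sea-level air density, kg/m^3
G0 = 9.81        # gravitational acceleration, m/s^2

def terminal_speed(mass_kg, cd, ref_area_m2, rho=RHO_AIR):
    """Speed at which drag balances weight: m*g = 0.5*rho*v^2*Cd*A."""
    return math.sqrt(2.0 * mass_kg * G0 / (rho * cd * ref_area_m2))

# Hypothetical vehicle of 5 kg and 0.5 m^2 reference area at the
# reported Cd max of 1.18: terminal speed is roughly 11.7 m/s.
v_t = terminal_speed(5.0, 1.18, 0.5)
```

Evaluating the expression at each AoA-dependent Cd from the response surface gives the terminal speed at each wing position, as described in the abstract.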

Keywords: microgravity effect, response surface, terminal speed, unmanned system

Procedia PDF Downloads 173
45 Study on the Geometric Similarity in Computational Fluid Dynamics Calculation and the Requirement of Surface Mesh Quality

Authors: Qian Yi Ooi

Abstract:

At present, airfoil parameters are still designed and optimized according to the scale of conventional aircraft, leaving slight deviations related to scale differences. However, insufficient parameters or poor surface mesh quality are likely to occur if these small deviations are carried over to a future civil aircraft of a size quite different from conventional aircraft, such as a blended-wing-body (BWB) aircraft with future potential, resulting in large deviations in geometric similarity in computational fluid dynamics (CFD) simulations. To avoid this situation, a study of the geometric similarity of airfoil parameters and of surface mesh quality in CFD calculation is conducted to assess how different parameterization methods perform at different airfoil scales. The research objects are three airfoil scales, including the wing root and wingtip of a conventional civil aircraft and the wing root of a giant blended wing, each treated with three parameterization methods to compare the calculation differences between different sizes of airfoils. In this study, the following are held constant: the NACA 0012 airfoil, a Reynolds number of 10 million, an angle of attack of zero, a C-grid for meshing, and the k-epsilon (k-ε) turbulence model. The experimental variables are three airfoil parameterization methods: the point cloud method, the B-spline curve method, and the class function/shape function transformation (CST) method. The airfoil dimensions are set to 3.98 meters, 17.67 meters, and 48 meters, respectively. In addition, this study uses different numbers of edge mesh divisions with the same bias factor in the CFD simulation. The results show that as the airfoil scale changes, different parameterization methods, numbers of control points, and numbers of mesh divisions should be used to maintain the accuracy of the aerodynamic performance of the wing. 
When the airfoil scale increases, the most basic point cloud parameterization method requires more and larger data sets to support the accuracy of the airfoil’s aerodynamic performance, which will face the severe test of insufficient computer capacity. On the other hand, when using the B-spline curve method, the number of control points and the number of mesh divisions should be set appropriately to obtain higher accuracy; however, the right balance cannot be defined directly and must be found iteratively by adding and subtracting points. Lastly, when using the CST method, it is found that a limited number of control points is enough to accurately parameterize the larger-sized wing; a high degree of accuracy and stability can be obtained even using a lower-performance computer.
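Of the three methods, the CST parameterization compresses an airfoil surface into a handful of coefficients, which is why few control points suffice even for the largest wing. A minimal sketch of the standard Kulfan-style formulation (the weights below are illustrative, not values from the study):

```python
import math

def cst_surface(x, weights, n1=0.5, n2=1.0):
    """Surface ordinate y(x), x in [0, 1], via class/shape transformation.

    The class function C(x) = x^n1 * (1 - x)^n2 gives a round nose and a
    sharp trailing edge; the shape function S(x) is a Bernstein polynomial
    whose coefficients `weights` are the design variables.
    """
    n = len(weights) - 1
    class_fn = x**n1 * (1.0 - x)**n2
    shape_fn = sum(w * math.comb(n, i) * x**i * (1.0 - x)**(n - i)
                   for i, w in enumerate(weights))
    return class_fn * shape_fn

# Three control weights already describe a smooth upper surface.
ys = [cst_surface(x / 50, [0.17, 0.16, 0.15]) for x in range(51)]
```

Scaling the resulting ordinates by the chord length (3.98 m, 17.67 m, or 48 m) reuses the same few coefficients at every airfoil scale.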

Keywords: airfoil, computational fluid dynamics, geometric similarity, surface mesh quality

Procedia PDF Downloads 223
44 Influence of Strain on the Corrosion Behavior of Dual Phase 590 Steel

Authors: Amit Sarkar, Jayanta K. Mahato, Tushar Bhattacharya, Amrita Kundu, P. C. Chakraborti

Abstract:

With increasing demand for safety and fuel efficiency of automobiles, automotive manufacturers are looking for lightweight, high-strength steels with excellent formability and corrosion resistance. Dual-phase steel is finding applications in the automotive sector because of its high strength, good formability, and high corrosion resistance. During service, automotive components suffer environmental attack, and the resulting gradual degradation reduces their service life. The objective of the present investigation is to assess the effect of deformation on the corrosion behaviour of DP590 grade dual-phase steel, which is used in the automotive industry. The material was received from TATA Steel Jamshedpur, India in the form of 1 mm thick sheet. Tensile properties of the steel at a strain rate of 10⁻³ s⁻¹: 0.2% yield stress 382 MPa, ultimate tensile strength 629 MPa, uniform strain 16.30% and ductility 29%. Rectangular strips of 100 × 10 × 1 mm were machined keeping the long axis of the strips parallel to the rolling direction of the sheet. These strips were longitudinally deformed at a strain rate of 10⁻³ s⁻¹ to different percentages of strain, e.g. 2.5, 5, 7.5, 10 and 12.5%, and then slowly unloaded. Small specimens were extracted from the mid region of the unclamped portion of these deformed strips, metallographically polished, and their corrosion behaviour was studied by potentiodynamic polarization, electrochemical impedance spectroscopy, cyclic polarization and potentiostatic tests. The present results show that, among the three environments studied, the 3.5% NaCl solution is the most aggressive for DP590 dual-phase steel. It is observed that the corrosion rate increases with the amount of deformation: with deformation, the stored energy increases and leads to an enhanced corrosion rate. 
Cyclic polarization results revealed that highly deformed specimens are more prone to pitting corrosion compared to specimens with less deformation. It is also observed that the stability of the passive layer decreases with the amount of deformation: with increasing deformation, the current density in the passive zone increases and the passive zone itself shrinks. The electrochemical impedance spectroscopy study shows that the polarization resistance (Rp) decreases with increasing deformation. EBSD results showed that the average geometrically necessary dislocation density increases with increasing strain, which in turn increases galvanic corrosion, as dislocation-rich areas act as the less noble metal.
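The link between polarization resistance and corrosion rate implied above is commonly quantified with the Stern-Geary relation, i_corr = B / Rp. A brief sketch (the Tafel slopes below are generic assumed values, not ones measured in the study):

```python
def corrosion_current_density(rp_ohm_cm2, ba=0.12, bc=0.12):
    """Stern-Geary relation: i_corr = B / Rp, B = ba*bc / (2.303 * (ba + bc)).

    ba, bc are anodic/cathodic Tafel slopes in V/decade (assumed here);
    rp_ohm_cm2 is the polarization resistance; result is in A/cm^2.
    """
    b = (ba * bc) / (2.303 * (ba + bc))
    return b / rp_ohm_cm2

# A falling Rp (as reported for increasing prestrain) raises i_corr,
# i.e. the corrosion rate grows as the steel is deformed further.
i_low_strain = corrosion_current_density(5000.0)   # larger Rp
i_high_strain = corrosion_current_density(1500.0)  # smaller Rp
```

With measured Tafel slopes substituted for the assumed ones, the same expression converts the reported Rp trend directly into a corrosion-rate trend.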

Keywords: dual phase 590 steel, prestrain, potentiodynamic polarization, cyclic polarization, electrochemical impedance spectra

Procedia PDF Downloads 430