Search results for: mobile standards
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3463

403 Making the Choice: Educational Mobility Decisions of International Doctoral Students

Authors: Adel Pasztor

Abstract:

International doctoral mobility is a largely under-researched component of academic mobility and migration. This is in stark contrast to the case of student mobility, where much research has been undertaken on Erasmus students, or the growing research on academic staff mobility, which can be viewed as a key part of highly skilled migration. The aim of this paper is to remedy the situation by focusing specifically on international doctoral students studying at elite higher education institutions in the United Kingdom. In doing so, in-depth qualitative interviews with doctoral students and recent graduates were carried out in order to identify the signifiers of an internationally mobile doctoral student and unpack the decision-making processes leading to the choice of a higher education institution abroad. Overall, a diverse range of degree subjects from within the humanities and the social sciences were covered, with a relatively large spread of nationalities, including Italy, Germany, Hungary, Latvia, Bulgaria, Turkey, Lebanon, Israel, Australia, the USA, China, and Chile. The interview questions were designed to probe the motivations, choices, educational trajectories and career plans of international doctoral students relative to their social class background, gender, nationality or funding. It was clear from the interviews that there were two main types of international doctoral students: those who ‘did not think anything else was ever a serious possibility’, contrasted with the other, more opportune type, to whom ‘it happened to be a PhD’. There were marked differences between the two types from initial access to university onwards, mainly because educational decisions such as the doctorate do not happen in a vacuum; rather, they are built on the individual’s higher education aspirations and previous educational choices. The results were in line with existing literature suggesting that those with higher educated parents and from schools strongly supporting the choice process fared better, as they were able to make well informed, well thought through as well as strategic decisions for their future involving the very best universities within the national boundaries. Being ‘at the right place’ often meant access to prestigious doctoral scholarships; thus, the route of the PhD was chosen even if it did not necessarily enhance career opportunities. At the same time, the initial higher education choices of those with limited capital were played out locally, although they did aim for the best universities within their geographically constrained landscape of choice. Here, the majority of students referred to some ‘turning points’ in their lives which led them towards considering international doctoral opportunities, but essentially their proactive, do-it-yourself attitude was behind the life-changing educational opportunities.

Keywords: choice, doctoral students, international mobility, PhD, UK

Procedia PDF Downloads 237
402 Fine-Scale Modeling the Influencing Factors of Multi-Time Dimensions of Transit Ridership at Station Level: The Study of Guangzhou City

Authors: Dijiang Lyu, Shaoying Li, Zhangzhi Tan, Zhifeng Wu, Feng Gao

Abstract:

China is currently experiencing some of the most rapid urban rail transit expansion in the world. The purpose of this study is to finely model the factors influencing transit ridership at multiple time dimensions within transit stations’ pedestrian catchment areas (PCA) in Guangzhou, China. The study was based on multi-source spatial data, including smart card data, high spatial resolution images, points of interest (POIs), real-estate online data and building height data. Eight multiple linear regression models using the backward stepwise method and Geographic Information System (GIS) were created at station level. According to the Chinese code for classification of urban land use and planning standards of development land, residential land use was divided into three categories: first-level (e.g. villas), second-level (e.g. communities) and third-level (e.g. urban villages). The study concluded that: (1) Four factors (CBD dummy, number of feeder bus routes, number of entrances or exits, and years of station operation) were positively correlated with transit ridership, whereas the area of green land use and water land use was negatively correlated. (2) The area of education land use and of second-level and third-level residential land use was highly connected to the average value of morning peak boarding and evening peak alighting ridership, while the area of commercial land use and the average height of buildings were significantly positively associated with the average value of morning peak alighting and evening peak boarding ridership. (3) The area of second-level residential land use was rarely correlated with ridership in the other regression models, because private car ownership in Guangzhou is still high: some residents living in communities around the stations commute by transit at peak times, but others are much more willing to drive their own cars at non-peak times. The area of third-level residential land use, such as urban villages, was highly positively correlated with ridership in all models, indicating that residents of third-level residential land use are the main passenger source of the Guangzhou Metro. (4) The diversity of land use was found to have a significant impact on passenger flow on weekends but was unrelated to weekday ridership. The findings can be useful for station planning, management and policymaking.
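
For illustration, a minimal sketch of the backward stepwise multiple linear regression described above, written in Python with hypothetical column names and an assumed CSV export of the station-level PCA features (the original work is not tied to this tooling):

```python
# Hypothetical sketch of station-level backward stepwise OLS, not the authors' code.
import pandas as pd
import statsmodels.api as sm

def backward_stepwise(df, response, predictors, alpha=0.05):
    """Drop the least significant predictor until all p-values < alpha."""
    selected = list(predictors)
    while selected:
        X = sm.add_constant(df[selected])
        model = sm.OLS(df[response], X).fit()
        pvals = model.pvalues.drop("const")
        worst = pvals.idxmax()
        if pvals[worst] > alpha:
            selected.remove(worst)      # remove the weakest predictor and refit
        else:
            return model, selected
    return None, []

# Example usage for one time dimension (e.g. average morning-peak boarding
# ridership within each station's PCA); file and column names are assumptions.
# df = pd.read_csv("station_pca_features.csv")
# model, kept = backward_stepwise(
#     df, "morning_peak_boarding",
#     ["cbd_dummy", "feeder_bus_routes", "entrances", "years_operation",
#      "green_area", "water_area", "residential_lv3_area", "commercial_area"])
# print(model.summary())
```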

Keywords: fine-scale modeling, Guangzhou city, multi-time dimensions, multi-sources spatial data, transit ridership

Procedia PDF Downloads 122
401 The Environmental Impact Assessment of Land Use Planning (Case Study: Tannery Industry in Al-Garma District)

Authors: Husam Abdulmuttaleb Hashim

Abstract:

Environmental pollution problems represent a great challenge to the world, threatening to destroy the progress that mankind has achieved. Organizations and associations concerned with the environment are trying to warn the world of the forthcoming danger resulting from the excessive use of natural resources and from consuming them without regard to the damage caused by their unfair use. Most urban centers suffer from environmental pollution problems and from the health, economic, and social dangers resulting from this pollution. Since land use planning is responsible for distributing different uses in urban centers and controlling the interactions between these uses, so as to reach a homogeneous and optimal state for the different activities in cities, the occurrence of environmental problems under the existing land use planning process points to a disorder or insufficiency in that process. This disorder lies in the lack of sufficient attention to environmental considerations during land use planning and the preparation of the master plan. The research therefore studies this problem and seeks solutions for it. The research assumes that using accurate and scientific methods in the early stages of the land use planning process will prevent the occurrence of environmental pollution problems in the future. It aims to study and show the importance of the environmental impact assessment (EIA) method as an important planning tool to investigate and predict the pollution ranges of land uses with a polluting pattern within the land use planning process. The research covers the concept of environmental assessment and its kinds and clarifies environmental impact assessment and its contents; it also deals with the concepts of urban planning and land use planning. It then addresses the current situation of the case study (Al-Garma district) and its land use planning, and identifies the use most polluting to the environment, namely the industrial land use represented by the tannery industries. The current situation of this land use is described, together with its contents and the environmental impacts resulting from it. The water and soil tests applied by the researcher were then analyzed, and an environmental evaluation was performed by applying an environmental impact assessment matrix using the direct method to reveal the pollution ranges of the industrial land use on the surrounding environment. Environmental and site limits and standards were also applied, using GIS and AUTOCAD, to select the best alternative site for the industrial region in Al-Garma district, after the research established the unsuitability of its current location with respect to the environmental and site limitations. The research concludes with a set of conclusions and recommendations clarifying the established facts and setting out the appropriate solutions.

Keywords: EIA, pollution, tannery industry, land use planning

Procedia PDF Downloads 432
400 Evaluating the Factors Controlling the Hydrochemistry of Gaza Coastal Aquifer Using Hydrochemical and Multivariate Statistical Analysis

Authors: Madhat Abu Al-Naeem, Ismail Yusoff, Ng Tham Fatt, Yatimah Alias

Abstract:

Groundwater in the Gaza Strip is increasingly exposed to anthropic and natural factors that have seriously impacted groundwater quality. Physiochemical data on groundwater can offer important information on changes in groundwater quality that can be useful in improving water management tactics. Integrative hydrochemical and statistical techniques (hierarchical cluster analysis (HCA) and factor analysis (FA)) were applied to ten physiochemical parameters of 84 samples collected in 2000/2001, using STATA, AquaChem, and Surfer software, to: 1) provide valuable insight into the salinization sources and the hydrochemical processes controlling the chemistry of the groundwater, and 2) differentiate the influence of natural processes and man-made activities. A large diversity in water facies was recorded, with dominance of the Na-Cl type, revealing a highly saline aquifer impacted by multiple complex hydrochemical processes. Based on WHO standards, only 15.5% of the wells were suitable for drinking. HCA yielded three clusters. Cluster 1 is the highest in salinity, mainly due to the impact of Eocene saline water invasion mixed with human inputs. Cluster 2 is the lowest in salinity, also due to Eocene saline water invasion but mixed with recent rainfall recharge, limited carbonate dissolution and nitrate pollution. Cluster 3 is similar in salinity to Cluster 2, but with a high diversity of facies due to the impact of many sources of salinity, such as seawater invasion, carbonate dissolution and human inputs. Factor analysis yielded two factors accounting for 88% of the total variance. Factor 1 (59%) is a salinization factor demonstrating the mixing contribution of natural saline water with human inputs. Factor 2 measures hardness and pollution and explained 29% of the total variance. The negative relationship between NO3- and pH may reveal a denitrification process in a heavily polluted aquifer recharged by limited oxygenated rainfall. Multivariate statistical analysis combined with hydrochemical analysis indicates that the main factors controlling groundwater chemistry were Eocene saline invasion, seawater invasion, sewage invasion and rainfall recharge, and that the main hydrochemical processes were base ion exchange and reverse ion exchange with clay minerals (water-rock interactions), nitrification, carbonate dissolution and a limited denitrification process.
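
As an illustration of the statistical workflow (not the authors' STATA/AquaChem scripts), a sketch of HCA and a two-factor FA on standardized physiochemical parameters; the file name and parameter list are assumptions:

```python
# Illustrative HCA + FA sketch on standardized well-water parameters.
import pandas as pd
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import FactorAnalysis

params = ["Ca", "Mg", "Na", "K", "Cl", "HCO3", "SO4", "NO3", "TDS", "pH"]
df = pd.read_csv("gaza_wells_2000_2001.csv")          # hypothetical file
Z = StandardScaler().fit_transform(df[params])        # z-score standardization

# Hierarchical cluster analysis (Ward linkage), cut into three clusters
links = linkage(Z, method="ward")
df["cluster"] = fcluster(links, t=3, criterion="maxclust")

# Factor analysis with two factors (cf. the two factors explaining ~88% variance)
fa = FactorAnalysis(n_components=2, random_state=0).fit(Z)
loadings = pd.DataFrame(fa.components_.T, index=params,
                        columns=["Factor1_salinization", "Factor2_hardness"])
print(df.groupby("cluster")[params].mean().round(1))  # cluster hydrochemical profiles
print(loadings.round(2))                              # which parameters load on each factor
```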

Keywords: dendrogram and cluster analysis, water facies, Eocene saline invasion and sea water invasion, nitrification and denitrification

Procedia PDF Downloads 336
399 Criteria to Access Justice in Remote Criminal Trial Implementation

Authors: Inga Žukovaitė

Abstract:

This work aims to present postdoc research on remote criminal proceedings in court in order to streamline the proceedings and, at the same time, ensure the effective participation of the parties in criminal proceedings and the court's obligation to administer substantive and procedural justice. This study tests the hypothesis that remote criminal proceedings do not in themselves violate the fundamental principles of criminal procedure; however, their implementation must ensure the right of the parties to effective legal remedies and a fair trial and, only then, must address the issues of procedural economy, speed and flexibility/functionality of the application of technologies. In order to ensure that changes in the regulation of criminal proceedings are in line with fair trial standards, this research will provide answers to the questions of what conditions -first of all, legal and only then organisational- are required for remote criminal proceedings to ensure respect for the parties and enable their effective participation in public proceedings, to create conditions for quality legal defence and its accessibility, to give a correct impression to the party that they are heard and that the court is impartial and fair. It also seeks to present the results of empirical research in the courts of Lithuania that was made by using the interview method. The research will serve as a basis for developing a theoretical model for remote criminal proceedings in the EU to ensure a balance between the intention to have innovative, cost-effective, and flexible criminal proceedings and the positive obligation of the State to ensure the rights of participants in proceedings to just and fair criminal proceedings. Moreover, developments in criminal proceedings also keep changing the image of the court itself; therefore, in the paper will create preconditions for future research on the impact of remote criminal proceedings on the trust in courts. The study aims at laying down the fundamentals for theoretical models of a remote hearing in criminal proceedings and at making recommendations for the safeguarding of human rights, in particular the rights of the accused, in such proceedings. The following criteria are relevant for the remote form of criminal proceedings: the purpose of judicial instance, the legal position of participants in proceedings, their vulnerability, and the nature of required legal protection. The content of the study consists of: 1. Identification of the factual and legal prerequisites for a decision to organise the entire criminal proceedings by remote means or to carry out one or several procedural actions by remote means 2. After analysing the legal regulation and practice concerning the application of the elements of remote criminal proceedings, distinguish the main legal safeguards for protection of the rights of the accused to ensure: (a) the right of effective participation in a court hearing; (b) the right of confidential consultation with the defence counsel; (c) the right of participation in the examination of evidence, in particular material evidence, as well as the right to question witnesses; and (d) the right to a public trial.

Keywords: remote criminal proceedings, fair trial, right to defence, technology progress

Procedia PDF Downloads 48
398 Prediction of Sound Transmission Through Framed Façade Systems

Authors: Fangliang Chen, Yihe Huang, Tejav Deganyar, Anselm Boehm, Hamid Batoul

Abstract:

With growing population density and further urbanization, the average noise level in cities is increasing. Excessive noise is not only annoying but also has a negative impact on human health. To deal with increasing city noise, environmental regulations set higher standards for acoustic comfort in buildings by requiring mitigation of noise transmission from the building envelope exterior to the interior. Framed window, door and façade systems are the leading choice for modern fenestration construction, providing demonstrated weathering reliability, environmental efficiency, and ease of installation. The overall sound insulation of such systems depends on both the glass and the frames. Glass usually covers the majority of the exposed surface and is therefore the main path of sound energy transmission, while frames in modern façade systems have become slimmer for aesthetic reasons and contribute only a minimal percentage of the exposed surface. Nevertheless, frames can provide substantial transmission paths for sound because far less mass lies across those paths, and they can therefore become the limiting factor in the acoustic performance of the whole system. There are various methodologies and numerical programs that can accurately predict the acoustic performance of either glass or frames. However, because of the vast difference in size and dimension between frame and glass in the same system, there is no satisfactory theoretical approach or affordable simulation tool in current practice to assess the overall acoustic performance of a whole façade system. For this reason, laboratory testing turns out to be the only reliable source. However, laboratory testing is time-consuming and highly costly; moreover, different labs may produce slightly different results because of variations in test chambers, sample mounting, and test operations, which significantly constrains the early-phase design of framed façade systems. To address this dilemma, this study provides an effective analytical methodology to predict the acoustic performance of framed façade systems, based on a large body of acoustic test results on glass, frames and whole façade systems consisting of both. Further test results validate that the current model is able to accurately predict the overall sound transmission loss of a framed system as long as the acoustic behavior of the frame is available. Though the presented methodology was mainly developed from façade systems with aluminum frames, it can be easily extended to systems with frames of other materials such as steel, PVC or wood.
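
For context, a minimal sketch of the standard area-weighted combination of element transmission losses (converting each element's sound reduction index to a transmission coefficient, area-averaging, and converting back). The paper's own model is built on measured frame and glass data and may differ, so treat this purely as an illustration; the areas and dB values below are hypothetical:

```python
# Area-weighted composite sound transmission loss of a multi-element assembly.
import math

def composite_stl(elements):
    """elements: list of (area_m2, R_dB) for glass panes and frame members."""
    total_area = sum(a for a, _ in elements)
    # convert each sound reduction index R to a transmission coefficient tau
    tau_weighted = sum(a * 10 ** (-r / 10.0) for a, r in elements) / total_area
    return -10.0 * math.log10(tau_weighted)

# Hypothetical single-band example: 3.6 m2 of glazing at 38 dB
# combined with 0.4 m2 of aluminum frame at 32 dB.
print(round(composite_stl([(3.6, 38.0), (0.4, 32.0)]), 1), "dB")
```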

Keywords: city noise, building facades, sound mitigation, sound transmission loss, framed façade system

Procedia PDF Downloads 27
397 The Incoherence of the Philosophers as a Defense of Philosophy against Theology

Authors: Edward R. Moad

Abstract:

Al-Ghazali’s Tahāfut al-Falāsifa is widely construed as an attack on philosophy in favor of theological fideism. Consequently, he has been blamed for the ‘death of philosophy’ in the Muslim world. ‘Falsafa’, however, is not philosophy itself, but rather a range of philosophical doctrines mainly influenced by or inherited from Greek thought. In these terms, this work represents a defense of philosophy against what we could call ‘falsafical’ fideism. In the introduction, Ghazali describes his target audience as, not the falasifa, but a group of pretenders engaged in taqlid to a misconceived understanding of falsafa, including the belief that they were capable of demonstrative certainty in the field of metaphysics. He promises to use the falasifa's own standards of logic (with which he independently agrees) to show that the falasifa failed to demonstratively prove many of their positions. Whether or not he succeeds in that, the exercise of subjecting alleged proofs to critical scrutiny is quintessentially philosophical, while uncritical adherence to a doctrine, in the name of its being ‘philosophical’, is decidedly unphilosophical. If we are to blame the intellectual decline of the Muslim world on someone’s ‘bad’ way of thinking, rather than on more material historical circumstances (which is already a mistake), then blame more appropriately rests with modernist Muslim thinkers who, under the influence of orientalism (and like Ghazali’s philosophical pretenders), mistook taqlid to the falasifa for philosophy itself. The discussion of the Tahāfut takes place in the context of an epistemic (and related social) hierarchy envisioned by the falasifa, corresponding to the faculties of the senses, the ‘estimative imagination’ (wahm), and the pure intellect, along with the respective forms of discourse – rhetoric, dialectic, and demonstration – appropriate to each category of that order. Al-Farabi, in his Book of Letters, describes a relation between dialectic and demonstration on the one hand, and theology and philosophy on the other. The latter two are distinguished by method rather than subject matter. Theology is that which proceeds dialectically, while philosophy is (or aims to be?) demonstrative. Yet, Al-Farabi tells us, dialectic precedes philosophy like ‘nourishment for the tree precedes its fruit.’ That is, dialectic is part of the process by which we interrogate common and imaginative notions in the pursuit of clearly understood first principles that we can then deploy in demonstrative argument. Philosophy is, therefore, something we aspire to through, and from a discursive condition of, dialectic. This stands in apparent contrast to the understanding of Ibn Sina, for whom one arrives at knowledge of first principles through contact with the Active Intellect. It also stands in contrast to that of Ibn Rushd, who seems to think our knowledge of first principles can only come through reading Aristotle. In conclusion, on Al-Farabi’s framework, Ghazali’s Tahafut is truly an exercise in philosophy, and an effort to keep the door open for true philosophy in the Muslim mind, against the threat of a kind of developing theology going by the name of falsafa.

Keywords: philosophy, incoherence, theology, Tahafut

Procedia PDF Downloads 136
396 Thermal and Visual Comfort Assessment in Office Buildings in Relation to Space Depth

Authors: Elham Soltani Dehnavi

Abstract:

In today’s compact cities, bringing daylight and fresh air into buildings is a significant challenge, but it also presents opportunities to reduce energy consumption by reducing the need for artificial lighting and mechanical systems. Simple adjustments to building form can contribute to their efficiency. This paper examines how the relationship between the width and depth of rooms in office buildings affects visual and thermal comfort, and consequently energy savings. Based on these evaluations, we can determine the best location for sedentary areas in a room. We can also propose improvements to occupant experience and minimize the difference between the predicted and measured performance of buildings by changing other design parameters, such as natural ventilation strategies, glazing properties, and shading. This study investigates spatial daylighting and thermal comfort conditions for a range of room configurations using computer simulations, then suggests the best depth for optimizing both daylighting and thermal comfort, and consequently energy performance, for each room type. The Window-to-Wall Ratio (WWR) is 40%, with a 0.8 m window sill and a 0.4 m window head. Other parameters are fixed according to building codes and standards, and the simulations are carried out for Seattle, USA. The simulation results are presented as evaluation grids using thresholds for different metrics: Daylight Autonomy (DA), spatial Daylight Autonomy (sDA), Annual Sunlight Exposure (ASE), and Daylight Glare Probability (DGP) for visual comfort, and Predicted Mean Vote (PMV), Predicted Percentage of Dissatisfied (PPD), occupied Thermal Comfort Percentage (occTCP), over-heated percent, under-heated percent, and Standard Effective Temperature (SET) for thermal comfort, all extracted from Grasshopper scripts. The simulation tools are Grasshopper plugins such as Ladybug, Honeybee, and EnergyPlus. According to the results, some metrics do not change much along the room depth while others change significantly, so the grids can be overlapped in order to determine the comfort zone. The overlapped grids contain 8 metrics, and the pixels that meet all 8 metrics’ thresholds define the comfort zone. With these overlapped maps, we can determine the comfort zones inside rooms and locate sedentary areas there. Other parts of the room can be used for tasks that are not performed permanently, or that need lower or higher amounts of daylight and for which thermal comfort is less critical to user experience. The results can be compiled into a table to be used as a guideline by designers in the early stages of the design process.
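
A conceptual sketch of the grid-overlap step described above: each metric is treated as an array over the room's analysis grid, and the comfort zone is where all thresholds are met simultaneously. The threshold values and grid dimensions below are illustrative assumptions, not the study's exact criteria:

```python
# Overlap per-metric evaluation grids into a single boolean comfort-zone map.
import numpy as np

def comfort_zone(metrics: dict, thresholds: dict) -> np.ndarray:
    """Return a boolean grid that is True where every metric passes its range."""
    zone = np.ones_like(next(iter(metrics.values())), dtype=bool)
    for name, grid in metrics.items():
        lo, hi = thresholds[name]
        zone &= (grid >= lo) & (grid <= hi)
    return zone

# Hypothetical 2 m x 6 m room grid (0.5 m cells), depth increasing along axis 1
rng = np.random.default_rng(0)
metrics = {"DA":  rng.uniform(20, 90, (4, 12)),   # % of occupied hours daylit
           "ASE": rng.uniform(0, 20, (4, 12)),    # % of hours over-lit
           "PMV": rng.uniform(-1.2, 1.2, (4, 12))}
thresholds = {"DA": (50, 100), "ASE": (0, 10), "PMV": (-0.5, 0.5)}
zone = comfort_zone(metrics, thresholds)
print("fraction of grid in comfort zone:", zone.mean().round(2))
```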

Keywords: occupant experience, office buildings, space depth, thermal comfort, visual comfort

Procedia PDF Downloads 155
397 Valorization of Lignocellulosic Wastes – Evaluation of Its Toxicity When Used in Adsorption Systems

Authors: Isabel Brás, Artur Figueirinha, Bruno Esteves, Luísa P. Cruz-Lopes

Abstract:

Agricultural lignocellulosic by-products are receiving increased attention, namely in the search for filter materials that retain contaminants from water. These by-products, specifically almond and hazelnut shells, are abundant in Portugal, since almond and hazelnut production is an important local activity. Hazelnut and almond shells have as main constituents lignin, cellulose and hemicelluloses, water-soluble extractives and tannins. During the adsorption of heavy metals from contaminated waters, water-soluble compounds can leach from the shells and have a negative impact on the environment. The chemical characterization of treated water by itself may not show the environmental impact caused by discharges, even when parameters comply with legal quality standards for water; only biological systems can detect the toxic effects of the water constituents. Therefore, the evaluation of toxicity by biological tests is very important when deciding the suitability for safe water discharge or for irrigation applications. The main purpose of the present work was to assess, with short-term acute toxicity tests, the potential impacts of waters that had been treated for heavy metal removal by hazelnut and almond shell adsorption systems. To conduct the study, water at pH 6 with 25 mg.L-1 of lead was treated with 10 g of shell per litre of wastewater for 24 hours; this procedure was followed for each shell type. Afterwards the water was collected for toxicological assays, namely bacterial resistance, seed germination, the Lemna minor L. test and plant growth. The effect on isolated bacterial strains was determined by the disc diffusion method, and the germination index of seeds was evaluated using lettuce, with temperature and humidity controlled during germination for 7 days. For higher aquatic organisms, Lemna plants were used with a 4-day contact time with the shell solutions, under controlled light and temperature. For higher terrestrial plants, biomass production was evaluated after 14 days of tomato germination in soil, with controlled humidity, light and temperature. The toxicity tests of water treated with shells revealed only limited effects on the tested organisms, with the test assays behaving closely to the control, leading to the conclusion that further utilization of these shells may not be considered to create a serious risk to the environment.
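
For reference, a sketch of a commonly used germination index (GI) formulation; the abstract does not state the exact index formula used, so this and the lettuce counts/lengths in the example are only illustrative:

```python
# GI (%) = relative seed germination x relative root elongation x 100
def germination_index(germ_sample, root_len_sample, germ_control, root_len_control):
    rel_germ = germ_sample / germ_control          # germinated seeds vs. control
    rel_root = root_len_sample / root_len_control  # mean root length vs. control
    return rel_germ * rel_root * 100.0

# Hypothetical 7-day lettuce results for shell-treated water vs. control water
print(round(germination_index(18, 24.0, 20, 26.5), 1), "%")
```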

Keywords: lignocellulosic wastes, adsorption, acute toxicity tests, risk assessment

Procedia PDF Downloads 347
394 Effect of 8-OH-DPAT on the Behavioral Indicators of Stress and on the Number of Astrocytes after Exposure to Chronic Stress

Authors: Ivette Gonzalez-Rivera, Diana B. Paz-Trejo, Oscar Galicia-Castillo, David N. Velazquez-Martinez, Hugo Sanchez-Castillo

Abstract:

Prolonged exposure to stress can cause disorders related to dysfunction of the prefrontal cortex, such as generalized anxiety and depression. These disorders involve alterations in neurotransmitter systems; the serotonergic system, a target of the drugs commonly used to treat these disorders, is one of them. Recent studies suggest that 5-HT1A receptors play a pivotal role in the regulation of the serotonergic system and in stress responses. Likewise, there is increasing evidence that astrocytes are involved in the pathophysiology of stress. The aim of this study was to examine the effects of 8-OH-DPAT, a selective agonist of 5-HT1A receptors, on the behavioral signs of anxiety and anhedonia as well as on the number of astrocytes in the medial prefrontal cortex (mPFC) after exposure to chronic stress. Fifty male Wistar rats of 250-350 grams were used, housed in standard laboratory conditions and treated in accordance with the ethical standards for the use and care of laboratory animals. A chronic unpredictable stress protocol was applied for 10 consecutive days, during which stressors such as movement restriction, water deprivation and wet bedding, among others, were presented. Forty rats were subjected to the stress protocol and then divided into 4 groups of 10 rats each, which were administered 8-OH-DPAT (Tocris, USA) intraperitoneally, with saline as vehicle, at doses of 0.0, 0.3, 1.0 and 2.0 mg/kg, respectively. Another 10 rats were not subjected to the stress protocol or the drug. Subsequently, all the rats were assessed in an open field test, a forced swimming test, a sucrose consumption test, and a zero maze test. At the end of this procedure, the animals were sacrificed, the brain was removed and the tissue of the mPFC (Bregma: 4.20, 3.70, 2.70, 2.20) was processed with immunofluorescence staining for astrocytes (anti-GFAP antibody, astrocyte marker, Abcam). Statistically significant differences were found in the behavioral tests of all groups, showing that the stress group with saline administration had more indicators of anxiety and anhedonia than the control group and the groups with administration of 8-OH-DPAT. Also, a dose-dependent effect of 8-OH-DPAT was found on the number of astrocytes in the mPFC. The results show that 8-OH-DPAT can modulate the effect of stress at both the behavioral and anatomical level. They also indicate that 5-HT1A receptors and astrocytes play an important role in the stress response and may modulate the therapeutic effect of serotonergic drugs, so they should be explored as a fundamental part of the treatment of stress symptoms and of the understanding of the mechanisms of stress responses.

Keywords: anxiety, prefrontal cortex, serotonergic system, stress

Procedia PDF Downloads 301
393 Evaluation of Occupational Doses in Interventional Radiology

Authors: Fernando Antonio Bacchim Neto, Allan Felipe Fattori Alves, Maria Eugênia Dela Rosa, Regina Moura, Diana Rodrigues De Pina

Abstract:

Interventional radiology is the radiology modality that delivers the highest dose values to medical staff. Recent research shows that personal dosimeters may underestimate dose values for interventional physicians, especially in the extremities (hands and feet) and the eye lens. The aim of this work was to study the radiation exposure levels of medical staff in different interventional radiology procedures and to estimate the annual maximum number of procedures (AMN) that each physician could perform without exceeding the annual dose limits established by regulation. For this purpose, LiF:Mg,Ti (TLD-100) dosimeters were positioned on different body regions of the interventional physician (eye lens, thyroid, chest, gonads, hand and foot), above the radiological protection garments such as the lead apron and thyroid shield. Attenuation values for the lead protection garments were based on international guidelines; 90% attenuation was assumed for the lead vests and 60% for the protective glasses. Twenty-five procedures were evaluated: 10 diagnostic, 10 angioplasty, and 5 aneurysm treatment procedures. The AMN of diagnostic procedures was 641 for the primary interventional radiologist and 930 for the assisting interventional radiologist. For the angioplasty procedures, the AMN was 445 for the primary interventional radiologist and 1202 for the assisting interventional radiologist. For the aneurysm treatment procedures, the AMN was 113 for the primary interventional radiologist and 215 for the assisting interventional radiologist. All AMN values were limited by the eye lens doses, already considering the use of protective glasses. In all categories evaluated, the highest dose values were found in the gonads and in the lower regions of the professionals, both for the primary and the assisting interventionist, but the eye lens dose limits are lower than those for these regions. Additional protection, such as mobile barriers that can be positioned between the interventionist and the patient, can decrease eye lens exposure and provide greater protection for the medical staff. Rotating the professionals who perform each type of procedure can reduce the dose values they receive over a given period. The analysis of dose profiles proposed in this work showed that personal dosimeters positioned on the chest may underestimate dose values in other body parts of the interventional physician, especially the extremities and eye lens. As each body region of the interventionist is subject to different levels of exposure, the dose distribution in each region provides a better basis for deciding what actions are necessary to ensure the radiological protection of the medical staff.
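
The AMN back-calculation described above is essentially an annual dose limit divided by the mean per-procedure dose at each monitored region after applying the shielding attenuation. A hedged sketch follows; the per-procedure doses are hypothetical (the eye-lens value is simply chosen so that, with the 60% glasses attenuation and an assumed 20 mSv/year eye-lens limit, the result lands near the reported 641 diagnostic procedures):

```python
# Annual maximum number of procedures (AMN) per monitored region.
ANNUAL_LIMITS_mSv = {"eye_lens": 20.0, "extremities": 500.0, "chest": 20.0}
ATTENUATION = {"eye_lens": 0.60, "chest": 0.90, "extremities": 0.0}  # glasses / apron / none

def amn(per_procedure_dose_mSv: dict) -> dict:
    result = {}
    for region, dose in per_procedure_dose_mSv.items():
        dose_behind_shield = dose * (1.0 - ATTENUATION[region])
        result[region] = int(ANNUAL_LIMITS_mSv[region] // dose_behind_shield)
    return result

# Hypothetical mean doses (mSv) per diagnostic procedure for the primary physician
print(amn({"eye_lens": 0.078, "extremities": 0.35, "chest": 0.05}))
```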

Keywords: interventional radiology, radiation protection, occupationally exposed individual, hemodynamic

Procedia PDF Downloads 363
392 Informational Habits and Ideology as Predictors for Political Efficacy: A Survey Study of the Brazilian Political Context

Authors: Pedro Cardoso Alves, Ana Lucia Galinkin, José Carlos Ribeiro

Abstract:

Political participation can be a somewhat tricky subject to define, in no small part due to constant changes in the concept, the fruit of efforts to include new forms of participatory behavior that go beyond traditional institutional channels. With the advent of the internet and mobile technologies, defining political participation has become an even more complicated endeavor, given the breadth of politicized behaviors expressed through these mediums, be it in the very organization of social movements, in the propagation of politicized texts, videos and images, or in the micropolitical behaviors expressed in daily interaction. In fact, the very frontiers that delimit physical and digital spaces have become ever more diluted due to technological advancements, leading to a hybrid existence that is simultaneously physical and digital, no longer limited, as it once was, to the temporal limitations of classic communications. Moving away from the institutionalized actions of traditional political behavior, an idea of constant and fluid participation, which occurs in our daily lives through conversations, posts, tweets and other digital forms of expression, is discussed. This discussion focuses on the factors that precede more direct forms of political participation, interpreting the relation between informational habits, ideology, and political efficacy. Though some informational habits are considered political participation by some authors, a distinction is made here to establish a logical flow of behaviors leading to participation; that is, one must gather and process information before acting on it. To reach this objective, a quantitative survey is currently being applied in Brazilian social media, evaluating feelings of political efficacy, social and economic issue-based ideological stances, and informational habits pertaining to collection, fact-checking, and the diversity of sources and ideological positions present in the participant’s political information network. The measure being used for informational habits relies strongly on a mix of information literacy and political sophistication concepts, bringing a more up-to-date understanding of information and knowledge production and processing in contemporary hybrid (physical-digital) environments. Though data is still being collected, preliminary analysis points towards a strong correlation between informational habits and political efficacy, while ideology shows a weaker influence on efficacy. Moreover, social ideology and economic ideology seem to be strongly correlated in the sample; such intermingling between social and economic ideals is generally considered a red flag for political polarization.

Keywords: political efficacy, ideology, information literacy, cyberpolitics

Procedia PDF Downloads 215
391 Assessment of the Impact of Social Compliance Certification on Abolition of Forced Labour and Discrimination in the Garment Manufacturing Units in Bengaluru: A Perspective of Women Sewing Operators

Authors: Jonalee Das Bajpai, Sandeep Shastri

Abstract:

The Indian textile and garment industry is one of the major contributors to the country’s economy. It is also one of the largest labour-intensive industries after agriculture and livestock. The Indian garment industry caters to both the domestic and international markets. Although this industry comes under the purview of Indian labour laws and other voluntary workplace standards, it is often criticized for undue exploitation of workers. This paper explored the status of forced labour and discrimination at the workplace in garment manufacturing units in Bengaluru. The study was conducted from the perspective of women sewing operators, as the majority of operators in Bengaluru are women. The research also studied the impact of social compliance certification on abolishing forced labour and discrimination at the workplace. Objectives of the research: 1. To study the impact of 'Social Compliance Certification' on the abolition of forced labour among the women workforce. 2. To study the impact of 'Social Compliance Certification' on the abolition of discrimination at the workplace among the women workforce. Sample size and data collection techniques: The primary data, which forms the backbone of the study, was collected through a structured questionnaire. The questionnaire attempted to explore the extent of prevalence of forced labour and discrimination against women workers from the perspective of the women workers themselves. The sample consisted of 600 (n) women sewing operators from the garment industry with a minimum of one year of work experience. Three hundred samples were selected from units with social compliance certifications such as SA8000, WRAP, BSCI, ETI and so on. The other three hundred samples were selected from units without social compliance certification; of these, one hundred and fifty were selected from units with a Buyer’s Code of Conduct and another one hundred and fifty from domestic units that do not come under the purview of any such certification. The survey responses were further authenticated through on-site visits and personal interactions. A comparative analysis of the workplace environment between units with social compliance certification, units with a Buyer’s Code of Conduct, and domestic units not covered by any such voluntary workplace standard made it possible to analyze the impact of social compliance certification on the abolition of forced labour and discrimination at the workplace. Correlation analysis was conducted to measure the relationship between the abolition of forced labour and discrimination at the workplace and the level of job satisfaction. The results showed that the abolition of forced labour and the abolition of discrimination at the workplace are associated with a higher level of job satisfaction among the women workers.

Keywords: discrimination, garment industry, forced labour, social compliance certification

Procedia PDF Downloads 174
390 Clinical Efficacy and Tolerability of Dropsordry™ in Spanish Perimenopausal Women with Urgency Urinary Incontinence (UUI)

Authors: J. A. Marañón, L. Lozano C. De Los Santos, L. Martínez-Campesino, E. Caballero-Garrido, F. Galán-Estella

Abstract:

Urinary incontinence (UI) is a significant health problem with considerable social and economic impact. An estimated 30% of women aged 30 to 60 years have urinary incontinence, while more than 50% of community-dwelling older women have the condition. Stress urinary incontinence and overactive bladder are the common types of incontinence. The prevalence of stress and mixed (stress and urge) incontinence is higher than that of urge incontinence, but the latter is more likely to require treatment. In women, moderate and severe UI have a prevalence ranging from about 12% to 17%. The objective of this study was to examine the effect of supplementation with tablets containing Dropsordry in women with urge urinary incontinence (UUI). Dropsordry is a novel active ingredient containing phytoestrogens from SOLGEN, a high-genistin soy bean extract, and pyrogallol plus polyphenols from a standardized pumpkin seed extract. The study was a single-center, non-randomized, open, prospective study. Twenty-eight women aged ≥45 years with urinary incontinence were enrolled (age range 45-62 years, mean 52 years). Items related to UI symptoms were collected at baseline (T0) and reviewed at the end of the study at 8 weeks (T2). The presence of UI was diagnosed at baseline using the International Continence Society (ICS) standards. Relationships between the presence of UI and potentially related factors such as diabetes were also explored. A daily urinary test control was performed during the 8 weeks of treatment. The daily dosage was 1 g/day (500 mg twice per day) from week 0 to week 4 (T1), followed by a daily intake of 500 mg/day from week 4 to week 8 (T2). After eight weeks of treatment, the urgency grade score was reduced by 24.7% and the total number of urge episodes by 46%. Surprisingly, there was no significant change in daytime urinations (< 5%); however, nocturia was reduced by 69.35%. Stress urinary incontinence (SUI) was also tested, showing a remarkable 52.17% reduction, and the use of daily pantyliners was reduced by 66.25%. In addition, a panel survey was performed with questionnaires administered when the subjects were enrolled (T0) and again after 8 weeks of supplementation (T2). All (100%) of the enrolled women completed the ICIQ-SF questionnaire (Spanish version) and were also questioned about the effects they noticed in response to taking the supplement and the change in their quality of life. Interestingly, no side effects were reported. There was 96.2% subjective satisfaction and an 85.8% objective score in the improvement of quality of life. Conclusion: the combination of high-genistin isoflavones and pumpkin seed pyrogallol in Dropsordry tablets seems to be a safe and highly effective supplement for relieving urinary incontinence symptoms and improving quality of life in perimenopausal women.

Keywords: isoflavones, pumpkin, menopause, incontinence, genistin

Procedia PDF Downloads 369
389 Determination of the Relative Humidity Profiles in an Internal Micro-Climate Conditioned Using Evaporative Cooling

Authors: M. Bonello, D. Micallef, S. P. Borg

Abstract:

Driven by increased comfort standards and, at the same time, by high energy consciousness, energy-efficient space cooling has become an essential aspect of building design. Its aim is simple: to provide satisfactory thermal comfort for individuals in an interior space using low-energy-consumption cooling systems. In this context, evaporative cooling is both an energy-efficient and an eco-friendly cooling process. In the past two decades, several academic studies have been performed to determine the thermal comfort produced by an evaporative cooling system, including studies on temperature profiles, air speed profiles, and the effects of clothing and personnel activity. To the best knowledge of the authors, no studies have yet considered the analysis of relative humidity (RH) profiles in a space cooled using evaporative cooling. Such a study will determine the effect of different humidity levels on a person's thermal comfort and aid in improving the design of future systems. Under this premise, the research objective is to characterise the resulting RH profiles in a chamber micro-climate cooled by an evaporative cooling system in which the inlet air speed, temperature and humidity content are varied. The chamber is modelled using Computational Fluid Dynamics (CFD) in ANSYS Fluent. Relative humidity is modelled using a species transport model, while the k-ε RNG formulation is the proposed turbulence model. The model is validated with measurements taken in an identical test chamber in which tests are conducted under the different inlet conditions mentioned above, followed by verification of the model's mesh and time step. The verified and validated model is then used to simulate other inlet conditions which would be impractical to reproduce in the actual chamber. More details of the modelling and experimental approach will be provided in the full paper. The main conclusions from this work are two-fold: the micro-climatic relative humidity spatial distribution within the room is important to consider when investigating comfort at occupant level; and a human being's thermal comfort (based on Predicted Mean Vote – Predicted Percentage Dissatisfied [PMV-PPD] values) varies with the local relative humidity conditions. The study provides the necessary groundwork for investigating the micro-climatic RH conditions of environments cooled using evaporative cooling. Future work may also target the analysis of ways in which evaporative cooling systems may be improved to better the thermal comfort of human beings, specifically relating to the humidity content around a sedentary person.
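
As a small aside on the comfort metric itself, the ISO 7730 relation between PMV and PPD used when post-processing such CFD results can be written directly; this only illustrates the comfort index, not the species-transport humidity model:

```python
# PPD as a function of PMV, per the ISO 7730 correlation.
import math

def ppd_from_pmv(pmv: float) -> float:
    """Predicted Percentage of Dissatisfied (%) for a given Predicted Mean Vote."""
    return 100.0 - 95.0 * math.exp(-0.03353 * pmv**4 - 0.2179 * pmv**2)

# Example: a slightly warm sensation (PMV = +0.5) at a sampled point
print(round(ppd_from_pmv(0.5), 1), "% dissatisfied")   # ~10.2 %
```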

Keywords: chamber micro-climate, evaporative cooling, relative humidity, thermal comfort

Procedia PDF Downloads 136
388 The Role of High-Intensity Focused Ultrasound (HIFU) in the Treatment of Fibroadenomas: A Systematic Review

Authors: Ahmed Gonnah, Omar Masoud, Mohamed Abdel-Wahab, Ahmed ElMosalamy, Abdulrahman Al-Naseem

Abstract:

Introduction: Fibroadenomas are solid, mobile, and non-tender benign breast lumps, with the highest prevalence amongst young women aged between 15 and 35. Symptoms can include discomfort, and they can become problematic, particularly when they enlarge, resulting in many referrals for biopsies, with fibroadenomas accounting for 30-75% of the cases. Diagnosis is based on triple assessment involving a clinical examination, ultrasound imaging and mammography, as well as core needle biopsies. Current management includes observation for 6-12 months, with definitive surgery indicated in cases older than 35 years or with fibroadenoma persistence. Serious adverse effects of surgery can include nipple-areolar distortion, scarring and damage to the breast tissue, as well as the risks associated with surgery and anesthesia, making it a non-feasible option. Methods: A literature search was performed on the databases EMBASE, MEDLINE/PubMed, Google Scholar and Ovid for English-language papers published between 1st of January 2000 and 17th of March 2021. A structured protocol was employed to devise a comprehensive search strategy with keywords and Boolean operators defined by the research question. The keywords used for the search were ‘HIFU’, ‘High-Intensity Focused Ultrasound’, ‘Fibroadenoma’, ‘Breast’, ‘Lesion’. This review was carried out in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Results: Recently, a thermal ablative technique, High-Intensity Focused Ultrasound (HIFU), was found to be a safe, non-invasive, and technically successful alternative, having displayed promising outcomes in reducing the volume of fibroadenomas, the pain experienced by patients, and the length of hospitalization. Improvement in quality of life was also evidenced, exhibited by the disappearance of symptoms and enhanced physical activity post-intervention, in addition to patients’ satisfaction with the cosmetic results and their willingness to recommend the procedure to other patients. Conclusion: Overall, HIFU is a well-tolerated treatment associated with a low risk of complications, which can potentially include erythema, skin discoloration and bruising, the majority of these self-resolving shortly after the procedure.

Keywords: ultrasound, HIFU, breast, efficacy, side effects, fibroadenoma

Procedia PDF Downloads 185
387 Measurement of Fatty Acid Changes in Post-Mortem Belowground Carcass (Sus-scrofa) Decomposition: A Semi-Quantitative Methodology for Determining the Post-Mortem Interval

Authors: Nada R. Abuknesha, John P. Morgan, Andrew J. Searle

Abstract:

Information regarding the post-mortem interval (PMI) is vital in criminal investigations to establish a time frame when reconstructing events. PMI is defined as the time period that has elapsed between the occurrence of death and the discovery of the corpse. Adipocere, commonly referred to as ‘grave wax’, is formed when post-mortem adipose tissue is converted into a solid material that is heavily comprised of fatty acids. Adipocere is of interest to forensic anthropologists, as its formation is able to slow down the decomposition process. Therefore, analysing the changes in fatty acid patterns during the early decomposition process may make it possible to estimate the period of burial, and hence the PMI. The current study investigated the fatty acid composition and patterns in buried pig fat tissue, in an attempt to determine whether particular patterns of fatty acid composition are associated with the duration of burial and hence may be used to estimate PMI. Adipose tissue from the abdominal region of domestic pigs (Sus scrofa) was used to model the human decomposition process. A 17 x 20 cm piece of pork belly was buried in a shallow artificial grave, and weekly samples (n=3) of the buried pig fat tissue were collected over an 11-week period. The marker fatty acids palmitic (C16:0), oleic (C18:1n-9) and linoleic (C18:2n-6) acid were extracted from the buried pig fat tissue and analysed as fatty acid methyl esters using a gas chromatography system. Levels of the marker fatty acids were quantified from their respective standards. The concentrations of C16:0 (69.2 mg/mL) and C18:1n-9 (44.3 mg/mL) at time zero exhibited significant fluctuations during the burial period. Levels rose (to 116 and 60.2 mg/mL, respectively) and then fell from the second week to reach 19.3 and 18.3 mg/mL, respectively, at week 6. Levels showed another increase at week 9 (66.3 and 44.1 mg/mL, respectively) followed by a gradual decrease at week 10 (20.4 and 18.5 mg/mL, respectively). A sharp increase was observed in the final week (131.2 and 61.1 mg/mL, respectively). Conversely, the levels of C18:2n-6 remained more or less constant throughout the study. In addition to fluctuations in the concentrations, several new fatty acids appeared in the latter weeks, while other fatty acids that were detectable in the time-zero sample were lost in the latter weeks. There are several probable opportunities to utilise fatty acid analysis as a basic technique for approximating PMI: the quantification of marker fatty acids and the detection of selected fatty acids that either disappear or appear during the burial period. This pilot study indicates that this may be a potential semi-quantitative methodology for determining the PMI. Ideally, the analysis of particular fatty acid patterns in the early stages of decomposition could be an additional tool to the techniques already available for estimating the PMI of a corpse.
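
A sketch of the quantification step: fitting a linear calibration curve (peak area vs. concentration) from the external standards of each marker fatty acid, then back-calculating sample concentrations from their peak areas. The standard levels and peak areas below are hypothetical, chosen so the example sample lands near the reported time-zero palmitic acid value of 69.2 mg/mL:

```python
# External-standard calibration and back-calculation for a GC-FAME assay.
import numpy as np

def calibrate(std_conc_mg_ml, std_peak_area):
    """Return (slope, intercept) of the least-squares calibration line."""
    slope, intercept = np.polyfit(std_conc_mg_ml, std_peak_area, deg=1)
    return slope, intercept

def quantify(peak_area, slope, intercept):
    """Back-calculate the concentration (mg/mL) from a sample peak area."""
    return (peak_area - intercept) / slope

# Hypothetical C16:0 (palmitic acid) calibration standards and a week-0 sample
slope, intercept = calibrate([10, 25, 50, 100, 150],
                             [1.1e5, 2.8e5, 5.6e5, 1.12e6, 1.68e6])
print(round(quantify(7.75e5, slope, intercept), 1), "mg/mL")
```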

Keywords: adipocere, fatty acids, gas chromatography, post-mortem interval

Procedia PDF Downloads 103
386 Modeling of the Biodegradation Performance of a Membrane Bioreactor to Enhance Water Reuse in Agri-food Industry - Poultry Slaughterhouse as an Example

Authors: Masmoudi Jabri Khaoula, Zitouni Hana, Bousselmi Latifa, Akrout Hanen

Abstract:

Mathematical modeling has become an essential tool for sustainable wastewater management, particularly for the simulation and optimization of the complex processes involved in activated sludge systems. In this context, the activated sludge model ASM3h was used for the simulation of a membrane bioreactor (MBR), as this system integrates biological wastewater treatment with physical separation by membrane filtration. In this study, an MBR with a working volume of 12.5 L was fed continuously with poultry slaughterhouse wastewater (PSWW) for 50 days at a feed rate of 2 L/h and a hydraulic retention time (HRT) of 6.25 h. Throughout its operation, high removal efficiency was observed for organic pollutants, with 84% COD removal. Moreover, the MBR generated a treated effluent that complies with the limits for discharge into the public sewer according to the Tunisian standards set in March 2018. For the nitrogenous compounds, average concentrations of nitrate and nitrite in the permeate reached 0.26±0.3 mg.L-1 and 2.2±2.53 mg.L-1, respectively. The simulation of the MBR process was performed using SIMBA software v 5.0. The state variables employed in the steady-state calibration of ASM3h were determined using physical and respirometric methods. The model calibration was performed using experimental data obtained during the first 20 days of MBR operation. Afterwards, the kinetic parameters of the model were adjusted and the simulated values of COD, N-NH4+ and N-NOx were compared with those obtained experimentally. A good prediction was observed for the COD, N-NH4+ and N-NOx concentrations, with 467 g COD/m³, 110.2 g N/m³ and 3.2 g N/m³ compared to the experimental values of 436.4 g COD/m³, 114.7 g N/m³ and 3 g N/m³, respectively. For the validation of the model under dynamic simulation, the results of the experiments obtained during the second treatment phase of 30 days were used. The model was shown to simulate the conditions accurately, yielding a similar pattern in the variation of the COD concentration. On the other hand, an underestimation of the N-NH4+ concentration was observed during the simulation compared to the experimental results, and the measured N-NO3 concentrations were lower than the predicted ones. This difference could be explained by the fact that the ASM models were mainly designed for the simulation of biological processes in activated sludge systems; in addition, more treatment time could be required by the autotrophic bacteria to achieve complete and stable nitrification. Overall, this study demonstrated the effectiveness of mathematical modeling in predicting the performance of MBR systems with respect to organic pollution; the model can be further improved for the simulation of nutrient removal over a longer treatment period.
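
To give a flavour of the kind of mass-balance ODEs such simulators solve, here is a deliberately simplified single-substrate Monod chemostat sketch of the MBR biology (complete biomass retention by the membrane, continuous feed at HRT = 6.25 h). It is not ASM3h/SIMBA, and the parameter values and feed strength are hypothetical:

```python
# Toy MBR mass balance: soluble COD (S) and biomass (X) with Monod growth.
import numpy as np
from scipy.integrate import solve_ivp

S_in = 2500.0        # feed COD, g/m3 (hypothetical poultry-slaughterhouse strength)
HRT_d = 6.25 / 24.0  # hydraulic retention time in days
mu_max, K_s, Y, b = 4.0, 60.0, 0.5, 0.2   # Monod parameters (1/d, g/m3, gX/gCOD, 1/d)

def mbr(t, y):
    S, X = y
    mu = mu_max * S / (K_s + S)
    dS = (S_in - S) / HRT_d - mu * X / Y   # membrane retains X; only S washes out
    dX = (mu - b) * X                      # no sludge wasting in this toy model
    return [dS, dX]

sol = solve_ivp(mbr, (0.0, 50.0), [S_in, 500.0], method="Radau",
                t_eval=np.linspace(0, 50, 11))
print("effluent COD after 50 d ≈", round(sol.y[0, -1], 1), "g/m3")
```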

Keywords: activated sludge model (ASM3h), membrane bioreactor (MBR), poultry slaughter wastewater (PSWW), reuse

Procedia PDF Downloads 28
385 Modelling the Behavior of Commercial and Test Textiles against Laundering Process by Statistical Assessment of Their Performance

Authors: M. H. Arslan, U. K. Sahin, H. Acikgoz-Tufan, I. Gocek, I. Erdem

Abstract:

Various exterior factors have continual effects on textile materials during wear, use and laundering in everyday life. In accordance with their frequency of use, textile materials are required to be laundered at certain intervals. The medium in which the laundering process takes place has inevitable detrimental physical and chemical effects on textile materials, caused by the parameters inherent to the process. The distinct structures of various textile materials result in many different physical, chemical and mechanical characteristics, and because of these specific structures, the materials behave differently towards several exterior factors. By modeling the behavior of commercial and test textiles group-wise against the laundering process, it is possible to disclose the relation between these two groups of materials, leading to a better understanding of the similarities and differences in their behavior with respect to the washing parameters of laundering. Thus, the goal of the current research is to examine the behavior of two groups of textile materials, commercial textiles and test textiles, towards the main washing machine parameters of the laundering process, namely temperature, load quantity, mechanical action and water level, by concentrating on shrinkage, pilling, sewing defects, collar abrasion, defects other than sewing, whitening and the overall properties of the textiles. In this study, cotton fabrics were chosen as the commercial textiles because garments made of cotton are the products most in demand among textile consumers in daily life. A full factorial experimental set-up was used to design the experimental procedure. All profiles, each including all of the commercial and test textiles, were laundered for 20 cycles in a commercial home laundering machine to investigate the effects of the chosen parameters. For the laundering process, a modified version of the ‘‘IEC 60456 Test Method’’ was utilized. The amount of detergent was adjusted as 0.5 gram per liter depending on the varying load quantity levels. Datacolor 650®, the EMPA Photographic Standards for Pilling Test and visual examination were utilized to test and characterize the textiles. Furthermore, in the current study the relation between commercial and test textiles in terms of their performance was investigated in depth with the help of statistical analysis performed with the MINITAB® package program, modeling their behavior against the parameters of the laundering process. In the experimental work, the behavior of both groups of textiles towards the washing machine parameters was assessed visually and quantitatively in the dry state.
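
By way of illustration, a hedged sketch of how one full-factorial response (here, pilling grade after 20 cycles) could be analysed with a main-effects ANOVA; the factor levels, column names and data file are hypothetical, and the original work used MINITAB rather than Python:

```python
# Main-effects ANOVA on a full-factorial laundering experiment.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

df = pd.read_csv("laundering_full_factorial.csv")   # hypothetical export
# expected columns: temperature, load, mechanical_action, water_level,
# textile_group ("commercial"/"test"), pilling_grade

model = ols("pilling_grade ~ C(temperature) + C(load) + C(mechanical_action)"
            " + C(water_level) + C(textile_group)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))   # which washing parameters matter most
```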

Keywords: behavior against washing machine parameters, performance evaluation of textiles, statistical analysis, commercial and test textiles

Procedia PDF Downloads 330
384 Stability Indicating RP – HPLC Method Development, Validation and Kinetic Study for Amiloride Hydrochloride and Furosemide in Pharmaceutical Dosage Form

Authors: Jignasha Derasari, Patel Krishna M, Modi Jignasa G.

Abstract:

Chemical stability of pharmaceutical molecules is a matter of great concern, as it affects the safety and efficacy of the drug product. Stability testing data provide the basis for understanding how the quality of a drug substance and drug product changes with time under the influence of various environmental factors. Besides this, they also help in selecting the proper formulation and package as well as the proper storage conditions and shelf life, which is essential for regulatory documentation. The ICH guideline states that stress testing is intended to identify the likely degradation products, which further helps in determining the intrinsic stability of the molecule, establishing degradation pathways and validating the stability indicating procedures. A simple, accurate and precise stability indicating RP-HPLC method was developed and validated for the simultaneous estimation of Amiloride Hydrochloride and Furosemide in tablet dosage form. Separation was achieved on a Phenomenex Luna ODS C18 column (250 mm × 4.6 mm i.d., 5 µm particle size) using a mobile phase consisting of orthophosphoric acid:acetonitrile (50:50 %v/v, pH 3.5 adjusted with 0.1% TEA in water) at a flow rate of 1.0 ml/min in isocratic mode, with an injection volume of 20 µl and detection at 283 nm. Retention times for Amiloride Hydrochloride and Furosemide were 1.810 min and 4.269 min, respectively. Linearity of the proposed method was obtained in the ranges of 40-60 µg/ml and 320-480 µg/ml, with correlation coefficients of 0.999 and 0.998 for Amiloride Hydrochloride and Furosemide, respectively. A forced degradation study was carried out on the combined dosage form under various stress conditions, namely hydrolysis (acid and base), oxidative and thermal conditions, as per ICH guideline Q2(R1). The RP-HPLC method showed adequate separation of Amiloride Hydrochloride and Furosemide from their degradation products. The proposed method was validated as per ICH guidelines for specificity, linearity, accuracy, precision and robustness for the estimation of Amiloride Hydrochloride and Furosemide in a commercially available tablet dosage form, and the results were found to be satisfactory and significant. The developed and validated stability indicating RP-HPLC method can be used successfully for marketed formulations. Forced degradation studies help in generating degradants in a much shorter span of time, mostly a few weeks, and can be used to develop the stability indicating method, which can later be applied to the analysis of samples generated from accelerated and long-term stability studies. Further, a kinetic study was also performed for the different forced degradation parameters of the same combination, which helps in determining the order of the reaction.
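
Since the abstract closes with a kinetic study used to determine the order of the degradation reaction, the sketch below shows one common way to do this: fit ln(concentration) against stress time and read the first-order rate constant from the slope. The concentration values are hypothetical and serve only to illustrate the fit, not the study's data.

```python
# Minimal sketch of estimating a first-order degradation rate constant from
# forced-degradation data: for first-order kinetics, ln(C) falls linearly with
# time and the slope gives -k. The data points below are hypothetical.
import numpy as np

time_h = np.array([0, 2, 4, 8, 12])                # stress time (hours)
conc = np.array([100.0, 88.0, 77.0, 60.0, 46.0])   # % drug remaining (illustrative)

slope, intercept = np.polyfit(time_h, np.log(conc), 1)
k = -slope                   # first-order rate constant (1/h)
t_half = np.log(2) / k       # corresponding half-life

print(f"k = {k:.4f} 1/h, t1/2 = {t_half:.1f} h")
```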

Keywords: amiloride hydrochloride, furosemide, kinetic study, stability indicating RP-HPLC method validation

Procedia PDF Downloads 439
383 Security Issues in Long Term Evolution-Based Vehicle-To-Everything Communication Networks

Authors: Mujahid Muhammad, Paul Kearney, Adel Aneiba

Abstract:

The ability of vehicles to communicate with other vehicles (V2V), the physical (V2I) and network (V2N) infrastructures, pedestrians (V2P) and so on, collectively known as V2X (Vehicle-to-Everything), will enable a broad and growing set of applications and services within the intelligent transport domain for improving road safety, alleviating traffic congestion and supporting autonomous driving. The telecommunication research and industry communities and standardization bodies (notably 3GPP) have approved, in Release 14, cellular connectivity to support V2X communication (known as LTE-V2X). The LTE-V2X system will combine simultaneous connectivity across existing LTE network infrastructures via the LTE-Uu interface with direct device-to-device (D2D) communications. In order for V2X services to function effectively, a robust security mechanism is needed to ensure legal and safe interaction among authenticated V2X entities in the LTE-based V2X architecture. The characteristics of vehicular networks, and the nature of most V2X applications, which involve human safety, make it essential to protect V2X messages from attacks that can result in catastrophically wrong decisions or actions, including ones affecting road safety. Attack vectors include impersonation, modification, masquerading, replay, man-in-the-middle (MitM) and Sybil attacks. In this paper, we focus our attention on LTE-based V2X security and access control mechanisms. The current LTE-A security framework provides its own access authentication scheme, the AKA protocol, for mutual authentication and other essential cryptographic operations between UEs and the network. V2N systems can leverage this protocol to achieve mutual authentication between vehicles and the mobile core network. However, this protocol faces technical challenges, such as high signaling overhead, lack of synchronization, handover delay and potential control plane signaling overloads, as well as privacy preservation issues, and therefore cannot satisfy the security requirements of the majority of LTE-based V2X services. This paper examines these challenges and points to possible ways in which they can be addressed. One possible solution is the implementation of a distributed peer-to-peer LTE security mechanism based on the Bitcoin/Namecoin framework, allowing security operations with minimal overhead cost, which is desirable for V2X services. The proposed architecture can ensure fast, secure and robust V2X services on the LTE network while meeting V2X security requirements.
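
As a rough, hedged illustration of the Namecoin-style idea mentioned above, the toy sketch below registers a vehicle pseudonym as a name-to-public-key-hash record in an in-memory store and verifies a later claim against it. It is a conceptual sketch only, with no networking or consensus, and does not reproduce the paper's proposed architecture.

```python
# Toy sketch of the name/value registration idea behind a Namecoin-style
# identity store for V2X pseudonyms: a pseudonym maps to a hash of the
# vehicle's public key, and peers check claimed keys against that record.
# Conceptual illustration only; no distributed ledger is implemented here.
import hashlib

registry = {}  # pseudonym -> hex digest of the registered public key

def register(pseudonym: str, public_key: bytes) -> bool:
    """Register a pseudonym if it is not already taken (first-come, first-served)."""
    if pseudonym in registry:
        return False
    registry[pseudonym] = hashlib.sha256(public_key).hexdigest()
    return True

def verify(pseudonym: str, public_key: bytes) -> bool:
    """Check that a claimed public key matches the registered record."""
    return registry.get(pseudonym) == hashlib.sha256(public_key).hexdigest()

register("vehicle-anon-42", b"example-public-key-bytes")
print(verify("vehicle-anon-42", b"example-public-key-bytes"))   # True
print(verify("vehicle-anon-42", b"spoofed-public-key-bytes"))   # False
```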

Keywords: authentication, long term evolution, security, vehicle-to-everything

Procedia PDF Downloads 146
382 Smart BIM Documents - the Development of the Ontology-Based Tool for Employer Information Requirements (OntEIR), and its Transformation into SmartEIR

Authors: Shadan Dwairi

Abstract:

Defining proper requirements is one of the key factors for a successful construction project. Although many attempts have been put forward to assist in identifying requirements, this area remains underdeveloped in Building Information Modelling (BIM) projects. The Employer Information Requirements (EIR) is the fundamental requirements document and a necessary ingredient in achieving a successful BIM project, and the provision of a full and clear EIR is essential to achieving BIM Level 2. As defined by PAS 1192-2, the EIR is a “pre-tender document that sets out the information to be delivered and the standards and processes to be adopted by the supplier as part of the project delivery process”. It also notes that the “EIR should be incorporated into tender documentation to enable suppliers to produce an initial BIM Execution Plan (BEP)”. The importance of an effective definition of the EIR lies in its contribution to better productivity during the construction process in terms of cost and time, in addition to improving the quality of the built asset. Proper and clear information is a key aspect of the EIR, in terms of the information it contains and, more importantly, the information the client receives at the end of the project, which will enable the effective management and operation of the asset, where typically about 60%-80% of the cost is spent. This paper reports on the research done in developing the Ontology-based tool for Employer Information Requirements (OntEIR). OntEIR has proven its ability to produce a full and complete set of EIRs, which ensures that the client's information needs for the final model delivered by BIM are clearly defined from the beginning of the process. It also reports on the work being done to transform OntEIR into a smart tool for defining Employer Information Requirements (smartEIR). smartEIR extends the OntEIR tool so that it can develop custom EIRs tailored to the project type, the project requirements and the client capabilities. The initial idea behind smartEIR is to move away from the notion that “one EIR fits all”. smartEIR utilizes the links made in OntEIR, creating a 3D matrix that transforms it into a smart tool. The OntEIR tool is based on the OntEIR framework, which utilizes both ontology and the decomposition of goals to elicit and extract the complete set of requirements needed for a full and comprehensive EIR. A new categorisation system for requirements is also introduced in the framework and tool, which facilitates the understanding and enhances the clarification of the requirements, especially for novice clients. Findings of the evaluation of the tool, carried out with experts in the industry, showed that the OntEIR tool contributes towards the effective and efficient development of EIRs that provide a better understanding of the information requirements as requested by BIM, and support the production of a complete BIM Execution Plan (BEP) and a Master Information Delivery Plan (MIDP).
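
As a loose illustration of the "3D matrix" idea described above (project type × client capability × requirement category), the sketch below filters a small requirement set along those three axes; the example requirements, categories and capability levels are invented for illustration and do not reproduce the OntEIR ontology.

```python
# Toy sketch of tailoring an EIR by project type, client capability and
# requirement category, loosely following the "3D matrix" idea in the abstract.
# The example requirements and categories are invented for illustration only.
requirements = [
    {"text": "Deliver a federated BIM model at agreed design stages",
     "project_types": {"new-build", "refurbishment"},
     "min_client_capability": 1, "category": "management"},
    {"text": "Provide COBie data drops at handover",
     "project_types": {"new-build"},
     "min_client_capability": 2, "category": "technical"},
    {"text": "Define clash-detection responsibilities",
     "project_types": {"new-build", "infrastructure"},
     "min_client_capability": 2, "category": "technical"},
]

def tailor_eir(project_type, client_capability, category=None):
    """Return the requirements relevant to this project/client combination."""
    return [r["text"] for r in requirements
            if project_type in r["project_types"]
            and client_capability >= r["min_client_capability"]
            and (category is None or r["category"] == category)]

print(tailor_eir("new-build", client_capability=2, category="technical"))
```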

Keywords: building information modelling, employer information requirements, ontology, web-based tool

Procedia PDF Downloads 106
381 Method of Complex Estimation of Text Perusal and Indicators of Reading Quality in Different Types of Commercials

Authors: Victor N. Anisimov, Lyubov A. Boyko, Yazgul R. Almukhametova, Natalia V. Galkina, Alexander V. Latanov

Abstract:

Modern commercials presented on billboards, on TV and on the Internet contain a lot of information about the product or service in text form. However, this information cannot always be perceived and understood by consumers. Typical sociological focus group studies often cannot reveal important features of how information read in text messages is interpreted and understood. In addition, there is no reliable method to determine the degree of understanding of the information contained in a text: the mere fact of viewing a text does not mean that the consumer has perceived and understood its meaning. At the same time, tools based on marketing analysis allow only an indirect estimation of the process of reading and understanding a text. Therefore, the aim of this work is to develop a valid method of recording objective indicators in real time for assessing the fact of reading and the degree of text comprehension. Psychophysiological parameters recorded during text reading can form the basis for this objective method. We studied the relationship between multimodal psychophysiological parameters and the process of text comprehension during reading using correlation analysis. We used eye-tracking technology to record eye movement parameters in order to estimate visual attention, electroencephalography (EEG) to assess cognitive load, and polygraphic indicators (skin-galvanic reaction, SGR) that reflect the emotional state of the respondent during text reading. We revealed reliable interrelations between perception of the information and the dynamics of the psychophysiological parameters during reading of the text in commercials. Eye movement parameters reflected the difficulties arising in respondents while perceiving ambiguous parts of the text. EEG dynamics in the alpha band were related to the cumulative effect of cognitive load. SGR dynamics were related to the emotional state of the respondent and to the meaning of the text and the type of commercial. EEG and polygraph parameters together also reflected the mental difficulties of respondents in understanding the text and showed significant differences between cases of low and high text comprehension. We also revealed differences in psychophysiological parameters for different types of commercials (static vs. video; financial vs. cinema vs. pharmaceutics vs. mobile communication, etc.). Conclusions: our methodology allows a multimodal evaluation of text perusal and of the quality of text reading in commercials. In general, our results indicate the possibility of designing an integral model to estimate comprehension of the commercial text on a percentage scale based on all of the observed markers.
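
As a minimal sketch of the correlation analysis described above, the snippet below computes Pearson correlations between a comprehension score and a few psychophysiological measures; all values are synthetic placeholders, not the study's recordings.

```python
# Minimal sketch of correlating psychophysiological measures with a text
# comprehension score, as in the method described above. All numbers are
# synthetic placeholders, not data recorded in the study.
import numpy as np

comprehension = np.array([0.9, 0.7, 0.4, 0.8, 0.3, 0.6])    # comprehension score per respondent
fixation_ms   = np.array([210, 260, 340, 230, 360, 280])    # mean fixation duration (eye tracking)
alpha_power   = np.array([0.8, 0.7, 0.4, 0.75, 0.35, 0.6])  # relative EEG alpha-band power
sgr_amplitude = np.array([1.1, 1.4, 2.0, 1.2, 2.3, 1.6])    # skin-galvanic reaction amplitude

for name, signal in [("fixation duration", fixation_ms),
                     ("EEG alpha power", alpha_power),
                     ("SGR amplitude", sgr_amplitude)]:
    r = np.corrcoef(comprehension, signal)[0, 1]
    print(f"r(comprehension, {name}) = {r:+.2f}")
```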

Keywords: reading, commercials, eye movements, EEG, polygraphic indicators

Procedia PDF Downloads 142
380 The Challenge of Assessing Social AI Threats

Authors: Kitty Kioskli, Theofanis Fotis, Nineta Polemi

Abstract:

The European Union (EU) Artificial Intelligence (AI) Act requires in Article 9 that the risk management of AI systems include both technical and human oversight, while the NIST AI RMF (Appendix C) and the ENISA AI Framework recommendations state that further research is needed to understand the current limitations of social threats and human-AI interaction. AI threats within social contexts significantly affect the security and trustworthiness of AI systems; they are interrelated and trigger technical threats as well. For example, lack of explainability (e.g. model complexity that is challenging for stakeholders to grasp) leads to misunderstandings, biases and erroneous decisions, which in turn impact the privacy, security and accountability of AI systems. Based on the four fundamental NIST criteria for explainability, explainability threats can be classified into four sub-categories: a) lack of supporting evidence: AI systems must provide supporting evidence or reasons for all their outputs; b) lack of understandability: explanations offered by systems should be comprehensible to individual users; c) lack of accuracy: the provided explanation should accurately represent the system's process of generating outputs; d) out of scope: the system should only function within its designated conditions or when it possesses sufficient confidence in its outputs. Biases may also stem from historical data reflecting undesired behaviors; when present in the data, biases can permeate the models trained on them, thereby influencing the security and trustworthiness of AI systems. Socially related AI threats are recognized by various initiatives (e.g. the EU Ethics Guidelines for Trustworthy AI), standards (e.g. ISO/IEC TR 24368:2022 on AI ethical concerns, ISO/IEC AWI 42105 on guidance for human oversight of AI systems) and EU legislation (e.g. the General Data Protection Regulation 2016/679, the NIS 2 Directive 2022/2555, the Directive on the Resilience of Critical Entities 2022/2557, the EU AI Act, the Cyber Resilience Act). Measuring social threats, estimating the risks they pose to AI systems and mitigating them is a research challenge. This paper presents the efforts of two European Commission projects (FAITH and THEMIS) from the Horizon Europe programme that analyse social threats by building cyber-social exercises in order to study human behaviour, traits, cognitive ability, personality, attitudes, interests and other socio-technical profile characteristics. The research in these projects also includes the development of measurements and scales (psychometrics) for human-related vulnerabilities that can be used to estimate vulnerability severity more realistically, enhancing the CVSS 4.0 measurement.

Keywords: social threats, artificial intelligence, mitigation, social experiment

Procedia PDF Downloads 33
379 Life Cycle Assessment-Based Environmental Assessment of the Production and Maintenance of Wooden Windows

Authors: Pamela Del Rosario, Elisabetta Palumbo, Marzia Traverso

Abstract:

The building sector plays an important role in addressing pressing environmental issues such as climate change and resource scarcity. The energy performance of buildings is considerably affected by the external envelope; in fact, a considerable proportion of the building energy demand is due to energy losses through the windows. Nevertheless, according to the literature, paying attention only to the contribution of windows to building energy performance, i.e., their influence on energy use during building operation, would result in a partial evaluation. Hence, it is important to consider not only the building energy performance but also the environmental performance of windows, and this not only during the operational stage but along their complete life cycle. Life Cycle Assessment (LCA) according to ISO 14040:2006 and ISO 14044:2006+A1:2018 is one of the most widely adopted and robust methods for evaluating the environmental performance of products throughout their complete life cycle. This life-cycle-based approach avoids shifting environmental impacts from one life cycle stage to another, allowing them to be allocated to the stage in which they originate and enabling measures that optimize the environmental performance of the product. Moreover, the LCA method is widely implemented in the construction sector to assess whole buildings as well as construction products and materials. LCA is regulated by the European standards EN 15978:2011, at the building level, and EN 15804:2012+A2:2019, at the level of construction products and materials. In this work, the environmental performance of wooden windows was assessed by implementing the LCA method and adopting primary data; the emphasis is placed on embedded and operational impacts. Furthermore, correlations are made between these environmental impacts and aspects such as the type of wood and the window transmittance. In the particular case of the operational impacts, special attention is paid to the definition of suitable maintenance scenarios that consider the potential influence of climate on the environmental impacts. For this purpose, a literature review was conducted and expert consultation was carried out. The study underlined the variability of the embedded environmental impacts of wooden windows by considering different wood types and transmittance values. The results also highlighted the need to define appropriate maintenance scenarios to obtain precise assessment results. It was found that both the service life and the window maintenance requirements, in terms of treatment and its frequency, are highly dependent not only on the wood type and its treatment during the manufacturing process but also on the weather conditions of the place where the window is installed. In particular, it became evident that maintenance-related environmental impacts were highest for climate regions with the lowest temperatures and the greatest amount of precipitation.
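
As a hedged sketch of how maintenance-related impacts depend on the re-treatment interval, the snippet below sums an embedded impact and the impacts of periodic maintenance treatments over a service life for two climate scenarios; the impact factors, service life and intervals are illustrative assumptions, not results of the study.

```python
# Minimal sketch of a life-cycle impact total for a wooden window under two
# maintenance scenarios. All numeric factors are illustrative assumptions,
# not values from the study.
def life_cycle_gwp(embedded_gwp, treatment_gwp, service_life_years, treatment_interval_years):
    """Embedded impact plus impacts from periodic maintenance treatments (kg CO2-eq)."""
    n_treatments = service_life_years // treatment_interval_years
    return embedded_gwp + n_treatments * treatment_gwp

scenarios = {
    "mild, dry climate": {"treatment_interval_years": 8},
    "cold, wet climate": {"treatment_interval_years": 4},
}

for name, s in scenarios.items():
    total = life_cycle_gwp(embedded_gwp=120.0, treatment_gwp=6.0,
                           service_life_years=40,
                           treatment_interval_years=s["treatment_interval_years"])
    print(f"{name}: {total:.0f} kg CO2-eq over 40 years")
```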

Keywords: embedded impacts, environmental performance, life cycle assessment, LCA, maintenance stage, operational impacts, wooden windows

Procedia PDF Downloads 207
378 Railway Ballast Volumes Automated Estimation Based on LiDAR Data

Authors: Bahar Salavati Vie Le Sage, Ismaïl Ben Hariz, Flavien Viguier, Sirine Noura Kahil, Audrey Jacquin, Maxime Convert

Abstract:

The ballast layer plays a key role in railroad maintenance and in the geometry of the track structure. Ballast also holds the track in place as trains roll over it; track ballast is packed between the sleepers and on the sides of railway tracks. An imbalance in the ballast volume on the tracks can lead to safety issues as well as to a quick degradation of the overall quality of the railway segment. If there is a lack of ballast in the track bed during the summer, there is a risk that the rails will expand and buckle slightly due to the high temperatures. Furthermore, knowledge of the ballast quantities that will be excavated during renewal works is important for efficient ballast management. The volume of excavated ballast per meter of track can be calculated from the excavation depth, the excavation width, the volume of the track skeleton (sleepers and rails) and the sleeper spacing. Since 2012, SNCF has been collecting 3D point cloud data covering its entire railway network using 3D laser scanning technology (LiDAR). This vast amount of data represents a model of the entire railway infrastructure, allowing various simulations to be conducted for maintenance purposes. This paper presents an automated method for ballast volume estimation based on the processing of LiDAR data. The estimation of abnormal ballast volumes on the tracks is performed by analyzing the cross-section of the track. Further, since the amount of ballast required varies depending on the track configuration, knowledge of the ballast profile is required. Prior to track rehabilitation, excess ballast is often present in the ballast shoulders. Based on the 3D laser scans, a Digital Terrain Model (DTM) was generated, and the ballast profiles are automatically extracted from this data. The surplus ballast is then estimated by comparing this empirically obtained ballast profile with a geometric model of the theoretical ballast profile thresholds dictated by maintenance standards. Ideally, this excess should be removed prior to renewal works and recycled to optimize the output of the ballast renewal machine. Based on these parameters, an application has been developed to allow the automatic measurement of ballast profiles. We evaluated the method on a 108-kilometer segment of railroad LiDAR scans, and the results show that the proposed algorithm detects ballast surplus quantities close to the total quantities of spoil ballast actually excavated.
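
As a simplified sketch of the surplus estimation described above, the snippet below compares an empirical cross-section profile with a theoretical threshold profile and integrates the positive difference to obtain a surplus area per cross-section, which, accumulated along the track, gives a volume. The profiles here are synthetic, not LiDAR-derived, and the threshold shape is an assumption for illustration.

```python
# Simplified sketch of estimating ballast surplus from one cross-section:
# compare the measured profile height with the theoretical profile threshold
# and integrate the positive excess across the section. Synthetic profiles.
import numpy as np

x = np.linspace(-3.0, 3.0, 121)                          # lateral offset from track axis (m)
theoretical = np.clip(0.5 - 0.25 * np.abs(x), 0, None)   # illustrative theoretical profile (m)
measured = theoretical + 0.08 * np.exp(-((np.abs(x) - 2.0) ** 2) / 0.1)  # excess on the shoulders

excess = np.clip(measured - theoretical, 0, None)        # only count material above the threshold
dx = x[1] - x[0]
surplus_area_m2 = float(np.sum(excess) * dx)             # rectangle-rule integration, m^2 per section
surplus_volume_m3_per_km = surplus_area_m2 * 1000        # assuming a constant section over 1 km

print(f"surplus: {surplus_area_m2:.3f} m2/section, about {surplus_volume_m3_per_km:.0f} m3/km")
```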

Keywords: ballast, railroad, LiDAR, point cloud, track ballast, 3D point

Procedia PDF Downloads 76
377 On Implementing Sumak Kawsay in Post Bellum Principles: The Reconstruction of Natural Damage in the Aftermath of War

Authors: Lisa Tragbar

Abstract:

In post-war scenarios, reconstruction is a principle for creating a Just Peace in order to restore a stable post-war society. Just peace theorists explore normative behaviour after war, including the duties and responsibilities of different actors and peacebuilding strategies to achieve a lasting, positive peace. Environmental peace ethicists have argued for including the role of nature in the ethics of war and peace. This text explores the question of why and how to rethink the value of nature in post-war scenarios. The aim is to include the rights of nature within a maximalist account of reconstruction by highlighting sumak kawsay in the post-war period. Destruction of nature is usually considered collateral damage in war scenarios. Common universal standards for post-war reconstruction are restitution, compensation and reparation programmes, which is a mostly anthropocentric approach. The problem of reconstruction in the aftermath of war is the instrumental value of nature; the responsibility to rebuild needs to be revisited within a non-anthropocentric context. There is an ongoing debate about a minimalist versus a maximalist approach to post-war reconstruction. While Michael Walzer argues for minimalist in-and-out interventions, Alex Bellamy argues for maximalist strategies such as the responsibility to protect, a UN concept on how to face mass atrocity crimes and how to reconstruct peace. While supporting the tradition of a maximalist responsibility to rebuild, these normative post bellum concepts do not yet sufficiently consider the rights of nature in the aftermath of war. While the reconstruction of infrastructure seems important and necessary, concepts that strengthen the intrinsic value of nature in post bellum measures must also be included. Peace is not Just Peace without a thriving nature that provides the conditions and resources to live and to guarantee human rights. Ecuador's indigenous philosophy of life can contribute to the restoration of nature after war by changing the perspective on the value of nature. Sumak kawsay includes the de-hierarchisation of humans and nature and the principle of reciprocity towards nature. Transferring this idea of life and interconnectedness to post-war reconstruction practices, post bellum perpetrators have restorative obligations not only to people but also to nature. This maximalist approach would include both a restitutive principle, by restoring the balance between humans and nature, and a retributive principle, by punishing the perpetrators through compensatory duties to nature. A maximalist approach to post-war reconstruction that takes into account the rights of nature expands the normative post-war questions to a more complex field of responsibilities. After a war, Just Peace is restored only once not only human rights but also the rights of nature are secured. A minimalist post bellum approach to reconstruction does not locate future problems at their source and does not offer a solution for the inclusion of obligations to nature. There is a lack of obligations towards nature after a war, which can be changed through a different perspective: the indigenous philosophy of life provides the necessary principles for a comprehensive reconstruction of Just Peace.

Keywords: normative ethics, peace, post-war, sumak kawsay, applied ethics

Procedia PDF Downloads 58
376 Computerized Scoring System: A Stethoscope to Understand Consumer's Emotion through His or Her Feedback

Authors: Chen Yang, Jun Hu, Ping Li, Lili Xue

Abstract:

Most companies pay careful attention to consumer feedback collection, which is why it is common to find a ‘feedback’ button in all kinds of mobile apps. Yet it is much more challenging to analyze these feedback texts and to capture the true feelings of the consumer handing out the feedback, whether regarding a problem or a compliment. Especially for Chinese content, the same feedback may express a positive judgment in one context and a negative one in another. For example, in Chinese, the feedback 'operating with loudness' can be written about both a refrigerator and a stereo system: towards a refrigerator this feedback is negative, whereas towards a stereo system the same feedback is positive. By introducing M. Bradley and P. Lang's Affective Norms for English Text (ANET) theory and W. Bucci's Referential Activity (RA) theory, we, usability researchers at Pingan, are able to decipher the feedback and to find the hidden feelings behind the content. We take two dimensions, ‘valence’ and ‘dominance’, out of the three of ANET, and two dimensions, ‘concreteness’ and ‘specificity’, out of the four of RA, to organize our own rating system with a scale of 1 to 5 points. This rating system enables us to judge the feeling/emotion behind each piece of feedback, and it works well both with a single word or phrase and with a whole paragraph. The result of the rating reflects the strength of the feeling/emotion of the consumer when he or she is typing the feedback. In our daily work, we first require the consumer to answer the net promoter score (NPS) question before writing the feedback, so that we can determine whether the feedback is positive or negative. Secondly, we code the feedback content against the company's problematic list, which contains 200 problematic items; in this way, we are able to determine how many pieces of feedback belong to each typical problem. Thirdly, we rate each piece of feedback with the rating system described above to capture the strength of the feeling/emotion with which the consumer writes it. In this way, we obtain two kinds of data: 1) the portion, meaning how many pieces of feedback are ascribed to one problematic item, and 2) the severity, meaning how strong the negative feeling/emotion is when the consumer writes this feedback. By crossing these two, with the portion on the X-axis and the severity on the Y-axis, we are able to find which typical problems score high on both portion and severity. The higher a problem scores, the more urgently it should be solved, as it means that more people write stronger negative feelings in their feedback regarding this problem. Moreover, by introducing a hidden Markov model to program our rating system, we are able to computerize the scoring system and to process thousands of pieces of feedback in a short period of time, which is efficient and accurate enough for industrial purposes.
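
The portion/severity crossing described above can be sketched as follows: each coded feedback entry carries a problem item and a 1-5 severity rating, the portion is the share of entries per item, the severity is the mean rating, and items are ranked by the product of the two. The coded entries and problem names below are invented for illustration, not drawn from the 200-item list.

```python
# Minimal sketch of the portion x severity prioritisation described above:
# portion = share of feedback entries coded to a problem item, severity = mean
# 1-5 rating of those entries. The coded entries below are invented examples.
from collections import defaultdict

coded_feedback = [  # (problem item, severity rating on the 1-5 scale)
    ("login failure", 5), ("login failure", 4), ("login failure", 5),
    ("slow loading", 3), ("slow loading", 2),
    ("unclear wording", 2),
]

ratings = defaultdict(list)
for item, score in coded_feedback:
    ratings[item].append(score)

total = len(coded_feedback)
ranked = sorted(ratings.items(),
                key=lambda kv: (len(kv[1]) / total) * (sum(kv[1]) / len(kv[1])),
                reverse=True)

for item, scores in ranked:
    portion = len(scores) / total
    severity = sum(scores) / len(scores)
    print(f"{item}: portion={portion:.2f}, severity={severity:.2f}")
```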

Keywords: computerized scoring system, feeling/emotion of consumer feedback, referential activity, text mining

Procedia PDF Downloads 145
375 The Performance Evaluation of the Modular Design of Hybrid Wall with Surface Heating and Cooling System

Authors: Selcen Nur Eri̇kci̇ Çeli̇k, Burcu İbaş Parlakyildiz, Gülay Zorer Gedi̇k

Abstract:

Reducing the use of mechanical heating and cooling systems in buildings, which account for approximately 30-40% of total energy consumption in the world, has a major impact in terms of energy conservation. In forming buildings with sustainable and low energy use, structural elements and mechanical systems should be evaluated with a holistic approach. With regard to reducing building energy consumption, wall elements, which are vertical building elements covering a broad area (m²), are proposed here as a different system. The study evaluates the design of surface heating and cooling with a hybrid type of modular wall system and its integration with the building elements. The design of the wall element will be formed through the identification of certain standards in terms of architectural design and size, elaboration according to the area where the wall elements are used (interior walls, exterior walls), the solution of the joints, and the obtaining of a surface compatible with both the conceptual and the structural requirements emphasized in the earlier stages. The durability of the product against various forces, its stability and its resistance are essential for establishing the ready-wall element section and planning the structural design. All of the created ready-wall alternatives will be assessed against certain parameters, such as an optimum performance-cost balance and sizes that can easily be processed and obtained. Restrictions imposed by building laws, such as sizes set by zoning regulations, building function, structural system and wheelbase, should be evaluated. The wall elements will be constructed and used according to a certain standardization system in line with the intended function of the building. Within the scope of the performance criteria determined for the wall elements, the utilization (operation, maintenance) and renovation phases and alternative material options, including the interim materials contained in the elements, will be evaluated. The design, implementation and technical combination of the modular wall elements in the use phase, together with the installation details and the integration of energy-saving and heat-saving measures and their beneficial environmental effects, will be discussed in detail. As a result, a ready-wall product with surface heating and cooling modules will be created, defined as a hybrid wall, and compared with the conventional system in terms of thermal comfort. After preliminary architectural evaluations, certain decisions for all architectural design processes (pre- and post-design), such as implementation, performance in use, maintenance and renewal, will be evaluated in the results.
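
A hedged sketch of the kind of thermal comparison mentioned at the end of the abstract is given below: the thermal transmittance (U-value) of a wall build-up is computed from layer thicknesses and conductivities as U = 1/(Rsi + Σ d/λ + Rse). The two build-ups and all material values are illustrative assumptions, not the studied hybrid ready-wall.

```python
# Minimal sketch comparing the thermal transmittance (U-value) of two wall
# build-ups, U = 1 / (Rsi + sum(d / lambda) + Rse). Layer data are illustrative
# assumptions, not the actual hybrid ready-wall element studied in the paper.
R_SI, R_SE = 0.13, 0.04  # typical internal/external surface resistances (m2K/W)

def u_value(layers):
    """layers: list of (thickness_m, conductivity_W_per_mK) tuples."""
    return 1.0 / (R_SI + sum(d / lam for d, lam in layers) + R_SE)

conventional = [(0.02, 0.87), (0.19, 0.45), (0.02, 0.87)]        # plaster, brick, plaster
hybrid_panel = [(0.0125, 0.25), (0.10, 0.035), (0.0125, 0.25)]   # board, insulation core, board

print(f"conventional wall: U = {u_value(conventional):.2f} W/m2K")
print(f"hybrid panel:      U = {u_value(hybrid_panel):.2f} W/m2K")
```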

Keywords: modular ready-wall element, hybrid, architectural design, thermal comfort, energy saving

Procedia PDF Downloads 228
374 Nuclear Materials and Nuclear Security in India: A Brief Overview

Authors: Debalina Ghoshal

Abstract:

Nuclear security is the ‘prevention and detection of, and response to, unauthorised removal, sabotage, unauthorised access, illegal transfer or other malicious acts involving nuclear or radiological material or their associated facilities.’ Ever since the end of the Cold War, nuclear materials security has remained a concern for global security, and with the increase in terrorist attacks, in India especially, the security of nuclear materials remains a priority. Therefore, India has made continued efforts to tighten security around its nuclear materials in order to prevent nuclear theft and radiological terrorism. Nuclear security is different from nuclear safety. Physical security is also a serious concern, and India has been careful about the physical security of its nuclear materials. This is all the more important since India is expanding its nuclear power capability to generate electricity for economic development. As India targets 60,000 MW of electricity production by 2030, it has a range of reactors to help it achieve this goal: indigenous Pressurised Heavy Water Reactors, now standardized at 700 MW per reactor; Light Water Reactors; and the indigenous Fast Breeder Reactors, which can generate more fuel for the future and enable the country to utilise its abundant thorium resource. Nuclear materials security can be enhanced in two important ways. One is through proliferation-resistant technologies and diplomatic efforts to take non-proliferation initiatives. The other is by developing technical means to prevent any leakage of nuclear materials into the hands of asymmetric organisations. New Delhi has already implemented IAEA safeguards on its civilian nuclear installations. Moreover, India has ratified the IAEA Additional Protocol in order to enhance the transparency of its nuclear material and strengthen nuclear security. India is a party to the IAEA conventions on nuclear safety and security, in particular the 1980 Convention on the Physical Protection of Nuclear Material and its 2005 amendment, and to the 2006 Code of Conduct on the Safety and Security of Radioactive Sources, which enables the country to provide for the highest international standards of nuclear and radiological safety and security. India's nuclear security approach is driven by five key components: governance, nuclear security practice and culture, institutions, technology and international cooperation. However, there is still scope for further improvement in strengthening nuclear materials and nuclear security. According to the NTI report, ‘India’s improvement reflects its first contribution to the IAEA Nuclear Security Fund’ etc.; ‘in the future, India’s nuclear materials security conditions could be further improved by strengthening its laws and regulations for security and control of materials, particularly for control and accounting of materials, mitigating the insider threat, and for the physical security of materials during transport. India’s nuclear materials security conditions also remain adversely affected due to its continued increase in its quantities of nuclear material, and high levels of corruption among public officials.’ This paper briefly studies the progress made by India in nuclear and nuclear materials security and the steps ahead for India to strengthen this further.

Keywords: India, nuclear security, nuclear materials, non-proliferation

Procedia PDF Downloads 325