Search results for: single segmental baffle
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4505

155 Regulatory Governance as a De-Parliamentarization Process: A Contextual Approach to Global Constitutionalism and Its Effects on New Arab Legislatures

Authors: Abderrahim El Maslouhi

Abstract:

The paper aims to analyze an often-overlooked dimension of global constitutionalism: the rise of the regulatory state and its impact on parliamentary dynamics in transition regimes. In contrast to Majone’s technocratic vision of convergence towards a single regulatory system based on competence and efficiency, national transpositions of regulatory governance and, in general, the relationship to global standards primarily depend upon a number of distinctive parameters. These include the policy formation process, speed of change, depth of parliamentary tradition, and greater or lesser vulnerability to the normative conditionality of donors, interstate groupings, and transnational regulatory bodies. Based on a comparison between three post-Arab Spring countries (Morocco, Tunisia, and Egypt, whose constitutions underwent substantive review in the period 2011-2014) and several European Union member states, the paper intends, first, to assess the degree of permeability to global constitutionalism in different contexts. A noteworthy divide emerges from this comparison. Whereas European constitutions still seem impervious to the lexicon of global constitutionalism, its influence is obvious in the recently drafted constitutions of Morocco, Tunisia, and Egypt, as evidenced by their reference to notions such as ‘governance’, ‘regulators’, ‘accountability’, ‘transparency’, ‘civil society’, and ‘participatory democracy’. Second, the study provides a contextual account of the internal and external rationales underlying the constitutionalization of regulatory governance in the cases examined.
Unlike European constitutionalism, where parliamentarism and the tradition of representative government function as a structural mechanism that moderates the de-parliamentarization effect induced by global constitutionalism, Arab constitutional transitions have led to a paradoxical situation: contrary to public demands for further parliamentarization, the 2011 constitution-makers opted for a de-parliamentarization pattern. This is particularly reflected in the procedures established by constitutions and ordinary legislation to handle the interaction between lawmakers and regulatory bodies. Once the ‘constitutional’ and ‘independent’ nature of these agencies is formally endorsed, the birth of these ‘fourth power’ entities, which are neither elected nor directly responsible to elected officials, raises the question of their accountability. Third, the paper shows that, even among the three selected countries, the intensity of de-parliamentarization varies significantly. In contrast to the radical stance of the Moroccan and Egyptian constituents, who showed greater concern to shield regulatory bodies from legislative scrutiny, the Tunisian case indicates a certain tendency to provide lawmakers with some essential control instruments (e.g., exclusive appointment power, adversarial discussion of regulators’ annual reports, and a dismissal power later held unconstitutional). In sum, the comparison reveals that the transposition of the regulatory state model and, more generally, sensitivity to the legal implications of global conditionality essentially rely on the evolution of real-world power relations at both the national and international levels.

Keywords: Arab legislatures, de-parliamentarization, global constitutionalism, normative conditionality, regulatory state

Procedia PDF Downloads 108
154 Operational Characteristics of the Road Surface Improvement

Authors: Iuri Salukvadze

Abstract:

Construction has played an important role throughout the history of mankind; there is hardly a product in our lives in which the builder’s work is not materialized, since creating it requires factories, roads, bridges, etc. The role of the Republic of Georgia as part of the Europe-Asia transport corridor has increased significantly. Within this transit function, a large share of cargo traffic is carried by motor transport; hence, improving road transport infrastructure is very important and raises new, higher operational demands for existing as well as new motor roads. Construction of a durable road surface involves rather large costs, but these are offset by high transport-operational properties such as higher speeds, lower fuel consumption, and less tire wear. Where traffic intensity is high, the investment is therefore recouped rapidly and income increases accordingly. Where traffic intensity is relatively low, lightened pavement structures are recommended so that capital investment does not exceed the normative level. Road pavement is divided into the following basic types: asphaltic concrete and cement concrete. Asphaltic concrete is the most advanced type of road pavement. It is laid in two or three layers on a rigid foundation and compacted. Asphaltic concrete is an artificial building material in which a properly selected and proportioned stone skeleton and sand are bound together by bitumen and mineral powder. A similar material with less strictly selected components is called a bitumen-mineral mixture. Asphaltic concrete is a non-rigid building material that withstands vertical loads well but is less resistant to horizontal forces. Cement concrete is a monolithic and durable material; it withstands horizontal loads well but is less resistant to vertical loads.
Cement concrete consists of strictly selected and proportioned stone material and sand, with cement as the binder. A cement concrete pavement consists of separate slabs ranging in size from 3-5 up to 6-8 meters. The slabs are reinforced by a rather complex system. Joints are arranged between the slabs to avoid the additional stresses caused by temperature fluctuations along the length of the slabs. To make the separate slabs act jointly, they are connected by metal rods. The rods accommodate changes in slab length and transfer vertical forces and bending moments between slabs. The foundation layers must be extremely durable, which requires high-quality stone material, cement, and metal. The qualification work aims to improve traffic conditions on motor roads, prolong their service life, and improve their characteristics. The work consists of three chapters, 80 pages, 5 tables, and 5 figures. It presents general concepts as well as tests carried out by various companies using modern methods, together with their results. Chapter III presents the tests we carried out on this issue and specific examples of improving the operational characteristics.

Keywords: asphalt, cement, cylindrical sample of asphalt, building

Procedia PDF Downloads 195
153 Spatio-Temporal Dynamic of Woody Vegetation Assessment Using Oblique Landscape Photographs

Authors: V. V. Fomin, A. P. Mikhailovich, E. M. Agapitov, V. E. Rogachev, E. A. Kostousova, E. S. Perekhodova

Abstract:

Ground-level landscape photos can be used as a source of objective data on woody vegetation and its dynamics. We propose a method for processing, analyzing, and presenting ground photographs with the following features: 1) the researcher forms a holistic representation of the study area as a set of overlapping ground-level landscape photographs; 2) characteristics of the landscape, objects, and phenomena present in the photographs are defined or obtained; 3) new textual descriptions and annotations for the photographs are created, or existing ones supplemented; 4) single or multiple photographs are used to develop specialized geoinformation layers, schematic maps, or thematic maps; 5) quantitative data describing both the images as a whole and the displayed objects and phenomena are determined using algorithms for automated image analysis. Each photo is matched with a polygonal geoinformation layer: a sector consisting of areas corresponding to the parts of the landscape visible in the photo. Visibility areas are calculated in a geoinformation system within a sector using a digital elevation model of the study area and visibility-analysis functions. Superimposing the visibility sectors of different camera viewpoints allows the landscape photos to be matched with each other to create a complete and coherent representation of the space in question. User-defined objects or phenomena can be marked on the images and then superimposed over the visibility sector in the form of map symbols. The spatial superposition of geoinformation layers over the visibility sector makes it possible to geotag images using quantitative data obtained from raster or vector layers within the sector, with the ability to generate annotations in natural language.
The proposed method has proven itself well for relatively open and clearly visible areas with well-defined relief, for example, in mountainous areas in the treeline ecotone. When the polygonal visibility-sector layers for a large number of camera viewpoints are topologically superimposed, the result is a layer showing which sections of the entire study area are displayed in the photographs. As a result of this overlapping of sectors, areas that do not appear in any photo are identified as gaps. This procedure makes it possible to determine which photos display a specific area and from which camera viewpoints it is visible; this information can be obtained either as a query on the map or as a query against the layer’s attribute table. The method was tested using repeated photos taken from forty camera viewpoints located on the Ray-Iz mountain massif (Polar Urals, Russia) from 1960 to 2023. It has been successfully used in combination with other ground-based and remote sensing methods for studying the climate-driven dynamics of woody vegetation in the Polar Urals. Acknowledgment: This research was collaboratively funded by the Russian Ministry for Science and Education project No. FEUG-2023-0002 (image representation) and Russian Science Foundation project No. 24-24-00235 (automated textual description).
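The superposition step described above can be sketched in a few lines of Python. This is a hypothetical illustration, not the authors’ implementation: it assumes the per-viewpoint visibility sectors have already been computed by a GIS viewshed analysis and are given as sets of terrain-cell identifiers.

```python
def coverage_from_sectors(visibility, study_cells):
    """Superimpose visibility sectors: for each cell of the study area,
    list the camera viewpoints that see it (answering 'which photos show
    this area?'); cells seen by no viewpoint are the coverage gaps."""
    seen_by = {cell: [vp for vp, cells in sorted(visibility.items())
                      if cell in cells]
               for cell in study_cells}
    gaps = {cell for cell, vps in seen_by.items() if not vps}
    return seen_by, gaps
```

For example, with two viewpoints P1 and P2 whose sectors cover cells {1, 2} and {2, 3} of a four-cell study area, cell 2 is matched to both photos and cell 4 is reported as a gap.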

Keywords: woody vegetation, repeated photographs

Procedia PDF Downloads 25
152 Decreased Tricarboxylic Acid (TCA) Cycle Staphylococcus aureus Increases Survival to Innate Immunity

Authors: Trenten Theis, Trevor Daubert, Kennedy Kluthe, Austin Nuxoll

Abstract:

Staphylococcus aureus is a gram-positive bacterium responsible for an estimated 23,000 deaths in the United States and 25,000 deaths in the European Union annually. Recurring S. aureus bacteremia is associated with biofilm-mediated infections and can occur in 5-20% of cases, even with the use of antibiotics. Despite being caused by drug-susceptible pathogens, these infections are surprisingly difficult to eradicate. One potential explanation is the presence of persister cells, a dormant type of cell that shows high tolerance to antibiotic treatment. Recent studies have shown a connection between low intracellular ATP and persister cell formation. Specifically, this decrease in ATP, and therefore increase in persister cell formation, is due to an interrupted tricarboxylic acid (TCA) cycle. However, the role of S. aureus persister cells in pathogenesis remains unclear. Initial studies have shown that a fumC (TCA cycle gene) knockout survives challenge from aspects of the innate immune system better than wild-type S. aureus. Specifically, challenges with two antimicrobial peptides, LL-37 and hBD-3, show a one-log increase in survival of the fumC::N∑ strain compared to wild-type S. aureus after 18 hours. Furthermore, preliminary studies show that the fumC knockout exhibits one log more survival within a macrophage. These data lead us to hypothesize that the fumC knockout is better suited to withstand other aspects of the innate immune system than wild-type S. aureus. To further investigate the mechanism for the increased survival of fumC::N∑ within a macrophage, we tested bacterial growth in the presence of reactive oxygen species (ROS), reactive nitrogen species (RNS), and low pH. Preliminary results suggest that the fumC knockout has increased growth compared to wild-type S. aureus in the combined presence of all three antimicrobial factors; however, no difference was observed for any single factor alone.
To investigate survival within a host, a nine-day biofilm-associated catheter infection was performed on 6–8-week-old male and female C57Bl/6 mice. Although both sexes struggled to clear the infection, female mice were trending toward more frequently clearing the HG003 wild-type infection compared to the fumC::N∑ infection. One possible reason for the inability to reduce the bacterial burden is that biofilms are largely composed of persister cells. To test this hypothesis further, flow cytometry in conjunction with a persister cell marker was used to measure persister cells within a biofilm. Cap5A (a known persister cell marker) expression was found to be increased in a maturing biofilm, with the lowest levels of expression seen in immature biofilms and the highest expression exhibited by the 48-hour biofilm. Additionally, bacterial cells in a biofilm state closely resemble persister cells and exhibit reduced membrane potential compared to cells in planktonic culture, further suggesting biofilms are largely made up of persister cells. These data may provide an explanation as to why infections caused by antibiotic-susceptible strains remain difficult to treat.

Keywords: antibiotic tolerance, Staphylococcus aureus, host-pathogen interactions, microbial pathogenesis

Procedia PDF Downloads 155
151 Iran’s Sexual and Reproductive Rights Roll-Back: An Overview of Iran’s New Population Policies

Authors: Raha Bahreini

Abstract:

This paper discusses the roll-back of women’s sexual and reproductive rights in the Islamic Republic of Iran, which has come in the wake of a striking shift in the country’s official population policies. Since the late 1980s, Iran has won worldwide praise for its sexual and reproductive health and services, which have contributed to a steady decline in the country’s fertility rate: from 7.0 births per woman in 1980 to 5.5 in 1988, 2.8 in 1996, and 1.85 in 2014. This is owed to a significant increase in the voluntary use of modern contraception in both rural and urban areas. In 1976, only 37 per cent of women were using at least one method of contraception; by 2014 this figure had reportedly risen to nearly 79 per cent for married girls and women living in urban areas and 73.78 per cent for those living in rural areas. Such progress may soon be halted. In July 2012, Iran’s Supreme Leader Ayatollah Sayed Ali Khamenei denounced Iran’s family planning policies as an imitation of Western lifestyles. He exhorted the authorities to increase Iran’s population to 150 to 200 million (from around 78.5 million), including by cutting subsidies for contraceptive methods and dismantling the state’s Family and Population Planning Programme. Shortly thereafter, Iran’s Minister of Health and Medical Education announced the scrapping of the budget for the state-funded Family and Population Planning Programme. Iran’s Parliament subsequently introduced two bills: the Comprehensive Population and Exaltation of Family Bill (Bill 315) and the Bill to Increase Fertility Rates and Prevent Population Decline (Bill 446). Bill 446 outlaws voluntary tubectomies, believed to be the second most common method of modern contraception in Iran, and blocks access to information about contraception, denying women the opportunity to make informed decisions about the number and spacing of their children.
Coupled with the elimination of state funding for Iran’s Family and Population Planning Programme, the move would undoubtedly result in greater numbers of unwanted pregnancies, forcing more women to seek illegal and unsafe abortions. Bill 315 proposes various discriminatory measures in the areas of employment, divorce, and protection from domestic violence in order to promote a culture wherein wifedom and child-bearing are seen as women’s primary duty. The Bill, for example, instructs private and public entities to prioritize, in sequence, men with children, married men without children, and married women with children when hiring for certain jobs. It also bans the recruitment of single individuals as family law lawyers, public and private school teachers, and members of the academic boards of universities and higher education institutes. The paper discusses the consequences of these initiatives, which, if continued, would set the human rights of women and girls in Iran back by decades, leaving them with a future shaped by increased inequality, discrimination, poor health, limited choices, and restricted freedoms, in breach of Iran’s international human rights obligations.

Keywords: family planning and reproductive health, gender equality and empowerment of women, human rights, population growth

Procedia PDF Downloads 276
150 Modelling Pest Immigration into Rape Seed Crops under Past and Future Climate Conditions

Authors: M. Eickermann, F. Ronellenfitsch, J. Junk

Abstract:

Oilseed rape (Brassica napus L.) is one of the most important crops throughout Europe, but pressure from pest insects and pathogens can reduce yields substantially. As a result, pesticide use in this crop is exceptionally high. In addition, climate change can interact with the phenology of the host plant and its pests and put additional pressure on the yield. Next to the pollen beetle, Meligethes aeneus L., the seed-damaging pests cabbage seed weevil (Ceutorhynchus obstrictus Marsham) and brassica pod midge (Dasineura brassicae Winn.) have the main economic impact on yield. While females of C. obstrictus infest oilseed rape by depositing single eggs into young pods, females of D. brassicae use this local damage in the pod for their own oviposition, depositing batches of 20-30 eggs. Without prior infestation by the cabbage seed weevil, significant yield reduction by the brassica pod midge can be ruled out. Based on long-term, multi-site field experiments, a comprehensive dataset on pest migration into B. napus crops has been built up over the last ten years. Five observational test sites, situated in different climatic regions of Luxembourg, were monitored twice a week from February until the end of May. Pest migration was recorded using yellow water pan traps. Caught insects were identified in the laboratory according to species-specific identification keys. By combining the pest observations with corresponding meteorological observations, models to predict the migration periods of the seed-damaging pests could be set up. This approach is the basis for a computer-based decision support tool to assist farmers in identifying the appropriate time for pesticide application.
In addition, the algorithms derived for that decision support tool can be combined with climate change projections in order to assess the future potential threat posed by the seed-damaging pest species. Regional climate change effects for Luxembourg have been studied intensively in recent years. Significant changes towards wetter winters and drier summers, as well as a prolongation of the vegetation period caused mainly by higher spring temperatures, have also been reported. We used the COSMO-CLM model to perform a time-slice experiment for Luxembourg with a spatial resolution of 1.3 km. Three ten-year time slices were calculated: the reference period (1991-2000), the near future (2041-2050), and the far future (2091-2100). Our results project a significant shift of pest migration to an earlier onset in the year, as well as a prolongation of the possible migration period. Because D. brassicae depends on prior oviposition by C. obstrictus to infest its host plant successfully, the future dependencies of both pest species will be assessed. Based on this approach, the future risk potential of both seed-damaging pests is calculated and their status as pest species is characterized.
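As a hedged sketch of how meteorological observations might drive such a migration model (the abstract does not give the actual algorithms of the decision support tool), a minimal degree-day formulation is shown below; the base temperature and threshold are invented placeholders, not the calibrated values.

```python
def migration_onset(daily_mean_temp, base_temp=5.0, threshold_dd=120.0):
    """Degree-day sketch: accumulate (T - base_temp) over days starting
    1 January and return the first day-of-year on which the sum reaches
    the threshold, taken here as the predicted onset of pest immigration
    into the crop. Returns None if the threshold is never reached."""
    accumulated = 0.0
    for day, temp in enumerate(daily_mean_temp, start=1):
        accumulated += max(0.0, temp - base_temp)
        if accumulated >= threshold_dd:
            return day
    return None
```

Under a warmer spring temperature series, the threshold is reached earlier, which is exactly the kind of shift toward earlier onset that the climate projections above indicate.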

Keywords: CORDEX projections, decision support tool, Brassica napus, pests

Procedia PDF Downloads 348
149 Zinc Oxide Varistor Performance: A 3D Network Model

Authors: Benjamin Kaufmann, Michael Hofstätter, Nadine Raidl, Peter Supancic

Abstract:

ZnO varistors are the leading overvoltage protection elements in today’s electronics industry. Their highly non-linear current-voltage characteristics, very fast response times, good reliability, and attractive production cost are unique in this field. Nevertheless, challenges and open questions remain. In particular, the push to create ever smaller, versatile, and reliable parts that fit industry’s demands brings manufacturers to the limits of their abilities. Although the varistor effect of sintered ZnO has been known since the 1960s, and much work has been done in this field to explain the sudden exponential increase of conductivity, the strict dependency on sintering parameters, as well as the influence of the complex microstructure, is not sufficiently understood. Further enhancement and down-scaling of varistors require a better understanding of the microscopic processes. This work attempts a microscopic approach to investigating ZnO varistor performance. To cope with the polycrystalline varistor ceramic and to account for all possible current paths through the material, a realistic model of the microstructure was set up in the form of three-dimensional networks in which every grain has a constant electric potential and voltage drop occurs only at the grain boundaries. The electro-thermal workload for different grain size distributions was investigated, as well as the influence of the metal-semiconductor contact between the electrodes and the ZnO grains. A number of experimental methods were used, first, to feed the simulations with realistic parameters and, second, to verify the obtained results.
These methods are: a micro four-point-probe system (M4PPS) to investigate the current-voltage characteristics between single ZnO grains and between ZnO grains and the metal electrode inside the varistor; micro lock-in infrared thermography (MLIRT) to detect current paths; electron backscatter diffraction and piezoresponse force microscopy to determine grain orientations; atom probe to determine atomic substituents; and Kelvin probe force microscopy to investigate grain surface potentials. The simulations showed that, within a critical voltage range, the current flow is localized along paths that represent only a tiny part of the available volume. This effect was also observed via MLIRT. Furthermore, the simulations show that the electric power density depends on the grain size distribution: it is inversely proportional to the number of active current paths, since this number determines the electrically active volume. M4PPS measurements showed that the electrode-grain contacts behave like Schottky diodes and are crucial for asymmetric current path development. Furthermore, evaluation of the data suggests that current flow is influenced by grain orientations. The present results deepen the knowledge of the microscopic factors influencing ZnO varistor performance and allow some recommendations on fabrication for obtaining more reliable ZnO varistors.
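A minimal sketch of the network model described above, under the simplifying assumption of linear (ohmic) grain-boundary conductances rather than the real, highly non-linear characteristic: each grain is a node holding a single potential, grains touching an electrode are fixed, and interior potentials are relaxed by Gauss-Seidel iteration.

```python
def solve_grain_potentials(conductance, fixed, n_iter=2000):
    """Relax grain potentials in a grain network: every grain i has one
    constant potential V[i]; all voltage drop occurs across grain-boundary
    conductances conductance[i][j]. Grains listed in `fixed` are held at
    electrode potentials (Dirichlet nodes)."""
    V = {i: fixed.get(i, 0.0) for i in conductance}
    for _ in range(n_iter):
        for i in sorted(conductance):
            if i in fixed:
                continue
            # Kirchhoff current balance at grain i (Gauss-Seidel update)
            num = sum(g * V[j] for j, g in conductance[i].items())
            den = sum(conductance[i].values())
            V[i] = num / den
    return V
```

For a chain of three grains between electrodes held at 1 V and 0 V, the middle grain relaxes to 0.5 V; grain-boundary currents then follow from g·(V[i] − V[j]), and summing dissipated power per path gives the localization picture discussed in the abstract.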

Keywords: metal-semiconductor contact, Schottky diode, varistor, zinc oxide

Procedia PDF Downloads 257
148 Effect of Amiodarone on the Thyroid Gland of Adult Male Albino Rat and the Possible Protective Role of Vitamin E Supplementation: A Histological and Ultrastructural Study

Authors: Ibrahim Abdulla Labib, Medhat Mohamed Morsy, Gamal Hosny, Hanan Dawood Yassa, Gaber Hassan

Abstract:

Amiodarone is a very effective drug widely used for arrhythmia. Unfortunately, it has many side effects involving many organs, especially the thyroid gland. The current work was conducted to elucidate the effect of amiodarone on the thyroid gland and the possible protective role of vitamin E. Fifty adult male albino rats weighing 200-250 grams were divided into five groups of ten rats each. Group I (control): five rats were sacrificed after three weeks and five rats after six weeks. Group II (sham control): each rat received sunflower oil, the solvent of vitamin E, orally for three weeks. Group III (amiodarone-treated): each rat received an oral dose of amiodarone, 150 mg/kg/day, for three weeks. Group IV (recovery): each rat received amiodarone as in group III, then the drug was stopped for three weeks to evaluate recovery. Group V (amiodarone + vitamin E-treated): each rat received amiodarone as in group III together with 100 mg/kg/day vitamin E orally for three weeks. The thyroid glands of the sacrificed animals were dissected out and prepared for light and electron microscopic studies. Amiodarone administration resulted in loss of the normal follicular architecture, as many follicles appeared shrunken, empty, or containing scanty pale colloid. Some follicles appeared lined by more than one layer of cells, while others showed interruption of their membrane. Masson's trichrome-stained sections showed increased collagen fibers between the thyroid follicles. Ultrastructurally, the apical border of the follicular cells showed few irregular detached microvilli. The nuclei of the follicular cells were mostly irregular with chromatin condensation. The cytoplasm of most follicular cells revealed numerous dilated rough endoplasmic reticulum cisternae with numerous lysosomes. After three weeks of stopping amiodarone, the follicles were nearly regular in outline. Some follicles were filled with homogeneous eosinophilic colloid, while others had shrunken pale colloid or were empty.
A few follicles showed exfoliated cells in their lumina, and others were still lined by more than one layer of follicular cells. Moderate amounts of collagen fibers were observed between the thyroid follicles. Ultrastructurally, many follicular cells had rounded euchromatic nuclei, a moderate number of lysosomes, and moderately dilated rough endoplasmic reticulum. However, a few follicular cells still showed irregular nuclei, dilated rough endoplasmic reticulum, and many cytoplasmic vacuoles. Administration of vitamin E with amiodarone for three weeks resulted in obvious structural improvement. Most of the follicles were lined by a single layer of cuboidal cells, and their lumina were filled with homogeneous eosinophilic colloid with very few vacuolations. The majority of follicular cells had rounded nuclei, with occasional ballooned cells and dark nuclei. Scanty collagen fibers were detected among the thyroid follicles. Ultrastructurally, most follicular cells exhibited rounded euchromatic nuclei with a few short microvilli projecting into the colloid. Few lysosomes were noticed. It was concluded that amiodarone administration leads to many adverse histological changes in the thyroid gland. Some of these changes are reversible during the recovery period; however, concomitant vitamin E administration with amiodarone has a major protective role in preventing many of these changes.

Keywords: amiodarone, recovery, ultrastructure, vitamin E

Procedia PDF Downloads 326
147 Microsimulation of Potential Crashes as a Road Safety Indicator

Authors: Vittorio Astarita, Giuseppe Guido, Vincenzo Pasquale Giofre, Alessandro Vitale

Abstract:

Traffic microsimulation has been used extensively to evaluate the consequences of different traffic planning and control policies in terms of travel time delays, queues, pollutant emissions, and other commonly measured performance indicators, while traffic safety has not been considered in common traffic microsimulation packages as a measure of performance for different traffic scenarios. Vehicle conflict techniques, introduced for intersections in the early traffic research carried out at the General Motors laboratory in the USA and in the Swedish traffic conflict manual, have been applied to vehicle trajectories simulated in microscopic traffic simulators. The concept is that microsimulation can be used as a basis for calculating the number of conflicts that define the safety level of a traffic scenario. This allows engineers to identify unsafe road traffic maneuvers and helps in finding the right countermeasures to improve safety. Unfortunately, most commonly used indicators do not consider conflicts between single vehicles and roadside obstacles and barriers, even though a great number of vehicle crashes involve roadside objects or obstacles; only some recently proposed indicators attempt to address this issue. This paper introduces a new procedure, based on the simulation of potential crash events, for the evaluation of safety levels in microsimulated traffic scenarios, which also takes into account potential crashes with roadside objects and barriers. The procedure can be used to define new conflict indicators. The proposed simulation procedure generates, through random perturbation of vehicle trajectories, a set of potential crashes that can be evaluated accurately in terms of DeltaV, impact energy, and/or the expected number of injuries or casualties. The procedure can also be applied to real trajectories, giving rise to new surrogate safety performance indicators that can be considered “simulation-based”.
The methodology and a specific safety performance indicator are described and applied to a simulated test traffic scenario. Results indicate that the procedure is able to evaluate safety levels both at intersections and in the presence of roadside obstacles. The procedure produces results expressed in the same unit of measure for both vehicle-to-vehicle and vehicle-to-roadside-object conflicts. The total energy per square meter of all generated crashes can be mapped for the test network after applying a threshold to highlight the most dangerous points. Even without any detailed calibration of the microsimulation model and without any calibration of the parameters of the procedure (standard values were used), it is possible to identify dangerous points. A preliminary sensitivity analysis has shown that the results do not depend strongly on the energy thresholds or the other parameters of the procedure. This paper introduces the procedure and its implementation as a software package able to assess road safety while also considering potential conflicts with roadside objects, and discusses some of the principles at the base of this specific model. The procedure can be applied with common microsimulation packages once vehicle trajectories and the positions of roadside barriers and obstacles are known. The procedure has many calibration parameters, and future research will have to compare its output with real crash data in order to obtain the parameter values that give an accurate evaluation of the risk of any traffic scenario.
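The trajectory-perturbation idea can be illustrated with a toy Monte Carlo sketch; the Gaussian lateral perturbation, the vehicle mass, and the single straight barrier below are assumptions chosen for illustration, not parameters of the implemented package.

```python
import random

def potential_crash_energy(trajectory, speed_mps, mass_kg, barrier_y,
                           lateral_sigma=0.5, n_samples=1000, seed=42):
    """Randomly perturb the lateral position of a vehicle trajectory
    (a list of (x, y) points) and count the samples in which the vehicle
    would strike a roadside barrier located at y = barrier_y.
    Returns (estimated crash probability, kinetic energy of one impact in J)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        for x, y in trajectory:
            if y + rng.gauss(0.0, lateral_sigma) >= barrier_y:
                hits += 1
                break  # count at most one potential crash per perturbed trajectory
    impact_energy = 0.5 * mass_kg * speed_mps ** 2  # kinetic energy at impact
    return hits / n_samples, impact_energy
```

Accumulating such impact energies over map cells would yield the kind of per-square-meter energy map described above, with a threshold applied to highlight the most dangerous points.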

Keywords: road safety, traffic, traffic safety, traffic simulation

Procedia PDF Downloads 109
146 A Digital Environment for Developing Mathematical Abilities in Children with Autism Spectrum Disorder

Authors: M. Isabel Santos, Ana Breda, Ana Margarida Almeida

Abstract:

Research on the academic abilities of individuals with autism spectrum disorder (ASD) underlines the importance of mathematics interventions. Yet the proposal of digital applications for children and youth with ASD continues to attract little attention, namely regarding the development of mathematical reasoning, even though digital technologies are an area of great interest for individuals with this disorder and their use is certainly a facilitative strategy in the development of mathematical abilities. Digital technologies can be an effective way to create innovative learning opportunities for these students and to develop creative, personalized, and constructive environments where they can develop differentiated abilities. Children with ASD often respond well to learning activities involving information presented visually. In this context, we present the digital Learning Environment on Mathematics for Autistic children (LEMA), a research project leading to a PhD in Multimedia in Education, developed by the Thematic Line Geometrix, located in the Department of Mathematics, in a collaborative effort with the DigiMedia Research Center of the Department of Communication and Art (University of Aveiro, Portugal). LEMA is a digital mathematical learning environment whose activities are dynamically adapted to the user's profile, towards the development of the mathematical abilities of children aged 6-12 years diagnosed with ASD. LEMA has already been evaluated with end-users (both students and expert teachers), and based on the analysis of the collected data, readjustments were made, enabling the continuous improvement of the prototype, namely considering the integration of universal design for learning (UDL) approaches, which are of utmost importance in ASD due to its heterogeneity.
The learning strategies incorporated in LEMA are: (i) providing options to customize the choice of math activities according to the user's profile; (ii) integrating simple interfaces with few elements, presenting only the features and content needed for the ongoing task; (iii) using simple visual and textual language; (iv) using different types of feedback (auditory, visual, positive/negative reinforcement, hints with helpful instructions including math concept definitions, solved math activities split into easier tasks, and, finally, videos/animations that show a solution to the proposed activity); (v) providing information in multiple representations, such as text, video, audio, and image, for better content and vocabulary understanding, in order to stimulate, motivate, and engage users in mathematical learning and to help them focus on content; (vi) avoiding elements that distract or interfere with focus and attention; (vii) providing clear instructions and orientation about tasks to ease the user's understanding of the content and its language; and (viii) using buttons, familiar icons, and contrast between font and background. Since these children may have little sensory tolerance and impaired motor skills, besides interacting with LEMA through the mouse (point and click with a single button), the user can interact with LEMA through a Kinect device (using simple gesture moves).
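Strategy (i), profile-driven activity selection, can be sketched as a simple rule-based selector. The field names, topics, and difficulty rule below are entirely hypothetical illustrations, not LEMA's actual data model.

```python
# Hypothetical sketch of profile-adapted activity selection: each activity
# carries a topic and a difficulty, the learner profile records per-topic
# mastery, and the selector proposes the easiest activities within reach.

ACTIVITIES = [
    {"id": "count-1", "topic": "counting", "difficulty": 1},
    {"id": "count-2", "topic": "counting", "difficulty": 2},
    {"id": "geom-1", "topic": "geometry", "difficulty": 1},
    {"id": "geom-2", "topic": "geometry", "difficulty": 3},
]

def next_activities(profile, activities, limit=2):
    """Return up to `limit` activity ids whose difficulty does not exceed
    the learner's mastery level for that topic by more than one step."""
    eligible = [
        a for a in activities
        if a["difficulty"] <= profile.get(a["topic"], 0) + 1
    ]
    eligible.sort(key=lambda a: a["difficulty"])  # easiest first
    return [a["id"] for a in eligible[:limit]]

profile = {"counting": 1, "geometry": 0}  # per-topic mastery levels
```

After each solved or failed task, the profile's mastery levels would be updated, so the proposed set adapts continuously to the child.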

Keywords: autism spectrum disorder, digital technologies, inclusion, mathematical abilities, mathematical learning activities

Procedia PDF Downloads 93
145 Technology of Electrokinetic Disintegration of Virginia Fanpetals (Sida hermaphrodita) Biomass in a Biogas Production System

Authors: Mirosław Krzemieniewski, Marcin Zieliński, Marcin Dębowski

Abstract:

Electrokinetic disintegration is one of the high-voltage electric methods. The design of such systems is exceptionally simple. Biomass flows through a system of pipes with electrodes mounted along them that generate an electric field. Discharges in the electric field deform cell walls and lead to their successive perforation, thereby making their contents easily available to bacteria. The spark-over occurs between the electrode surface and the pipe jacket, which serves as the second pole and closes the circuit. The voltage ranges from 10 to 100 kV. The electrodes are supplied from the normal single-phase power grid (230 V, 50 Hz); this is converted into 24 V direct current in modules serving the particular electrodes, which feed the electrodes directly. The installation is completely safe because the generated current does not exceed 250 mA and because the conductors are grounded. Therefore, there is no risk of electric shock to the personnel, even in the case of failure or incorrect connection. The low current values mean small energy consumption by the electrode, which is extremely low (only 35 W per electrode) compared to other methods of disintegration. The pipes with electrodes, of diameter DN150, are made of acid-proof steel and connected on both sides with 90° elbows ended with flanges. The available S- and U-type pipes enable very convenient fitting into existing installations and rooms or facilitate space management in new applications. The system of pipes for electrokinetic disintegration may be installed horizontally, vertically, at an angle, on special stands, or directly on the wall of a room. The number of pipes and electrodes is determined by the operating conditions as well as the quantity of substrate, type of biomass, dry matter content, method of disintegration (single-pass or circulatory), mounting site, etc.
The most effective method involves pre-treatment of the substrate, which may be pumped through the disintegration system on the way to the fermentation tank or recirculated in a buffered intermediate tank (substrate mixing tank). Biomass structure destruction in the process of electrokinetic disintegration shortens the substrate retention time in the tank and accelerates biogas production. A significant intensification of the fermentation process was observed in systems operating at the technical scale, with the greatest increase in biogas production reaching 18%. A secondary effect, highly significant for the energy balance, is a tangible decrease in the energy input of the agitators in the tanks. It is due to the reduced viscosity of the biomass after disintegration and may result in energy savings reaching 20-30% of the previously noted consumption. Other observed phenomena include a reduction in the layer of surface scum, a reduced foaming capability of the sewage, and a successive decrease in the quantity of bottom sludge banks. Considering the above, the system for electrokinetic disintegration seems a very interesting and valuable solution among the specialist equipment offered for the processing of plant biomass, including Virginia fanpetals, before methane fermentation.
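The per-electrode figure above invites a simple energy-balance check. The 35 W per electrode and the 20-30% agitator saving are the figures quoted in the abstract; the installation size, daily agitator consumption, and 24 h/day operation below are hypothetical assumptions.

```python
def disintegration_power(n_electrodes, watts_per_electrode=35.0):
    """Total electrical power drawn by the disintegration electrodes (W).
    35 W per electrode is the figure quoted in the abstract."""
    return n_electrodes * watts_per_electrode

def net_saving(agitator_kwh_per_day, saving_fraction, n_electrodes,
               watts_per_electrode=35.0):
    """Daily net energy balance (kWh): agitator savings from the reduced
    biomass viscosity minus the electrodes' own consumption (24 h/day)."""
    electrode_kwh = disintegration_power(n_electrodes, watts_per_electrode) * 24 / 1000
    return agitator_kwh_per_day * saving_fraction - electrode_kwh

# Hypothetical plant: 10 electrodes, agitators drawing 200 kWh/day,
# 25% saving (mid-point of the 20-30% range reported above).
saving = net_saving(200.0, 0.25, 10)
```

Even under these modest assumptions the agitator savings dwarf the electrodes' own draw, which is the point the abstract makes about the method's low energy cost.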

Keywords: electrokinetic disintegration, biomass, biogas production, fermentation, Virginia fanpetals

Procedia PDF Downloads 329
144 Identification of Failures Occurring on a System on Chip Exposed to a Neutron Beam for Safety Applications

Authors: S. Thomet, S. De-Paoli, F. Ghaffari, J. M. Daveau, P. Roche, O. Romain

Abstract:

In this paper, we present a hardware module dedicated to understanding the fail reason of a System on Chip (SoC) exposed to a particle beam. The impact of Single-Event Effects (SEE) on processor-based SoCs is a concern that has increased in the past decade, particularly for terrestrial applications with increasing automotive safety requirements, as well as in the consumer and industrial domains. The SEE created by the impact of a particle on an SoC may have consequences that can lead to instability or crashes. Specific hardening techniques for hardware and software have been developed to make such systems more reliable. The SoC is then qualified using cosmic-ray Accelerated Soft-Error Rate (ASER) testing to ensure the Soft-Error Rate (SER) remains within mission profiles. Understanding where errors occur is another challenge because of the complexity of the operations performed in an SoC. Common techniques to monitor an SoC running under a beam are based on non-intrusive debug, consisting of recording the program counter and doing some consistency checking on the fly. To detect and understand SEE, we have developed a module embedded within the SoC that provides support for recording probes, hardware watchpoints, and a memory-mapped register bank dedicated to software usage. To identify the CPU failure modes and the most important resources to probe, we carried out a fault-injection campaign on the RTL model of the SoC. Probes are placed on generic CPU registers and bus accesses. They highlight the propagation of errors and allow identifying the failure modes. Typical resulting errors are bit-flips in resources creating bad addresses, illegal instructions, longer-than-expected loops, or incorrect bus accesses. Although our module is processor-agnostic, it has been interfaced to a RISC-V core by probing some of the processor registers. Probes are then recorded in a ring buffer.
Associated hardware watchpoints allow some control, such as starting or stopping event recording or halting the processor. Finally, the module also provides a bank of registers where the firmware running on the SoC can log information; a typical usage is recording operating-system context switches. The module is connected to a dedicated debug bus and is interfaced to a remote controller via a debugger link. Thus, a remote controller can interact with the monitoring module without any intrusiveness on the SoC. Moreover, in case of CPU unresponsiveness or a system-bus stall, the recorded information can still be recovered, providing the fail reason. A preliminary version of the module has been integrated into a test chip currently being manufactured at ST in 28-nm FDSOI technology. The module has been triplicated to provide reliable information on the SoC behavior. As the primary application domains are automotive and safety, the efficiency of the module will be evaluated by exposing the test chip to a fast-neutron beam by the end of the year. In the meantime, it will be tested with alpha particles and electromagnetic fault injection (EMFI). We will report in the paper on fault-injection results as well as irradiation results.
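The probe-recording behavior described above can be sketched in software terms as a ring buffer that a watchpoint can freeze. This is a behavioral illustration in Python, not the RTL implementation; the buffer depth and record format are assumptions.

```python
class ProbeRingBuffer:
    """Fixed-depth ring buffer keeping the most recent probe records,
    mimicking the monitoring module's behavior: when full, the oldest
    entry is overwritten, so the events leading up to a failure can be
    recovered afterwards by a remote controller."""

    def __init__(self, depth):
        self.depth = depth
        self.buf = [None] * depth
        self.head = 0          # next write position
        self.count = 0         # number of valid entries
        self.recording = True  # a watchpoint can stop recording

    def record(self, probe):
        if not self.recording:
            return
        self.buf[self.head] = probe
        self.head = (self.head + 1) % self.depth
        self.count = min(self.count + 1, self.depth)

    def watchpoint_hit(self):
        """Freeze the buffer, e.g. on a hardware watchpoint match."""
        self.recording = False

    def dump(self):
        """Return records oldest-first, as a remote debugger would read them."""
        start = (self.head - self.count) % self.depth
        return [self.buf[(start + i) % self.depth] for i in range(self.count)]

rb = ProbeRingBuffer(4)
for pc in [0x100, 0x104, 0x108, 0x10C, 0x110]:  # hypothetical program counters
    rb.record(pc)
```

Freezing on a watchpoint preserves exactly the window of probes preceding the event, which is what makes the fail reason recoverable even if the CPU later hangs.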

Keywords: fault injection, SoC fail reason, SoC soft error rate, terrestrial application

Procedia PDF Downloads 200
143 Quantum Chemical Prediction of Standard Formation Enthalpies of Uranyl Nitrates and Its Degradation Products

Authors: Mohamad Saab, Florent Real, Francois Virot, Laurent Cantrel, Valerie Vallet

Abstract:

All spent nuclear fuel reprocessing plants use the PUREX process (Plutonium Uranium Refining by Extraction), which is a liquid-liquid extraction method. The organic extracting solvent is a mixture of tri-n-butyl phosphate (TBP) and a hydrocarbon solvent such as hydrogenated tetra-propylene (TPH). By chemical complexation, uranium and plutonium (from spent fuel dissolved in nitric acid solution) are separated from fission products and minor actinides. During a normal extraction operation, uranium is extracted in the organic phase as the UO₂(NO₃)₂(TBP)₂ complex. The TBP solvent can form an explosive mixture called red oil when it comes in contact with nitric acid. The formation of this unstable organic phase originates from the reaction between TBP and its degradation products on the one hand, and nitric acid, its derivatives, and heavy metal nitrate complexes on the other hand. The decomposition of red oil can lead to a violent explosive thermal runaway. These hazards were at the origin of several accidents, such as the two in the United States in 1953 and 1975 (Savannah River) and, more recently, the one in Russia in 1993 (Tomsk). This raises the question of the exothermicity of reactions that involve TBP and all the other degradation products, and calls for a better knowledge of the underlying chemical phenomena. A simulation tool (Alambic) is currently being developed at IRSN that integrates thermal and kinetic functions related to the deterioration of uranyl nitrates in the organic and aqueous phases, but not of the n-butyl phosphates. To include them in the modeling scheme, there is an urgent need to obtain the thermodynamic and kinetic functions governing the deterioration processes in the liquid phase. However, little is known about the thermodynamic properties, like standard enthalpies of formation, of the n-butyl phosphate molecules and of the UO₂(NO₃)₂(TBP)₂, UO₂(NO₃)₂(HDBP)(TBP), and UO₂(NO₃)₂(HDBP)₂ complexes.
In this work, we propose to estimate these thermodynamic properties with quantum chemical methods (QM). Thus, in the first part of our project, we focused on the mono-, di-, and tri-butyl complexes. Quantum chemical calculations have been performed to study several reactions leading to the formation of mono- (H₂MBP), di- (HDBP), and tri-butyl phosphate (TBP) in the gas and liquid phases. The structures of all species were optimized in the gas phase using the B3LYP density functional, with triple-ζ def2-TZVP basis sets for all atoms, and the corresponding harmonic frequencies were used without scaling to compute the vibrational partition functions at 298.15 K and 0.1 MPa. Accurate single-point energies were calculated using the efficient localized LCCSD(T) method extrapolated to the complete basis set limit. Whenever species in the liquid phase are considered, solvent effects are included with the COSMO-RS continuum model. The standard enthalpies of formation of TBP, HDBP, and H₂MBP are finally predicted with an uncertainty of about 15 kJ mol⁻¹. In the second part of this project, we investigated the fundamental properties of the three organic species that contribute most to the thermal runaway: UO₂(NO₃)₂(TBP)₂, UO₂(NO₃)₂(HDBP)(TBP), and UO₂(NO₃)₂(HDBP)₂, using the same quantum chemical methods that were used for TBP and its derivatives, in both the gas and the liquid phase. We will discuss the structures and thermodynamic properties of all these species.
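The route from computed reaction enthalpies to standard formation enthalpies follows Hess's law: ΔH°rxn = Σ ΔHf°(products) − Σ ΔHf°(reactants), so one unknown formation enthalpy can be isolated when the others are known. The sketch below illustrates only the bookkeeping; all numerical values are hypothetical placeholders, not the paper's results.

```python
def formation_enthalpy(reaction_enthalpy, reactant_formation,
                       known_product_formation):
    """Standard formation enthalpy of the single unknown product, via
    Hess's law:

        dH_rxn = sum(dHf products) - sum(dHf reactants)
    =>  dHf_unknown = dH_rxn + sum(dHf reactants) - sum(dHf known products)

    All values in kJ/mol.
    """
    return (reaction_enthalpy
            + sum(reactant_formation)
            - sum(known_product_formation))

# Hypothetical reaction A + B -> unknown + C, with made-up enthalpies:
# computed reaction enthalpy -50.0, reactants at -285.8 and -1271.7,
# known co-product at -241.8 (all kJ/mol, purely illustrative).
dHf_unknown = formation_enthalpy(-50.0, [-285.8, -1271.7], [-241.8])
```

In practice the reaction enthalpy term is what the B3LYP/LCCSD(T) calculations supply (plus thermal corrections from the partition functions), and the roughly 15 kJ mol⁻¹ uncertainty quoted above propagates through this sum.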

Keywords: PUREX process, red oils, quantum chemical methods, hydrolysis

Procedia PDF Downloads 166
142 Real and Symbolic in Poetics of Multiplied Screens and Images

Authors: Kristina Horvat Blazinovic

Abstract:

In the context of a work of art, one can talk about the idea-concept-term-intention expressed by the artist through various forms of repetition (external, material, visible repetition). Such repetitions of elements (images in space, or moving visual and sound images in time) suggest a "covert", "latent" ("dressed") repetition, i.e., a hidden, latent term-intention-idea. Repeating in this way reveals a "deeper truth" that the viewer needs to decode and which is hidden "under" the technical manifestation of the multiplied images. It is not only images, sounds, and screens that are repeated; something else is repeated through them as well, even if, in some cases, it is the very idea of repetition that is repeated. This paper examines serial images and single-channel or multi-channel artworks in the field of video/film art and video installations that in some way imply the concept of repetition and multiplication. Moving or static images and screens (as multi-screens) are repeated in time and space. The categories of the real and the symbolic partly refer to the Lacanian registers, i.e., the Imaginary - Symbolic - Real trinity that represents the orders within which human subjectivity is established. Authors such as Bruce Nauman, VALIE EXPORT, Ragnar Kjartansson, Wolf Vostell, Shirin Neshat, Paul Sharits, Harun Farocki, Dalibor Martinis, Andy Warhol, Douglas Gordon, Bill Viola, Frank Gillette and Ira Schneider, and Marina Abramovic problematize, in different ways, the concept and procedures of multiplication and repetition, not in the sense of "copying" or "repeating" reality or an original, but of repeated repetitions of the simulacrum. The referenced works of art are often connected by the theme of the traumatic. Repetitions of images and situations are a response to the traumatic (experience); repetition itself is a symptom of trauma. On the other hand, repeating and multiplying traumatic images either produces a new traumatic effect or cancels it.
Reflections on repetition as a temporal and spatial phenomenon are in line with the chapters that link philosophical considerations of space, time, and the experience of temporality with their manifestation in works of art. The observations about time and the relation of perception and memory follow Henri Bergson and his conception of duration (durée) as a "quality of quantity." Video works intended to be displayed as a loop express the idea of infinite duration ("pure time," according to Bergson). The loop wants to be always present, to fixate itself in time. Wholeness is unrecognizable because the intention is to make the effect infinitely cyclic. Reflections on time and space end with considerations about the occurrence and effects of time and space intervals as places and moments "between", points of connection and separation, of continuity and stopping, by reference to the "interval theory" of the Soviet filmmaker Dziga Vertov. The range of possibilities that can be explored in the interval mode is wide. Intervals represent the perception of time and space in the form of pauses, interruptions, and breaks (e.g., emotional, dramatic, or rhythmic); they denote emptiness or silence, distance, proximity, interstitial space, or a gap between various states.

Keywords: video installation, performance, repetition, multi-screen, real and symbolic, loop, video art, interval, video time

Procedia PDF Downloads 141
141 The Prevalence of Soil Transmitted Helminths among Newly Arrived Expatriate Labors in Jeddah, Saudi Arabia

Authors: Mohammad Al-Refai, Majed Wakid

Abstract:

Introduction: Soil-transmitted diseases (STD) are caused by intestinal worms that are transmitted via various routes into the human body, resulting in various clinical manifestations. The intestinal worms causing these infections are known as soil-transmitted helminths (STH), including hookworms, Ascaris lumbricoides (A. lumbricoides), Trichuris trichiura (T. trichiura), and Strongyloides stercoralis (S. stercoralis). Objectives: The aim of this study was to investigate the prevalence of STH among newly arrived expatriate labors in Jeddah city, Saudi Arabia, using three different techniques (direct smears, sedimentation concentration, and real-time PCR). Methods: A total of 188 stool specimens were collected and investigated at the parasitology laboratory in the Special Infectious Agents Unit at King Fahd Medical Research Center, King Abdulaziz University in Jeddah, Saudi Arabia. Microscopic examination of wet mount preparations using normal saline and Lugol's iodine was carried out, followed by the formol-ether sedimentation method. In addition, real-time PCR was used as a molecular tool to detect several STH and hookworm species. Results: In the 188 stool specimens analyzed, several other parasite types were detected in addition to STH. 9 samples (4.79%) were positive for Entamoeba coli, 7 samples (3.72%) for T. trichiura, 6 samples (3.19%) for Necator americanus, 4 samples (2.13%) for S. stercoralis, 4 samples (2.13%) for A. lumbricoides, 4 samples (2.13%) for E. histolytica, 3 samples (1.60%) for Blastocystis hominis, 2 samples (1.06%) for Ancylostoma duodenale, 2 samples (1.06%) for Giardia lamblia, 1 sample (0.53%) for Iodamoeba buetschlii, 1 sample (0.53%) for Hymenolepis nana, 1 sample (0.53%) for Endolimax nana, and 1 sample (0.53%) for Heterophyes heterophyes. Of the 35 infected cases, 26 revealed single infections, 8 double infections, and only one a triple infection of different STH species and other intestinal parasites.
Higher rates of STH infections were detected among housemaids (11 cases), followed by drivers (7 cases), when compared to other occupations. According to educational level, illiterate participants represented the majority of infected workers (12 cases). The majority of positive cases were workers from the Philippines. Comparing the laboratory techniques, out of the 188 samples screened for STH, real-time PCR detected parasite DNA in 19/188 samples, followed by the Ritchie sedimentation technique (18/188) and the direct wet smear (7/188). Conclusion: STH infections are a major public health issue for healthcare systems around the world. Communities must be educated on hygiene practices and on the severity of such parasites to human health. Drivers and housemaids come into close contact with families, including children and the elderly; this may put family members at risk of developing serious complications related to STH, especially as the majority of the workers were illiterate, lacking basic hygiene knowledge and practices. We recommend that the official authorities in Jeddah and across the Kingdom of Saudi Arabia revise the standard screening tests for newly arrived workers and enforce regular follow-up inspections to minimize the chances of the spread of STH from expatriate workers to the public.
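The percentages reported above follow directly from the positive counts over the 188 specimens and can be rechecked with a one-line prevalence calculation:

```python
def prevalence(positives, total=188):
    """Prevalence as a percentage of the screened specimens,
    rounded to two decimals as in the reported results."""
    return round(100.0 * positives / total, 2)

# Counts reported in the abstract (subset shown).
counts = {
    "Entamoeba coli": 9,
    "Trichuris trichiura": 7,
    "Necator americanus": 6,
    "Strongyloides stercoralis": 4,
}
rates = {name: prevalence(n) for name, n in counts.items()}
```

The computed values reproduce the figures in the text (e.g. 9/188 gives 4.79%), confirming the internal consistency of the reported results.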

Keywords: expatriate labors, Jeddah, prevalence, soil transmitted helminths

Procedia PDF Downloads 110
140 Closing the Gap: Efficient Voxelization with Equidistant Scanlines and Gap Detection

Authors: S. Delgado, C. Cerrada, R. S. Gómez

Abstract:

This research introduces an approach to voxelizing the surfaces of triangular meshes with efficiency and accuracy. Our method leverages parallel equidistant scan-lines and introduces a Gap Detection technique to address the limitations of existing approaches. We present a comprehensive study showcasing the method's effectiveness, scalability, and versatility in different scenarios. Voxelization is a fundamental process in computer graphics and simulations, playing a pivotal role in applications ranging from scientific visualization to virtual reality. Our algorithm focuses on enhancing the voxelization process, especially for complex models and high resolutions. One of the major challenges in voxelization on the Graphics Processing Unit (GPU) is the high cost of discovering the same voxels multiple times. These repeated voxels incur costly memory operations that carry no useful information. Our scan-line-based method ensures that each voxel is detected exactly once when processing a triangle, enhancing performance without compromising the quality of the voxelization. The heart of our approach lies in the use of parallel, equidistant scan-lines to traverse the interiors of triangles. This minimizes redundant memory operations and avoids revisiting the same voxels, resulting in a significant performance boost. Moreover, our method's computational efficiency is complemented by its simplicity and portability. Written as a single compute shader in the OpenGL Shading Language (GLSL), it is highly adaptable to various rendering pipelines and hardware configurations. To validate our method, we conducted extensive experiments on a diverse set of models from the Stanford repository. Our results demonstrate not only the algorithm's efficiency but also its ability to produce accurate voxelizations free of 26-connectivity tunnels. The Gap Detection technique successfully identifies and addresses gaps, ensuring consistent and visually pleasing voxelized surfaces.
Furthermore, we introduce the Slope Consistency Value metric, quantifying the alignment of each triangle with its primary axis. This metric provides insights into the impact of triangle orientation on scan-line based voxelization methods. It also aids in understanding how the Gap Detection technique effectively improves results by targeting specific areas where simple scan-line-based methods might fail. Our research contributes to the field of voxelization by offering a robust and efficient approach that overcomes the limitations of existing methods. The Gap Detection technique fills a critical gap in the voxelization process. By addressing these gaps, our algorithm enhances the visual quality and accuracy of voxelized models, making it valuable for a wide range of applications. In conclusion, "Closing the Gap: Efficient Voxelization with Equidistant Scan-lines and Gap Detection" presents an effective solution to the challenges of voxelization. Our research combines computational efficiency, accuracy, and innovative techniques to elevate the quality of voxelized surfaces. With its adaptable nature and valuable innovations, this technique could have a positive influence on computer graphics and visualization.
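The exact definition of the Slope Consistency Value is the authors'; as an assumed illustration of the underlying idea, the sketch below measures how well a triangle's face normal lines up with its dominant axis, which governs how evenly equidistant scan-lines cover the triangle.

```python
# Hedged sketch in the spirit of the Slope Consistency Value: a triangle
# perpendicular to its dominant axis is covered evenly by scan-lines
# (alignment near 1), while a steeply sloped sliver (alignment near
# 1/sqrt(3)) is where gaps can appear and Gap Detection must step in.

def triangle_normal(a, b, c):
    """Unnormalized face normal of triangle (a, b, c) via the cross product."""
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    return [u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]]

def axis_alignment(a, b, c):
    """Cosine between the face normal and the closest coordinate axis:
    1.0 for an axis-aligned triangle, smaller for oblique slopes."""
    n = triangle_normal(a, b, c)
    length = sum(x * x for x in n) ** 0.5
    return max(abs(x) for x in n) / length if length else 0.0

# An axis-aligned triangle in the z = 0 plane is perfectly aligned.
flat = axis_alignment((0, 0, 0), (1, 0, 0), (0, 1, 0))
```

Binning triangles by such an alignment score is one way to study, as the paper does, where simple scan-line methods fail and where Gap Detection improves the result.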

Keywords: voxelization, GPU acceleration, computer graphics, compute shaders

Procedia PDF Downloads 41
139 The Role of a Biphasic Implant Based on a Bioactive Silk Fibroin for Osteochondral Tissue Regeneration

Authors: Lizeth Fuentes-Mera, Vanessa Perez-Silos, Nidia K. Moncada-Saucedo, Alejandro Garcia-Ruiz, Alberto Camacho, Jorge Lara-Arias, Ivan Marino-Martinez, Victor Romero-Diaz, Adolfo Soto-Dominguez, Humberto Rodriguez-Rocha, Hang Lin, Victor Pena-Martinez

Abstract:

Biphasic scaffolds in cartilage tissue engineering have been designed not only to recapitulate the osteochondral architecture but also to take advantage of the healing ability of bone to promote the integration of the implant with the surrounding tissue, and thereby bone restoration and cartilage regeneration. This study reports the development and characterization of a biphasic scaffold based on the assembly of a cartilage phase, constituted by fibroin biofunctionalized with bovine cartilage matrix and cellularized with pre-chondrocytes differentiated from autologous adipose tissue stem cells, firmly attached to a bone phase (decellularized bovine bone), to mimic the structure of native tissue and to promote cartilage regeneration in a model of joint damage in pigs. Biphasic scaffolds were assembled by fibroin crystallization with methanol. The histological and ultrastructural architectures were evaluated by optical and scanning electron microscopy, respectively. Mechanical tests were conducted to evaluate the Young's modulus of the implant. For the biological evaluation, pre-chondrocytes were loaded onto the scaffolds, and cellular adhesion, proliferation, and gene expression analysis of cartilage extracellular matrix components were performed. Scaffolds that had been cellularized and matured for 10 days were implanted into critical osteochondral defects (3 mm in diameter, 9 mm in depth) in a porcine model (n=4). Three treatments were applied per knee: group 1: monophasic cellular scaffold (MS) (single chondral phase); group 2: biphasic scaffold cellularized only in the chondral phase (BS1); group 3: biphasic scaffold cellularized in both the bone and chondral phases (BS2). Simultaneously, a control without treatment was evaluated. Four weeks after surgery, tissue integration and regeneration were analyzed by X-ray, histology, and immunohistochemistry.
The mechanical assessment showed that the acellular biphasic composites exhibited a Young's modulus of 805.01 kPa, similar to that of native cartilage (400-800 kPa). In vitro biological studies revealed the chondroinductive ability of the biphasic implant, evidenced by an increase in sulfated glycosaminoglycans (GAGs) and type II collagen, both secreted by the chondrocytes cultured on the scaffold over 28 days. No evidence of adverse or inflammatory reactions was observed in the in vivo trial; however, in group 1, the defects were not reconstructed. In groups 2 and 3, good integration of the implant with the surrounding tissue was observed. Defects in group 2 were filled with hyaline cartilage and normal bone, whereas group 3 defects showed fibrous repair tissue. In conclusion, our findings demonstrate the efficacy of a biphasic and bioactive scaffold based on silk fibroin, which combines chondroinductive features and biomechanical capability with appropriate integration with the surrounding tissue, representing a promising alternative for osteochondral tissue-engineering applications.

Keywords: biphasic scaffold, extracellular cartilage matrix, silk fibroin, osteochondral tissue engineering

Procedia PDF Downloads 124
138 Early Impact Prediction and Key Factors Study of Artificial Intelligence Patents: A Method Based on LightGBM and Interpretable Machine Learning

Authors: Xingyu Gao, Qiang Wu

Abstract:

Patents play a crucial role in protecting innovation and intellectual property. Early prediction of the impact of artificial intelligence (AI) patents helps researchers and companies allocate resources and make better decisions. Understanding the key factors that influence patent impact can assist researchers in gaining a better understanding of the evolution of AI technology and innovation trends. Therefore, identifying highly impactful patents early and providing support for them holds immeasurable value in accelerating technological progress, reducing research and development costs, and mitigating market positioning risks. Despite the extensive research on AI patents, accurately predicting their early impact remains a challenge. Traditional methods often consider only single factors or simple combinations, failing to comprehensively and accurately reflect the actual impact of patents. This paper utilized the artificial intelligence patent database from the United States Patent and Trademark Office and the Lens.org patent retrieval platform to obtain specific information on 35,708 AI patents. Using six machine learning models, namely Multiple Linear Regression, Random Forest Regression, XGBoost Regression, LightGBM Regression, Support Vector Machine Regression, and K-Nearest Neighbors Regression, with early indicators of patents as features, the paper comprehensively predicted the impact of patents from three aspects: technical, social, and economic. These aspects include the technical leadership of patents, the number of citations they receive, and their shared value. The SHAP (SHapley Additive exPlanations) metric was used to explain the predictions of the best model, quantifying the contribution of each feature to the model's predictions. The experimental results on the AI patent dataset indicate that, for all three target variables, LightGBM regression shows the best predictive performance.
Specifically, patent novelty has the greatest impact on predicting the technical impact of patents and has a positive effect. Additionally, the number of owners, the number of backward citations, and the number of independent claims are all crucial and have a positive influence on predicting technical impact. In predicting the social impact of patents, the number of applicants is considered the most critical input variable, but it has a negative impact on social impact. At the same time, the number of independent claims, the number of owners, and the number of backward citations are also important predictive factors, and they have a positive effect on social impact. For predicting the economic impact of patents, the number of independent claims is considered the most important factor and has a positive impact on economic impact. The number of owners, the number of sibling countries or regions, and the size of the extended patent family also have a positive influence on economic impact. The study primarily relies on data from the United States Patent and Trademark Office for artificial intelligence patents. Future research could consider more comprehensive data sources, including artificial intelligence patent data, from a global perspective. While the study takes into account various factors, there may still be other important features not considered. In the future, factors such as patent implementation and market applications may be considered as they could have an impact on the influence of patents.
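The SHAP attributions used above are Shapley values computed over coalitions of features. The principle can be reproduced exactly on a tiny model by brute-force enumeration; this is a generic sketch with a made-up linear "impact" model, not the paper's LightGBM pipeline, which SHAP approximates far more efficiently for real models.

```python
from itertools import combinations
from math import factorial

def shapley_values(predict, baseline, instance):
    """Exact Shapley attribution of predict(instance) to its features.

    predict(values) scores a full feature vector; features absent from a
    coalition are filled from `baseline`. Enumeration over 2^n coalitions
    is feasible only for a handful of features, which is why SHAP uses
    model-specific approximations in practice.
    """
    n = len(instance)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(len(others) + 1):
            for S in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                with_i = [instance[j] if j in S or j == i else baseline[j]
                          for j in range(n)]
                without_i = [instance[j] if j in S else baseline[j]
                             for j in range(n)]
                phi[i] += weight * (predict(with_i) - predict(without_i))
    return phi

# Made-up linear "patent impact" model over two early indicators,
# e.g. novelty and number of independent claims (purely illustrative).
model = lambda x: 3.0 * x[0] + 2.0 * x[1]
phi = shapley_values(model, baseline=[0.0, 0.0], instance=[1.0, 4.0])
```

A defining property, visible in the test, is that the attributions sum to the difference between the prediction for the instance and the baseline prediction; the signs of the attributions are what support statements like "patent novelty has a positive effect on technical impact".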

Keywords: patent influence, interpretable machine learning, predictive models, SHAP

137 Smart Mobility Planning Applications in Meeting the Needs of the Urbanization Growth

Authors: Caroline Atef Shoukry Tadros

Abstract:

Massive urbanization growth threatens the sustainability of cities and the quality of city life. This raises the need for an alternative model of sustainability: future cities must be planned in a smarter way, with smarter mobility. Smart Mobility planning applications are solutions that use digital technologies and infrastructure advances to improve the efficiency, sustainability, and inclusiveness of urban transportation systems. They can help meet the needs of urbanization growth by addressing the challenges of traffic congestion, pollution, accessibility, and safety in cities. Some examples of Smart Mobility planning applications are the following. Mobility-as-a-Service: a service that integrates different transport modes, such as public transport, shared mobility, and active mobility, into a single platform that allows users to plan, book, and pay for their trips. This can reduce reliance on private cars, optimize the use of existing infrastructure, and provide more choice and convenience for travelers. MaaS Global is a company that offers mobility-as-a-service solutions in several cities around the world. Traffic flow optimization: a solution that uses data analytics, artificial intelligence, and sensors to monitor and manage traffic conditions in real time. This can reduce congestion, emissions, and travel time, as well as improve road safety and user satisfaction. Waycare is a platform that leverages data from various sources, such as connected vehicles, mobile applications, and road cameras, to provide traffic management agencies with insights and recommendations to optimize traffic flow. Logistics optimization: a solution that uses smart algorithms, blockchain, and IoT to improve the efficiency and transparency of the delivery of goods and services in urban areas. This can reduce the costs, emissions, and delays associated with logistics, as well as enhance customer experience and trust.
ShipChain is a blockchain-based platform that connects shippers, carriers, and customers and provides end-to-end visibility and traceability of shipments. Autonomous vehicles: a solution that uses advanced sensors, software, and communication systems to enable vehicles to operate without human intervention. This can improve the safety, accessibility, and productivity of transportation, as well as reduce the need for parking space and infrastructure maintenance. Waymo is a company that develops and operates autonomous vehicles for purposes such as ride-hailing, delivery, and trucking. These are some of the ways in which Smart Mobility planning applications can contribute to meeting the needs of urbanization growth. However, there are also various opportunities and challenges related to the implementation and adoption of these solutions, including regulatory, ethical, social, and technical aspects. It is therefore important to consider the specific context and needs of each city and its stakeholders when designing and deploying Smart Mobility planning applications.

Keywords: smart mobility planning, smart mobility applications, smart mobility techniques, smart mobility tools, smart transportation, smart cities, urbanization growth, future smart cities, intelligent cities, ICT information and communications technologies, IoT internet of things, sensors, lidar, digital twin, ai artificial intelligence, AR augmented reality, VR virtual reality, robotics, cps cyber physical systems, citizens design science

136 Prompt Photons Production in Compton Scattering of Quark-Gluon and Annihilation of Quark-Antiquark Pair Processes

Authors: Mohsun Rasim Alizada, Azar Inshalla Ahmdov

Abstract:

Prompt photons are perhaps the most versatile tools for studying the dynamics of relativistic collisions of heavy ions. The study of photon radiation is also of interest because, in most hadron interactions, photons emerge as a background to other signals under study. The production of prompt photons in nucleon-nucleon collisions was previously studied in experiments at the Relativistic Heavy Ion Collider (RHIC) and the Large Hadron Collider (LHC). Due to the large energy of the colliding nucleons, many different elementary particles are produced in addition to prompt photons. The production of these additional particles, however, makes it difficult to determine the prompt-photon production cross-section accurately. From this point of view, the experiments planned at the Nuclotron-based Ion Collider Facility (NICA) complex will have a great advantage, since the energies attainable for colliding heavy ions will reduce the number of additionally produced particles. The study of prompt-photon production processes is of particular importance for probing the gluon content of hadrons, since the photon carries information about the hard subprocess. In the present paper, prompt photon production in Compton scattering of a quark on a gluon and in annihilation of a quark-antiquark pair is investigated. The matrix elements of the quark-gluon Compton scattering and quark-antiquark annihilation processes have been written down. The squares of the matrix elements have been calculated in FeynCalc. The phase volume of the subprocesses has been determined, and expressions for the differential cross-sections of the subprocesses have been obtained. Given the resulting expressions for the squares of the matrix elements, the differential cross-section depends not only on the energy of the colliding protons but also on the quark masses. The differential cross-sections of the subprocesses are estimated.
It is shown that the differential cross-sections of the subprocesses decrease with increasing energy of the colliding protons. The asymmetry coefficient with polarization of the colliding protons is determined. The calculation showed that the squares of the matrix element of the Compton scattering process with and without the polarization of the colliding protons are identical. The asymmetry coefficient of this subprocess is zero, which is consistent with the literature. It is known that in any singly polarized process involving a photon, the squares of the matrix elements with and without the polarization of the initial particle must coincide; that is, the terms in the square of the matrix element proportional to the degree of polarization vanish. The coincidence of the squares of the matrix elements indicates that the parity of the system is conserved. The asymmetry coefficient of the quark-antiquark annihilation process decreases linearly from +1 to -1 with the increasing product of the polarization degrees of the colliding protons. Thus, it was found that the differential cross-sections of the subprocesses decrease with increasing energy of the colliding protons. The asymmetry coefficient is maximal when the polarizations of the colliding protons are opposite and minimal when they are aligned. Taking into account the polarization of only the initial quarks and gluons in Compton scattering does not contribute to the differential cross-section of the subprocess.
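For reference, the well-known leading-order differential cross-sections of the two subprocesses can be written in terms of the Mandelstam variables of the subprocess; the forms below are the standard massless-quark-limit results (the paper's full expressions retain the quark masses), with e_q the quark charge and alpha, alpha_s the QED and QCD couplings.

```latex
% Leading-order prompt-photon subprocesses (massless-quark limit)
\frac{d\hat\sigma}{d\hat t}\left(qg \to \gamma q\right)
  = -\,\frac{\pi \alpha \alpha_s e_q^2}{3\,\hat s^2}
    \left(\frac{\hat u}{\hat s} + \frac{\hat s}{\hat u}\right),
\qquad
\frac{d\hat\sigma}{d\hat t}\left(q\bar q \to \gamma g\right)
  = \frac{8\pi \alpha \alpha_s e_q^2}{9\,\hat s^2}
    \left(\frac{\hat t}{\hat u} + \frac{\hat u}{\hat t}\right).
```

Since û is negative in the physical region, both expressions are positive, and both fall with increasing ŝ, consistent with the energy dependence reported in the abstract.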

Keywords: annihilation of a quark-antiquark pair, coefficient of asymmetry, Compton scattering, effective cross-section

135 Sustainability Communications Across Multi-Stakeholder Groups: A Critical Review of the Findings from the Hospitality and Tourism Sectors

Authors: Frederica Pettit

Abstract:

Contribution: Stakeholder involvement in CSR is essential to ensuring pro-environmental attitudes and behaviours across multi-stakeholder groups. Despite increased awareness of the benefits of a collaborative approach to sustainability communications, its success is limited by difficulties in sustaining active online conversations with stakeholder groups. Whilst previous research defines the effectiveness of sustainability communications, this paper contributes to knowledge through the development of a theoretical framework that explores the processes for achieving pro-environmental attitudes and behaviours in stakeholder groups. The research also considers social media as an opportunity to communicate CSR information to all stakeholder groups. Approach: A systematic review was chosen to investigate the effectiveness of the types of sustainability communications used in the hospitality and tourism industries. The systematic review was completed using Web of Science and Scopus with the search terms “sustainab* communicat*”, “effective or effectiveness,” and “hospitality or tourism,” limiting the results to peer-reviewed research. 133 abstracts were initially read, with articles excluded for irrelevance, duplication, non-empirical content, and language. A total of 45 papers were included in the systematic review. Five propositions were created based on the results of the systematic review, helping to develop a theoretical framework of the processes needed for companies to encourage pro-environmental behaviours across multi-stakeholder groups. Results: The theoretical framework developed in the paper sets out the processes necessary for companies to achieve pro-environmental behaviours in stakeholders. The processes for achieving pro-environmental attitudes and behaviours are stakeholder-focused, identifying the need for communications to be specific to their targeted audience.
Collaborative communications that enable stakeholders to engage with CSR information and provide feedback lead to higher awareness of shared CSR visions and to pro-environmental attitudes and behaviours. These processes should also aim to improve relationships with stakeholders through CSR transparency and CSR strategies that match stakeholder values and ethics, whilst prioritizing sustainability as part of employees' job roles. Alternatively, companies can prioritize pro-environmental behaviours through choice editing, mainstreaming sustainability as the only option. In recent years, there has been extensive research on social media as a viable source of sustainability communications, with benefits including direct interactions with stakeholders, the ability to reinforce the authenticity of CSR activities, and the encouragement of pro-environmental behaviours. Despite this, there are challenges to implementation, including difficulties controlling stakeholder criticism, negative stakeholder influences, and comments left on social media platforms. Conclusion: A lack of engagement with CSR information is a recurring reason for the failure to achieve pro-environmental attitudes and behaviours across stakeholder groups. Traditional CSR strategies contribute to this through their inability to engage their intended audience. Hospitality and tourism companies are improving stakeholder relationships through collaborative processes that reduce single-use plastic consumption. A collaborative approach to communications can lead to stakeholder satisfaction and, in turn, to changes in attitudes and behaviours. Different sources of communications are accessed by different stakeholder groups, identifying the need for targeted sustainability messaging, with benefits such as direct interactions with stakeholders and the ability to reinforce the authenticity of CSR activities while encouraging engagement with sustainability information.

Keywords: hospitality, pro-environmental attitudes and behaviours, sustainability communication, social media

134 The Shared Breath Project: Inhabiting Each Other’s Words and Being

Authors: Beverly Redman

Abstract:

With the Theatre Season of 2020-2021 cancelled due to COVID-19 at Purdue University Fort Wayne, IN, USA, faculty directors found themselves scrambling to create theatre production opportunities for their students in the Department of Theatre. Redman, Chair of the Department, found her community suffering from anxieties brought on by a confluence of issues: the global COVID-19 pandemic, the Black Lives Matter protests erupting in cities all across the United States, and the coming Presidential election, arguably the most important and most contentious in the country's history. Redman wanted to give her students the opportunity not only to speak on these issues but also to record who they were at this time, in their personal lives as well as in this broad socio-political context. She also wanted to invite them into an experience of feeling empathy, at a time when empathy in this world seems sorely lacking. Returning to a mode of Devising Theatre she had used with community groups in the past, in which storytelling and re-enactment of participants' life events are combined with oral history documentation practices, Redman planned The Shared Breath Project. The process involved three months of workshops, in which participants alternated between theatre exercises and oral history collection and documentation activities as a way of generating original material for a theatre production. The goal of the first half of the project was for each participant to produce a solo piece in the form of a monologue, distilled from many generations of potential material born out of games, improvisations, interviews, and the like. Along the way, many film and audio clips recorded the process of each person's written documentation, prepared by the subjects themselves but also by others in the group assigned to listen, watch, and record.
Then, in the second half of the project, once each participant had taken their own contribution from raw improvisatory self-presentation through the stages of composition and performative polish, participants exchanged their pieces. The second half of the project involved taking on each other's words, mannerisms, gestures, and melodic and rhythmic speech patterns and inhabiting them through the rehearsal process as their own; thus the title, The Shared Breath Project. Here, in stage two, the acting challenges became those of capturing the other and becoming the other through accurate mimicry that embraces Denis Diderot's concept of the Paradox of Acting, in that the actor is both seeming and being simultaneously. This paper shares the carefully documented process of making the live-streamed theatre production that resulted from these workshops, writing processes, and rehearsals and formed The Shared Breath Project, which ultimately took the students' Realist, life-based pieces and edited them into a single unified theatre production. The paper also draws on research on the Paradox of Acting, putting a Post-Structuralist spin on Diderot's theory. Here, the paper suggests the limitations of inhabiting the other by allowing that the other is always already impenetrable, but nevertheless worthy of unceasing empathetic striving and delving, in an epoch in which slow, careful attention to our fellows is in short supply.

Keywords: otherness, paradox of acting, oral history theatre, devised theatre, political theatre, community-based theatre, peoples’ theatre

133 Capturing Healthcare Expert’s Knowledge Digitally: A Scoping Review of Current Approaches

Authors: Sinead Impey, Gaye Stephens, Declan O’Sullivan

Abstract:

Mitigating organisational knowledge loss presents challenges for knowledge managers. Expert knowledge is embodied in people and captured in ‘routines, processes, practices and norms’ as well as in paper systems. These knowledge stores have limitations insofar as they make knowledge diffusion across geography or over time difficult. Technology, however, could present a potential solution by facilitating the capture and management of expert knowledge in a codified and sharable format. Before it can be digitised, the knowledge of healthcare experts must first be captured. Methods: As a first step in a larger project on this topic, a scoping review was conducted to identify how expert healthcare knowledge is captured digitally. The aim of the review was to identify current healthcare knowledge capture practices, identify gaps in the literature, and justify future research. The review followed a scoping review framework. From an initial 3,430 papers retrieved, 22 were deemed relevant and included in the review. Findings: Two broad approaches, direct and indirect, emerged, each with themes and subthemes. ‘Direct’ describes a process whereby knowledge is taken directly from subject experts. The themes identified were ‘Researcher mediated capture’ and ‘Digital mediated capture’. The latter was further distilled into two sub-themes: ‘Captured in specified purpose platforms (SPP)’ and ‘Captured in a virtual community of practice (vCoP)’. ‘Indirect’ processes rely on extracting new knowledge from previously captured data using artificial intelligence techniques. For this approach, the theme ‘Generated using artificial intelligence methods’ was identified. Although presented as distinct themes, some of the papers retrieved discuss combining more than one approach to capture knowledge. While no approach emerged as superior, two points arose from the literature. Firstly, human input was evident across themes, even with indirect approaches.
Secondly, a range of challenges common among approaches was highlighted. These were: (i) ‘Capturing an expert’s knowledge’ - difficulties here related to distinguishing the ‘expert’ from the merely very experienced, and to capturing tacit or difficult-to-articulate knowledge. (ii) ‘Confirming quality of knowledge’ - once knowledge is captured, the challenge noted was how to validate it and, therefore, its quality. (iii) ‘Continual knowledge capture’ - once knowledge is captured, validated, and used in a system, the process is still not complete. Healthcare is a knowledge-rich environment with new evidence emerging frequently; as such, knowledge needs to be reviewed, updated, or removed (redundancy) as appropriate. Although some methods were proposed to address this, such as plausible reasoning or case-based reasoning, conclusions could not be drawn from the papers retrieved. It was, therefore, highlighted as an area for future research. Conclusion: The results describe two broad approaches, direct and indirect. Three themes were identified: ‘Researcher mediated capture (Direct)’, ‘Digital mediated capture (Direct)’ and ‘Generated using artificial intelligence methods (Indirect)’. While no single approach was deemed superior, common challenges noted among approaches were ‘capturing an expert’s knowledge’, ‘confirming quality of knowledge’, and ‘continual knowledge capture’. However, continual knowledge capture was not fully explored in the papers retrieved and was highlighted as an important area for future research. Acknowledgments: This research is partially funded by the ADAPT Centre under the SFI Research Centres Programme (Grant 13/RC/2106) and is co-funded under the European Regional Development Fund.

Keywords: expert knowledge, healthcare, knowledge capture and knowledge management

132 Species Profiling of White Grub Beetles and Evaluation of Pre and Post Sown Application of Insecticides against White Grub Infesting Soybean

Authors: Ajay Kumar Pandey, Mayank Kumar

Abstract:

White grub (Coleoptera: Scarabaeidae) is a major destructive pest in the western Himalayan region of Uttarakhand. Beetles feed on apple, apricot, plum, walnut, etc. at night, while second and third instar grubs feed on the live roots of cultivated as well as non-cultivated crops. Collection and identification of scarab beetles through a light trap was carried out at the Crop Research Centre, Govind Ballabh Pant University, Pantnagar, Udham Singh Nagar (Uttarakhand) during 2018. Field trials were also conducted in 2018 to evaluate pre- and post-sown application of different insecticides against white grub infesting soybean. The insecticides Carbofuran 3 Granule (G) (750 g a.i./ha), Clothianidin 50 Water Dispersal Granule (WG) (120 g a.i./ha), Fipronil 0.3 G (50 g a.i./ha), Thiamethoxam 25 WG (80 g a.i./ha), Imidacloprid 70 WG (300 g a.i./ha), Chlorantraniliprole 0.4% G (100 g a.i./ha), and a mixture of Fipronil 40% and Imidacloprid 40% WG (300 g a.i./ha) were applied at the time of sowing in the pre-sown experiment, while the same dosages were applied to the standing soybean crop in the post-sown experiment (first fortnight of July). Cumulative plant mortality data were recorded at 20-, 40- and 60-day intervals and compared with the untreated control. A total of 23 species of white grub beetles were recorded on the light trap, and Holotrichia serrata Fabricius (Coleoptera: Melolonthinae) was found to be the predominant species, with 20.6% relative abundance of the total light trap catch (1,316 beetles), followed by Phyllognathus sp. (14.6% relative abundance). H. rosettae and Heteronychus lioderus occupied third and fourth rank with 11.85% and 9.65% relative abundance, respectively. The emergence of beetles of the predominant species started on 15th March 2018. In April, the average light trap catch was 382 white grub beetles; however, peak emergence of most white grub species was observed from June to July 2018, with 336 beetles in June followed by 303 beetles in July.
On the basis of the emergence pattern of white grub beetles, it may be concluded that the peak emergence period (PEP) for the beetles of H. serrata was the second fortnight of April, for a total period of 15 days. In May, June, and July, a relatively low population of H. serrata was observed. A decreasing trend in light trap catch was observed that continued until September; not a single beetle of H. serrata was observed on the light trap from September onwards. The cumulative plant mortality data in both experiments revealed that all insecticidal treatments were significantly superior protection-wise (6.49-16.82% cumulative plant mortality) to the untreated control, where plant mortality was 17.28 to 39.65% during the study. The mixture of Fipronil 40% and Imidacloprid 40% WG applied at the rate of 300 g a.i. per ha proved most effective, with the lowest plant mortality, i.e., 9.29 and 10.94% in the pre- and post-sown crop, respectively, followed by Clothianidin 50 WG (120 g a.i. per ha), where plant mortality was 10.57 and 11.93% in the pre- and post-sown treatments, respectively. Both treatments were statistically at par with each other. Production-wise, all insecticidal treatments were statistically superior (grain yields of 15.00-24.66 q per ha) to the untreated control, where the grain yield was 8.25 and 9.13 q per ha. The treatment Fipronil 40% + Imidacloprid 40% WG applied at the rate of 300 g a.i. per ha proved most effective and significantly superior to Imidacloprid 70 WG applied at the rate of 300 g a.i. per ha.
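The relative-abundance figures quoted above are simple shares of the total light-trap catch. A minimal sketch of the arithmetic, with species counts back-calculated (approximately) from the abstract's percentages and the 1,316-beetle total, so the counts are illustrative rather than the study's raw data:

```python
# Relative abundance = species count / total catch, expressed as a percentage.
# Counts below are back-calculated from the abstract's percentages (approximate).
catch = {
    "Holotrichia serrata": 271,
    "Phyllognathus sp.": 192,
    "H. rosettae": 156,
    "Heteronychus lioderus": 127,
}
total = 1316  # total light-trap catch across all 23 species

for species, n in catch.items():
    rel_abundance = 100.0 * n / total
    print(f"{species}: {rel_abundance:.1f}%")
```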

Keywords: bio efficacy, insecticide, soybean, white grub

131 Digital Holographic Interferometric Microscopy for the Testing of Micro-Optics

Authors: Varun Kumar, Chandra Shakher

Abstract:

Micro-optical components such as microlenses and microlens arrays have numerous engineering and industrial applications: collimation of laser diodes, imaging devices for sensor systems (CCD/CMOS, document copier machines, etc.), homogenizing beams for high-power lasers, serving as a critical component in Shack-Hartmann sensors, and fiber-optic coupling and optical switching in communication technology. Micro-optical components have also become an alternative for applications where miniaturization and the reduction of alignment and packaging costs are necessary. Compliance with high quality standards in the manufacturing of micro-optical components is a precondition for competing in worldwide markets; therefore, high demands are put on quality assurance. For quality assurance of these lenses, an economical measurement technique is needed. For cost and time reasons, the technique should be fast, simple (for production reasons), and robust, with high resolution. The technique should provide non-contact, non-invasive, full-field information about the shape of the micro-optical component under test. Interferometric techniques are non-contact and non-invasive and provide full-field information about the shape of optical components. Conventional interferometric techniques, such as holographic interferometry or Mach-Zehnder interferometry, are available for the characterization of micro-lenses; however, these techniques require more experimental effort and are also time-consuming. Digital holography (DH) overcomes the above-described problems. Digital holographic microscopy (DHM) allows one to extract both the amplitude and phase information of a wavefront transmitted through a transparent object (microlens or microlens array) from a single recorded digital hologram by using numerical methods. One can also reconstruct the complex object wavefront at different depths thanks to numerical reconstruction.
Digital holography provides axial resolution in the nanometer range, while lateral resolution is limited by diffraction and the size of the sensor. In this paper, a Mach-Zehnder-based digital holographic interferometric microscope (DHIM) system is used for the testing of transparent microlenses. The advantage of using the DHIM is that distortions due to aberrations in the optical system are avoided by the interferometric comparison of the reconstructed phase with and without the object (microlens array). In the experiment, a first digital hologram is recorded in the absence of the sample (microlens array) as a reference hologram, and a second hologram is recorded in the presence of the microlens array. The presence of the transparent microlens array induces a phase change in the transmitted laser light. The complex amplitude of the object wavefront in the presence and absence of the microlens array is reconstructed using the Fresnel reconstruction method. From the reconstructed complex amplitude, one can evaluate the phase of the object wave in the presence and absence of the microlens array. The phase difference between the two states of the object wave provides information about the optical path length change due to the shape of the microlens. With knowledge of the refractive indices of the microlens array material and of air, the surface profile of the microlens array is evaluated. The sag and the radius of curvature of the microlens are evaluated and reported. The sag of the microlens agrees well, within experimental limits, with the manufacturer's specification.

Keywords: micro-optics, microlens array, phase map, digital holographic interferometric microscopy

130 Rethinking Urban Voids: An Investigation beneath the Kathipara Flyover, Chennai into a Transit Hub by Adaptive Utilization of Space

Authors: V. Jayanthi

Abstract:

Urbanization and the pace of urbanization have increased tremendously in the last few decades; more towns are now being converted into cities. The urbanization trend is seen all over the world but is becoming most dominant in Asia. Today, the scale of urbanization in India is so huge that Indian cities are among the fastest-growing in the world, including Bangalore, Hyderabad, Pune, Chennai, Delhi, and Mumbai. Urbanization remains the single predominant factor continually linked to the destruction of urban green spaces. With reference to Chennai as a case study, a city suffering from rapid deterioration of its green spaces, this paper sought to fill this gap by exploring key factors, aside from urbanization, that are responsible for the destruction of green spaces. The paper relied on triangulated data collection techniques such as interviews, focus group discussions, personal observation, and retrieval of archival data. It was observed that, apart from urbanization, problems of ownership of green-space land, low priority given to green spaces, poor maintenance, weak enforcement of development controls, wastage of underpass spaces, and uncooperative attitudes of the general public play a critical role in the destruction of urban green spaces. The paper therefore narrows down to the point that, for a city to have proper, sustainable urban green space, broader city development plans are essential. Though rapid urbanization is an indicator of positive development, it is also accompanied by a host of challenges. Chennai lost a lot of greenery as the city urbanized rapidly, leading to a steep fall in vegetation cover. Environmental deterioration will be the big price we pay if Chennai continues to grow at the expense of greenery.
Soaring skyscrapers, multistoried complexes, gated communities, and villas frame the iconic skyline of today's Chennai, revealing that we overlook the importance of our green cover, which is needed to balance our urban and lung spaces. Chennai, with a clumped landscape at the center of the city, is predicted to convert 36% of its total area into urban areas by 2026. One major issue is that a city designed and planned in isolation creates underused spaces all around it that fall into neglect. These urban voids are dead, underused, or unused spaces in cities, formed by inefficient decision-making, poor land management, and poor coordination. Urban voids have huge potential for creating a stronger urban fabric: they can be exploited as public gathering spaces, pocket parks, or plazas, or can simply enhance the public realm, rather than serving as sites for dumping debris and for encroachment. Flyovers need to justify their existence by being more than just traffic and transport solutions. The vast, unused space below the Kathipara flyover is a case in point. This flyover connects three major routes: Tambaram, Koyambedu, and Adyar. This research focuses on the concept of urban voids: how the voids under flyovers can be used in the placemaking process, and how this neglected space beneath flyovers can become part of the urban realm through urban design and landscaping.

Keywords: landscape design, flyovers, public spaces, reclaiming lost spaces, urban voids

129 Ragging and Sludging Measurement in Membrane Bioreactors

Authors: Pompilia Buzatu, Hazim Qiblawey, Albert Odai, Jana Jamaleddin, Mustafa Nasser, Simon J. Judd

Abstract:

Membrane bioreactor (MBR) technology is challenged by the tendency for the membrane permeability to decrease due to ‘clogging’. Clogging includes ‘sludging’, the filling of the membrane channels with sludge solids, and ‘ragging’, the aggregation of short filaments to form long rag-like particles. Both sludging and ragging demand manual intervention to clear out the solids, which is time-consuming, labour-intensive and potentially damaging to the membranes. These factors impact on costs more significantly than membrane surface fouling which, unlike clogging, is largely mitigated by the chemical clean. However, practical evaluation of MBR clogging has thus far been limited. This paper presents the results of recent work attempting to quantify sludging and clogging based on simple bench-scale tests. Results from a novel ragging simulation trial indicated that rags can be formed within 24-36 hours from dispersed < 5 mm-long filaments at concentrations of 5-10 mg/L under gently agitated conditions. Rag formation occurred for both a cotton wool standard and samples taken from an operating municipal MBR, with between 15% and 75% of the added fibrous material forming a single rag. The extent of rag formation depended both on the material type or origin – lint from laundering operations forming zero rags – and the filament length. Sludging rates were quantified using a bespoke parallel-channel test cell representing the membrane channels of an immersed flat sheet MBR. Sludge samples were provided from two local MBRs, one treating municipal and the other industrial effluent. Bulk sludge properties measured comprised mixed liquor suspended solids (MLSS) concentration, capillary suction time (CST), particle size, soluble COD (sCOD) and rheology (apparent viscosity μₐ vs shear rate γ). 
The fouling and sludging propensity of the sludge was determined using the test cell, ‘fouling’ being quantified as the pressure incline rate against flux via the flux-step test (for which clogging was absent) and sludging by photographing the channel and processing the image to determine the ratio of the clogged to unclogged regions. A substantial difference in rheological and fouling behaviour was evident between the two sludge sources, the industrial sludge having a higher viscosity but being less shear-thinning than the municipal. Fouling, as manifested by the pressure increase Δp/Δt as a function of flux from classic flux-step experiments (where no clogging was evident), was more rapid for the industrial sludge. Across all samples of both sludge origins the expected trend of increased fouling propensity with increased CST and sCOD was demonstrated, whereas no correlation was observed between clogging rate and these parameters. The relative contribution of fouling and clogging was appraised by adjusting the clogging propensity via increasing the MLSS, both with and without a commensurate increase in the COD. Results indicated that whereas for the municipal sludge the fouling propensity was affected by the increased sCOD, there was no associated increase in the sludging propensity (or cake formation); the clogging rate actually decreased on increasing the MLSS. Against this, for the industrial sludge the clogging rate dramatically increased with solids concentration despite a decrease in the soluble COD. From this it was surmised that sludging did not relate to fouling.
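The sludging metric described above reduces to an area ratio on a thresholded channel photograph. The abstract does not specify the image-processing steps, so the following is only a minimal sketch; the intensity convention (dark pixels = sludge-filled channel) and the threshold value are assumptions for illustration:

```python
import numpy as np

def clogged_to_unclogged_ratio(gray, threshold=0.5):
    """Ratio of clogged to unclogged channel area from a channel photograph.

    gray: 2-D array of normalised pixel intensities in [0, 1]; pixels
    darker than the assumed threshold are counted as sludge-filled.
    """
    clogged = (gray < threshold).sum()
    unclogged = gray.size - clogged
    return clogged / unclogged

# Synthetic channel image: left half filled with dark sludge, right half clear.
img = np.ones((100, 200))
img[:, :100] = 0.1
print(clogged_to_unclogged_ratio(img))  # 1.0 (equal clogged and clear areas)
```

In practice the threshold would need calibrating against the lighting of the test cell; the ratio itself is then tracked over time to give a sludging rate.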

Keywords: clogging, membrane bioreactors, ragging, sludge

Procedia PDF Downloads 152
128 Plasma Levels of Collagen Triple Helix Repeat Containing 1 (CTHRC1) as a Potential Biomarker in Interstitial Lung Disease

Authors: Rijnbout-St.James Willem, Lindner Volkhard, Scholand Mary Beth, Ashton M. Tillett, Di Gennaro Michael Jude, Smith Silvia Enrica

Abstract:

Introduction: Fibrosing lung diseases are characterized by changes in the lung interstitium and are classified based on etiology: 1) environmental/exposure-related, 2) autoimmune-related, 3) sarcoidosis, 4) interstitial pneumonia, and 5) idiopathic. Among the idiopathic forms of interstitial lung disease (ILD), idiopathic pulmonary fibrosis (IPF) is the most severe. The pathogenesis of IPF is characterized by an increased presence of proinflammatory mediators resulting in alveolar injury, where injury to the alveolar epithelium precipitates an increase in collagen deposition, subsequently thickening the alveolar septum and decreasing gas exchange. Identifying biomarkers implicated in the pathogenesis of lung fibrosis is key to developing new therapies and improving the efficacy of existing therapies. Transforming growth factor-beta (TGF-B1), a mediator of tissue repair associated with WNT5A signaling, is partially responsible for fibroblast proliferation in ILD and is the target of Pirfenidone, one of the antifibrotic therapies used for patients with IPF. Canonical TGF-B signaling is mediated by the proteins SMAD 2/3, which are, in turn, indirectly regulated by Collagen Triple Helix Repeat Containing 1 (CTHRC1). In this study, we tested the following hypotheses: 1) CTHRC1 is elevated in the ILD cohort compared to unaffected controls, and 2) CTHRC1 is differentially expressed among ILD types. Material and Methods: CTHRC1 levels were measured by ELISA in 171 plasma samples from the deidentified University of Utah ILD cohort. Data represent a cohort of 131 ILD-affected participants and 40 unaffected controls.
CTHRC1 samples were categorized by a pulmonologist based on affectation status and disease subtype: IPF (n = 45), sarcoidosis (n = 4), nonspecific interstitial pneumonia (n = 16), hypersensitivity pneumonitis (n = 7), interstitial pneumonia (n = 13), autoimmune (n = 15), other ILD – a category that includes undifferentiated ILD diagnoses (n = 31) – and unaffected controls (n = 40). We conducted a single-factor ANOVA of plasma CTHRC1 levels to test whether mean CTHRC1 levels differ significantly between affected and unaffected participants. In-silico analysis was performed with Ingenuity Pathway Analysis® to characterize the role of CTHRC1 in the pathway of lung fibrosis. Results: Statistical analyses of CTHRC1 in plasma samples indicate that the average CTHRC1 level is significantly higher in ILD-affected participants than in controls, with autoimmune ILD being higher than other ILD types, thus supporting our hypotheses. In-silico analyses show that CTHRC1 indirectly activates and phosphorylates SMAD3, which in turn cross-regulates TGF-B1. CTHRC1 may also regulate the expression and transcription of TGF-B1 via WNT5A and its regulatory relationship with CTNNB1. Conclusion: In-silico pathway analyses demonstrate that CTHRC1 may be an important biomarker in ILD. Analysis of plasma samples indicates that CTHRC1 expression is positively associated with ILD affectation, with autoimmune ILD having the highest average CTHRC1 values. While characterizing CTHRC1 levels in plasma can help to differentiate among ILD types and predict response to Pirfenidone, the extent to which plasma CTHRC1 level is a function of ILD severity or chronicity is unknown.
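The single-factor ANOVA used above can be sketched in a few lines. The group means, spreads, and units below are invented placeholders for illustration, not values from the study; only the group sizes echo the cohort:

```python
import numpy as np

def one_way_anova_f(groups):
    """F statistic for a single-factor (one-way) ANOVA across groups."""
    grand = np.concatenate(groups)
    k, n = len(groups), grand.size
    ss_between = sum(len(g) * (g.mean() - grand.mean()) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical plasma CTHRC1 values (arbitrary units).
rng = np.random.default_rng(0)
groups = [rng.normal(10, 1, 40),   # unaffected controls
          rng.normal(14, 1, 45),   # IPF
          rng.normal(16, 1, 15)]   # autoimmune ILD
print(one_way_anova_f(groups))     # large F -> group means differ
```

The F statistic is then compared against the F(k-1, n-k) distribution to obtain the p-value; statistical packages such as `scipy.stats.f_oneway` perform both steps at once.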

Keywords: interstitial lung disease, CTHRC1, idiopathic pulmonary fibrosis, pathway analyses

Procedia PDF Downloads 163
127 Multi-Dimensional Experience of Processing Textual and Visual Information: Case Study of Allocations to Places in the Mind’s Eye Based on Individual’s Semantic Knowledge Base

Authors: Joanna Wielochowska, Aneta Wielochowska

Abstract:

Whilst the relationship between scientific areas such as cognitive psychology, neurobiology and philosophy of mind has been emphasized in recent decades of scientific research, concepts and discoveries made in these fields overlap and complement each other in their quest for answers to similar questions. The object of the following case study is to describe, analyze and illustrate the nature and characteristics of a certain cognitive experience which appears to display features of synaesthesia, or rather high-level synaesthesia (ideasthesia). The research was conducted on its two authors, monozygotic twins (both polysynaesthetes) who experience involuntary associations of identical nature. The authors attempted to identify which cognitive and conceptual dependencies may guide this experience. Operating on self-introduced nomenclature, the described phenomenon – multi-dimensional processing of textual and visual information – aims to define a relationship that involuntarily and immediately couples content introduced by means of text or image with a sensation of appearing in a certain place in the mind’s eye. More precisely: (I) defining a concept introduced by means of textual content during the activity of reading or writing, or (II) defining a concept introduced by means of visual content during the activity of looking at image(s), with a simultaneous sensation of being allocated to a given place in the mind’s eye. A place can then be defined as a cognitive representation of a certain concept. During the activity of processing information, a person has an immediate and involuntary feeling of appearing in a certain place themselves, just like a character in a story, ‘observing’ a venue or scenery from one or more perspectives and angles. That forms a unique and unified experience, constituting a background mental landscape of the text or image being looked at.
We came to the conclusion that semantic allocations to a given place can be divided and classified into categories and subcategories and are naturally linked with an individual’s semantic knowledge base. A place can be defined as a representation of one’s unique idea of a given concept that has been established in their semantic knowledge base. A multi-level structure of selectivity of places in the mind’s eye, as a reaction to given information (a single stimulus), draws comparisons to structures and patterns found in botany. Double-flowered varieties of flowers and the whorl arrangement characteristic of components of some flower species were given as an illustrative example. A composition of petals that fan out from one single point and wrap around a stem inspired the idea that, just as in nature, in the philosophy of mind there are patterns driven by the logic specific to a given phenomenon. The study intertwines terms perceived through the philosophical lens, such as the definition of meaning, subjectivity of meaning, mental atmosphere of places, and others. Analysis of this rare experience aims to contribute to the constantly developing theoretical framework of the philosophy of mind and to influence the way the human semantic knowledge base, and the processing of content in terms of distinguishing between information and meaning, are researched.

Keywords: information and meaning, information processing, mental atmosphere of places, patterns in nature, philosophy of mind, selectivity, semantic knowledge base, senses, synaesthesia

Procedia PDF Downloads 102
126 Two Component Source Apportionment Based on Absorption and Size Distribution Measurement

Authors: Tibor Ajtai, Noémi Utry, Máté Pintér, Gábor Szabó, Zoltán Bozóki

Abstract:

Beyond its climate- and health-related issues, ambient light-absorbing carbonaceous particulate matter (LAC) has recently also become of great scientific interest in terms of its regulation. It has been experimentally demonstrated in recent studies that LAC is dominantly composed of traffic and wood-burning aerosol, particularly under wintertime urban conditions, when photochemical and biological activities are negligible. Several methods have been introduced to quantitatively apportion the aerosol fractions emitted by wood burning and traffic, but most of them require costly and time-consuming off-line chemical analysis. As opposed to chemical features, the microphysical properties of airborne particles, such as optical absorption and size distribution, can be easily measured on-line, with high accuracy and sensitivity, especially under highly polluted urban conditions. Recently a new method has been proposed for the apportionment of wood-burning and traffic aerosols based on the spectral dependence of their absorption, quantified by the Aerosol Ångström Exponent (AAE). In this approach the absorption coefficient is deduced from a transmission measurement on a filter-accumulated aerosol sample, and the conversion factor between the measured optical absorption and the corresponding mass concentration (the specific absorption cross section) is determined by on-site chemical analysis. The recently developed multi-wavelength photoacoustic instruments provide a novel, in-situ approach towards the reliable and quantitative characterization of carbonaceous particulate matter. Therefore, they also open up novel possibilities for source apportionment through the measurement of light absorption.
In this study, we demonstrate an in-situ spectral characterization method of the ambient carbon fraction based on light absorption and size distribution measurements using our state-of-the-art multi-wavelength photoacoustic instrument (4λ-PAS) and a Scanning Mobility Particle Sizer (SMPS). The carbonaceous-particulate-selective source apportionment study was performed for ambient particulate matter in the city center of Szeged, Hungary, where the dominance of traffic and wood-burning aerosol has been experimentally demonstrated earlier. The proposed model is based on the parallel, in-situ measurement of optical absorption and size distribution. AAEff and AAEwb were deduced from the measured data using the defined correlation between the AOC(1064nm)/AOC(266nm) and N100/N20 ratios. σff(λ) and σwb(λ) were determined with the help of the independently measured temporal mass concentrations in the PM1 mode. Furthermore, the proposed optical source apportionment is based on the assumption that the light-absorbing fraction of PM is exclusively related to traffic and wood burning. This assumption is indirectly confirmed here by the fact that the measured size distribution is composed of two unimodal size distributions identified as corresponding to traffic and wood-burning aerosols. The method offers the possibility of replacing laborious chemical analysis with a simple in-situ measurement of aerosol size distribution data. The results of the proposed optical-absorption-based source apportionment method prove its applicability whenever measurements are performed at an urban site where traffic and wood burning are the dominant carbonaceous emission sources.
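The two-component idea behind such apportionment – each source's absorption scaling as λ^(−AAE) – reduces to a 2×2 linear system given measurements at two wavelengths. The sketch below is a hypothetical illustration: the AAE values are typical literature assumptions for fossil-fuel and wood-burning aerosol, not the AAEff/AAEwb derived in this study:

```python
def apportion_absorption(b1, b2, lam1, lam2, aae_ff=1.0, aae_wb=2.0):
    """Split absorption at lam1 into fossil-fuel (traffic) and wood-burning parts.

    Each component's absorption scales as lambda**(-AAE), so measurements
    b1, b2 at wavelengths lam1, lam2 form a linear system:
        b1 = b_ff + b_wb
        b2 = b_ff * (lam2/lam1)**(-aae_ff) + b_wb * (lam2/lam1)**(-aae_wb)
    """
    r = lam2 / lam1
    a, c = r ** (-aae_ff), r ** (-aae_wb)
    b_wb = (b2 - a * b1) / (c - a)
    b_ff = b1 - b_wb
    return b_ff, b_wb

# Round trip: synthesise two-wavelength measurements from known components,
# then recover those components.
lam1, lam2 = 470.0, 950.0            # nm
b_ff_true, b_wb_true = 10.0, 5.0     # absorption at lam1, arbitrary units
b1 = b_ff_true + b_wb_true
b2 = b_ff_true * (lam2 / lam1) ** -1.0 + b_wb_true * (lam2 / lam1) ** -2.0
print(apportion_absorption(b1, b2, lam1, lam2))
```

The decomposition is only as good as the assumed exponents: if the true AAE of either source drifts from the assumed value, the recovered fractions are biased, which is why the study derives AAEff and AAEwb from independent size distribution data rather than assuming them.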

Keywords: absorption, size distribution, source apportionment, wood burning, traffic aerosol

Procedia PDF Downloads 206