Search results for: technical trading signal
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3921

771 Some Considerations about the Theory of Spatial-Motor Thinking Applied to a Traditional Fife Band in Brazil

Authors: Murilo G. Mendes

Abstract:

This text presents part of the results of a Ph.D. thesis that used John Baily's theory and method, as well as its ethnographic application, in the context of the fife flutes of the Banda Cabaçal dos Irmãos Aniceto in the state of Ceará, northeast Brazil. John Baily is a British ethnomusicologist dedicated to studying the relationships between music, musical gesture, and embodied cognition. His methodology became a useful tool to highlight historical-social aspects present in the group's instrumental music. Remaining indigenous and illiterate, these musicians played and transmitted their music from generation to generation, for almost two hundred years, without any nomenclature or systematization of the fingering performed on the flute. In other words, their music, free from any theorization, is learned, felt, perceived, and processed directly through hearing and through the relationship between the instrument's motor skills and its sound result. For this reason, Baily's assumptions became fundamental to the analysis processes. As the author's methodology recommends, classes were held with the natives and provided technical musical learning and some important concepts. Then, transcriptions and analyses of musical aspects were made from patterns of movement on the instrument, incorporated by repetition and/or by the intrinsic facility of the instrument. As a result, it was discovered how the group reconciled its indigenous origins with the demands of the public power and the interests of the local financial elite from the mid-twentieth century. The article is structured as follows: first, the cultural context of the group, in which local historical and social aspects influence its social and musical practices; then, the methodological conceptions of John Baily; and, finally, their application to the music of the Irmãos Aniceto. The conclusion points to the good results of identifying, through this methodology and analysis, approximations between discourse, historical-social factors, and musical text. Still, questions are raised about its application in other contexts.

Keywords: Banda Cabaçal dos Irmãos Aniceto, John Baily, pífano, spatial-motor thinking

Procedia PDF Downloads 129
770 Increased Energy Efficiency and Improved Product Quality in Processing of Lithium Bearing Ores by Applying Fluidized-Bed Calcination Systems

Authors: Edgar Gasafi, Robert Pardemann, Linus Perander

Abstract:

For the production of lithium carbonate or hydroxide from lithium-bearing ores, a thermal activation (calcination/decrepitation) is required for the phase transition in the mineral to enable acid or soda leaching, respectively, in the downstream hydrometallurgical section. In this paper, traditional processing in the lithium industry is reviewed, and opportunities to reduce energy consumption and improve product quality and recovery rate are discussed. The conventional process approach is still based on rotary kiln calcination, a technology in use since the early days of lithium ore processing, albeit not significantly further developed since. A newer technology, at least for the lithium industry, is fluidized bed calcination. Decrepitation of lithium ore was investigated at Outotec’s Frankfurt Research Centre. Focusing on fluidized bed technology, a study of major process parameters (temperature and residence time) was performed at laboratory and larger bench scale, aiming for optimal product quality for subsequent processing. The technical feasibility was confirmed for optimal process conditions at pilot scale (400 kg/h feed input), providing the basis for industrial process design. Based on experimental results, a comprehensive Aspen Plus flow sheet simulation was developed to quantify mass and energy flows for the rotary kiln and fluidized bed systems. Results show a significant reduction in energy consumption and improved process performance in terms of temperature profile, product quality and plant footprint. The major conclusion is that a substantial reduction of energy consumption can be achieved in processing lithium-bearing ores by using fluidized bed based systems. At the same time, and different from the rotary kiln process, accurate temperature and residence time control is ensured in fluidized-bed systems, leading to a homogeneous temperature profile in the reactor which prevents overheating and sintering of the solids and results in uniform product quality.

Keywords: calcination, decrepitation, fluidized bed, lithium, spodumene

Procedia PDF Downloads 225
769 Climate Change Adaptation in the U.S. Coastal Zone: Data, Policy, and Moving Away from Moral Hazard

Authors: Thomas Ruppert, Shana Jones, J. Scott Pippin

Abstract:

State and federal government agencies within the United States have recently invested substantial resources into studies of future flood risk conditions associated with climate change and sea-level rise. A review of numerous case studies has uncovered several key themes that speak to an overall incoherence within current flood risk assessment procedures in the U.S. context. First, there are substantial local differences in the quality of available information about basic infrastructure, particularly with regard to local stormwater features and essential facilities that are fundamental components of effective flood hazard planning and mitigation. Second, there can be substantial mismatch between regulatory Flood Insurance Rate Maps (FIRMs) as produced by the National Flood Insurance Program (NFIP) and other 'current condition' flood assessment approaches. This is of particular concern in areas where FIRMs already seem to underestimate extant flood risk, which can only be expected to become a greater concern if future FIRMs do not appropriately account for changing climate conditions. Moreover, while there are incentives within the NFIP’s Community Rating System (CRS) to develop enhanced assessments that include future flood risk projections from climate change, the incentive structures seem to have counterintuitive implications that would tend to promote moral hazard. In particular, a technical finding of higher future risk seems to make it easier for a community to qualify for flood insurance savings, with much of these prospective savings applied to individual properties that have the most physical risk of flooding. However, there is at least some case study evidence to indicate that recognition of these issues is prompting broader discussion about the need to move beyond FIRMs as a standalone local flood planning standard. The paper concludes with approaches for developing climate adaptation and flood resilience strategies in the U.S. that move away from the social welfare model being applied through NFIP and toward more of an informed risk approach that transfers much of the investment responsibility over to individual private property owners.

Keywords: climate change adaptation, flood risk, moral hazard, sea-level rise

Procedia PDF Downloads 104
768 What Happens When We Try to Bridge the Science-Practice Gap? An Example from the Brazilian Native Vegetation Protection Law

Authors: Alice Brites, Gerd Sparovek, Jean Paul Metzger, Ricardo Rodrigues

Abstract:

The segregation between science and policy in the decision-making process hinders nature conservation efforts worldwide. Scientists have been criticized for not producing information that leads to effective solutions for environmental problems. In an attempt to bridge this gap between science and practice, we conducted a project aimed at supporting the implementation of the Brazilian Native Vegetation Protection Law (NVPL) in São Paulo State (SP), Brazil. To do so, we held multiple open meetings with the stakeholders involved in this discussion. Throughout this process, we collected stakeholders' demands for scientific information and brought back feedback about our findings. However, our main scientific advice was not taken into account during the NVPL implementation in SP. The NVPL has a mechanism that exempts landholders who converted native vegetation without offending the legislation in place at the time of the conversion from restoration requirements. We found out that there were no accurate spatialized data on native vegetation cover before the 1960s. Thus, the initial benchmark for applying the mechanism should be the 1965 Brazilian Forest Act. Even so, SP kept the 1934 Brazilian Forest Act as the initial legal benchmark for the law's application. This decision implies the use of a probabilistic native vegetation map that has uncertainty and subjectivity as intrinsic characteristics, so its use can lead to legal queries, corruption, and unfair application of the benefit. But why was this decision made even after the scientific advice had been widely divulgated? We raise some possible reasons to explain it. First, the decision was made during a government transition, showing that circumstantial political events can overshadow scientific arguments. Second, the debate about the NVPL in SP was not settled, and powerful stakeholders could benefit from the confusion created by this decision. Finally, the native vegetation protection mechanism is a complex issue, with many technical aspects that can be hard to understand for a non-specialized courtroom, such as the one that made the final decision in SP. This example shows that science and decision-makers still have a long way to go in improving how they interact, and that science needs to find a way to be heard above the political buzz.

Keywords: Brazil, forest act, science-based dialogue, science-policy interface

Procedia PDF Downloads 119
767 The Challenges of Digital Crime Nowadays

Authors: Bendes Ákos

Abstract:

Digital evidence will be the most widely used type of evidence in the future. With the development of the modern world, more and more new types of crime have evolved and transformed. For this reason, it is extremely important to examine these types of crime in order to get a comprehensive picture of them, with which we can help the authorities' work. As early as 1865, with the technologies of the time, people were able to forge a picture of a quality that is hard to recognize even today. With the help of today's technology, authorities receive a lot of false evidence. Officials are not able to process such a large amount of data, nor do they have the necessary technical knowledge to get a real picture of the authenticity of the given evidence. The digital world has many dangers. Unfortunately, we live in an age where we must protect everything digitally: our phones, our computers, our cars, and all the smart devices that are present in our personal lives. This is not only a burden on individuals, since companies, state institutions and public utilities are also forced to do so. The training of specialists and experts is essential so that the authorities can manage the incoming digital evidence at some level. When analyzing evidence, it is important to be able to examine it from the moment it is created. Establishing authenticity is a very important issue during official procedures. After the proper acquisition of the evidence, it is essential to store it safely and use it professionally. Otherwise, it will not have sufficient probative value, and in case of doubt, the court will always decide in favor of the defendant. One of the most common problems in the world of digital data and evidence is doubt, which is why it is extremely important to examine the above-mentioned problems. The most effective way to avoid digital crime is to prevent it, for which proper education and knowledge are essential. The aim is to present the dangers inherent in the digital world and the new types of digital crime. After a comparison of Hungarian investigative techniques with international practice, modernization proposals will be given. Sufficiently stable yet flexible legislation is needed that can follow the rapid changes in the world and not regulate after the fact but rather provide an appropriate framework. It is also important to be able to distinguish between digital and digitalized evidence, as the degree of probative force differs greatly. The aim of the research is to promote effective international cooperation and uniform legal regulation in the world of digital crime.

Keywords: digital crime, digital law, cyber crime, international cooperation, new crimes, skepticism

Procedia PDF Downloads 60
766 Carrying Capacity Estimation for Small Hydro Plant Located in Torrential Rivers

Authors: Elena Carcano, James Ball, Betty Tiko

Abstract:

Carrying capacity refers to the maximum population that a given level of resources can sustain over a specific period. In undisturbed environments, the maximum population is determined by the availability and distribution of resources, as well as the competition for their utilization. This information is typically obtained through long-term data collection. In regulated environments, where resources are artificially modified, populations must adapt to changing conditions, which can lead to additional challenges due to fluctuations in resource availability over time and throughout development. An example of this is observed in hydropower plants, which alter water flow and impact fish migration patterns and behaviors. To assess how fish species can adapt to these changes, specialized surveys are conducted, which provide valuable information on fish populations, sample sizes, and density before and after flow modifications. In such situations, it is highly recommended to conduct hydrological and biological monitoring to gain insight into how flow reductions affect species adaptability and to prevent unfavorable exploitation conditions. This analysis involves several planned steps that help design appropriate hydropower production while simultaneously addressing environmental needs. Consequently, the study aims to strike a balance between technical assessment, biological requirements, and societal expectations. Beginning with a small hydro project that requires restoration, this analysis focuses on the lower tail of the Flow Duration Curve (FDC), where both hydrological and environmental goals can be met. The proposed approach involves determining the threshold condition that is tolerable for the most vulnerable species sampled (Telestes muticellus) by identifying a low flow value from the long-term FDC. The results establish a practical connection between hydrological and environmental information and simplify the process by establishing a single reference flow value that represents the minimum environmental flow that should be maintained.
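
As a rough, generic illustration only (the abstract does not describe the authors' own computation, and Q95 is an assumed choice of low-flow reference), the sketch below shows how a flow duration curve can be built from a daily flow record and a value read from its lower tail:

```python
import numpy as np

def flow_duration_curve(daily_flows):
    """Return exceedance probabilities and the corresponding sorted flows."""
    flows = np.sort(np.asarray(daily_flows, dtype=float))[::-1]          # descending
    exceedance = np.arange(1, len(flows) + 1) / (len(flows) + 1)         # Weibull plotting position
    return exceedance, flows

def low_flow_threshold(daily_flows, exceedance_target=0.95):
    """Flow exceeded 95% of the time (Q95), often used as a low-flow reference."""
    exceedance, flows = flow_duration_curve(daily_flows)
    return np.interp(exceedance_target, exceedance, flows)

# Example with synthetic daily flows (m^3/s), roughly 20 years of record
rng = np.random.default_rng(0)
flows = rng.lognormal(mean=1.0, sigma=0.8, size=20 * 365)
print(f"Q95 (low-flow reference): {low_flow_threshold(flows):.3f} m^3/s")
```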

Keywords: carrying capacity, fish bypass ladder, long-term streamflow duration curve, eta-beta method, environmental flow

Procedia PDF Downloads 33
765 Revealing Single Crystal Quality by Insight Diffraction Imaging Technique

Authors: Thu Nhi Tran Caliste

Abstract:

X-ray Bragg diffraction imaging (“topography”) entered into practical use when Lang designed an “easy” technical setup to characterise the defects/distortions in the high-perfection crystals produced for the microelectronics industry. The use of this technique extended to all kinds of high-quality crystals and deposited layers, and a series of publications explained, starting from the dynamical theory of diffraction, the contrast of the images of the defects. A quantitative version of “monochromatic topography”, known as “Rocking Curve Imaging” (RCI), was implemented by using synchrotron light and taking advantage of the dramatic improvement of 2D detectors and computerised image processing. The raw data are constituted by a number (~300) of images recorded along the diffraction (“rocking”) curve. If the quality of the crystal is such that a one-to-one relation between a pixel of the detector and a voxel within the crystal can be established (this approximation is very well fulfilled if the local mosaic spread of the voxel is < 1 mradian), the software we developed provides, from the rocking curve recorded on each of the pixels of the detector, not only the “voxel” integrated intensity (the only data provided by the previous techniques) but also its “mosaic spread” (FWHM) and peak position. We will show, based on many examples, that these new data, never recorded before, open the field to a highly enhanced characterization of the crystal and deposited layers. These examples include the characterization of dislocations and twins occurring during silicon growth, various growth features in Al2O3, GaN and CdTe (where the diffraction displays the Borrmann anomalous absorption, which leads to a new type of images), and the characterisation of the defects within deposited layers, or their effect on the substrate. We could also observe (due to the very high sensitivity of the setup installed on BM05, which allows revealing these faint effects) that, when dealing with very perfect crystals, the Kato interference fringes predicted by dynamical theory are also associated with very small modifications of the local FWHM and peak position (of the order of the µradian). This rather unexpected (at least for us) result appears to be in keeping with preliminary dynamical theory calculations.
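
The software developed by the authors is not reproduced here; as a simplified stand-in, the sketch below uses moment-based estimates to illustrate how a per-pixel integrated intensity, peak position and an equivalent FWHM can in principle be extracted from a stack of images recorded along the rocking curve (array sizes and data are placeholders):

```python
import numpy as np

def rocking_curve_maps(stack, angles):
    """
    stack : (n_angles, ny, nx) diffraction images recorded along the rocking curve
    angles: (n_angles,) rocking angles (e.g. in microradians)
    Returns per-pixel integrated intensity, peak position and a Gaussian-equivalent FWHM.
    """
    stack = np.asarray(stack, dtype=float)
    angles = np.asarray(angles, dtype=float)

    integrated = stack.sum(axis=0)                                  # "voxel" integrated intensity
    safe = np.where(integrated > 0, integrated, np.nan)

    # Intensity-weighted centroid of each pixel's rocking curve = local peak position
    peak_pos = np.tensordot(angles, stack, axes=(0, 0)) / safe

    # Second moment -> width; 2.355*sigma is the FWHM of an equivalent Gaussian profile
    second_moment = np.tensordot(angles**2, stack, axes=(0, 0)) / safe - peak_pos**2
    fwhm = 2.355 * np.sqrt(np.clip(second_moment, 0, None))        # local "mosaic spread"

    return integrated, peak_pos, fwhm

# Toy example: 300 frames of a small 128 x 128 detector region
rng = np.random.default_rng(0)
angles = np.linspace(-150, 150, 300)                 # microradians
stack = rng.poisson(5.0, (300, 128, 128)).astype(float)
intensity_map, position_map, fwhm_map = rocking_curve_maps(stack, angles)
```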

Keywords: rocking curve imaging, X-ray diffraction, defect, distortion

Procedia PDF Downloads 129
764 Controlling Drone Flight Missions through Natural Language Processors Using Artificial Intelligence

Authors: Sylvester Akpah, Selasi Vondee

Abstract:

Unmanned Aerial Vehicles (UAV) as they are also known, drones have attracted increasing attention in recent years due to their ubiquitous nature and boundless applications in the areas of communication, surveying, aerial photography, weather forecasting, medical delivery, surveillance amongst others. Operated remotely in real-time or pre-programmed, drones can fly autonomously or on pre-defined routes. The application of these aerial vehicles has successfully penetrated the world due to technological evolution, thus a lot more businesses are utilizing their capabilities. Unfortunately, while drones are replete with the benefits stated supra, they are riddled with some problems, mainly attributed to the complexities in learning how to master drone flights, collision avoidance and enterprise security. Additional challenges, such as the analysis of flight data recorded by sensors attached to the drone may take time and require expert help to analyse and understand. This paper presents an autonomous drone control system using a chatbot. The system allows for easy control of drones using conversations with the aid of Natural Language Processing, thus to reduce the workload needed to set up, deploy, control, and monitor drone flight missions. The results obtained at the end of the study revealed that the drone connected to the chatbot was able to initiate flight missions with just text and voice commands, enable conversation and give real-time feedback from data and requests made to the chatbot. The results further revealed that the system was able to process natural language and produced human-like conversational abilities using Artificial Intelligence (Natural Language Understanding). It is recommended that radio signal adapters be used instead of wireless connections thus to increase the range of communication with the aerial vehicle.
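
The platform described above relies on an NLU chatbot rather than keyword rules; purely as a minimal illustration of the utterance-to-mission-command mapping, a toy parser might look like the following (the intents and phrasings are assumptions, not the paper's command set):

```python
import re

# Hypothetical command vocabulary; the actual system uses an AI chatbot, not keyword rules.
INTENTS = {
    r"\b(take ?off|launch)\b": ("takeoff", {}),
    r"\bland\b": ("land", {}),
    r"\breturn( to home)?\b": ("return_to_home", {}),
}

def parse_command(utterance):
    """Map a natural-language utterance to a (mission_action, parameters) pair."""
    text = utterance.lower()
    m = re.search(r"fly to \(?(-?\d+\.?\d*)[, ]+(-?\d+\.?\d*)\)?", text)
    if m:                                   # positional command with coordinates
        return "goto", {"x": float(m.group(1)), "y": float(m.group(2))}
    for pattern, action in INTENTS.items():
        if re.search(pattern, text):
            return action
    return "unknown", {}

print(parse_command("Please take off"))     # ('takeoff', {})
print(parse_command("fly to 12.5, 40.2"))   # ('goto', {'x': 12.5, 'y': 40.2})
```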

Keywords: artificial intelligence, chatbot, natural language processing, unmanned aerial vehicle

Procedia PDF Downloads 139
763 Treatment of Municipal Wastewater by Means of UV-Assisted Irradiation Technologies: Fouling Studies and Optimization of Operational Parameters

Authors: Tooba Aslam, Efthalia Chatzisymeon

Abstract:

UV-assisted irradiation technologies are well-established for water and wastewater treatment. UVC treatments are widely used at large scale, while UVA irradiation has more often been applied in combination with a catalyst (e.g. TiO₂ or FeSO₄) in smaller-scale systems. A technical issue of these systems is the formation of fouling on the quartz sleeves that house the lamps. This fouling can prevent complete irradiation, therefore reducing the efficiency of the process. This paper investigates the effects of operational parameters, such as the type of wastewater, irradiation source, H₂O₂ addition, and water pH, on fouling formation and, ultimately, on the treatment of municipal wastewater. Batch experiments have been performed at lab scale while monitoring water quality parameters including COD, TS, TSS, TDS, temperature, pH, hardness, alkalinity, turbidity, TOC, UV transmission, UV₂₅₄ absorbance, and metal concentrations. The residence time of the wastewater in the reactor was 5 days in order to observe any fouling formation on the quartz surface. Over this period, it was observed that chemical oxygen demand (COD) decreased by 30% and 59% during photolysis (UVA) and photo-catalysis (UVA/Fe/H₂O₂), respectively. Higher fouling formation was observed with iron-rich and phosphorus-rich wastewater. The highest rate of fouling developed with phosphorus-rich wastewater, followed by the iron-rich wastewater. Photo-catalysis (UVA/Fe/H₂O₂) had better removal efficiency than photolysis (UVA). This was attributed to the photo-Fenton reaction, which was initiated under these operational conditions. Scanning electron microscope (SEM) measurements of fouling formed on the quartz sleeves showed that the particles vary in size, shape, and structure; some have more distinct structures, are generally larger, and have a less compact structure than others. Energy-dispersive X-ray spectroscopy (EDX) results showed that the major elements present in the fouling cake were iron, phosphorus, and calcium. In conclusion, iron-rich wastewaters are more suitable for UV-assisted treatment since fouling formation on quartz sleeves can be minimized by the formation of oxidizing agents during treatment, such as hydroxyl radicals.

Keywords: advanced oxidation processes, photo-fenton treatment, photo-catalysis, wastewater treatment

Procedia PDF Downloads 73
762 Women Entrepreneurship: An Era Facing Challenges

Authors: Neetika Mahajan, Awanish Shukla

Abstract:

Entrepreneurship is a key driver of economic development. It opens opportunities for business startups and has the potential to expand employment opportunities for many. Entrepreneurship offers a 'purpose-thriving' approach towards society, with new technologies and the zeal to develop and compete in the market. There are many more advantages of entrepreneurship, such as freedom in the scope of work and independence in setting one's own goals. Women make up nearly 50 percent of India's population, yet constitute only about 10 percent of the total number of entrepreneurs in India. Women are found to be better risk calculators, more ambitious, and less prone to overconfidence. However, it is a hard fact that life has not been easy for women aspiring to professional success. Gender disparity is the biggest threat faced by female aspirants seeking to start new businesses. Further challenges, such as socio-cultural barriers and insufficient financial assistance, have been faced by the women of our country. To uplift the status of women in society, a number of initiatives have been taken up by the Government of India. Initiatives like the National Mission for Empowerment of Women (by the Ministry of Women and Child Development) and SKILL INDIA aim to increase the technical skills, knowledge and self-confidence of women for tapping employment opportunities. The Trade Related Entrepreneurship Assistance and Development (TREAD) Scheme and the Mahila Coir Yojana have been proposed by the Ministry of MSMEs, aiming to facilitate employment opportunities for women and entrepreneurship development. This paper aims to bring out the gaps and barriers that still prevent potential women entrepreneurs from coming forward and starting new businesses, despite the number of initiatives put in place by the Government of India. The aim is also to identify focus areas where further intervention is required and to propose suitable interventions. The methodology for this research includes primary and secondary data collection from on-ground surveys to track the various kinds of challenges faced by aspiring women entrepreneurs. Insight will also be given into initiatives by the Government of India towards women's empowerment and entrepreneurship assistance. Scientific quantitative tools will be used to analyze the collected information. The final output of the research shall focus on achieving the respective aims and objectives.

Keywords: women entrepreneurship, government programs and schemes, key challenges, economic development

Procedia PDF Downloads 249
761 MiR-200a/ZEB1 Pathway in Liver Fibrogenesis of Biliary Atresia

Authors: Hai-Ying Liu, Yi-Hao Chen, Shu-Yin Pang, Feng-Hua Wang, Xiao-Fang Peng, Li-Yuan Yang, Zheng-Rong Chen, Yi Chen, Bing Zhu

Abstract:

Objective: Biliary atresia (BA) is characterized by progressive liver fibrosis. Epithelial-mesenchymal transition (EMT) has been implicated as a key mechanism in the pathogenesis of organ fibrosis. MiR-200a has been shown to repress EMT. We aim to explore the role of miR-200a in the fibrogenesis of BA. Methods: We obtained the plasma samples and liver samples from patients with BA or controls to examine the role of miR-200a. Histological liver fibrosis was assessed using the Ishak fibrosis scores. Reverse transcription quantitative polymerase chain reaction (RT-qPCR) was performed to detect the expression of miR-200a in plasma. We also evaluated the expression of miR-200a in liver tissues using tyramide signal amplification fluorescence in situ hybridization (TSA-FISH). The expression of EMT related proteins zinc finger E-box-binding homeobox 1 (ZEB1), E-cadherin and α-smooth muscle actin (α-SMA) in the liver sections were detected by immunohistochemical staining. Results: We found that the expression of miR-200a was both elevated in the plasma and liver tissues from BA patients compared with the controls. The hepatic expression of ZEB1 and α-SMA were markedly increased in the liver sections from BA patients compared to the controls, whereas E-cadherin was downregulated in the BA group. Simultaneously, we noted that the hepatic expression of miR-200a, E-cadherin and α-SMA were upregulated with the progression of liver fibrosis in the BA group, while ZEB1 was downregulated with the progression of liver fibrosis in BA patients. Conclusion: These findings suggest EMT has a critical effect on the fibrotic process of BA, and the interaction between miR-200a and ZEB1 may regulate EMT and eventually influence liver fibrogenesis of BA.

Keywords: biliary atresia, liver fibrosis, MicroRNA, epithelial-mesenchymal transition, zinc finger E-box-binding homeobox 1

Procedia PDF Downloads 355
760 Low-Density Lipoproteins Mediated Delivery of Paclitaxel and MRI Imaging Probes for Personalized Medicine Applications

Authors: Sahar Rakhshan, Simonetta Geninatti Crich, Diego Alberti, Rachele Stefania

Abstract:

The combination of imaging and therapeutic agents in the same smart nanoparticle is a promising option for performing minimally invasive, imaging-guided therapy. In this study, low-density lipoproteins (LDLs), among the most attractive biodegradable and biocompatible nanoparticles, were used for the simultaneous delivery of paclitaxel (PTX), a hydrophobic antitumour drug, and an amphiphilic contrast agent, Gd-AAZTA-C17, to the B16-F10 melanoma cell line. These cells overexpress LDL receptors, as assessed by flow cytometry analysis. PTX and Gd-AAZTA-C17 loaded LDLs (LDL-PTX-Gd) were prepared and characterized, and their stability was assessed under 72 h incubation at 37 °C and compared to LDLs loaded with Gd-AAZTA-C17 only (LDL-Gd) and LDL-PTX. The cytotoxic effect of LDL-PTX-Gd was evaluated by MTT assay. The anti-tumour drug loaded into LDLs showed a significantly higher toxicity on B16-F10 cells with respect to the commercially available formulation Paclitaxel Kabi (PTX Kabi) used in clinical applications. It was possible to demonstrate a high uptake of LDL-Gd in B16-F10 cells. As a consequence of the high cell uptake, melanoma cells showed a significantly high cytotoxic effect when incubated in the presence of PTX (LDL-PTX-Gd). Furthermore, B16-F10 cells were used to perform Magnetic Resonance Imaging. From the analysis of the image signal intensity, it was possible to estimate the amount of internalized PTX indirectly from the decrease of relaxation times caused by Gd, which is proportional to its concentration. Finally, the treatment of PTX-loaded LDLs on B16-F10 tumour-bearing mice resulted in a marked reduction of tumour growth compared to the administration of PTX Kabi alone. In conclusion, LDLs are selectively taken up by tumour cells and can be successfully exploited for the selective delivery of paclitaxel and imaging agents.
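
The abstract states that internalized PTX is inferred indirectly from the Gd-induced shortening of relaxation times; a minimal sketch of the underlying relaxivity relation is shown below, with an illustrative (not measured) relaxivity value:

```python
def gd_concentration(t1_obs_s, t1_ref_s, r1_per_mM_s=5.0):
    """
    Estimate the Gd concentration (mM) from observed and reference longitudinal
    relaxation times, assuming 1/T1_obs = 1/T1_ref + r1 * [Gd].
    r1 is the millimolar relaxivity of the Gd complex (value here is illustrative only).
    """
    return (1.0 / t1_obs_s - 1.0 / t1_ref_s) / r1_per_mM_s

# Example: untreated cells relax with T1 = 2.5 s, labelled cells with T1 = 1.8 s
conc_mM = gd_concentration(t1_obs_s=1.8, t1_ref_s=2.5)
print(f"Estimated intracellular Gd: {conc_mM * 1000:.1f} uM")
# The PTX amount would then follow from the known PTX:Gd loading ratio of the particle.
```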

Keywords: low density lipoprotein, melanoma cell lines, MRI, paclitaxel, personalized medicine application, theragnostic system

Procedia PDF Downloads 121
759 An AI-generated Semantic Communication Platform in HCI Course

Authors: Yi Yang, Jiasong Sun

Abstract:

Almost every aspect of our daily lives is now intertwined with some degree of human-computer interaction (HCI). HCI courses draw on knowledge from disciplines as diverse as computer science, psychology, design principles, anthropology, and more. Our HCI course, named the Media and Cognition course, is constantly updated to reflect state-of-the-art technological advancements such as virtual reality, augmented reality, and artificial intelligence-based interactions. For more than a decade, our course has used an interest-based approach to teaching, in which students proactively propose research-based questions and collaborate with teachers, using course knowledge to explore potential solutions. Semantic communication plays a key role in facilitating understanding and interaction between users and computer systems, ultimately enhancing system usability and user experience. The advancements in AI-generated technology, which have gained significant attention from both academia and industry in recent years, are exemplified by language models like GPT-3 that generate human-like dialogues from given prompts. The latest version of our Human-Computer Interaction course implements a semantic communication platform based on AI-generation techniques. The purpose of this semantic communication is twofold: to extract and transmit task-specific information while ensuring efficient end-to-end communication with minimal latency. The AI-generated semantic communication platform evaluates the retainability of signal sources and converts low-retainability visual signals into textual prompts. These data are transmitted through AI-generation techniques and reconstructed at the receiving end; on the other hand, visual signals with a high retainability rate are compressed and transmitted according to their respective regions. The platform and the associated research are a testament to our students' growing ability to independently investigate state-of-the-art technologies.
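
Purely as an illustration of the per-frame decision logic described above (the threshold, helper functions and payload formats are hypothetical, not part of the platform's actual implementation), the branching step might be sketched as:

```python
def caption_model(frame):
    # Placeholder for a text-generation model (e.g., an image-captioning network).
    return "a short textual prompt describing the frame"

def compress_regions(frame):
    # Placeholder for region-wise compression of the visual signal.
    return [("region_0", b"...compressed bytes...")]

def transmit(frame, retainability, threshold=0.5):
    """Per frame: send a textual prompt or region-wise compressed pixels."""
    if retainability < threshold:
        return {"mode": "prompt", "payload": caption_model(frame)}      # low retainability
    return {"mode": "regions", "payload": compress_regions(frame)}      # high retainability

print(transmit(frame=None, retainability=0.2)["mode"])   # 'prompt'
print(transmit(frame=None, retainability=0.9)["mode"])   # 'regions'
```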

Keywords: human-computer interaction, media and cognition course, semantic communication, retainability, prompts

Procedia PDF Downloads 106
758 A Simplified, Low-Cost Mechanical Design for an Automated Motorized Mechanism to Clean Large Diameter Pipes

Authors: Imad Khan, Imran Shafi, Sarmad Farooq

Abstract:

Large diameter pipes, barrels, tubes, and ducts are used in a variety of applications covering civil and defense-related technologies. These may include heating/cooling networks, sign poles, bracing, casing, and artillery and tank gun barrels. These large diameter assemblies require regular inspection and cleaning to increase their life and reduce replacement costs. This paper describes the design, development, and testing results of an efficient yet simplified, low-maintenance mechanical design, controlled with minimal essential electronics and driven by an electric motor, intended for operation by non-technical staff. The proposed solution provides a simplified user interface and an automated cleaning mechanism that requires a single user to optimally clean pipes and barrels in the range of 105 mm to 203 mm caliber. The proposed system employs linear motion of a specially designed brush along the barrel using a chain of specific strength and a pulley anchor attached to both ends of the barrel. A specially designed and manufactured gearbox is coupled with an AC motor to move the contact brush with high torque for efficient cleaning. A suitably powered AC motor is fixed to the front adapter mounted on the muzzle side, whereas the rear adapter has a pulley-based anchor mounted towards the breech block in the case of a gun barrel. A large surface brush based on a mix of soft nylon and hard copper bristles is connected through a strong steel chain to the motor and anchor pulley. The system is equipped with limit switches that automatically reverse the direction of travel when either end is reached. The testing results, based on carefully established performance indicators, indicate the superiority of the proposed user-friendly cleaning mechanism vis-à-vis its life cycle cost.
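
As a minimal illustration of the limit-switch reversal logic described above (the switch-read and motor-drive callables are hypothetical stand-ins for the controller's actual I/O, not the paper's firmware), a polling loop could look like this:

```python
import time

def cleaning_cycle(read_muzzle_switch, read_breech_switch, set_motor, passes=4, poll_s=0.05):
    """Drive the brush back and forth, reversing whenever a limit switch closes."""
    direction = +1                                   # +1: towards muzzle, -1: towards breech
    completed = 0
    set_motor(direction)
    while completed < passes:
        if direction == +1 and read_muzzle_switch():
            direction, completed = -1, completed + 1
            set_motor(direction)                     # reverse at the muzzle end
        elif direction == -1 and read_breech_switch():
            direction, completed = +1, completed + 1
            set_motor(direction)                     # reverse at the breech end
        time.sleep(poll_s)                           # simple polling loop
    set_motor(0)                                     # stop after the requested passes

# Toy simulation: pretend the brush reaches an end every 20 polls
state = {"ticks": 0}
def fake_switch():
    state["ticks"] += 1
    return state["ticks"] % 20 == 0
cleaning_cycle(fake_switch, fake_switch, set_motor=lambda d: None, passes=2, poll_s=0.0)
```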

Keywords: pipe cleaning mechanism, limiting switch, pipe cleaning robot, large pipes

Procedia PDF Downloads 108
757 Suitable Site Selection of Small Dams Using Geo-Spatial Technique: A Case Study of Dadu Tehsil, Sindh

Authors: Zahid Khalil, Saad Ul Haque, Asif Khan

Abstract:

Decision-making about identifying suitable sites for any project by considering different parameters is difficult. Using GIS and Multi-Criteria Analysis (MCA) can make it easier for such projects. This technology has proved to be an efficient and adequate means of acquiring the desired information. In this study, GIS and MCA were employed to identify suitable sites for small dams in Dadu Tehsil, Sindh. The GIS software was used to create all the spatial parameters for the analysis. The parameters derived are slope, drainage density, rainfall, land use/land cover, soil groups, Curve Number (CN) and runoff index, with a spatial resolution of 30 m. The data used for deriving the above layers include the 30-meter resolution SRTM DEM, Landsat 8 imagery, rainfall from the National Centers for Environmental Prediction (NCEP) and soil data from the World Harmonized Soil Data (WHSD). The land use/land cover map is derived from Landsat 8 using supervised classification. Slope, drainage network and watershed are delineated by terrain processing of the DEM. The Soil Conservation Service (SCS) method is implemented to estimate the surface runoff from the rainfall. Prior to this, the SCS-CN grid is developed by integrating the soil and land use/land cover rasters. These layers, with some technical and ecological constraints, are assigned weights on the basis of suitability criteria. The pairwise comparison method, also known as the Analytic Hierarchy Process (AHP), is used as the MCA for assigning weights to each decision element. All the parameters and groups of parameters are integrated using weighted overlay in a GIS environment to produce suitable sites for the dams. The resultant layer is then classified into four classes, namely best suitable, suitable, moderate and less suitable. This study contributes to decision-making about suitable site analysis for small dams using geospatial data with a minimal amount of ground data. These suitability maps can be helpful for water resource management organizations in the determination of feasible rainwater harvesting (RWH) structures.
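
The pairwise judgments and reclassification scheme used in the study are not given in the abstract; the sketch below only illustrates the generic AHP-weighting plus weighted-overlay workflow, with assumed comparison values and random placeholder rasters standing in for the reclassified criterion layers:

```python
import numpy as np

# Pairwise comparison matrix (Saaty scale) for four illustrative criteria; the
# values are assumptions, not the judgments elicited in the study.
criteria = ["slope", "drainage_density", "runoff", "land_use"]
A = np.array([[1,   3,   2,   4],
              [1/3, 1,   1/2, 2],
              [1/2, 2,   1,   3],
              [1/4, 1/2, 1/3, 1]], dtype=float)

# Principal eigenvector of A gives the AHP weights
eigvals, eigvecs = np.linalg.eig(A)
w = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
weights = w / w.sum()

# Consistency ratio check (random index RI = 0.90 for a 4x4 matrix)
lambda_max = np.max(np.real(eigvals))
ci = (lambda_max - len(A)) / (len(A) - 1)
print(dict(zip(criteria, np.round(weights, 3))), "CR =", round(ci / 0.90, 3))

# Weighted overlay: each raster already reclassified to a common 1-4 suitability scale
rng = np.random.default_rng(0)
rasters = {name: rng.integers(1, 5, size=(100, 100)) for name in criteria}
suitability = sum(wt * rasters[name] for name, wt in zip(criteria, weights))
classes = np.digitize(suitability, np.quantile(suitability, [0.25, 0.5, 0.75]))  # 4 classes
```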

Keywords: remote sensing, GIS, AHP, RWH

Procedia PDF Downloads 383
756 Design of a Standard Weather Data Acquisition Device for the Federal University of Technology, Akure, Nigeria

Authors: Isaac Kayode Ogunlade

Abstract:

Data acquisition (DAQ) is the process by which physical phenomena from the real world are transformed into electrical signals that are measured and converted into a digital format for processing, analysis, and storage by a computer. The DAQ device is designed around a PIC18F4550 microcontroller communicating with a Personal Computer (PC) through USB (Universal Serial Bus). The research deployed knowledge of data acquisition systems and embedded systems to develop a weather data acquisition device using an LM35 sensor to measure weather parameters, together with an artificial intelligence approach (Artificial Neural Network, ANN) and a statistical approach (Autoregressive Integrated Moving Average, ARIMA) to predict precipitation (rainfall). The device was placed beside a standard device in the Department of Meteorology, Federal University of Technology, Akure (FUTA) for performance evaluation. Both devices (standard and designed) were operated for 180 days under the same atmospheric conditions for data collection (temperature, relative humidity, and pressure). The acquired data were used in the MATLAB R2012b environment to train ANN and ARIMA models to predict precipitation (rainfall). Root Mean Square Error (RMSE), Mean Absolute Error (MAE), the coefficient of determination (R²), and Mean Percentage Error (MPE) were deployed as standard measures to evaluate the models' precipitation predictions. The results from the operation of the developed device show that it has an efficiency of 96% and is compatible with personal computers and laptops. The simulation results for the acquired data show that the ANN model's precipitation (rainfall) prediction for two months (May and June 2017) had a disparity error of 1.59%, while that of ARIMA was 2.63%. The device will be useful in research, practical laboratories, and industrial environments.
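
For reference, the four evaluation measures named above can be computed as in the following sketch (the rainfall values shown are illustrative, not the study's data):

```python
import numpy as np

def evaluation_metrics(observed, predicted):
    """RMSE, MAE, R^2 and mean percentage error for a rainfall prediction series."""
    obs, pred = np.asarray(observed, float), np.asarray(predicted, float)
    error = obs - pred
    rmse = np.sqrt(np.mean(error**2))
    mae = np.mean(np.abs(error))
    r2 = 1.0 - np.sum(error**2) / np.sum((obs - obs.mean())**2)
    mpe = 100.0 * np.mean(error / obs)          # assumes no zero-rainfall observations
    return {"RMSE": rmse, "MAE": mae, "R2": r2, "MPE": mpe}

observed = [12.1, 8.4, 15.0, 9.7, 11.2]         # illustrative rainfall values (mm)
predicted = [11.8, 9.0, 14.1, 10.3, 11.0]
print(evaluation_metrics(observed, predicted))
```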

Keywords: data acquisition system, design device, weather development, predict precipitation and (FUTA) standard device

Procedia PDF Downloads 86
755 Nanoparticle-Activated Inflammasome Leads to Airway Hyperresponsiveness and Inflammation in a Mouse Model of Asthma

Authors: Pureun-Haneul Lee, Byeong-Gon Kim, Sun-Hye Lee, An-Soo Jang

Abstract:

Background: Nanoparticles may pose adverse health effects due to particulate matter inhalation. Nanoparticle exposure induces cell and tissue damage, causing local and systemic inflammatory responses. The inflammasome is a major regulator of inflammation through its activation of pro-caspase-1, which cleaves pro-interleukin-1β (IL-1β) into its mature form and may signal acute and chronic immune responses to nanoparticles. Objective: The aim of the study was to identify whether nanoparticles exaggerate the inflammasome pathway, leading to airway inflammation and hyperresponsiveness in an allergic mouse model of asthma. Methods: Mice were treated with saline (sham), OVA-sensitized and challenged (OVA), or titanium dioxide nanoparticles. Lung interleukin 1 beta (IL-1β), interleukin 18 (IL-18), NACHT, LRR and PYD domains-containing protein 3 (NLRP3) and caspase-1 levels were assessed by Western blot. Caspase-1 was checked by immunohistochemical staining. Reactive oxygen species were measured through the markers 8-isoprostane and carbonyl by ELISA. Results: Airway inflammation and hyperresponsiveness increased in OVA-sensitized/challenged mice, and these responses were exaggerated by TiO2 nanoparticle exposure. TiO2 nanoparticle treatment increased IL-1β and IL-18 protein expression in OVA-sensitized/challenged mice. TiO2 nanoparticles augmented the expression of NLRP3 and caspase-1, leading to the formation of active caspase-1 in the lung. Lung caspase-1 expression was increased in OVA-sensitized/challenged mice, and these responses were exaggerated by TiO2 nanoparticle exposure. Reactive oxygen species were increased in OVA-sensitized/challenged mice and in OVA-sensitized/challenged plus TiO2-exposed mice. Conclusion: Our data demonstrate that the inflammasome pathway is activated in asthmatic lungs following nanoparticle exposure, suggesting that targeting the inflammasome may help control nanoparticle-induced airway inflammation and responsiveness.

Keywords: bronchial asthma, inflammation, inflammasome, nanoparticles

Procedia PDF Downloads 370
754 Recirculated Sedimentation Method to Control Contamination for Algal Biomass Production

Authors: Ismail S. Bostanci, Ebru Akkaya

Abstract:

The production of microalgae-derived biodiesel, fertilizer or industrial chemicals from wastewater has great potential. Water from a municipal wastewater treatment plant, in particular, is a very important nutrient source for biofuel production. Open pond systems are the lower-cost culture systems for microalgal biomass production. There are many hurdles for commercial algal biomass production at large scale. One of the important technical bottlenecks for microalgae production in open systems is culture contamination. Algal culture contaminants can generally be described as invading organisms which could cause a pond crash. These invading organisms can be competitors, parasites, and predators. Contamination is unavoidable in open systems. Potential contaminant organisms are already inoculated if wastewater is utilized for algal biomass cultivation. It is especially important to keep contaminants at an acceptable level in order to reach the true potential of algal biofuel production. There are several contamination management methods in the algae industry, ranging from mechanical, chemical and biological treatments to changes in growth conditions. However, none of them is accepted as a generally suitable contamination control method. This experiment describes an innovative contamination control method, the 'Recirculated Sedimentation Method', to manage contamination and avoid pond crashes. The method can be used for the production of algal biofuel, fertilizer, etc. and for algal wastewater treatment. To evaluate the performance of the method on an algal culture, an experiment was conducted for 90 days in a lab-scale raceway (60 L) reactor with the use of non-sterilized and non-filtered wastewater (secondary effluent and centrate of anaerobic digestion). The application of the method provided the following: removing contaminants (predators and diatoms) and other debris from the reactor without discharging the culture (with microscopic evidence), increasing the raceway tank's suspended solids holding capacity (770 mg L-1), increasing the ammonium removal rate (29.83 mg L-1 d-1), decreasing algal and microbial biofilm formation on the inner walls of the reactor, and washing out generated nitrifiers from the reactor to prevent ammonium consumption.

Keywords: contamination control, microalgae culture contamination, pond crash, predator control

Procedia PDF Downloads 201
753 Radiation Protection and Licensing for an Experimental Fusion Facility: The Italian and European Approaches

Authors: S. Sandri, G. M. Contessa, C. Poggi

Abstract:

An experimental nuclear fusion device can be seen as a step toward the development of the future nuclear fusion power plant. If compared with other possible solutions to the energy problem, nuclear fusion has advantages that ensure sustainability and security. In particular, considering the radioactivity and the radioactive waste produced, in a nuclear fusion plant the component materials could be selected in order to limit the decay period, making recycling in a new reactor possible after about 100 years from the beginning of decommissioning. To achieve this and other pertinent goals, many experimental machines have been developed and operated worldwide in the last decades, underlining that radiation protection and workers' exposure are critical aspects of these facilities due to the high-flux, high-energy neutrons produced in the fusion reactions. Direct radiation, material activation, tritium diffusion and other related issues pose a real challenge to the demonstration that these devices are safer than nuclear fission facilities. In Italy, a limited number of fusion facilities have been constructed and operated over the last 30 years, mainly at the ENEA Frascati Center, and the radiation protection approach, addressed by the national licensing requirements, shows that it is not always easy to respect the constraints on workers' exposure to ionizing radiation. In the current analysis, the main radiation protection issues encountered in the Italian fusion facilities are considered and discussed, and the technical and legal requirements are described. The licensing process for these kinds of devices is outlined and compared with that of other European countries. The following aspects are considered throughout the current study: i) description of the installation, plant and systems, ii) suitability of the area, buildings, and structures, iii) radioprotection structures and organization, iv) exposure of personnel, v) accident analysis and relevant radiological consequences, vi) radioactive waste assessment and management. In conclusion, the analysis points out the need for special attention to the radiological exposure of the workers in order to demonstrate at least the same level of safety as that reached at nuclear fission facilities.

Keywords: fusion facilities, high energy neutrons, licensing process, radiation protection

Procedia PDF Downloads 349
752 Impact of the Hayne Royal Commission on the Operating Model of Australian Financial Advice Firms

Authors: Mohammad Abu-Taleb

Abstract:

The final report of the Royal Commission into Australian financial services misconduct, released in February 2019, has had a significant impact on the financial advice industry. The recommendations released in the Commissioner's final report include changes to ongoing fee arrangements, a new disciplinary system for financial advisers, and mandatory reporting of compliance concerns. This thesis aims to explore the impact of the Royal Commission's recommendations on the operating model of financial advice firms in terms of advice products, processes, delivery models, and customer segments. This research also seeks to investigate whether the Royal Commission's outcome has accelerated the use of enhanced technology solutions within the operating model of financial advice firms, and to identify the key challenges confronting financial advice firms whilst implementing the Commissioner's recommendations across their operating models. In order to achieve the objectives of this thesis, a qualitative research design was adopted through semi-structured in-depth interviews with 24 financial advisers and managers who are engaged in the operation of financial advice services. The study used the thematic analysis approach to interpret the qualitative data collected from the interviews. The findings of this thesis reveal that customer-centric operating models will become more prominent across the financial advice industry in response to the Commissioner's final report, and that the Royal Commission's outcome has accelerated the use of advice technology solutions within the operating model of financial advice firms. In addition, financial advice firms have started, more than before, to use simpler and more automated web-based advice services, which enable financial advisers to provide simple advice at a greater scale and to accelerate the use of robo-advice models and digital delivery to mass customers in the long term. Furthermore, the study identifies process and technology changes, along with technical and interpersonal skills development, as the key challenges encountered by financial advice firms whilst implementing the Commissioner's recommendations across their operating models.

Keywords: hayne royal commission, financial planning advice, operating model, advice products, advice processes, delivery models, customer segments, digital advice solutions

Procedia PDF Downloads 84
751 TEA and Its Working Methodology in the Biomass Estimation of Poplar Species

Authors: Pratima Poudel, Austin Himes, Heidi Renninger, Eric McConnel

Abstract:

Populus spp. (poplar) are the fastest-growing trees in North America, making them ideal for a range of applications as they can achieve high yields on short rotations and regenerate by coppice. Furthermore, poplar undergoes biochemical conversion to fuels without complexity, making it one of the most promising, purpose-grown, woody perennial energy sources. Employing wood-based biomass for bioenergy offers numerous benefits, including reduced greenhouse gas (GHG) emissions compared to non-renewable traditional fuels, the preservation of robust forest ecosystems, and economic prospects for rural communities. In order to gain a better understanding of the potential use of poplar as a biomass feedstock for biofuel in the southeastern US, we conducted a techno-economic assessment (TEA). This assessment is an analytical approach that integrates the technical and economic factors of a production system to evaluate its economic viability. The TEA specifically focused on a short rotation coppice system employing a single-pass cut-and-chip harvesting method for poplar. It encompassed all the costs associated with establishing dedicated poplar plantations, including land rent, site preparation, planting, fertilizers, and herbicides. Additionally, we performed a sensitivity analysis to evaluate how different costs can affect the economic performance of the poplar cropping system. This analysis aimed to determine the minimum average delivered selling price for one metric ton of biomass necessary to achieve a desired rate of return over the cropping period. To inform the TEA, data on establishment, crop care activities, and crop yields were derived from a field study conducted at the Mississippi Agricultural and Forestry Experiment Station's Bearden Dairy Research Center in Oktibbeha County and the Pontotoc Ridge-Flatwood Branch Experiment Station in Pontotoc County.
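
The study's cost and yield figures are not reported in the abstract; as a generic illustration of how a minimum average delivered selling price can be obtained as a discounted break-even price, consider the sketch below with assumed (not study) numbers:

```python
def minimum_selling_price(costs_by_year, yield_by_year_t, discount_rate=0.06):
    """
    Break-even delivered price (per metric ton) over the cropping period: the price
    at which the discounted revenue from all harvests equals the discounted sum of
    all costs at the target rate of return.
    """
    pv_costs = sum(c / (1 + discount_rate) ** t for t, c in enumerate(costs_by_year))
    pv_yield = sum(y / (1 + discount_rate) ** t for t, y in enumerate(yield_by_year_t))
    return pv_costs / pv_yield

# Illustrative $/ha costs and t/ha harvested chips per year, with a coppice harvest
# every third year over a 9-year period (all values are assumptions).
costs = [1200, 250, 250, 400, 250, 250, 400, 250, 400]   # establishment + annual care + harvest
yields = [0, 0, 20, 0, 0, 22, 0, 0, 22]                  # green metric tons per hectare
print(f"Minimum average delivered price: ${minimum_selling_price(costs, yields):.2f} / t")
```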

Keywords: biomass, populus species, sensitivity analysis, technoeconomic analysis

Procedia PDF Downloads 79
750 Nanowire Sensor Based on Novel Impedance Spectroscopy Approach

Authors: Valeriy M. Kondratev, Ekaterina A. Vyacheslavova, Talgat Shugabaev, Alexander S. Gudovskikh, Alexey D. Bolshakov

Abstract:

Modern sensorics imposes strict requirements on biosensor characteristics, especially technological feasibility and selectivity. There is growing interest in the analysis of biological markers of human health, which indirectly testify to pathological processes in the body. Such markers include acids and alkalis produced by the human body, in particular ammonia and hydrochloric acid, which are found in human sweat, blood, and urine, as well as in gastric juice. Biosensors based on modern nanomaterials, especially low-dimensional ones, can be used for the detection of these markers. Most classical adsorption sensors based on metal and silicon oxides are considered non-selective, because they identically change their electrical resistance (or impedance) under the adsorption of different target analytes. This work demonstrates a feasible frequency-resistive method of electrical impedance spectroscopy data analysis. The approach allows selectivity to be obtained in adsorption sensors of the resistive type. The potential of the method is demonstrated with the analysis of impedance spectra of silicon nanowires in the presence of NH3 and HCl vapors with concentrations of about 125 mmol/L (2 ppm) and water vapor. We demonstrate the possibility of unambiguous distinction of the sensory signal from NH3 and HCl adsorption. Moreover, the method is found applicable to the analysis of the composition of a mixture of ammonia and hydrochloric acid vapors without water cross-sensitivity. The presented silicon sensor can be used to detect diseases of the gastrointestinal tract by the qualitative and quantitative detection of ammonia and hydrochloric acid content in biological samples. The method of data analysis can be directly translated to other nanomaterials to analyze their applicability in the field of biosensing.
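
The frequency-resistive analysis itself is not detailed in the abstract; the sketch below only conveys the general idea that two analytes giving the same low-frequency (DC-like) resistance response can still be separated by comparing spectra across frequency, using a toy parallel R-C sensor model with assumed component values:

```python
import numpy as np

def parallel_rc_impedance(freq_hz, r_ohm, c_farad):
    """Complex impedance of a parallel R-C element, a common adsorption-sensor model."""
    omega = 2 * np.pi * freq_hz
    return r_ohm / (1 + 1j * omega * r_ohm * c_farad)

freqs = np.logspace(1, 6, 200)                     # 10 Hz .. 1 MHz sweep
# Hypothetical responses: two analytes giving the same low-frequency resistance but
# different capacitive behaviour, so their spectra separate at higher frequency.
z_nh3 = parallel_rc_impedance(freqs, r_ohm=1e6, c_farad=5e-10)
z_hcl = parallel_rc_impedance(freqs, r_ohm=1e6, c_farad=5e-11)

distinct = np.abs(np.abs(z_nh3) - np.abs(z_hcl)) > 0.05 * 1e6   # differ by > 5% of R
crossover = freqs[np.argmax(distinct)]
print(f"Spectra become distinguishable above ~{crossover:.0f} Hz")
```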

Keywords: electrical impedance spectroscopy, spectroscopy data analysis, selective adsorption sensor, nanotechnology

Procedia PDF Downloads 112
749 Evaluation of the Gamma-H2AX Expression as a Biomarker of DNA Damage after X-Ray Radiation in Angiography Patients

Authors: Reza Fardid, Aliyeh Alipour

Abstract:

Introduction: Coronary heart disease (CHD) is among the most common and deadliest diseases. Coronary angiography is an important tool for the diagnosis and treatment of this disease. Because angiography involves exposure to ionizing radiation, it can lead to harmful effects. Ionizing radiation induces double-strand breaks in DNA, which are a potentially life-threatening injury. The purpose of the present study is an investigation of the phosphorylation of histone H2AX at the location of the double-strand break in peripheral blood lymphocytes as an indication of the biological effects of radiation on angiography patients. Materials and Methods: This method is based on measurement of the level of phosphorylation of the histone (gamma-H2AX, γH2AX) on serine 139 after formation of a DNA double-strand break. 5 cc of blood from 24 angiography patients was sampled before and after irradiation. Blood lymphocytes were isolated, fixed and stained with specific γH2AX antibodies. Finally, the γH2AX signal, as an indicator of the double-strand break, was measured with the flow cytometry technique. Results and discussion: In all patients, an increase was observed in the number of DNA double-strand breaks after irradiation (20.15 ± 14.18) compared to before exposure (1.52 ± 0.34). Also, the mean number of DNA double-strand breaks showed a linear correlation with the dose-area product (DAP). However, although the induction of DNA double-strand breaks is associated with the radiation dose in patients, the effect of individual factors such as radiosensitivity and regenerative capacity should not be ignored. If, in the future, the DNA damage response can be measured in every angiography patient and used as a biomarker of patient dose, the impact at the public health level could be considerable. Conclusion: Using flow cytometry readings, which are done automatically, it is possible to detect γH2AX in a large number of blood cells. Therefore, the use of this technique could play a significant role in monitoring patients.

Keywords: coronary angiography, DSB of DNA, γH2AX, ionizing radiation

Procedia PDF Downloads 182
748 Simulation: A Tool for Stabilization of Welding Processes in Lean Production Concepts

Authors: Ola Jon Mork, Lars Andre Giske, Emil Bjørlykhaug

Abstract:

Stabilization of critical processes in order to achieve the right product quality, more efficient production and smoother flow is a key issue in lean production. This paper presents how simulation of key welding processes can stabilize complicated welding processes in small-scale production, and how simulation can impact the entire production concept seen from the perspective of lean production. First, a field study was made to learn the production processes in the factory, and subsequently the field study was transformed into a value stream map to get insight into each operation, the quality issues, operation times, lead times and flow of materials. Valuable practical knowledge of how the welding operations were done by operators, the appropriate tools and jigs, and the types of robots that could be used, was collected. All available information was then implemented into a simulation environment for further elaboration and development. Three researchers, the management of the company and skilled operators on the work floor were working on the project over a period of eight months, and a detailed description of the process was made by the researchers. The study showed that simulation could solve a number of technical challenges: the robot program can be tuned in offline mode, and the design and testing of the robot cell can be done in the simulator. Furthermore, the design of the product can be optimized for robot welding, and the jigs can be designed and tested in the simulation environment. This means that a key issue of lean production can be solved; the welding operation will work with almost 100% performance when it is put into real production. Stabilizing one key process is critical to gaining control of the entire value chain; then a takt time can be established, and the focus can be directed towards the next process in the production, which should be stabilized. Results show that industrial parameters like welding time, welding cost and welding quality can be defined at the simulation stage. Further on, this gives valuable information for calculating the factory's business performance, like manufacturing volume and manufacturing efficiency. The industrial impact of simulation is a more efficient implementation of lean manufacturing, since the welding process can be stabilized. More research should be done to gain more knowledge about simulation as a tool for the implementation of lean, especially where there are complex processes.

Keywords: simulation, lean, stabilization, welding process

Procedia PDF Downloads 318
747 Evaluation of Indoor Radon as Air Pollutant in Schools and Control of Exposure of the Children

Authors: Kremena Ivanona, Bistra Kunovska, Jana Djunova, Desislava Djunakova, Zdenka Stojanovska

Abstract:

In recent decades, the general public has become increasingly interested in the impact of air pollution on health. Currently, numerous studies are aimed at identifying pollutants in the indoor environments where people carry out their daily activities. Indoor pollutants can be of both natural and artificial origin. With regard to natural pollutants, special attention is paid to natural radioactivity. In recent years, radon has been one of the most studied indoor pollutants because it makes the greatest contribution to human exposure to natural radionuclides. It is a known fact that lung cancer can be caused by radon exposure, which is the second most important risk factor for the disease after smoking. The main objective of the study under the National Science Fund of Bulgaria, in the framework of grant No КП-06-Н23/1/07.12.2018, is to evaluate indoor radon as an important air pollutant in school buildings in order to reduce the exposure of children. The measurements were performed in 48 schools located in 55 buildings in one Bulgarian administrative district (Kardjaly). Nuclear track detectors (CR-39) were used for the measurements. The arithmetic and geometric means of the radon concentration were AM = 140 Bq/m3 and GM = 117 Bq/m3, respectively. In 51 school rooms, the radon level was greater than 200 Bq/m3, and in 28 rooms, located in 17 school buildings (30% of the investigated buildings), it exceeded the national reference level of 300 Bq/m3 defined in the Bulgarian ordinance on radiation protection. The statistically significant difference in radon concentration between municipalities (Kruskal-Wallis, p < 0.001) showed that the most likely reason for the differences between the groups is the geographical location of the buildings and the possible influence of the geological composition. The combined effect of the year of construction (technical condition of the buildings) and the energy efficiency measures was also considered. The radon concentrations in buildings where energy efficiency measures have been implemented are higher than those in buildings where such measures have not been performed. This result confirms the need to investigate radon levels before conducting energy efficiency measures in buildings. Corrective measures for reducing radon have been recommended in school buildings with high radon levels in order to decrease children's exposure.
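The following Python sketch illustrates the statistics mentioned above (arithmetic and geometric means, exceedance counting, and a Kruskal-Wallis comparison between municipalities) on synthetic data; the group names and concentration values are assumptions, not the measured results.

```python
import numpy as np
from scipy import stats

# Hypothetical radon concentrations (Bq/m3) grouped by municipality,
# for illustration only; the study values are not reproduced here.
rng = np.random.default_rng(1)
municipality_a = rng.lognormal(mean=np.log(90.0), sigma=0.5, size=20)
municipality_b = rng.lognormal(mean=np.log(150.0), sigma=0.5, size=20)
municipality_c = rng.lognormal(mean=np.log(220.0), sigma=0.5, size=20)
all_rooms = np.concatenate([municipality_a, municipality_b, municipality_c])

am = all_rooms.mean()                            # arithmetic mean
gm = stats.gmean(all_rooms)                      # geometric mean
rooms_over_300 = int((all_rooms > 300.0).sum())  # rooms above the reference level

# Kruskal-Wallis test for differences between municipalities
h_stat, p_value = stats.kruskal(municipality_a, municipality_b, municipality_c)

print(f"AM = {am:.0f} Bq/m3, GM = {gm:.0f} Bq/m3, rooms > 300 Bq/m3: {rooms_over_300}")
print(f"Kruskal-Wallis H = {h_stat:.1f}, p = {p_value:.3g}")
```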

Keywords: air pollution, indoor radon, children exposure, schools

Procedia PDF Downloads 172
746 Application of Thermal Dimensioning Tools to Consider Different Strategies for the Disposal of High-Heat-Generating Waste

Authors: David Holton, Michelle Dickinson, Giovanni Carta

Abstract:

The principle of geological disposal is to isolate higher-activity radioactive wastes deep inside a suitable rock formation to ensure that no harmful quantities of radioactivity reach the surface environment. To achieve this, wastes will be placed in an engineered underground containment facility – the geological disposal facility (GDF) – which will be designed so that natural and man-made barriers work together to minimise the escape of radioactivity. Internationally, various multi-barrier concepts have been developed for the disposal of higher-activity radioactive wastes. High-heat-generating wastes (HLW, spent fuel and Pu) pose a number of technical challenges different from those associated with the disposal of low-heat-generating waste. Thermal management of the disposal system must be taken into consideration in GDF design; temperature constraints might apply to the wasteform, container, buffer, and host rock. Of these, the temperature limit placed on the buffer component of the engineered barrier system (EBS) can be the most constraining factor. The heat must therefore be managed such that the properties of the buffer are not compromised to the extent that it cannot deliver the required level of safety. The maximum temperature of the buffer surrounding a container at the centre of a fixed array of heat-generating sources arises from heat diffusing from neighbouring heat-generating wastes, which incrementally contributes to the temperature of the EBS. A range of strategies can be employed for managing heat in a GDF, including the spatial arrangement or patterns of the containers; different geometrical configurations can influence the overall thermal density in a disposal facility (or an area within a facility) and therefore the maximum buffer temperature. A semi-analytical thermal dimensioning tool and methodology have been applied at a generic stage to explore a range of strategies for managing the disposal of high-heat-generating waste. A number of examples, including different geometrical layouts and chequer-boarding, are illustrated to demonstrate how these tools can be used to consider safety margins and inform strategic disposal options when faced with uncertainty at a generic stage of the development of a GDF.
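As a rough illustration of the superposition idea behind such semi-analytical tools, the sketch below treats each container as a constant-power point source in an infinite homogeneous host rock and sums the contributions of neighbours on a square grid. All parameter values, and the simplifications (constant heat output, no explicit container or buffer geometry), are assumptions for illustration and do not reproduce the tool described above.

```python
import numpy as np
from scipy.special import erfc

def point_source_dT(P, r, t, k=2.5, alpha=1.0e-6):
    """Temperature rise (K) at distance r (m) and time t (s) from a constant
    point source of power P (W) in an infinite medium:
    dT = P / (4*pi*k*r) * erfc(r / (2*sqrt(alpha*t)))."""
    return P / (4.0 * np.pi * k * r) * erfc(r / (2.0 * np.sqrt(alpha * t)))

ambient = 15.0                      # deg C, assumed host-rock temperature
P = 600.0                           # W per container, assumed constant
pitch = 8.0                         # m, assumed spacing between containers
t = 50.0 * 365.25 * 24.0 * 3600.0   # evaluate at 50 years

# Contribution of neighbouring containers on a 7 x 7 grid (central position excluded)
neighbours = [(i, j) for i in range(-3, 4) for j in range(-3, 4) if (i, j) != (0, 0)]
dT_neighbours = sum(point_source_dT(P, pitch * np.hypot(i, j), t) for i, j in neighbours)

# Crude near-field term for the central container, evaluated at an assumed buffer radius
dT_local = point_source_dT(P, 0.5, t)

print(f"estimated peak buffer temperature ~ {ambient + dT_local + dT_neighbours:.1f} deg C")
# Increasing the pitch, or chequer-boarding (leaving alternate positions empty),
# reduces dT_neighbours and hence the maximum buffer temperature.
```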

Keywords: buffer, geological disposal facility, high-heat-generating waste, spent fuel

Procedia PDF Downloads 280
745 Lifelong Learning in Applied Fields (LLAF) Tempus Funded Project: A Case Study of Problem-Based Learning

Authors: Nirit Raichel, Dorit Alt

Abstract:

Although university teaching is claimed to have a special task of supporting students in adopting ways of thinking and producing new knowledge anchored in scientific inquiry practices, it is argued that students' habits of learning are still overwhelmingly skewed toward passive acquisition of knowledge from authority sources rather than from collaborative inquiry activities. In order to overcome this critical discrepancy between current educational goals and instructional methods, the LLAF consortium aims at developing updated instructional practices that put a premium on adaptability to the emerging requirements of present-day society. LLAF has created a practical guide for teachers containing updated pedagogical strategies based on the constructivist approach to learning, arranged along Delors' four theoretical 'pillars' of education: learning to know, learning to do, learning to live together, and learning to be. This presentation will be limited to problem-based learning (PBL), a strategy introduced in the second pillar. PBL leads not only to the acquisition of technical skills but also allows the development of skills like problem analysis and solving, critical thinking, cooperation and teamwork, decision-making, and self-regulation that can be transferred to other contexts. This educational strategy will be exemplified by a case study conducted in the pre-piloting stage of the project. The case describes a three-fold process implemented in a postgraduate course for in-service teachers, including: (1) learning about PBL, (2) implementing PBL in the participants' classes, and (3) qualitatively assessing the contributions of PBL to students' outcomes. An example will be given regarding the ways in which PBL was applied and assessed in civic education for high-school students. Two 9th-grade classes participated in the study; both included several students with learning disabilities. PBL was applied in only one class, whereas traditional instruction was used in the other. Results showed a robust contribution of PBL to students' affective and cognitive outcomes, as reflected in their motivation to engage in learning activities and to further explore the subject. However, students with learning disabilities responded less favorably to this "active" and "annoying" environment. The implications of these findings for the LLAF project will be discussed.

Keywords: problem-based learning, higher education, pedagogical strategies

Procedia PDF Downloads 329
744 Anti-Corruption Strategies for Private Sector Development: Case Study for the Brazilian Automotive Industry

Authors: Rogerio Vieira Dos Reis

Abstract:

Countries like Brazil that, despite fighting hard against corruption, are not improving their corruption perception, especially due to systemic political corruption, should review their corruption prevention strategies. This thesis presents a case study based on an alternative way of preventing corruption: addressing the corruption drivers in public policies that lead to poor economic performance. After discussing the Brazilian industrial policies adopted recently, especially the measures towards the automotive sector, two corruption issues in this sector are analyzed: facilitation payments for fiscal benefits and the buying of extensions of fiscal benefits. In-depth interviews conducted with a policymaker and an executive of the automobile sector provide insights for identifying three main corruption drivers: excessive and unnecessary bureaucracy, a complex tax system, and the existence of a closed market without performance requirements to be achieved by the benefited firms. Both the identification of the drivers of successful industrial policies and the proposal of anti-corruption strategies to ensure developmental outcomes are based on the economic perspective of industrial policy advocated by developmental authors and on the successful South Korean economic development experience. Structural anti-corruption measures include tax reform, the regulation of lobbying, and legislation to allow corporate political contributions. Besides improving policymakers' technical capabilities, measures at the ministry level include redesigning the automotive regimes as long-term policies focused on national investment, with simple and clear rules, and making fiscal benefits conditional upon performance targets focused on suppliers. This case study is of broader interest because it highlights the importance of adapting the performance audits conducted by anti-corruption agencies to focus not only on the delivery of public services but also on the identification of potentially highly damaging corruption drivers in public policies that grant fiscal benefits to achieve developmental outcomes.

Keywords: Brazilian automotive sector, corruption, economic development, industrial policy, Inovar-Auto

Procedia PDF Downloads 206
743 Experimental Study of Complete Loss of Coolant Flow (CLOF) Test by System–Integrated Modular Advanced Reactor Integral Test Loop (SMART-ITL) with Passive Residual Heat Removal System (PRHRS)

Authors: Jin Hwa Yang, Hwang Bae, Sung Uk Ryu, Byong Guk Jeon, Sung Jae Yi, Hyun Sik Park

Abstract:

Experimental studies using a large-scale thermal-hydraulic integral test facility, the System-integrated Modular Advanced Reactor Integral Test Loop (SMART-ITL), have been carried out to validate the performance of the prototype, SMART. After the Fukushima accident, passive safety systems have been treated as important design features for maintaining nuclear safety. One of the scenarios of concern for evaluating the passive safety system is a Complete Loss of Coolant Flow (CLOF). The flowrate of coolant in the primary system is maintained by the Reactor Coolant Pumps (RCPs). When the electric power supply to the RCPs is cut off, the coolant flowrate decreases sharply and the coolant temperature increases rapidly. Therefore, the reactor trip signal is activated to prevent overheating of the core. In this situation, the Passive Residual Heat Removal System (PRHRS) plays a significant role in assuring the soundness of the SMART. The PRHRS, which uses two-phase natural circulation, is a passive safety system in the SMART that removes the heat of the steam generator in the secondary system through a heat exchanger submerged in the Emergency Cooling Tank (ECT). As the RCPs continue to coast down, inherent natural circulation in the primary system transfers heat to the secondary system. The transferred heat is removed by the PRHRS in the secondary system. In this paper, the progression of the CLOF accident is described with experimental data for the transient condition obtained from SMART-ITL. Finally, the capability of the passive safety system and the inherent natural circulation is evaluated.
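As a hedged, back-of-the-envelope illustration of the single-phase natural circulation that develops once the RCPs have coasted down, the sketch below balances the buoyancy head against the loop friction losses. This is a textbook estimate with assumed parameter values, not the SMART-ITL model or its data.

```python
import numpy as np

# Buoyancy head:  dp_b = rho * beta * dT * g * H
# Loop friction:  dp_f = K * m_dot**2 / (2 * rho * A**2)
# Setting dp_b = dp_f gives  m_dot = A * sqrt(2 * rho**2 * g * beta * dT * H / K).
rho = 750.0      # kg/m^3, primary coolant density (assumed)
beta = 1.0e-3    # 1/K, thermal expansion coefficient (assumed)
cp = 5.0e3       # J/(kg K), specific heat (assumed)
g = 9.81         # m/s^2
H = 5.0          # m, thermal centre elevation difference, core to steam generator (assumed)
dT = 30.0        # K, hot-leg minus cold-leg temperature difference (assumed)
A = 0.05         # m^2, effective loop flow area (assumed)
K = 20.0         # total loop loss coefficient (assumed)

m_dot = A * np.sqrt(2.0 * rho**2 * g * beta * dT * H / K)
Q = m_dot * cp * dT   # heat transferable to the secondary side at this flow and dT
print(f"natural circulation flow ~ {m_dot:.1f} kg/s, removable heat ~ {Q / 1e6:.2f} MW")
```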

Keywords: CLOF, natural circulation, PRHRS, SMART-ITL

Procedia PDF Downloads 436
742 A User Interface for Easiest Way Image Encryption with Chaos

Authors: D. López-Mancilla, J. M. Roblero-Villa

Abstract:

Since 1990, research on chaotic dynamics has received considerable attention, particularly in light of the potential applications of this phenomenon in secure communications. Data encryption using chaotic systems was reported in the 90's as a new approach for signal encoding that differs from the conventional methods that use numerical algorithms as the encryption key. Algorithms for image encryption have received a lot of attention because of the need to secure image transmission in real time over the internet and wireless networks. Known algorithms for image encryption, like the Data Encryption Standard (DES), have the drawback of low efficiency when the image is large. Chaos-based encryption offers a new and efficient way to achieve fast and highly secure image encryption. In this work, a user interface for image encryption and a novel and very simple way to encrypt images using chaos are presented. The main idea is to reshape any image into an n-dimensional vector and combine it with a vector extracted from a chaotic system, in such a way that the image vector can be hidden within the chaotic vector; the result is then reshaped back into an array with the original dimensions of the image. The security of the encrypted images is evaluated using statistical analysis, and an optimization stage is used to improve the security of the image encryption while, at the same time, allowing the image to be accurately recovered. The user interface implements the algorithms designed for image encryption, allowing the user to read an image from the hard drive or another external device. The interface encrypts the image in one of three encryption modes, given by three different chaotic systems that the user can choose. Once the image is encrypted, it is possible to view the security analysis and save the result to the hard disk. The main results of this study show that this simple encryption method, using the optimization stage, achieves a level of encryption security competitive with the more complicated encryption methods used in other works. In addition, the user interface allows an image to be encrypted with chaos and transmitted through any public communication channel, including the internet.
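The abstract does not specify which chaotic systems or combination rule are used, so the sketch below is only a minimal illustration of the general idea: flatten the image into a vector, generate a chaotic vector of the same length (here with the logistic map, an assumption), hide the image in it by modular addition, and reshape back. The function names, parameters, and the choice of map are all hypothetical.

```python
import numpy as np

def logistic_keystream(n, x0=0.631, r=3.99):
    """Iterate the logistic map x -> r*x*(1-x) and quantize the orbit to bytes."""
    out = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)
        out[i] = x
    return np.floor(out * 256.0).astype(np.uint8)

def encrypt(image, x0=0.631, r=3.99):
    """Hide the flattened image inside a chaotic vector by modular addition."""
    flat = image.reshape(-1)                    # reshape the image into a 1-D vector
    key = logistic_keystream(flat.size, x0, r)  # chaotic vector of the same length
    return ((flat.astype(np.uint16) + key) % 256).astype(np.uint8).reshape(image.shape)

def decrypt(cipher, x0=0.631, r=3.99):
    """Recover the image by regenerating the same chaotic vector."""
    flat = cipher.reshape(-1)
    key = logistic_keystream(flat.size, x0, r)
    return ((flat.astype(np.int16) - key) % 256).astype(np.uint8).reshape(cipher.shape)

if __name__ == "__main__":
    img = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in image
    enc = encrypt(img)
    assert np.array_equal(decrypt(enc), img)    # the image is accurately recovered
```

Here the initial condition x0 and parameter r play the role of the secret key; swapping in a different chaotic system for the keystream would correspond to the additional encryption modes mentioned above.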

Keywords: image encryption, chaos, secure communications, user interface

Procedia PDF Downloads 482