Search results for: transfer network
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7234

814 Urban Corridor Management Strategy Based on Intelligent Transportation System

Authors: Sourabh Jain, Sukhvir Singh Jain, Gaurav V. Jain

Abstract:

Intelligent Transportation System (ITS) is the application of technology to develop a user-friendly transportation system for urban areas in developing countries. The goal of urban corridor management using ITS in road transport is to improve mobility, safety, and the productivity of the transportation system within the available facilities, through the integrated application of advanced monitoring, communications, computer, display, and control process technologies, both in the vehicle and on the road. This paper reviews past studies on ITS applications successfully deployed in urban corridors in India and abroad, and examines the current scenario and the methodology used for planning, design, and operation of traffic management systems. It also presents an evaluation of the performance of the 27.4 km long study corridor, which has eight intersections and four flyovers and consists of six-lane and eight-lane divided road sections. Two categories of data were collected in February 2016: traffic data (traffic volume, spot speed, delay) and road characteristics data (number of lanes, lane width, bus stops, mid-block sections, intersections, flyovers). The instruments used for data collection were a video camera, radar gun, mobile GPS, and stopwatch. The performance analysis covered identification of peak and off-peak hours, congestion and level of service (LOS) at mid-blocks, and delay, followed by plotting of speed contours and recommendation of urban corridor management strategies. The analysis shows that ITS-based urban corridor management strategies can reduce congestion, fuel consumption, and pollution, providing comfort and efficiency to users. The paper presents urban corridor management strategies based on sensors incorporated both in vehicles and on the roads.
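The mid-block congestion assessment described above maps a volume-to-capacity ratio to a level of service (LOS) grade. A minimal sketch in Python, assuming illustrative HCM-style thresholds rather than the values used in the study:

```python
def level_of_service(volume_pcu_hr, capacity_pcu_hr):
    """Map a mid-block volume/capacity ratio to an LOS grade.

    The thresholds here are illustrative HCM-style bands, not the
    ones used in the study."""
    vc = volume_pcu_hr / capacity_pcu_hr
    for grade, limit in [("A", 0.30), ("B", 0.50), ("C", 0.70),
                         ("D", 0.85), ("E", 1.00)]:
        if vc <= limit:
            return grade
    return "F"  # over capacity: forced or breakdown flow

print(level_of_service(1800, 3600))  # moderate flow
print(level_of_service(4000, 3600))  # congested
```

Peak and off-peak hours would then correspond to the hours whose counts push the ratio into the poorer grades.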

Keywords: congestion, ITS strategies, mobility, safety

Procedia PDF Downloads 433
813 Characteristics of Aerosol Properties over Different Desert-Influenced AERONET Sites

Authors: Abou Bakr Merdji, Alaa Mhawish, Xiaofeng Xu, Chunsong Lu

Abstract:

The characteristics of the optical and microphysical properties of aerosols near deserts are analyzed using 11 AErosol RObotic NETwork (AERONET) sites located in 6 major desert areas (the Sahara, Arabia, Thar, Karakum, Taklamakan, and Gobi) between 1998 and 2021. The regional means of Aerosol Optical Depth (AOD), with coarse AOD (CAOD) in parentheses, are 0.44 (0.187), 0.38 (0.26), 0.35 (0.24), 0.23 (0.11), 0.20 (0.14), and 0.10 (0.05) in the Thar, Arabian, Sahara, Karakum, Taklamakan, and Gobi Deserts, respectively, while the opposite ranking holds for the Ångström Exponent (AE) and Fine Mode Fraction (FMF). Higher extinctions are associated with larger particles (dust) over all the main desert regions, as shown by the almost inversely proportional variations of AOD and CAOD compared with AE and FMF. Coarse particles contribute the most to the total AOD over the Sahara Desert compared with the other deserts all year round. Related to the seasonality of dust events, the maximum AOD (CAOD) generally appears in summer and spring, while the minimum occurs in winter. The mean values of absorbing AOD (AAOD), absorbing AE (AAE), and Single Scattering Albedo (SSA) for all sites range from 0.017 to 0.037, from 1.16 to 2.81, and from 0.844 to 0.944, respectively. Generally, the highest absorbing aerosol load is observed over the Thar, followed by the Karakum, the Sahara, the Gobi, and then the Taklamakan Deserts, while the largest absorbing particles are observed in the Sahara, followed by Arabia, Thar, Karakum, and Gobi, with the smallest over the Taklamakan Desert. Similar absorption qualities are observed over the Sahara, Arabia, Thar, and Karakum Deserts, with SSA values varying between 0.90 and 0.91, whereas the most and least absorbing particles are observed at the Taklamakan and the Gobi Deserts, respectively.
The seasonal AAODs differ distinctly across the deserts, with parts of the Sahara and Arabia and the Dalanzadgad sites experiencing the maximum in summer; the Southern Sahara, Western Arabia, Jaipur, and Dushanbe in winter; and Eastern Arabia and Muztagh Ata in autumn. AAOD and SSA spectra are consistent with the dust-dominated conditions that resulted from aerosol typing (dust and polluted dust) at most deserts, with a possible presence of absorbing particles other than dust at the Arabian, Taklamakan, and Gobi Desert sites.
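The relationship between total AOD, the fine mode fraction, and coarse AOD used in the analysis above can be sketched as follows; the numbers are illustrative, not the AERONET retrievals from the study:

```python
def coarse_aod(aod, fmf):
    """Coarse-mode AOD from total AOD and fine mode fraction:
    CAOD = AOD * (1 - FMF). Values below are illustrative only."""
    return aod * (1 - fmf)

# A dust-dominated case: high AOD, low FMF -> most extinction is coarse.
print(round(coarse_aod(0.44, 0.40), 3))
```

This is why AOD and CAOD vary almost inversely with AE and FMF over the desert sites: as FMF drops, the coarse (dust) share of a given AOD grows.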

Keywords: Sahara, AERONET, desert, dust belt, aerosols, optical properties

Procedia PDF Downloads 73
812 The Effect of Disseminating Basic Knowledge on Radiation in Emergency Distance Learning of COVID-19

Authors: Satoko Yamasaki, Hiromi Kawasaki, Kotomi Yamashita, Susumu Fukita, Kei Sounai

Abstract:

People are susceptible to rumors when the cause of their health problems is unknown or invisible. In order for individuals to be unaffected by rumors, they need basic knowledge and correct information. Community health nursing classes use cases where basic knowledge of radiation can be utilized on a regular basis, thereby teaching that basic knowledge is important in preventing anxiety caused by rumors. Nursing students need to learn that preventive activities are essential for public health nursing care. This is the same methodology used to reduce COVID-19 anxiety among individuals. This study verifies the learning effect concerning the basic knowledge of radiation necessary for case consultation through emergency distance learning. Sixty third-year nursing college students agreed to participate in this research. The knowledge tests conducted before and after the classes were compared using the chi-square test, with a 5% significance level. There were five knowledge questions regarding the distance lessons. The students' reports, which describe the results of responding to health consultations, were analyzed qualitatively and descriptively. In this case study, a person living in an area not affected by radiation was anxious about drinking water and, thus, consulted with a student. The lecture contents were selected as the minimum knowledge needed to answer the consultation: hot spots, internal exposure risk, food safety, characteristics of cesium-137, and precautions for counselors. Before taking the class, the question students most often answered correctly concerned daily behavior at risk of internal exposure (52.2%). The question with the fewest correct answers was the selection of places that are likely to be hot spots (3.4%). Correct responses to all questions increased significantly after taking the class (p < 0.001).
The answers to the counselors, as written by the students, included 'Cesium is strongly bound to the soil, so it is difficult to transfer to water' and 'Water quality test results of tap water are posted on the city's website.' These were concrete answers obtained by using specialized knowledge. Even in emergency distance learning, the students gained basic knowledge regarding radiation and created a document applying that knowledge to a concretely assumed situation. The flipped classroom method, even when conducted remotely, appears able to maintain students' learning, and setting the specific knowledge and scenes to be used would enhance the learning effect. By changing the case to the anxiety caused by infectious diseases, students may be able to effectively gain the basic knowledge needed to decrease residents' anxiety due to infectious diseases.
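The pre/post comparison described above rests on a chi-square test over counts of correct and incorrect answers. A minimal from-scratch sketch for one question, with hypothetical counts (the study's per-question counts are not reproduced here):

```python
def chi_square_2x2(pre_correct, pre_total, post_correct, post_total):
    """Pearson chi-square statistic for a 2x2 table of correct/incorrect
    answers before and after the class. Counts below are hypothetical."""
    table = [
        [pre_correct, pre_total - pre_correct],
        [post_correct, post_total - post_correct],
    ]
    total = pre_total + post_total
    row_sums = [sum(r) for r in table]
    col_sums = [table[0][j] + table[1][j] for j in range(2)]
    stat = 0.0
    for i in range(2):
        for j in range(2):
            expected = row_sums[i] * col_sums[j] / total
            stat += (table[i][j] - expected) ** 2 / expected
    return stat

# Hypothetical counts: 3 of 60 correct before the class, 50 of 60 after.
stat = chi_square_2x2(3, 60, 50, 60)
print(round(stat, 2), "significant at 5%" if stat > 3.841 else "not significant")
```

Here 3.841 is the 5% critical value of the chi-square distribution with one degree of freedom, matching the significance level stated in the abstract.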

Keywords: effect of class, emergency distance learning, nursing student, radiation

Procedia PDF Downloads 105
811 Engaging the Terrorism Problematique in Africa: Discursive and Non-Discursive Approaches to Counter Terrorism

Authors: Cecil Blake, Tolu Kayode-Adedeji, Innocent Chiluwa, Charles Iruonagbe

Abstract:

National, regional, and international security threats have dominated the twenty-first century thus far. Insurgencies that utilize "terrorism" as their primary strategy pose the most serious threat to global security. States in turn adopt terrorist strategies to resist and even defeat insurgents who invoke the legitimacy of statehood to justify their action. In short, the era is dominated by the use of terror tactics by state and non-state actors. Globally, there is a powerful network of groups involved in insurgencies using Islam as the bastion for their cause. In Africa, Boko Haram, Al Shabaab, and Al Qaeda in the Maghreb represent Islamic groups utilizing terror strategies and tactics to prosecute their wars. The task at hand is to discover and use multiple ways of handling the present security threats, including novel approaches to policy formulation, implementation, monitoring, and evaluation that pay significant attention to the important role of culture and communication strategies germane to discursive means of conflict resolution. In order to achieve this, the proposed research would address, inter alia, the root causes of insurgencies that predicate their mission on Islamic tenets, particularly in Africa; discursive and non-discursive counter-terrorism approaches fashioned by African governments and continental supra-national and regional organizations; the recruitment strategies of major non-state actors in Africa that rely solely on terrorist strategies and tactics; and the sources of finance of the groups under study. A major anticipated outcome of this research is a contribution to answers that would lead to the much-needed stability required for development in African countries experiencing insurgencies carried out through patterned terror strategies and tactics. The nature of the research requires the use of triangulation as the methodological tool.

Keywords: counter-terrorism, discourse, Nigeria, security, terrorism

Procedia PDF Downloads 473
810 American Sign Language Recognition System

Authors: Rishabh Nagpal, Riya Uchagaonkar, Venkata Naga Narasimha Ashish Mernedi, Ahmed Hambaba

Abstract:

The rapid evolution of technology in the communication sector continually seeks to bridge the gap between different communities, notably between the deaf community and the hearing world. This project develops a comprehensive American Sign Language (ASL) recognition system, leveraging the advanced capabilities of convolutional neural networks (CNNs) and vision transformers (ViTs) to interpret and translate ASL in real-time. The primary objective of this system is to provide an effective communication tool that enables seamless interaction through accurate sign language interpretation. The architecture of the proposed system integrates dual networks: VGG16 for precise spatial feature extraction and vision transformers for contextual understanding of the sign language gestures. The system processes live input, extracting critical features through these sophisticated neural network models, and combines them to enhance gesture recognition accuracy. This integration facilitates a robust understanding of ASL by capturing detailed nuances and broader gesture dynamics. The system is evaluated through a series of tests that measure its efficiency and accuracy in real-world scenarios. Results indicate a high level of precision in recognizing diverse ASL signs, substantiating the potential of this technology in practical applications. Challenges such as enhancing the system’s ability to operate in varied environmental conditions and further expanding the dataset for training were identified and discussed. Future work will refine the model’s adaptability and incorporate haptic feedback to enhance the interactivity and richness of the user experience. This project demonstrates the feasibility of an advanced ASL recognition system and lays the groundwork for future innovations in assistive communication technologies.
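The dual-network combination described above can be sketched as late fusion: the spatial (CNN) and contextual (ViT) feature vectors are concatenated and scored by a linear layer with softmax. The features, weights, and labels below are hypothetical stand-ins, not trained VGG16/ViT outputs:

```python
import math

def fuse_and_classify(cnn_feats, vit_feats, weights, labels):
    """Late-fusion sketch: concatenate the two feature vectors, score
    each class with a linear layer, and normalise with softmax.
    All numbers here are hypothetical, not trained parameters."""
    fused = cnn_feats + vit_feats  # list concatenation, not addition
    scores = [sum(w * x for w, x in zip(row, fused)) for row in weights]
    exps = [math.exp(s - max(scores)) for s in scores]  # stable softmax
    total = sum(exps)
    probs = [e / total for e in exps]
    best = max(range(len(labels)), key=lambda i: probs[i])
    return labels[best], probs

# Two toy sign classes over a 4-dim fused vector (2 CNN + 2 ViT features).
label, probs = fuse_and_classify(
    cnn_feats=[0.9, 0.1], vit_feats=[0.8, 0.2],
    weights=[[1.0, 0.0, 1.0, 0.0],   # responds to "A"-like features
             [0.0, 1.0, 0.0, 1.0]],  # responds to "B"-like features
    labels=["A", "B"])
print(label)
```

In the real system both branches would be deep networks trained end to end; the sketch only shows how the two feature streams are combined into one decision.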

Keywords: sign language, computer vision, vision transformer, VGG16, CNN

Procedia PDF Downloads 23
809 Impact of Transitioning to Renewable Energy Sources on KPIs and AI Modules of Data Centres

Authors: Ahmed Hossam El Molla, Mohamed Hatem Saleh, Hamza Mostafa, Lara Mamdouh, Yassin Wael

Abstract:

Artificial intelligence (AI) is reshaping industries, and its potential to revolutionize renewable energy and data center operations is immense. By harnessing AI's capabilities, we can optimize energy consumption, predict fluctuations in renewable energy generation, and improve the efficiency of data center infrastructure. This convergence of technologies promises a future where energy is managed more intelligently, sustainably, and cost-effectively. The integration of AI into renewable energy systems unlocks a wealth of opportunities. Machine learning algorithms can analyze vast amounts of data to forecast weather patterns, solar irradiance, and wind speeds, enabling more accurate energy production planning. AI-powered systems can optimize energy storage and grid management, ensuring a stable power supply even during intermittent renewable generation. Moreover, AI can identify maintenance needs for renewable energy infrastructure, preventing costly breakdowns and maximizing system lifespan. Data centers, which consume substantial amounts of energy, are prime candidates for AI-driven optimization. AI can analyze energy consumption patterns, identify inefficiencies, and recommend adjustments to cooling systems, server utilization, and power distribution. Predictive maintenance using AI can prevent equipment failures, reducing energy waste and downtime. Additionally, AI can optimize data placement and retrieval, minimizing energy consumption associated with data transfer. As AI transforms renewable energy and data center operations, new Key Performance Indicators (KPIs) will emerge. Traditional metrics like energy efficiency and cost-per-megawatt-hour will continue to be relevant, but additional KPIs focused on AI's impact will be essential. These might include AI-driven cost savings, predictive accuracy of energy generation and consumption, and the reduction of carbon emissions attributed to AI-optimized operations.
By tracking these KPIs, organizations can measure the success of their AI initiatives and identify areas for improvement. Ultimately, the synergy between AI, renewable energy, and data centers holds the potential to create a more sustainable and resilient future. By embracing these technologies, we can build smarter, greener, and more efficient systems that benefit both the environment and the economy.
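The KPIs discussed above (energy efficiency, cost per megawatt-hour, AI-driven cost savings) can be sketched as a simple report; all figures below are hypothetical, not measurements from any particular data centre:

```python
def kpi_report(it_energy_mwh, total_energy_mwh, cost_usd, baseline_cost_usd):
    """Three illustrative data-centre KPIs: PUE (total/IT energy),
    cost per MWh, and AI-driven savings versus a pre-optimisation
    baseline. All inputs are hypothetical."""
    return {
        "PUE": total_energy_mwh / it_energy_mwh,
        "cost_per_mwh": cost_usd / total_energy_mwh,
        "ai_savings_pct": 100 * (baseline_cost_usd - cost_usd) / baseline_cost_usd,
    }

report = kpi_report(it_energy_mwh=800, total_energy_mwh=1200,
                    cost_usd=90_000, baseline_cost_usd=100_000)
print(report)
```

Tracking such a report before and after introducing AI-driven optimisation gives a concrete way to attribute efficiency gains and carbon reductions to the AI modules.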

Keywords: data center, artificial intelligence (AI), renewable energy, energy efficiency, sustainability, optimization, predictive analytics, energy consumption, energy storage, grid management, data center optimization, key performance indicators (KPIs), carbon emissions, resiliency

Procedia PDF Downloads 0
808 An Exploratory Study in Nursing Education: Factors Influencing Nursing Students’ Acceptance of Mobile Learning

Authors: R. Abdulrahman, A. Eardley, A. Soliman

Abstract:

The proliferation in the development of mobile learning (m-learning) has played a vital role in the rapidly growing electronic learning market. This relatively new technology can help to encourage the development of learning and to aid knowledge transfer in a number of areas, by familiarizing students with innovative information and communications technologies (ICT). M-learning plays a substantial role in the deployment of learning methods for nursing students by using the Internet and portable devices to access learning resources 'anytime and anywhere'. However, acceptance of m-learning by students is critical to the successful use of m-learning systems. Thus, there is a need to study the factors that influence students' intention to use m-learning. This paper addresses this issue. It outlines the outcomes of a study that evaluates the unified theory of acceptance and use of technology (UTAUT) model as applied to the subject of user acceptance in relation to m-learning activity in nurse education. The model integrates significant components from eight prominent user acceptance models and thereby introduces a standard measure with core determinants of user behavioural intention. The research model extends the UTAUT in the context of m-learning acceptance by modifying the original structure of UTAUT and adding individual innovativeness (II) and quality of service (QoS). It also adds the factors of previous experience (of using mobile devices in similar applications) and the nursing students' readiness (to use the technology) as influences on their behavioural intention to use m-learning. This study uses convenience sampling, with student volunteers as participants, to collect numerical data. A quantitative method of data collection was selected, involving an online survey using a questionnaire of 33 questions that measure the six constructs on a 5-point Likert scale.
A total of 42 respondents participated, all from the Nursing Institute at the Armed Forces Hospital in Saudi Arabia. The gathered data were then tested using a research model that employs structural equation modelling (SEM), including confirmatory factor analysis (CFA). The results of the CFA show that the UTAUT model has the ability to predict student behavioural intention and to adapt m-learning activity to specific learning activities. It also demonstrates satisfactory, dependable, and valid scales of the model constructs. Further analysis is suggested to confirm the model as a valuable instrument for evaluating the user acceptance of m-learning activity.
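Scale validation of the kind reported above is commonly summarised per construct with Cronbach's alpha. A from-scratch sketch over hypothetical 5-point Likert responses (not the study's data):

```python
def cronbach_alpha(item_scores):
    """Cronbach's alpha for one construct; rows are respondents,
    columns are the Likert items measuring that construct.
    alpha = k/(k-1) * (1 - sum(item variances)/variance of totals)."""
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    k = len(item_scores[0])
    item_vars = [var([row[j] for row in item_scores]) for j in range(k)]
    total_var = var([sum(row) for row in item_scores])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# Four hypothetical respondents answering three items of one construct.
scores = [[4, 5, 4], [2, 2, 3], [5, 4, 5], [3, 3, 3]]
print(round(cronbach_alpha(scores), 2))
```

Values around 0.7 or above are conventionally read as acceptable internal consistency, which is the sense in which the abstract calls the scales "dependable and valid".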

Keywords: mobile learning, nursing institute students’ acceptance of m-learning activity in Saudi Arabia, unified theory of acceptance and use of technology model (UTAUT), structural equation modelling (SEM)

Procedia PDF Downloads 176
807 Enhancement of Long Term Peak Demand Forecast in Peninsular Malaysia Using Hourly Load Profile

Authors: Nazaitul Idya Hamzah, Muhammad Syafiq Mazli, Maszatul Akmar Mustafa

Abstract:

The peak demand forecast is crucial to identify the future generation plant-up needed in long-term capacity planning analysis for Peninsular Malaysia, as well as for transmission and distribution network planning activities. Currently, the peak demand forecast (in megawatts) is derived from the generation forecast by using a load factor assumption. However, a forecast using this method has underperformed due to structural changes in the economy, emerging trends, and weather uncertainty. The dynamic changes of these drivers result in many possible outcomes of peak demand for Peninsular Malaysia. This paper looks into an independent model of peak demand forecasting. The model begins with the selection of driver variables to capture long-term growth. This selection and construction of variables, which include econometric, emerging trend, and energy variables, will have an impact on the peak forecast. The actual framework begins with the development of the system energy and load shape forecasts by using the system's hourly data. The shape forecast represents the system shape assuming all embedded technology and use patterns continue into the future. This is necessary to identify movements in the peak hour or changes in the system load factor. The next step is developing the peak forecast, which involves an iterative process to explore model structures and variables. The final step is combining the system energy, shape, and peak forecasts into the hourly system forecast and then modifying it with forecast adjustments, which include, among others, sales forecasts for electric vehicles, solar, and other adjustments. The framework results in an hourly forecast that captures growth, peak usage, and new technologies. The advantage of this approach compared to the current methodology is that the peaks capture the impacts of new technologies that change the load shape.
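The final combining step described above can be sketched as scaling a normalised hourly load shape to the forecast energy and then applying hourly adjustments. The shape and numbers below are a toy example, not Peninsular Malaysia data:

```python
def hourly_forecast(shape, annual_energy_mwh, adjustments=None):
    """Scale a normalised hourly load shape to a forecast annual energy,
    then add hourly adjustments (e.g. EV charging, solar offsets).
    Returns the hourly series, the peak (MW), and the load factor.
    All inputs below are illustrative."""
    total = sum(shape)
    hourly = [annual_energy_mwh * s / total for s in shape]
    if adjustments:
        hourly = [h + a for h, a in zip(hourly, adjustments)]
    peak_mw = max(hourly)
    load_factor = (sum(hourly) / len(hourly)) / peak_mw
    return hourly, peak_mw, load_factor

# Toy 4-hour "year": an evening peak plus a hypothetical EV adder.
hourly, peak, lf = hourly_forecast([0.8, 1.0, 1.4, 0.8],
                                   annual_energy_mwh=4000,
                                   adjustments=[0, 0, 100, 0])
print(round(peak), round(lf, 2))
```

Note how the EV adjustment raises the peak without raising off-peak hours, which is exactly the kind of load-shape change a fixed load factor assumption cannot capture.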

Keywords: hourly load profile, load forecasting, long term peak demand forecasting, peak demand

Procedia PDF Downloads 150
806 How Does Paradoxical Leadership Enhance Organizational Success?

Authors: Wageeh A. Nafei

Abstract:

This paper explores the role of Paradoxical Leadership (PL) in enhancing Organizational Success (OS) at private hospitals in Egypt, based on data collected from employees in private hospitals (doctors, nursing staff, and administrative staff). The researcher adopted a sampling method to collect data for the study. Appropriate statistical methods, such as the Alpha Correlation Coefficient (ACC), Confirmatory Factor Analysis (CFA), and Multiple Regression Analysis (MRA), are used to analyze the data and test the hypotheses. The research has reached a number of results, the most important of which are: (1) There is a statistical relationship between the independent variable, PL, and the dependent variable, OS. The paradoxical leader encourages employees to express their opinions and builds a work environment characterized by flexibility and independence. The paradoxical leader also supports specialized work teams, which leads to the creation of new ideas on the one hand and contributes to the achievement of outstanding performance on the other. (2) The mentality of the paradoxical leader is flexible and capable of absorbing suggestions from all employees. The paradoxical leader is also interested in enhancing cooperation among employees and provides opportunities to transfer experience and increase knowledge-sharing. Such knowledge-sharing creates the diversity that helps the organization obtain rich external information and deal with a rapidly changing environment. (3) The PL approach helps in facing the paradoxical demands of employees.
A paradoxical leader plays an important role in reducing the feeling of instability in the work environment and the lack of job security, reducing employees' negative feelings, restoring balance in the work environment, improving employees' well-being, and increasing their degree of job satisfaction in the organization. The study makes a number of recommendations, the most important of which are: (1) The leaders of organizations must listen to the views and needs of employees and move away from the official method of control. The leader should give employees sufficient freedom to participate in decision-making and maintain enough space among them, and treatment between leaders and employees must be based on friendliness. (2) Organizational leaders need to pay attention to knowledge-sharing among employees through training courses. The leader should make sure that every piece of information provided by an employee is valuable and useful, and can be used to solve a problem that may face his or her colleagues at work. (3) Organizational leaders need to foster knowledge-sharing among employees through brainstorming sessions. The leader should ensure that employees obtain knowledge from their colleagues and share ideas and information among themselves, in addition to motivating employees to complete their work in new and creative ways, so that they do not feel bored by repeating the same routine procedures in the organization.

Keywords: paradoxical leadership, organizational success, human resources, management

Procedia PDF Downloads 47
805 The Use of Space Syntax in Urban Transportation Planning and Evaluation: Limits and Potentials

Authors: Chuan Yang, Jing Bie, Yueh-Lung Lin, Zhong Wang

Abstract:

Transportation planning is an integrative academic discipline combining research and practice, with the aim of improving mobility and accessibility at both the strategic policy-making level and the operational dimensions of practical planning. Transportation planning can build the linkage between traffic and social development goals, for instance economic benefits and environmental sustainability. Transportation planning analysis and evaluation tend to apply empirical quantitative approaches under the guidance of fundamental principles such as efficiency, equity, safety, and sustainability. Space syntax theory has been applied to the spatial distribution of pedestrian movement and vehicle flow analysis; however, little has been written about its application in transportation planning. The correlations between space syntax variables and authentic observations have shown that urban configuration has a significant effect on urban dynamics, for instance land value, building density, traffic, and crime. This research aims to explore the potential of applying space syntax methodology to evaluate urban transportation planning through studying the effects of urban configuration on cities' transportation performance. Through a literature review, this paper discusses the effects that urban configurations with different degrees of integration and accessibility have on three elementary components of transportation planning - transportation efficiency, transportation safety, and economic agglomeration development - via intensifying and stabilising the natural movement generated by the street network. The paper then discusses the potential and limits of space syntax theory for studying the performance of urban transportation and transportation planning.
In practical terms, this research will help future research explore the effects of urban design on transportation performance, and identify which patterns of urban street networks allow for the most efficient and safe transportation performance with higher economic benefits.

Keywords: transportation planning, space syntax, economic agglomeration, transportation efficiency, transportation safety

Procedia PDF Downloads 181
804 Development of an Atmospheric Radioxenon Detection System for Nuclear Explosion Monitoring

Authors: V. Thomas, O. Delaune, W. Hennig, S. Hoover

Abstract:

Measurement of radioactive isotopes of atmospheric xenon is used to detect, locate, and identify any confined nuclear test as part of the Comprehensive Nuclear-Test-Ban Treaty (CTBT). In this context, the French Alternative Energies and Atomic Energy Commission (CEA) has developed a fixed device to continuously measure the concentration of these fission products, the SPALAX process. During its atmospheric transport, radioactive xenon undergoes significant dilution between the source point and the measurement station. Given the distances between the fixed stations, located all over the globe, the typical volume activities measured are near 1 mBq m⁻³. To avoid the constraints induced by atmospheric dilution, the development of a mobile detection system is in progress; this system will allow on-site measurements in order to confirm or refute a suspicious measurement detected by a fixed station. Furthermore, this system will use the beta/gamma coincidence measurement technique in order to drastically reduce the environmental background (which masks such activities). The detector prototype consists of a gas cell surrounded by two large silicon wafers, coupled with two square NaI(Tl) detectors. The gas cell has a sample volume of 30 cm³, and the silicon wafers are 500 µm thick with an active surface area of 3600 mm². In order to minimize leakage current, each wafer has been segmented into four independent silicon pixels. The cell is sandwiched between two low-background NaI(Tl) detectors (70 × 70 × 40 mm³ crystals). The expected Minimum Detectable Concentration (MDC) for each radioxenon isotope is of the order of 1-10 mBq m⁻³. Three 4-channel digital acquisition modules (Pixie-NET) are used to process all the signals. Time synchronization is ensured by a dedicated PTP network using the IEEE 1588 Precision Time Protocol. This paper presents the system from simulation to laboratory tests.
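The beta/gamma coincidence technique described above keeps only events where a silicon (beta) and a NaI(Tl) (gamma) timestamp fall within a short window, treating unpaired events as background. A minimal sketch with hypothetical timestamps and window width (not the Pixie-NET configuration):

```python
def coincidences(beta_times, gamma_times, window_ns=1000):
    """Pair beta and gamma timestamps (in ns) that fall within a
    coincidence window; unpaired events are treated as background.
    Timestamps and window below are illustrative only."""
    gamma_times = sorted(gamma_times)
    pairs = []
    j = 0
    for tb in sorted(beta_times):
        # advance the gamma pointer past events too early to match
        while j < len(gamma_times) and gamma_times[j] < tb - window_ns:
            j += 1
        if j < len(gamma_times) and abs(gamma_times[j] - tb) <= window_ns:
            pairs.append((tb, gamma_times[j]))
    return pairs

beta = [1_000, 50_000, 90_000]
gamma = [1_400, 20_000, 90_800]
print(coincidences(beta, gamma))
```

The isolated events at 50 000 ns (beta) and 20 000 ns (gamma) are rejected, which is how the technique suppresses the environmental background that masks mBq-level activities.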

Keywords: beta/gamma coincidence technique, low level measurement, radioxenon, silicon pixels

Procedia PDF Downloads 116
803 Radar Track-based Classification of Birds and UAVs

Authors: Altilio Rosa, Chirico Francesco, Foglia Goffredo

Abstract:

In recent years, the number of Unmanned Aerial Vehicles (UAVs) has significantly increased. The rapid development of commercial and recreational drones makes them an important part of our society, yet despite the growing list of their applications, these vehicles pose a huge threat to civil and military installations: detection, classification, and neutralization of such flying objects have become an urgent need. Radar is an effective remote sensing tool for detecting and tracking flying objects, but scenarios characterized by a high number of tracks related to flying birds make the drone detection task especially challenging: the operator's PPI is cluttered with a huge number of potential threats, and reaction time can be severely affected. Compared with UAVs, flying birds show similar velocities, radar cross-sections, and, in general, similar characteristics. Since no single feature is able to distinguish UAVs from birds, this paper uses a multiple-feature approach in which an original feature selection technique is developed to feed binary classifiers trained to distinguish birds and UAVs. Radar tracks acquired in the field and related to different UAVs and birds performing various trajectories were used to extract specifically designed target-movement-related features based on velocity, trajectory, and signal strength. An optimization strategy based on a genetic algorithm is also introduced to select the optimal subset of features and to estimate the performance of several classification algorithms (neural network, SVM, logistic regression, etc.), both in terms of the number of selected features and the misclassification error. Results show that the proposed methods are able to reduce the dimension of the data space and to remove almost all non-drone false targets with suitable classification accuracy (higher than 95%).
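The feature-subset selection described above can be illustrated with an exhaustive search scored by leave-one-out misclassification error; the paper's genetic algorithm replaces the exhaustive loop for larger feature sets, and the tracks, features, and nearest-neighbour classifier below are hypothetical stand-ins:

```python
from itertools import combinations

def subset_error(tracks, labels, feats):
    """Leave-one-out 1-nearest-neighbour error using only the chosen
    feature indices (a stand-in for the trained binary classifier)."""
    errors = 0
    for i, (t, y) in enumerate(zip(tracks, labels)):
        best, best_d = None, float("inf")
        for j, (u, z) in enumerate(zip(tracks, labels)):
            if i == j:
                continue
            d = sum((t[f] - u[f]) ** 2 for f in feats)
            if d < best_d:
                best, best_d = z, d
        errors += best != y
    return errors / len(tracks)

def select_features(tracks, labels, n_features):
    """Exhaustive subset search; a genetic algorithm would replace
    this loop when the feature set is too large to enumerate."""
    best = min((subset_error(tracks, labels, f), f)
               for k in range(1, n_features + 1)
               for f in combinations(range(n_features), k))
    return best  # (error, tuple of selected feature indices)

# Toy tracks: (mean speed m/s, path curvature, signal strength dB).
# Here only curvature (index 1) separates birds (high) from drones (low).
tracks = [(12, 0.9, -60), (14, 0.8, -55),   # birds
          (13, 0.1, -58), (11, 0.2, -62)]   # drones
labels = ["bird", "bird", "drone", "drone"]
err, feats = select_features(tracks, labels, 3)
print(err, feats)
```

The search keeps only the curvature feature and reaches zero error on this toy set, mirroring the paper's finding that a reduced feature subset can retain discriminative power.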

Keywords: birds, classification, machine learning, UAVs

Procedia PDF Downloads 206
802 Numerical Modelling of Shear Zone and Its Implications on Slope Instability at Letšeng Diamond Open Pit Mine, Lesotho

Authors: M. Ntšolo, D. Kalumba, N. Lefu, G. Letlatsa

Abstract:

Rock mass damage due to shear tectonic activity has been investigated largely in geoscience, where fluid transport is of major interest. However, little has been studied on the effect of shear zones on rock mass behavior and their impact on the stability of rock slopes. At Letšeng Diamonds open pit mine in Lesotho, a shear zone composed of sheared kimberlite material, calcite, and altered basalt forms part of the haul ramp into cut 3 of the main pit. The alarming rate at which the shear zone is deteriorating has triggered concerns about both the local and global stability of the pit walls. This study presents the numerical modelling of the open pit slope affected by the shear zone at Letšeng Diamond Mine (LDM). The analysis involved development of the slope model using the two-dimensional finite element code RS2. Interfaces between the shear zone and the host rock were represented by special joint elements incorporated in the finite element code. Analysis of structural geological mapping data provided a good platform for understanding the joint network, and the major joints, including the shear zone, were incorporated into the model for simulation. This approach proved successful by demonstrating that continuum modelling can be used to evaluate the evolution of stresses, strains, plastic yielding, and failure mechanisms consistent with field observations. The location, size, and orientation of the geological shear zone proved to exert important structural control. Furthermore, the model analyzed slope deformation and the possibility of sliding along the shear zone interfaces. This type of approach can predict shear zone deformation and failure mechanisms; hence, mitigation strategies can be deployed for the safety of human lives and property within mine pits.
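Sliding along a weak plane such as a shear zone interface is often cross-checked with a simple limit-equilibrium factor of safety alongside the finite element model. A sketch with hypothetical block parameters (not Letšeng values):

```python
import math

def planar_fos(weight_kn, dip_deg, cohesion_kpa, area_m2, friction_deg):
    """Limit-equilibrium factor of safety for planar sliding along a
    weak plane: resisting forces (cohesion plus friction from the
    normal component of weight) over the driving (downslope) force.
    All parameters below are hypothetical."""
    dip = math.radians(dip_deg)
    phi = math.radians(friction_deg)
    resisting = cohesion_kpa * area_m2 + weight_kn * math.cos(dip) * math.tan(phi)
    driving = weight_kn * math.sin(dip)
    return resisting / driving

print(round(planar_fos(weight_kn=5000, dip_deg=35, cohesion_kpa=25,
                       area_m2=40, friction_deg=30), 2))
```

A factor of safety only slightly above 1, as here, flags the kind of marginal stability along the sheared interface that motivates the more detailed stress-strain analysis in RS2.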

Keywords: numerical modeling, open pit mine, shear zone, slope stability

Procedia PDF Downloads 287
801 Influence of a Cationic Membrane in a Double Compartment Filter-Press Reactor on the Atenolol Electro-Oxidation

Authors: Alan N. A. Heberle, Salatiel W. Da Silva, Valentin Perez-Herranz, Andrea M. Bernardes

Abstract:

Contaminants of emerging concern are widely used substances, such as pharmaceutical products. These compounds pose a risk to both wildlife and human life, since they are not completely removed from wastewater by conventional wastewater treatment plants. In the environment, they can be harmful even at low concentrations (µg/L or ng/L), causing bacterial resistance, endocrine disruption and cancer, among other effects. One of the most commonly taken medicines for cardiocirculatory diseases is atenolol (ATL), a β-blocker that is toxic to aquatic life. It is therefore necessary to implement a methodology capable of degrading ATL and thus avoiding environmental harm. A very promising technology is advanced electrochemical oxidation (AEO), whose mechanisms are based on the electrogeneration of reactive radicals (mediated oxidation) and/or on direct electron transfer from the contaminant to the electrode surface (direct oxidation). Hydroxyl (HO•) and sulfate (SO₄•⁻) radicals can be generated, depending on the reaction medium. In addition, under some conditions the peroxydisulfate ion (S₂O₈²⁻) is also generated from the pairwise reaction of SO₄•⁻ radicals. The radicals, the ion, and direct contaminant discharge can all break down the molecule, resulting in degradation and/or mineralization. However, the ATL molecule and its byproducts can still remain in the treated solution. Accordingly, the AEO process can be improved, for example by using a cationic membrane to separate the cathodic (reduction) from the anodic (oxidation) reactor compartment. The aim of this study is to investigate the influence of a cationic membrane (Nafion®-117) separating the cathodic and anodic compartments of the AEO reactor. The reactor studied was a filter-press cell operated in batch recirculation mode at a flow rate of 60 L/h.
The anode was Nb/BDD2500 and the cathode stainless steel, both two-dimensional with a geometric surface area of 100 cm². The solution feeding the anodic compartment was prepared with 100 mg/L ATL using 4 g/L Na₂SO₄ as supporting electrolyte. The cathodic compartment was fed a solution containing 71 g/L Na₂SO₄. The membrane was placed between the two solutions. Applied current densities (iₐₚₚ) of 5, 20 and 40 mA/cm² were studied over a 240-minute treatment time. The ATL decay was analyzed by ultraviolet spectroscopy (UV/Vis), and mineralization was determined by measuring total organic carbon (TOC) in a TOC-L CPH Shimadzu analyzer. Without the membrane, iₐₚₚ of 5, 20 and 40 mA/cm² resulted in 55, 87 and 98% ATL degradation at the end of the treatment time, respectively. With the membrane, degradation at the same iₐₚₚ reached 90, 100 and 100%, requiring 240, 120 and 40 min for maximum degradation, respectively. Mineralization without the membrane, for the same iₐₚₚ, was 40, 55 and 72% at 240 min, respectively; with the membrane, all tested iₐₚₚ reached 80% mineralization, differing only in the time required (240, 150 and 120 min, respectively). The membrane thus increased ATL oxidation, probably because it prevents the reduction of oxidant ions (S₂O₈²⁻) at the cathode surface.
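Degradation percentages like those reported are typically computed from initial and final concentrations (or UV/Vis absorbances), and an apparent rate constant can be extracted if pseudo-first-order decay is assumed. The sketch below illustrates both calculations; the concentration readings are hypothetical, chosen to match the reported 55% removal at 5 mA/cm² without the membrane, and the kinetic model is an assumption, not something the abstract states.

```python
import math

def degradation_pct(c0, c_t):
    """Percent removal computed from initial and final concentrations."""
    return 100.0 * (c0 - c_t) / c0

def pseudo_first_order_k(c0, c_t, minutes):
    """Apparent rate constant assuming ln(C/C0) = -k*t (an assumed model)."""
    return -math.log(c_t / c0) / minutes

# Hypothetical ATL readings (mg/L) at 5 mA/cm2 without the membrane
c0, c240 = 100.0, 45.0
print(degradation_pct(c0, c240))            # 55.0
print(pseudo_first_order_k(c0, c240, 240))  # ~0.0033 min^-1
```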

Keywords: contaminants of emerging concern, advanced electrochemical oxidation, atenolol, cationic membrane, double compartment reactor

Procedia PDF Downloads 118
800 Measurement of Ionospheric Plasma Distribution over Myanmar Using Single Frequency Global Positioning System Receiver

Authors: Win Zaw Hein, Khin Sandar Linn, Su Su Yi Mon, Yoshitaka Goto

Abstract:

The Earth's ionosphere extends from an altitude of about 70 km to several hundred km above the ground and is composed of ions and electrons called plasma. This plasma delays GPS (Global Positioning System) signals and reflects radio waves. The delay along the signal path from the satellite to the receiver is directly proportional to the total electron content (TEC) of the plasma, and this delay is the largest error factor in satellite positioning and navigation. For a long time, sounding observations from the top and bottom of the ionosphere were the standard way to investigate ionospheric plasma. Recently, continuous monitoring of the TEC using networks of GNSS (Global Navigation Satellite System) observation stations, which are basically built for land survey, has been conducted in several countries. In these stations, however, multi-frequency receivers are installed to estimate the plasma delay from its frequency dependence, and their cost is much higher than that of single-frequency GPS receivers. In this research, a single-frequency GPS receiver was used instead of expensive multi-frequency GNSS receivers to measure ionospheric plasma variation, such as the vertical TEC distribution. A single-frequency u-blox GPS receiver was used to probe the ionospheric TEC, with observations made at Mandalay Technological University in Myanmar. In the method, the ionospheric TEC distribution is represented by polynomial functions of latitude and longitude, and the parameters of the functions are determined by least-squares fitting on pseudorange data obtained at a known location under a thin-layer ionosphere assumption. The validity of the method was evaluated against measurements obtained by the Japanese GNSS observation network GEONET, and the single-frequency measurement results were compared with those of dual-frequency measurements.
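The core of the described method, representing vertical TEC as a low-order polynomial in latitude and longitude and solving for its coefficients by least squares, can be sketched as below. The pseudorange processing and thin-shell mapping function are omitted, and all pierce-point coordinates and TEC values are hypothetical.

```python
import numpy as np

# Hypothetical ionospheric pierce points (degrees) and derived vertical TEC (TECU)
lat = np.array([20.5, 21.0, 21.5, 22.0, 22.5, 21.2, 20.8, 22.1])
lon = np.array([95.5, 96.0, 96.5, 95.8, 96.2, 95.6, 96.4, 96.1])
vtec = np.array([12.1, 12.9, 13.4, 14.0, 14.8, 13.1, 12.6, 14.3])

# Model: VTEC(lat, lon) ~ a0 + a1*lat + a2*lon + a3*lat*lon
A = np.column_stack([np.ones_like(lat), lat, lon, lat * lon])
coeffs, *_ = np.linalg.lstsq(A, vtec, rcond=None)  # least-squares coefficients

def vtec_model(la, lo):
    """Evaluate the fitted polynomial TEC surface at a given location."""
    return coeffs[0] + coeffs[1] * la + coeffs[2] * lo + coeffs[3] * la * lo

print(vtec_model(21.0, 96.0))  # interpolated VTEC near the observation site
```

In the actual method the left-hand side comes from single-frequency pseudorange residuals at a known receiver position rather than directly measured VTEC, but the fitting step has this shape.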

Keywords: ionosphere, global positioning system, GPS, ionospheric delay, total electron content, TEC

Procedia PDF Downloads 122
799 Spatial Element Importance and Its Relation to Characters’ Emotions and Self Awareness in Michela Murgia’s Collection of Short Stories Tre Ciotole. Rituali per Un Anno DI Crisi

Authors: Nikica Mihaljević

Abstract:

Published in 2023, "Tre ciotole. Rituali per un anno di crisi" is a collection of short stories disconnected from one another in terms of topics and the representation of characters. However, these short stories complete and, in a way, continue each other. The book happens to be Murgia's last, as the author died a few months after its publication, and it reads as a kind of summary of all her previous literary works. In her earlier publications, Murgia had already stressed certain characters' particularities, such as solitude and alienation from others, which are at the center of attention in this work, too. What all the stories in "Tre ciotole" have in common is that they deal with characters' identity and self-awareness through the challenges the characters confront and the way they live their emotions in relation to the surrounding space. Although the challenges seem similar, the spatial element around the characters differs, yet it confirms each time that characters' emotions, and consequently their self-awareness, can be formed and built only through their connection and relation to the surrounding space. In this way, the reader creates an imaginary network of complex relations among characters across all the short stories, which offers an opportunity to search for a way out of the usual patterns that tend to repeat while characters focus on building self-awareness. The aim of this paper is to determine and analyze the role of spatial elements in the creation of characters' emotions and in the process of self-awareness. As the spatial element changes, is transformed, or is substituted, we likewise notice the rise of an unconscious desire for self-harm in the characters, which damages their self-awareness. Namely, the characters face a crisis they cannot control by inventing other types of crises that can be controlled.
This happens to be their way of finding a way out of the identity crisis. Consequently, we expect the results of the analysis to point out the similarities in the depiction of characters across the short stories and to show the extent to which the characters' identities depend on the surrounding space in each story. In this way, the results will highlight the importance of spatial elements in character identity formation in Michela Murgia's short stories and also underline the significance of Murgia's literary opus as a whole.

Keywords: Italian literature, short stories, environment, spatial element, emotions, characters

Procedia PDF Downloads 39
798 Development of an Automatic Computational Machine Learning Pipeline to Process Confocal Fluorescence Images for Virtual Cell Generation

Authors: Miguel Contreras, David Long, Will Bachman

Abstract:

Background: Microscopy plays a central role in cell and developmental biology. In particular, fluorescence microscopy can be used to visualize specific cellular components and subsequently quantify their morphology through the development of virtual-cell models for studying the effects of mechanical forces on cells. However, these imaging experiments face challenges that can make it difficult to quantify cell morphology: inconsistent results, time-consuming and potentially costly protocols, and limitations on the number of labels due to spectral overlap. To address these challenges, the objective of this project is to develop an automatic computational machine learning pipeline to predict cellular component morphology for virtual-cell generation based on fluorescence cell membrane confocal z-stacks. Methods: Registered confocal z-stacks of the nuclei and cell membranes of endothelial cells, consisting of 20 images each, were obtained from fluorescence confocal microscopy and normalized through a software pipeline so that each image had a mean pixel intensity value of 0.5. An open-source machine learning algorithm, originally developed to predict fluorescence labels on unlabeled transmitted-light microscopy cell images, was trained using this set of normalized z-stacks on a single-CPU machine. Through transfer learning, the algorithm used knowledge acquired from its previous training sessions to learn the new task. Once trained, the algorithm was used to predict the morphology of nuclei using normalized cell membrane fluorescence images as input. Predictions were compared to the ground-truth fluorescence nuclei images. Results: After one week of training, using one cell membrane z-stack (20 images) and the corresponding nuclei label, the algorithm showed qualitatively good predictions on the training set, accurately predicting nuclei locations and shapes when fed only fluorescence membrane images.
Training sessions with improved membrane image quality, with clear lining and shape of the membrane showing the boundaries of each cell, proportionally improved the nuclei predictions, reducing errors relative to the ground truth. Discussion: These results show the potential of pre-trained machine learning algorithms to predict cell morphology using relatively small amounts of data and training time, eliminating the need for multiple labels in immunofluorescence experiments. With further training, the algorithm is expected to predict additional labels (e.g., focal-adhesion sites, cytoskeleton), which can be added to the automatic machine learning pipeline for direct input into Principal Component Analysis (PCA) for the generation of virtual-cell mechanical models.
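The normalization step, scaling every image in a z-stack so its mean pixel intensity is 0.5, can be sketched as follows. The array shapes and intensity range are illustrative stand-ins for the registered 20-image confocal z-stacks described above.

```python
import numpy as np

def normalize_stack(stack):
    """Scale each image in a confocal z-stack to a mean pixel intensity of 0.5.

    `stack` is a (z, height, width) array; each slice is rescaled independently.
    """
    out = np.empty_like(stack, dtype=float)
    for i, img in enumerate(stack):
        mean = img.mean()
        # Multiplying by 0.5/mean makes the slice mean exactly 0.5
        out[i] = img * (0.5 / mean) if mean > 0 else img
    return out

# Synthetic 20-image z-stack with random intensities in [0, 255)
rng = np.random.default_rng(0)
stack = rng.uniform(0, 255, size=(20, 64, 64))
norm = normalize_stack(stack)
print(norm[0].mean())  # ~0.5 for every slice
```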

Keywords: cell morphology prediction, computational machine learning, fluorescence microscopy, virtual-cell models

Procedia PDF Downloads 193
797 Synthesis of Methanol through Photocatalytic Conversion of CO₂: A Green Chemistry Approach

Authors: Sankha Chakrabortty, Biswajit Ruj, Parimal Pal

Abstract:

Methanol is one of the most important chemical products and intermediates. It can be used as a solvent, an intermediate, or a raw material for a number of higher-value products, fuels or additives. Over the last decade, global demand for methanol has increased drastically, pushing scientists to produce large amounts of methanol from renewable sources in a sustainable way. Methanol has so far been synthesized on a large scale from various non-renewable raw materials, which makes the process unsustainable. In these circumstances, photocatalytic conversion of CO₂ into methanol under solar/UV excitation becomes a viable, sustainable production approach that not only addresses the environmental crisis by recycling CO₂ into fuels but also removes CO₂ from the atmosphere. Developing such a sustainable production route for CO₂ conversion into methanol remains a major research challenge compared with conventional, energy-intensive processes. Against this backdrop, the development of environmentally friendly materials such as photocatalysts has become central to methanol synthesis. Researchers in this field are continually seeking improved photocatalysts to enhance photocatalytic performance. Graphene-based hybrid and composite materials with improved properties could be better nanomaterials for the selective conversion of CO₂ to methanol under visible light (solar energy) or UV light. The present work concerns the synthesis of an improved heterogeneous graphene-based photocatalyst with enhanced catalytic activity and surface area. Graphene with enhanced surface area is coupled to copper-loaded titanium oxide to improve electron capture and transport, which substantially increases photoinduced charge transfer and extends the lifetime of photogenerated charge carriers.
A fast reduction method based on H₂ purging was adopted to synthesize the improved graphene, whereas an ultrasonication-based sol-gel method was applied to prepare the graphene-coupled, copper-loaded titanium oxide with enhanced properties. The prepared photocatalysts were exhaustively characterized using different techniques. The effects of catalyst dose, CO₂ flow rate, reaction temperature and stirring time on the efficacy of the system, in terms of methanol yield and productivity, were studied. The study showed that the newly synthesized photocatalyst, with its enhanced surface, sustained a methanol productivity and yield of 0.14 g/L·h and 0.04 g/gcat, respectively, after 3 h of illumination under UV (250 W) at an optimum catalyst dosage of 10 g/L with a 1:2:3 (graphene:TiO₂:Cu) weight ratio.
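The reported productivity and yield figures can be cross-checked with simple arithmetic: productivity (g of methanol per litre per hour) times run time gives methanol per litre, and dividing by the catalyst dosage gives yield per gram of catalyst. The numbers below are taken from the abstract; the relation itself is a standard definition, assumed to be the one the authors use.

```python
productivity = 0.14   # g methanol per litre per hour (reported)
hours = 3.0           # illumination time (reported)
catalyst_dose = 10.0  # g catalyst per litre (reported optimum)

methanol_per_litre = productivity * hours           # total methanol after 3 h
yield_per_gram_cat = methanol_per_litre / catalyst_dose

print(round(methanol_per_litre, 2))   # 0.42 g/L
print(round(yield_per_gram_cat, 3))   # 0.042 g/gcat, consistent with the reported 0.04
```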

Keywords: renewable energy, CO₂ capture, photocatalytic conversion, methanol

Procedia PDF Downloads 98
796 Realizing Teleportation Using Black-White Hole Capsule Constructed by Space-Time Microstrip Circuit Control

Authors: Mapatsakon Sarapat, Mongkol Ketwongsa, Somchat Sonasang, Preecha Yupapin

Abstract:

Preliminary tests of a space-time control circuit using a two-level system circuit with a 4-5 cm diameter microstrip for realistic teleportation have been designed and performed. The work begins by calculating the parameters of a circuit that uses alternating current (AC) at a specified frequency as the input signal. A method that causes electrons to move along the circuit perimeter starting at the speed of light was found satisfactory on the basis of wave-particle duality. It is able to establish superluminal speed (faster than light) for the electron cloud in the middle of the circuit, creating a timeline and a propulsive force as well. The timeline is formed by the cancellation of time stretching and shrinking in the relativistic regime, in which absolute time has vanished. Both black holes and white holes are created from time signals at the beginning, where the speed of the electrons approaches the speed of light. They entangle together like a capsule until they reach the point where they collapse and cancel each other out, which is controlled by the frequency of the circuit. This method can therefore be applied to large-scale circuits such as potassium, from which the same approach could be used to form a system to teleport living things. The black hole is conceived as a hibernation environment that allows living things to survive and travel to the teleportation destination, which can be controlled in position and time relative to the speed of light. When the capsule reaches its destination, the frequency is increased so that the black holes and white holes cancel each other out into a balanced environment, allowing life to teleport safely to the destination. The same system must therefore exist at both the origin and the destination, which could form a network, and the approach could also be applied to space travel.
The design will be tested on a small system using a microstrip circuit that can be built in the laboratory on a limited budget and used in both wired and wireless systems.

Keywords: quantum teleportation, black-white hole, time, timeline, relativistic electronics

Procedia PDF Downloads 65
795 Plotting of an Ideal Logic versus Resource Outflow Graph through Response Analysis on a Strategic Management Case Study Based Questionnaire

Authors: Vinay A. Sharma, Shiva Prasad H. C.

Abstract:

The initial stages of any project are often observed to be in a mixed set of conditions. Setting up the project is a tough task, but taking the initial decisions is rather less complex, as some of the critical factors are yet to be introduced into the scenario. These simple initial decisions potentially shape the timeline and the subsequent events that might later be plotted on it. Proceeding towards a solution for a problem is the primary objective in the initial stages. Optimization of the solutions can come later, and hence the resources deployed towards attaining the solution are higher than they would be in the optimized versions. A 'logic' that counters the problem is essentially the core of the desired solution. Thus, if the problem is solved, the deployment of resources has led to the required logic being attained. As the project proceeds, the individuals working on it face fresh challenges as a team and become better accustomed to their surroundings. The developed, optimized solutions are then considered for implementation, as the individuals are now experienced, better understand the consequences and causes of possible failure, and thus integrate adequate tolerances wherever required. Furthermore, as the team grows in strength, accumulates knowledge, and begins to transfer it efficiently, the individuals in charge of the project, along with the managers, focus more on the optimized solutions rather than the traditional ones to minimize the required resources. Hence, as time progresses, the authorities prioritize attainment of the required logic at a lower amount of dedicated resources. For empirical analysis of the stated theory, leaders and key figures in organizations are surveyed for their ideas on the appropriate logic required for tackling a problem. Key pointers spotted in successfully implemented solutions are noted from the analysis of the responses, and a metric for measuring logic is developed.
A graph is plotted with the quantifiable logic on the Y-axis, and the dedicated resources for the solutions to various problems on the X-axis. The dedicated resources are plotted over time, and hence the X-axis is also a measure of time. In the initial stages of the project, the graph is rather linear, as the required logic will be attained, but the consumed resources are also high. With time, the authorities begin focusing on optimized solutions, since the logic attained through them is higher, but the resources deployed are comparatively lower. Hence, the difference between consecutive plotted ‘resources’ reduces and as a result, the slope of the graph gradually increases. On an overview, the graph takes a parabolic shape (beginning on the origin), as with each resource investment, ideally, the difference keeps on decreasing, and the logic attained through the solution keeps increasing. Even if the resource investment is higher, the managers and authorities, ideally make sure that the investment is being made on a proportionally high logic for a larger problem, that is, ideally the slope of the graph increases with the plotting of each point.
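The shape of the described curve, logic on the Y-axis, cumulative resources on the X-axis, with the slope increasing as optimization raises the logic gained per unit of resource, can be sketched numerically. All step values below are illustrative, not survey data from the study.

```python
# Illustrative model: each project phase consumes fewer resources than the last
# while attaining more logic, so the logic-vs-resources curve steepens over time.
resource_steps = [10, 9, 7, 5, 4, 3]    # resources spent per phase (decreasing)
logic_steps = [5, 6, 8, 10, 11, 12]     # logic attained per phase (increasing)

cum_resources, cum_logic = [0], [0]
for r, l in zip(resource_steps, logic_steps):
    cum_resources.append(cum_resources[-1] + r)
    cum_logic.append(cum_logic[-1] + l)

# Slope of each segment: logic attained per unit resource in that phase
slopes = [l / r for r, l in zip(resource_steps, logic_steps)]
print(slopes)  # strictly increasing: each unit of resource buys more logic
```

Plotting `cum_logic` against `cum_resources` gives the parabola-like curve the abstract describes: near-linear at first, then steepening as optimized solutions dominate.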

Keywords: decision-making, leadership, logic, strategic management

Procedia PDF Downloads 95
794 An Artificial Intelligence Framework to Forecast Air Quality

Authors: Richard Ren

Abstract:

Air pollution is a serious danger to international well-being and economies - it will kill an estimated 7 million people every year, costing world economies $2.6 trillion by 2060 due to sick days, healthcare costs, and reduced productivity. In the United States alone, 60,000 premature deaths are caused by poor air quality. For this reason, there is a crucial need to develop effective methods to forecast air quality, which can mitigate air pollution’s detrimental public health effects and associated costs by helping people plan ahead and avoid exposure. The goal of this study is to propose an artificial intelligence framework for predicting future air quality based on timing variables (i.e. season, weekday/weekend), future weather forecasts, as well as past pollutant and air quality measurements. The proposed framework utilizes multiple machine learning algorithms (logistic regression, random forest, neural network) with different specifications and averages the results of the three top-performing models to eliminate inaccuracies, weaknesses, and biases from any one individual model. Over time, the proposed framework uses new data to self-adjust model parameters and increase prediction accuracy. To demonstrate its applicability, a prototype of this framework was created to forecast air quality in Los Angeles, California using datasets from the RP4 weather data repository and EPA pollutant measurement data. The results showed good agreement between the framework’s predictions and real-life observations, with an overall 92% model accuracy. The combined model is able to predict more accurately than any of the individual models, and it is able to reliably forecast season-based variations in air quality levels. Top air quality predictor variables were identified through the measurement of mean decrease in accuracy. 
This study proposed and demonstrated the efficacy of a comprehensive air quality prediction framework leveraging multiple machine learning algorithms to overcome individual algorithm shortcomings. Future enhancements should focus on expanding and testing a greater variety of modeling techniques within the proposed framework, testing the framework in different locations, and developing a platform to automatically publish future predictions in the form of a web or mobile application. Accurate predictions from this artificial intelligence framework can in turn be used to save and improve lives by allowing individuals to protect their health and allowing governments to implement effective pollution control measures.
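The ensembling step, averaging the outputs of the three top-performing models, can be sketched as follows. The per-model class probabilities below are placeholders standing in for logistic regression, random forest, and neural network outputs; the class labels are hypothetical AQI categories.

```python
def ensemble_predict(prob_lists):
    """Average class probabilities across models and pick the arg-max class.

    `prob_lists` maps model name -> list of class probabilities for one sample.
    """
    n_models = len(prob_lists)
    n_classes = len(next(iter(prob_lists.values())))
    avg = [sum(p[c] for p in prob_lists.values()) / n_models for c in range(n_classes)]
    return avg.index(max(avg)), avg

# Hypothetical outputs of the three top-performing models for one day,
# over AQI categories [good, moderate, unhealthy]
preds = {
    "logistic_regression": [0.60, 0.30, 0.10],
    "random_forest":       [0.45, 0.40, 0.15],
    "neural_network":      [0.55, 0.25, 0.20],
}
label, avg = ensemble_predict(preds)
print(label, [round(a, 3) for a in avg])  # class 0 wins on the averaged probabilities
```

Averaging dampens any one model's bias, which is the stated rationale for combining the three top performers rather than trusting a single model.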

Keywords: air quality prediction, air pollution, artificial intelligence, machine learning algorithms

Procedia PDF Downloads 107
793 Verification and Validation of Simulated Process Models of KALBR-SIM Training Simulator

Authors: T. Jayanthi, K. Velusamy, H. Seetha, S. A. V. Satya Murty

Abstract:

Verification and validation of simulated process models is the most important phase of the simulator life cycle. Evaluation of simulated process models based on verification and validation techniques checks the closeness of each component model (in a simulated network) to the real system/process with respect to dynamic behaviour under steady-state and transient conditions. The verification and validation process helps qualify the process simulator for its intended purpose, whether that is providing comprehensive training or design verification. In general, model verification is carried out by comparing simulated component characteristics with the original requirements to ensure that each step in the model development process completely incorporates all design requirements. Validation testing is performed by comparing the simulated process parameters to the actual plant process parameters, either in standalone mode or integrated mode. A full-scope replica operator training simulator for the Prototype Fast Breeder Reactor (PFBR), named KALBR-SIM (Kalpakkam Breeder Reactor Simulator), has been developed at IGCAR, Kalpakkam, India, with the main participants being engineers and experts from the modeling team, the process design team, and the instrumentation and control design team. This paper discusses the verification and validation process in general, the evaluation procedure adopted for the PFBR operator training simulator, the methodology followed for verifying the models, and the reference documents and standards used. It details the importance of internal validation by design experts, subsequent validation by an external agency consisting of experts from various fields, model improvement by tuning based on experts' comments, final qualification of the simulator for its intended purpose, and the difficulties faced while coordinating the various activities.
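The validation comparison, checking simulated process parameters against actual plant parameters, can be sketched as a tolerance check. The parameter names, values, and tolerance bands below are hypothetical illustrations, not figures from the PFBR or KALBR-SIM documentation.

```python
def validate(simulated, actual, tolerance_pct):
    """Flag parameters whose simulated value deviates from the plant reference
    by more than the allowed percentage."""
    report = {}
    for name, sim_val in simulated.items():
        ref = actual[name]
        err = abs(sim_val - ref) / abs(ref) * 100.0  # relative error in percent
        report[name] = (round(err, 2), err <= tolerance_pct[name])
    return report

# Hypothetical steady-state parameters: simulator output vs reference plant data
sim = {"core_outlet_temp_C": 547.0, "primary_flow_kg_s": 3245.0}
ref = {"core_outlet_temp_C": 544.0, "primary_flow_kg_s": 3250.0}
tol = {"core_outlet_temp_C": 1.0, "primary_flow_kg_s": 2.0}  # allowed % deviation

report = validate(sim, ref, tol)
for name, (err, ok) in report.items():
    print(f"{name}: {err}% deviation -> {'PASS' if ok else 'FAIL'}")
```

In practice, the same comparison is repeated over transient time histories rather than single steady-state points, but the pass/fail criterion has this form.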

Keywords: Verification and Validation (V&V), Prototype Fast Breeder Reactor (PFBR), Kalpakkam Breeder Reactor Simulator (KALBR-SIM), steady state, transient state

Procedia PDF Downloads 246
792 Biotechnology Sector in the Context of National Innovation System: The Case of Norway

Authors: Parisa Afshin, Terje Grønning

Abstract:

Norway, similar to many other countries, has in recent years focused its policies on creating strong, highly innovative new sectors as the profitability of the oil and gas sector declines. The biotechnology sector in Norway has great potential, especially in marine biotech and cancer medicine. However, as a peripheral country, Norway faces particular challenges on the path to creating an internationally recognized biotech sector and an international knowledge hub. The aim of this article is to analyze the progress of the Norwegian biotechnology industry and its pathway to building an innovation network and conducting collaborative innovation, based on its initial conditions and its own advantages and disadvantages. The findings have important implications not only for policymakers and academics seeking to understand the infrastructure of the country's biotechnology sector, but also hold important lessons for other peripheral countries or regions aiming to create a strong biotechnology sector and catch up with strong, internationally recognized regions. Data and methodology: To achieve the main goal of this study, information was collected from secondary sources such as web pages and annual reports published by officials and the mass media, along with interviews. The data were collected to shed light on the brief history and current status of the Norwegian biotechnology sector and the geographic distribution of the biotech industry, followed by the role of academia-industry collaboration and public policies in Norwegian biotech. As knowledge is the key input to innovation, the knowledge perspective of the system, such as knowledge flows in the sector within the national and regional innovation systems, has been studied. Primary results: Internationalization has been an important element in the development of peripheral regions' innovativeness, enabling them to overcome their weaknesses while putting more weight on the importance of regional policies.
Following these findings, suggestions on policy decisions and international collaboration, regarding national and regional systems of innovation, are offered as a means of promoting a strong innovative sector.

Keywords: biotechnology sector, knowledge-based industry, national innovation system, regional innovation system

Procedia PDF Downloads 210
791 A Retrospective Analysis of the Impact of the Choosing Wisely Canada Campaign on Emergency Department Imaging Utilization for Head Injuries

Authors: Sameer Masood, Lucas Chartier

Abstract:

Head injuries are a commonly encountered presentation in emergency departments (ED), and the Choosing Wisely Canada (CWC) campaign was released in June 2015 in an attempt to decrease imaging utilization for patients with minor head injuries. The impact of the CWC campaign on imaging utilization for head injuries has not been explored in the ED setting. In this study, we describe the characteristics of patients with head injuries presenting to a tertiary care academic ED and the impact of the CWC campaign on CT head utilization. This retrospective cohort study used linked databases from the province of Ontario, Canada to assess emergency department visits with a primary diagnosis of head injury made between June 1, 2014 and August 31, 2016 at the University Health Network in Toronto, Canada. We examined the number of visits during the study period, the proportion of patients who had a CT head performed before and after the release of the CWC campaign, as well as mode of arrival and disposition. There were 4,322 qualifying visits at our site during the study period. The median presenting age was 44.12 years (IQR 27.83–67.45), the median GCS was 15 (IQR 15–15), and the majority of patients presented with intermediate acuity (CTAS 3). Overall, 43.17% of patients arrived via ambulance, 49.24% of patients received a CT head, and 10.46% of patients were admitted. Compared to patients presenting before the CWC campaign release, there was no significant difference in the rate of CT head imaging afterwards (50.41% vs 47.68%, P = 0.07). There were also no significant differences between the two groups in mode of arrival (ambulance vs ambulatory) (42.94% vs 43.48%, P = 0.72) or admission rates (9.85% vs 11.26%, P = 0.15). However, more patients belonged to the high-acuity groups (CTAS 1 or 2) in the post-campaign group (12.98% vs 8.11%, P < 0.001). Visits for head injuries make up a significant proportion of total ED visits, and approximately half of these patients receive CT imaging in the ED. The CWC campaign did not appear to affect imaging utilization for head injuries in the 14 months following its launch. Further efforts, including local quality improvement initiatives, are likely needed to increase adherence to its recommendations and reduce imaging utilization for head injuries.
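The pre/post comparison reported above (50.41% vs 47.68%, P = 0.07) is a standard two-proportion comparison; a minimal sketch of such a test is below. The group sizes in the usage example are hypothetical, since the abstract reports only the pooled total of 4,322 visits.

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided two-proportion z-test: returns (z, p_value)."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)  # pooled proportion under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF via erf
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical split of the 4,322 visits into pre- and post-campaign groups
z, p = two_proportion_z(1008, 2000, 1107, 2322)
```

With proportions of this size and groups of roughly two thousand visits each, the test yields a p-value near the non-significant 0.07 reported in the abstract.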

Keywords: choosing wisely, emergency department, head injury, quality improvement

Procedia PDF Downloads 209
790 Geomorphometric Analysis of the Hydrologic and Topographic Parameters of the Katsina-Ala Drainage Basin, Benue State, Nigeria

Authors: Oyatayo Kehinde Taofik, Ndabula Christopher

Abstract:

Drainage basins are a central theme in the green economy. Rising challenges in flooding, erosion, sediment transport, and sedimentation threaten the green economy. This has led to increasing emphasis on quantitative analysis of drainage basin parameters for better understanding, estimation, and prediction of fluvial responses and thus the associated hazards or disasters. This can be achieved through direct measurement, characterization, parameterization, or modeling. This study applied a Remote Sensing and Geographic Information System approach to the parameterization and characterization of the morphometric variables of the Katsina-Ala basin using a 30 m resolution Shuttle Radar Topography Mission (SRTM) Digital Elevation Model (DEM). This was complemented with topographic and hydrological maps of Katsina-Ala at a scale of 1:50,000. Linear, areal, and relief parameters were characterized. The results show that the Ala and Udene sub-watersheds are 4th and 5th order basins, respectively. The stream network shows a dendritic pattern, indicating homogeneity in texture and a lack of structural control in the study area. The Ala and Udene sub-watersheds have values for elongation ratio, circularity ratio, form factor, and relief ratio of 0.48 / 0.39 / 0.35 / 9.97 and 0.40 / 0.35 / 0.32 / 6.0, respectively, and values for drainage texture and ruggedness index of 0.86 / 0.011 and 1.57 / 0.016. The study concludes that the two sub-watersheds are elongated, suggesting that they are susceptible to erosion and thus to higher sediment loads in the river channels, which will predispose the watersheds to higher flood peaks. The study also concludes that the sub-watersheds have a very coarse texture, with good permeability of subsurface materials and infiltration capacity, which significantly recharges the groundwater. The study recommends that local and state governments work to reduce the extent of paved surfaces in these sub-watersheds by implementing a robust agroforestry program at the grassroots level.
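The ratios reported above follow standard morphometric definitions (Schumm's elongation ratio, Miller's circularity ratio, Horton's form factor, and the relief ratio). A minimal sketch of these textbook formulas, with inputs in kilometres, is below; the specific input values in the usage note are hypothetical, not the Katsina-Ala measurements.

```python
import math

def morphometric_indices(area_km2, perimeter_km, basin_length_km, relief_km):
    """Standard drainage-basin morphometric ratios:
    elongation ratio (Schumm), circularity ratio (Miller),
    form factor (Horton), and relief ratio (Schumm)."""
    return {
        # Diameter of a circle with the basin's area, over basin length
        "elongation_ratio": (2.0 / basin_length_km) * math.sqrt(area_km2 / math.pi),
        # Basin area over the area of a circle with the basin's perimeter
        "circularity_ratio": 4.0 * math.pi * area_km2 / perimeter_km**2,
        # Basin area over the square of basin length
        "form_factor": area_km2 / basin_length_km**2,
        # Basin relief over basin length
        "relief_ratio": relief_km / basin_length_km,
    }
```

A perfectly circular basin serves as a sanity check: it has an elongation ratio and circularity ratio of exactly 1, and lower values (such as the 0.35-0.48 reported above) indicate elongation.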

Keywords: erosion, flood, mitigation, morphometry, watershed

Procedia PDF Downloads 74
789 2017 Survey on Correlation between Connection and Emotions for Children and Adolescents

Authors: Ya-Hsing Yeh, I-Chun Tai, Ming-Chieh Lin, Li-Ting Lee, Ping-Ting Hsieh, Yi-Chen Ling, Jhia-Ying Du, Li-Ping Chang, Guan-Long Yu

Abstract:

Objective: To understand the connection between children/adolescents and those whom they miss, as well as the correlation between connection and their emotions. Method: Based on this objective, a close-ended questionnaire was developed and finalized after experts evaluated its validity. In February 2017, the paper-based questionnaire was administered. Twenty-one elementary and junior high schools in Taiwan were sampled by a purposive sampling approach, and fifth to ninth graders were the participants. A total of 2,502 valid questionnaires were retrieved. Results: Forty-four point three percent of children/adolescents missed a person, or regarded a person as a significant other, but had no connection with that person. The group they most wanted to contact was 'friends and classmates', followed by 'immediate family', such as parents and grandparents, and 'academic or vocational instructors', such as home-room teachers, coaches, and cram school teachers. Only 14% of children/adolescents would actively contact those they missed. Among children/adolescents who 'often' actively kept in touch with those they missed, the proportion who felt happy or cheerful whenever they recalled the person they missed, or when that person contacted them, was higher than among those who 'seldom' did so. Sixty-one point seven percent of participants had not connected with those they missed for more than one year. The main reason was 'environmental factors', such as school/class transfer or moving, followed by 'academic or personal factors', 'communication tools', and 'personalities'. In addition to 'greetings during festivals and holidays', 'hearing from those they missed', and 'seeing the latest information about those they missed on their Internet communities', children/adolescents would actively contact those they missed when they felt 'happy' and 'depressed or frustrated'. The three opinions most often regarded as true connection were 'listening to the people they missed attentively', 'sharing their secrets', and 'regularly contacting the people they missed with real actions'. In terms of gender, girls' proportions on 'showing with actions, including contacting the people they missed regularly or expressing their feelings openly' and 'sharing secrets' were higher than boys', while boys' proportion on 'the attitude when contacting the people they missed, including listening attentively or without being distracted' was higher than girls'. Conclusions: I. The more 'active' the connection, the more happiness they feel. II. Teachers can teach children how to manage their emotions and express their feelings appropriately. III. It is very important to turn connection into 'action'. Teachers can set a good example by sharing their moods with others whatever mood they are in; this is itself a kind of connection.

Keywords: children, connection, emotion, mental health

Procedia PDF Downloads 141
788 Modeling Sorption and Permeation in the Separation of Benzene/ Cyclohexane Mixtures through Styrene-Butadiene Rubber Crosslinked Membranes

Authors: Hassiba Benguergoura, Kamal Chanane, Sâad Moulay

Abstract:

Pervaporation (PV), a membrane-based separation technology, has gained much attention because of its energy-saving capability and low cost, especially for the separation of azeotropic or close-boiling liquid mixtures. There are two crucial issues for the industrial application of the pervaporation process. The first is developing membrane materials and tailoring membrane structure to obtain high pervaporation performance. The second is modeling pervaporation transport to better understand the structure-pervaporation relationship. Many models have been proposed to predict the mass transfer process; among them, the solution-diffusion model is the most widely used for describing pervaporation transport, including the preferential sorption, diffusion, and evaporation steps. For modeling pervaporation transport, the permeation flux, which depends on the solubility and diffusivity of the components in the membrane, should be obtained first. Traditionally, the solubility is calculated according to the Flory-Huggins theory. Separation of the benzene (Bz)/cyclohexane (Cx) mixture is industrially significant, and numerous papers have focused on the Bz/Cx system to assess the PV properties of membrane materials. Membranes with both high permeability and high selectivity are desirable for practical application, and several new polymers have been prepared to achieve both. Styrene-butadiene rubber (SBR) dense membranes cross-linked by chloromethylation were used in the separation of benzene/cyclohexane mixtures, and the impact of the chloromethylation reaction as a new method of cross-linking SBR on pervaporation performance has been reported. In contrast to vulcanization with sulfur, this cross-linking takes place on the styrene units of the polymeric chains via a methylene bridge. The partial pervaporative (PV) fluxes of benzene/cyclohexane mixtures in SBR were predicted using Fick's first law. By integrating Fick's law over the benzene concentration, the predicted partial fluxes and the PV separation factor agreed well with the experimental data. The effects of feed concentration and operating temperature on the permeation flux predicted by the proposed model were investigated. The predicted permeation fluxes are in good agreement with the experimental data at lower benzene concentrations in the feed, but at higher benzene concentrations the model overestimates the permeation flux. Both the predicted and experimental permeation fluxes increase with increasing operating temperature. Solvent sorption levels for benzene/cyclohexane mixtures in an SBR membrane were determined experimentally, and the results showed that the sorption levels were strongly affected by the feed composition. The Flory-Huggins equation gave the higher R-squared coefficient for the sorption selectivity.
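The flux-prediction step above, integrating Fick's first law over the concentration profile in the membrane, can be sketched numerically. The exponential concentration dependence of the diffusivity and all parameter values below are illustrative assumptions, not the authors' fitted model.

```python
import math

def permeation_flux(c_feed, c_perm, thickness, d0, gamma, n=1000):
    """Steady-state permeation flux from Fick's first law with a
    concentration-dependent diffusivity D(c) = d0 * exp(gamma * c):

        J = (1/l) * integral from c_perm to c_feed of D(c) dc

    evaluated by the trapezoidal rule over n sub-intervals."""
    dc = (c_feed - c_perm) / n
    total = 0.0
    for i in range(n):
        c_lo = c_perm + i * dc
        c_hi = c_lo + dc
        total += 0.5 * (d0 * math.exp(gamma * c_lo)
                        + d0 * math.exp(gamma * c_hi)) * dc
    return total / thickness

# Illustrative values: 20 wt-fraction feed concentration, dry permeate side,
# 100-micrometre membrane (parameters are hypothetical, for demonstration only)
flux = permeation_flux(c_feed=0.2, c_perm=0.0, thickness=1e-4,
                       d0=1e-10, gamma=5.0)
```

For this exponential diffusivity law the integral also has the closed form J = d0 * (exp(gamma * c_feed) - exp(gamma * c_perm)) / (gamma * l), which the numerical quadrature reproduces and which makes the overestimation at high feed concentration easy to probe.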

Keywords: benzene, cyclohexane, pervaporation, permeation, sorption modeling, SBR

Procedia PDF Downloads 313
787 Investigation of Electrochemical, Morphological, Rheological and Mechanical Properties of Nano-Layered Graphene/Zinc Nanoparticles Incorporated Cold Galvanizing Compound at Reduced Pigment Volume Concentration

Authors: Muhammad Abid

Abstract:

The ultimate goal of this research was to produce a cold galvanizing compound (CGC) at reduced pigment volume concentration (PVC) to protect metallic structures from corrosion. The influence of the partial replacement of Zn dust by nano-layered graphene (NGr) and Zn metal nanoparticles on the electrochemical, morphological, rheological, and mechanical properties of the CGC was investigated. EIS was used to explore the electrochemical nature of the coatings. The EIS results revealed that the partial replacement of Zn by NGr and Zn nanoparticles enhanced cathodic protection at reduced PVC (4:1) by improving the electrical contact between the Zn particles and the metal substrate. A Tafel scan was conducted to confirm the cathodic behaviour of the coatings. The sample formulated solely with Zn at PVC 4:1 was found to be dominated by physical barrier characteristics rather than cathodic protection. Increasing the concentration of NGr in the formulation shifted the corrosion potential towards more negative values, and the coating with 1.5% NGr showed the highest galvanic action at reduced PVC. FE-SEM confirmed an interconnected network of conducting particles. The coating without NGr and Zn nanoparticles at PVC 4:1 showed significant gaps between the Zn dust particles, whereas micrographs of the modified coatings showed a consistent distribution of NGr and Zn nanoparticles over the whole surface, which acted as bridges between the spherical Zn particles and provided cathodic protection at a reduced PVC. The layered structure of graphene also improved the physical shielding effect of the coatings by limiting the diffusion of electrolytes and corrosion products (oxides/hydroxides) into the coatings, as reflected by the salt spray test. The rheological measurements showed good flow properties. All the coatings showed excellent adhesion, though with different strength values, and a real-time scratch resistance assessment showed that all the coatings had good scratch resistance.

Keywords: protective coatings, anti-corrosion, galvanization, graphene, nanomaterials, polymers

Procedia PDF Downloads 80
786 A Contemporary Advertising Strategy on Social Networking Sites

Authors: M. S. Aparna, Pushparaj Shetty D.

Abstract:

Nowadays, social networking sites have become so popular that producers and sellers regard them as among the best options for targeting the right audience to market their products. Several tools are available to monitor and analyze social networks. Our task is to identify the right community web pages, analyze the behavior of their members using these tools, and formulate an appropriate strategy to market products or services and achieve the set goals. Advertising becomes more effective when information about the product or service comes from a known source, and the strategy exploits the strong buying influence that referral marketing exerts on the audience. Our methodology proceeds with a critical budget analysis and promotes viral influence propagation. In this context, we address the vital elements of budget evaluation: the number of optimal seed nodes (primary influential users) activated at the onset, an estimated coverage spread of nodes, and the maximum influence propagation distance from an initial seed to an end node. Our proposed Buyer Prediction mathematical model arises from the need to perform complex analysis when the probability density estimates of the relevant factors are unknown or difficult to calculate. Order statistics and the Buyer Prediction mapping function guarantee the selection of optimal influential users at each level. We apply efficient tactics over community pages and user behavior to identify product enthusiasts on social networks. Our approach is promising and should be an elementary choice when there is little or no prior knowledge of the distribution of potential buyers on social networks. In this strategy, product news propagates to influential users on or around the network. By applying the same technique, a user can search for friends who are capable of giving better advice or referrals, if a product interests him or her.
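The seed-selection step described above (choosing optimal seed nodes under a budget so that influence propagates across the network) is in the spirit of classic greedy influence maximization under the independent cascade model. The sketch below is a generic Monte Carlo version of that idea, not the authors' Buyer Prediction model; the graph, propagation probability, and trial count are all illustrative assumptions.

```python
import random

def simulate_cascade(graph, seeds, p, rng):
    """One independent-cascade run; returns the set of activated nodes.
    `graph` maps each node to a list of followers it can influence."""
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in graph.get(u, []):
                # Each newly active node gets one chance to activate each neighbor
                if v not in active and rng.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return active

def greedy_seeds(graph, budget, p=0.1, trials=200, seed=0):
    """Greedily pick `budget` seed nodes maximizing the Monte Carlo
    estimate of expected cascade size (the coverage spread)."""
    rng = random.Random(seed)
    nodes = set(graph) | {v for vs in graph.values() for v in vs}
    chosen = []
    for _ in range(budget):
        best, best_spread = None, -1.0
        for cand in nodes - set(chosen):
            spread = sum(
                len(simulate_cascade(graph, chosen + [cand], p, rng))
                for _ in range(trials)
            ) / trials
            if spread > best_spread:
                best, best_spread = cand, spread
        chosen.append(best)
    return chosen
```

On a star-shaped community (one hub following many members), a budget of one seed correctly selects the hub, since seeding any leaf activates no one else.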

Keywords: viral marketing, social network analysis, community web pages, buyer prediction, influence propagation, budget constraints

Procedia PDF Downloads 248
785 Indigenizing Social Work Practice: Best Practice of Family Service Agency (LK3) State Islamic University (UIN) Syarif Hidayatullah Jakarta

Authors: Siti Napsiyah, Ismet Firdaus, Lisma Dyawati Fuaida, Ellies Sukmawati

Abstract:

This paper examines the existence, role, and challenges of the Family Service Agency, known in Bahasa Indonesia as Lembaga Konsultasi Kesejahteraan Keluarga (LK3), of Syarif Hidayatullah State Islamic University (UIN) Jakarta. Established in 2012, it is an official agency under the Ministry of Social Affairs of Indonesia. LK3 was established to provide psychosocial services for the families of students who have psychosocial problems in their lives. The study also explores trends in the psychosocial problems of its clients (students) over the past three years (2014-2016). The research method is a qualitative social work research method: a review of selected client data of LK3 UIN Syarif Hidayatullah Jakarta around five main issues (family background, psychosocial mapping, potential resources, student coping mechanism strategies, and client strengths and networks), complemented by a review of academic performance reports as well as interviews and observation. The findings show that the psychosocial problems of the clients of LK3 UIN Syarif Hidayatullah Jakarta vary as follows: poor academic performance, low family income, broken homes, domestic violence, disability, mental disorder, sexual abuse, and the like. LK3 UIN Syarif Hidayatullah Jakarta plays a significant role in providing psychosocial support and services that help students deal with their psychosocial problems. Social workers at LK3 perform indigenous social work practice: individual counseling, family counseling, group therapy, home visits, case conferences, an Islamic spiritual approach, and the Spiritual Emotional Freedom Technique (SEFT).

Keywords: psychosocial, indigenizing social work, resiliency, coping mechanism

Procedia PDF Downloads 250