Search results for: reading speed
560 Modeling of the Dynamic Characteristics of a Spindle with Experimental Validation
Authors: Jhe-Hao Huang, Kun-Da Wu, Wei-Cheng Shih, Jui-Pin Hung
Abstract:
This study presents an investigation of the dynamic characteristics of a spindle tool system using experimental and finite element modeling approaches. As is well known, machining stability is largely determined by the dynamic characteristics of the spindle tool system. Therefore, understanding the factors affecting the dynamic behavior of a spindle tooling system is a prerequisite for controlling the final machining performance of a machine tool system. For this purpose, a physical spindle unit was employed to assess the dynamic characteristics by vibration tests. Then, a three-dimensional finite element model of a high-speed spindle system integrated with a tool holder was created to simulate the dynamic behaviors. To model the angular contact bearings, a series of spring elements were introduced between the inner and outer rings. The spring constant can be represented by the contact stiffness of the rolling bearing based on Hertz theory. The interface characteristic between the spindle nose and the tool holder taper was quantified by comparing the measurements and the predictions. According to the results obtained from the experiments and finite element predictions, the vibration behavior of the spindle is dominated by the bending deformation of the spindle shaft in different modes, which is further determined by the stiffness of the bearings in the spindle housing. Also, the spindle unit with the tool holder shows a different dynamic behavior from that of the spindle without the tool holder. This indicates that the interface property between the tool holder and the spindle nose plays a dominant role in the dynamic characteristics of the spindle tool system. Overall, the dynamic behaviors of the spindle with and without the tool holder can be successfully investigated through the finite element model proposed in this study. The prediction accuracy is determined by the modeling of the rolling interface of the ball bearings in the spindle and the interface characteristics between the tool holder and the spindle nose. In addition, identification of the interface characteristics of the ball bearings and the spindle tool holder is important for the refinement of the spindle tooling system to achieve optimum machining performance.
Keywords: contact stiffness, dynamic characteristics, spindle, tool holder interface
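As a hedged aside on the bearing model described above: for a single Hertzian point contact between ball and raceway (the constant K depends on ball and raceway geometry and materials, which the abstract does not give), the load F, the elastic approach δ and the contact stiffness k are generically related by

\[ F = K\,\delta^{3/2}, \qquad k = \frac{dF}{d\delta} = \frac{3}{2}K\,\delta^{1/2} = \frac{3}{2}\,K^{2/3}F^{1/3} \]

so the equivalent spring constant assigned to each spring element grows with the bearing preload; this is a generic sketch of the Hertz-theory step, not the paper's exact formulation.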
Procedia PDF Downloads 298
559 Effects of Non-Motorized Vehicles on a Selected Intersection in Dhaka City for Non Lane Based Heterogeneous Traffic Using VISSIM 5.3
Authors: A. C. Dey, H. M. Ahsan
Abstract:
Heterogeneous traffic composed of both motorized and non-motorized vehicles is a common feature of urban Bangladeshi roads. Popular non-motorized vehicles include rickshaws, rickshaw-vans, and bicycles. These modes perform an important role in moving people and goods in the absence of a dependable mass transport system. Rickshaws, in particular, play a major role in meeting the demand for door-to-door public transport services for city dwellers. However, there is no separate lane for non-motorized vehicles in this city. Non-motorized vehicles generally occupy the outermost or curb-side lanes; at intersections, however, non-motorized vehicles mix with motorized vehicles. That is why conventional models fail to analyze the situation completely. The microscopic traffic simulation software VISSIM 5.3 is itself lane-based, but its default behavioral parameters [such as driving behavior, lateral distances, overtaking tendency, CC0 = 0.4 m, CC1 = 1.5 s] were modified to calibrate a model that analyzes the effects of non-motorized traffic at an intersection (Mirpur-10) under non-lane-based mixed traffic conditions. Field data show that NMVs account for an average of 20% of the total number of vehicles on almost all the link roads. Due to the large share of non-motorized vehicles, capacity drops significantly. Analysis of the raw simulation data reveals significant variation: the average vehicular speed is reduced by 25% and the number of vehicles decreases by 30% solely because of the presence of NMVs. Lateral occupancy and queue delay time also increase, by 2.37% and 33.75% respectively. These results clearly show the negative effects of non-motorized vehicles on intersection capacity. Special management techniques or restriction of NMVs at major intersections may therefore be an effective solution to improve the existing critical condition.
Keywords: lateral occupancy, non lane based intersection, NMV, queue delay time, VISSIM 5.3
Procedia PDF Downloads 155
558 Understanding Student Engagement through Sentiment Analytics of Response Times to Electronically Shared Feedback
Authors: Yaxin Bi, Peter Nicholl
Abstract:
The rapid advancement of information and communication technologies (ICT) is profoundly influencing every aspect of Higher Education. It has transformed traditional teaching, learning, assessment and feedback into a new era of Digital Education. This also introduces many challenges in capturing and understanding student engagement with their studies in Higher Education. The School of Computing at Ulster University has developed a Feedback And Notification (FAN) online tool that is used to send students links to personalized feedback on their submitted assessments and to record how frequently students review the shared feedback as well as how quickly they collect it. The students initially receive the feedback via a personal email directing them to it through a URL link that maps to the feedback created by the academic marker. This feedback is typically a Word or PDF report including comments and the final mark for the work submitted approximately three weeks before. When the student clicks on the link, the student's personal feedback is viewable in the browser and they can view its contents. The FAN tool provides the academic marker with a report that includes when and how often a student viewed the feedback via the link. This paper presents an investigation into student engagement through analyzing the interaction timestamps and the frequency of review by the student. We propose an approach to modeling interaction timestamps and use sentiment classification techniques to analyze the data collected over the last five years for a set of modules. The data studied span a number of final-year and second-year modules in the School of Computing. The paper presents the details of the quantitative analysis methods and further describes students' interactions with the feedback over time on each module studied. We have projected the students into different groups of engagement based on the sentiment analysis results and then propose early targeted intervention for the set of students seen to be under-performing via our proposed model.
Keywords: feedback, engagement, interaction modelling, sentiment analysis
Procedia PDF Downloads 103
557 Experimental and Numerical Analysis of Wood Pellet Breakage during Pneumatic Transport
Authors: Julian Jaegers, Siegmar Wirtz, Viktor Scherer
Abstract:
Wood pellets are among the most established trade formats of wood-based fuels. Because of their transportability and storage properties, but also due to their low moisture content, high energy density, and homogeneous particle size and shape, wood pellets are well suited for power generation in power plants and for use in automated domestic firing systems. Before they are thermally converted, wood pellets pass through various transport and storage procedures. There they undergo different mechanical impacts, which lead to pellet breakage and abrasion and to an increase in fines. The fines cause operational problems during storage, charging, and discharging of pellets; they can increase the risk of dust explosions and can lead to pollutant emissions during combustion. In the current work, the formation of fines caused by breakage during pneumatic transport is analyzed experimentally and numerically. The focus lies on the influence of conveying velocity, pellet loading, pipe diameter, and the shape of pipe components such as bends or couplings. A test rig has been built which allows the experimental evaluation of pneumatic transport while varying the above-mentioned parameters. Two high-speed cameras are installed to provide quantitative optical access to the particle-particle and particle-wall contacts. The particle size distribution of the bulk before and after a transport process is measured, as well as the amount of fines produced. The experiments will be compared with the results of corresponding DEM/CFD simulations to provide information on contact frequencies and forces. The contribution will present experimental results and report on the status of the DEM/CFD simulations. The final goal of the project is to provide better insight into pellet breakage during pneumatic transport and to develop guidelines ensuring a gentler transport.
Keywords: DEM/CFD-simulation of pneumatic conveying, mechanical impact on wood pellets during transportation, pellet breakage, pneumatic transport of wood pellets
Procedia PDF Downloads 150
556 Centrifuge Modelling Approach on Seismic Loading Analysis of Clay: A Geotechnical Study
Authors: Anthony Quansah, Tresor Ntaryamira, Shula Mushota
Abstract:
Models for geotechnical centrifuge testing are usually made from reconstituted soil, allowing comparisons with naturally occurring soil deposits. However, there is a fundamental omission in this process, because natural soil is deposited in layers, creating a unique structure. The nonlinear dynamics of clay deposits play an essential part in changing the characteristics of ground motions under strong seismic loading, particularly when the differing amplification behavior of acceleration and displacement is considered. The paper describes a review of centrifuge shaking table tests and numerical simulations to explore offshore clay deposits subjected to seismic loading. These observations are accurately reproduced in DEEPSOIL with appropriate soil models and parameters drawn from notable centrifuge modeling studies. Accurate 1-D site response analyses are then performed in both the time and frequency domains. The results reveal that when deep soft clay is subjected to large earthquakes, significant acceleration attenuation may occur near the top of the deposit because of soil nonlinearity and even local shear failure; nevertheless, large amplification of displacement at low frequencies is expected regardless of the intensity of the base motions, which suggests that for displacement-sensitive offshore foundations and structures, such amplified low-frequency displacement response will play an essential part in seismic design. This research presents the centrifuge as a tool for creating layered samples, which are important for modelling true soil behaviour (such as permeability) that is not identical in all directions. Currently, there are limited methods for creating layered soil samples.
Keywords: seismic analysis, layered modeling, terotechnology, finite element modeling
Procedia PDF Downloads 156
555 Simulation of Nano Drilling Fluid in an Extended Reach Well
Authors: Lina Jassim, Robiah Yunus, Amran Salleh
Abstract:
Since nanoparticles have been assessed as thermal stabilizers, rheology enhancers, and ecologically safer additives, nano drilling fluid can be utilized to overcome the complexity of hole cleaning in the highly deviated interval of an extended reach well. The eccentric annular flow is a flow with special considerations; it forms a vital part of drilling fluid flow analysis in extended reach wells. In this work, an eccentric, dual-phase flow (different types of rock cuttings of different sizes blended with nano fluid) through a horizontal well (an extended reach well) is simulated with the help of the CFD package Fluent. In horizontal wells, flow occurs under an adverse pressure gradient condition, which makes the particles inside it susceptible to reversed flow. Thus the flow has to be analyzed in a three-dimensional manner. Moreover, the non-Newtonian behavior of the nano fluid makes the problem truly challenging in both numerical and physical aspects. The primary objective of the work is to establish a relationship between the different flow characteristics and the speed of inner wall rotation. The nano fluid flow characteristics include the swirl of the flow and its effect on wellbore cleaning ability, the wall shear stress and its effect on the fluid viscosity needed to suspend and carry the rock cuttings, the axial velocity and its effect on the transportation of rock cuttings to the wellbore surface, and finally the pressure drop and its effect on the management of drilling pressure. The importance of the eccentricity of the inner cylinder has to be analyzed as part of this. Practical horizontal well flows contain a considerable amount of particles (rock cuttings) with moderate axial velocity, so verifying the ability of the nano drilling fluid to carry and transfer cuttings particles in the highly deviated eccentric annular flow is also of utmost importance.
Keywords: Non-Newtonian, dual phase, eccentric annular, CFD
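The abstract does not state which non-Newtonian model is used for the nano fluid; a common assumption for drilling fluids, sketched here purely for orientation, is a Herschel–Bulkley (or, with zero yield stress, power-law) relation between shear stress and shear rate:

\[ \tau = \tau_0 + K\,\dot{\gamma}^{\,n} \]

where τ₀ is the yield stress, K the consistency index and n the flow behavior index; such a model is what a CFD solver would typically be given to represent the shear-thinning carrier fluid.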
Procedia PDF Downloads 434
554 Optimal Tamping for Railway Tracks, Reducing Railway Maintenance Expenditures by the Use of Integer Programming
Authors: Rui Li, Min Wen, Kim Bang Salling
Abstract:
For modern railways, maintenance is critical for ensuring safety, train punctuality and overall capacity utilization. The cost of railway maintenance in Europe is high, on average between 30,000 and 100,000 Euros per kilometer per year. In order to reduce such maintenance expenditures, this paper presents a mixed 0-1 linear mathematical model designed to optimize predictive railway tamping activities for ballasted track over a planning horizon of three to four years. The objective function is to minimize the actual costs of the tamping machine. The approach of the research is to use a simple dynamic model for modelling the condition-based tamping process and a solution method for finding the optimal condition-based tamping schedule. Seven technical and practical aspects are taken into account to schedule tamping: (1) track degradation of the standard deviation of the longitudinal level over time; (2) track geometrical alignment; (3) track quality thresholds based on the train speed limits; (4) the dependency of the track quality recovery on the track quality after the tamping operation; (5) tamping machine operation practices; (6) tamping budgets; and (7) differentiating open track from station sections. A Danish railway track between Odense and Fredericia, 42.6 km in length, is used in the proposed maintenance model for time periods of three and four years. The generated tamping schedule is reasonable and robust. Based on the results from the Danish railway corridor, the total costs can be reduced significantly (by 50%) compared to the previous model, which was based on optimizing the number of tampings. Different maintenance strategies are discussed in the paper. The analysis of the results obtained from the model also shows that a longer period of predictive tamping planning yields a more optimal scheduling of maintenance actions than continuous short-term preventive maintenance, namely yearly condition-based planning.
Keywords: integer programming, railway tamping, predictive maintenance model, preventive condition-based maintenance
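A minimal sketch of a 0-1 tamping-scheduling model in the spirit described above, written with PuLP. The section names, degradation rates, recovery and threshold values are invented for illustration; the paper's actual formulation is richer (alignment, budgets, recovery depending on pre-tamping quality, open track versus station sections).

```python
# Illustrative 0-1 scheduling sketch (assumed data, simplified linear quality model).
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

sections = ["S1", "S2", "S3"]            # track sections (hypothetical)
periods = range(1, 13)                    # planning periods (e.g. months)
degradation = {"S1": 0.06, "S2": 0.09, "S3": 0.12}   # SD growth per period [mm]
sd0 = {"S1": 0.9, "S2": 1.1, "S3": 1.0}              # initial SD of longitudinal level
threshold = 1.5                           # quality threshold tied to line speed
recovery = 0.8                            # simplified fixed SD reduction per tamping
cost_per_tamping = 1.0

prob = LpProblem("tamping_schedule", LpMinimize)
x = {(s, t): LpVariable(f"tamp_{s}_{t}", cat=LpBinary)
     for s in sections for t in periods}

# Objective: minimize total tamping cost
prob += lpSum(cost_per_tamping * x[s, t] for s in sections for t in periods)

# Quality constraint: predicted SD must stay below the threshold in every period
for s in sections:
    for t in periods:
        predicted_sd = (sd0[s] + degradation[s] * t
                        - recovery * lpSum(x[s, k] for k in periods if k <= t))
        prob += predicted_sd <= threshold

prob.solve()
schedule = sorted((s, t) for (s, t) in x if x[s, t].value() == 1)
print("Tamping actions (section, period):", schedule)
```

The fixed `recovery` term is a deliberate simplification of aspect (4) above, kept only to keep the sketch linear and short.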
Procedia PDF Downloads 444
553 Optimal Design of Wind Turbine Blades Equipped with Flaps
Authors: I. Kade Wiratama
Abstract:
As a result of the significant growth of wind turbines in size, blade load control has become the main challenge for large wind turbines. Many advanced techniques have been investigated with the aim of developing control devices to ease blade loading. Amongst them, trailing edge flaps have been proven to be effective devices for load alleviation. The present study aims at investigating the potential benefits of flaps in enhancing the energy capture capabilities rather than blade load alleviation. A software tool was especially developed for the aerodynamic simulation of wind turbines utilising blades equipped with flaps. As part of the aerodynamic simulation of these wind turbines, the control system must also be simulated. The simulation of the control system is carried out by solving an optimisation problem which gives the best value of the controlling parameter at each wind turbine operating condition. By developing a genetic algorithm optimisation tool especially designed for wind turbine blades and integrating it with the aerodynamic performance evaluator, a design optimisation tool for blades equipped with flaps is constructed. The design optimisation tool is employed to carry out design case studies. The results of the design case studies on the wind turbine AWT 27 reveal that, as expected, the location of the flap is a key parameter influencing the amount of improvement in the power extraction. The best location for placing a flap is at about 70% of the blade span from the root of the blade. The size of the flap also has a significant effect on the amount of enhancement in the average power. This effect, however, reduces dramatically as the size increases. For constant speed rotors, adding flaps without re-designing the topology of the blade can improve the power extraction capability by as much as about 5%. However, by also re-designing the blade pretwist, the overall improvement can reach as high as 12%.
Keywords: flaps, design blade, optimisation, simulation, genetic algorithm, WTAero
Procedia PDF Downloads 337
552 Application of Computational Fluid Dynamics in the Analysis of Water Flow in Rice Leaves
Authors: Marcio Mesquita, Diogo Henrique Morato de Moraes, Henrique Fonseca Elias de Oliveira, Rilner Alves Flores, Mateus Rodrigues Ferreira, Dalva Graciano Ribeiro
Abstract:
This study aimed to analyze the movement of water in irrigated and non-irrigated rice (Oryza sativa L.) leaves, from the xylem to the stomata, through numerical simulations. Through three-dimensional modeling, it was possible to determine how the spacing of the parenchyma cells and the permeability of these cells influence the apoplastic flow and the opening of the stomata. The thickness of the cuticle and the number of vascular bundles are greater in plants subjected to water stress, indicating an adaptive response of plants to environments with water deficit. In addition, the numerical simulations revealed that the opening of the stomata, the permeability of the parenchyma cells and the cell spacing have significant impacts on the energy loss and the speed of water movement. It was observed that a more open stoma facilitates water flow, decreasing the resistance and energy required for transport, while higher levels of permeability reduce energy loss, indicating that a more permeable tissue allows for more efficient water transport. Furthermore, it was possible to note that stomatal aperture, parenchyma permeability and cell spacing are crucial factors in the efficient water management of plants, especially under water stress conditions. These insights are essential for the development of more effective agricultural management strategies and for the breeding of plant varieties that are more resistant to adverse growing conditions. Computational fluid dynamics has allowed us to overcome the limitations of conventional techniques by providing a means to visualize and understand the complex hydrodynamic processes within the vascular system of plants.
Keywords: numerical modeling, vascular anatomy, vascular hydrodynamics, xylem, Oryza sativa L.
Procedia PDF Downloads 17
551 Exploring Methods for Urbanization of 'Village in City' in China: A Case Study of Hangzhou
Abstract:
After the economic reform in 1978, urbanization in China grew fast, urging cities to expand at an unprecedented speed. Surrounding villages were annexed unprepared, and the result was a new type of community called the 'village in city.' Two things happened here. First, the locals gave up farming and turned to secondary and tertiary industry as a result of losing their land. Secondly, attracted by the high income in cities and the low rent in these communities, plenty of migrants moved in. This area is important to a city in rapid growth because it provides a transitional zone. But owing to its passivity and low level of development, the 'village in city' has caused a lot of trouble for the city. Densities of population and construction are both high, while facilities are severely inadequate. Unplanned and illegal structures are built, which creates a complex mixed-function area and leads to a poor residential environment. Besides, the locals have a strong consciousness of their property rights over the land, which holds back the transformation and development of the community. Although land capitalization can bring significant benefits, it is inappropriate to make a large financial compensation to the locals, and considering the large population of city migrants, it is important to explore the relationship among the 'village in city,' the city immigrants and the city itself. Taking the example of Hangzhou, this paper analyzed the development process, the spatial distribution of functions, the industrial structure and the current traffic system of the 'village in city.' Building on this research on the community, the paper puts forward a common method for urban planning through the following ways: adding city functions, building civil facilities, re-planning the spatial distribution of functions, changing the constitution of local industry and planning a new traffic system. Under this plan, the 'village in city' can finally be absorbed into the city and make its own contribution to urbanization.
Keywords: China, city immigrant, urbanization, village in city
Procedia PDF Downloads 217
550 Improvement of Fixed Offshore Structures' Boat Landing Performance Using Practicable Design Criteria
Authors: A. Hamadelnil, Z. Razak, E. Matsoom
Abstract:
Boat landings on fixed offshore structures are designed to absorb the impact energy from boats approaching the platform for crew transfer. As the size and speed of the operating boats vary, the design and maintenance of the boat landings become more challenging. Different oil and gas operators adopt different design criteria for boat landing design in the South East Asia region. A rubber strip is used to increase the capacity of the boat landing to absorb a larger impact energy. Recently, it has been reported that all the rubber strips peel off the boat landing frame within one to two years, and replacement is required to avoid puncturing of the boat's hull by the exposed sharp edges and the bolts used to secure the rubber strip. The capacity of the boat landing to absorb the impact energy is reduced after the failure of the rubber strip, which results in failure of the steel members. The replacement of the rubber strip is costly as it requires a diving spread. The objective of this study is to propose the most practicable criteria to be adopted by oil and gas operators in the design of boat landings in the South East Asia region, in order to improve the performance of the boat landing and assure safe operation and cheaper maintenance. This study explores the current design and maintenance challenges of boat landings and compares the criteria adopted by different operators. In addition, it explains the reasons behind the denting of many boat landings, and it evaluates and highlights the effect of grout and the rubber strip on the capacity of the boat landing and the jacket legs. Boat landing models and analyses using the USFOS and SACS software are carried out and presented in this study considering different design criteria. This study proposes the most practicable criteria for designing boat landings in the South East Asia region to achieve better performance, safe operation and lower cost and maintenance.
Keywords: boat landing, grout, plastic hinge, rubber strip
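For orientation only, the impact energy a boat landing must absorb is commonly estimated in fender-type design practice from the vessel displacement and approach velocity; a generic form (the coefficients are assumptions for this sketch, not values from the paper) is

\[ E = \tfrac{1}{2}\,M\,v^{2}\,C_m C_e C_s C_c \]

where M is the vessel displacement mass, v the approach velocity normal to the structure, and C_m, C_e, C_s, C_c are added-mass, eccentricity, softness and configuration coefficients. The sensitivity of E to v² is what makes the variation in operating boat size and speed, noted above, so demanding for the design criteria.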
Procedia PDF Downloads 301
549 Polymer Mixing in the Cavity Transfer Mixer
Authors: Giovanna Grosso, Martien A. Hulsen, Arash Sarhangi Fard, Andrew Overend, Patrick. D. Anderson
Abstract:
In many industrial applications, and in particular in the polymer industry, the quality of mixing between different materials is fundamental to guarantee the desired properties of the finished products. However, properly modelling and understanding polymer mixing often presents noticeable difficulties because of the variety and complexity of the physical phenomena involved. This is the case for the Cavity Transfer Mixer (CTM), for which a clear understanding of the mixing mechanisms is still missing, as are clear guidelines for system optimization. This device, invented and patented by Gale at Rapra Technology Limited, is an add-on to be mounted downstream of existing extruders in order to improve distributive mixing. It consists of two concentric cylinders, the rotor and the stator, both provided with staggered rows of hemispherical cavities. The inner cylinder (rotor) rotates, while the outer (stator) remains still. At the same time, the pressure load imposed upstream pushes the fluid through the CTM. The mixing processes are driven by the flow field generated by the complex interaction between the moving geometry, the imposed pressure load and the rheology of the fluid. In such a context, the present work proposes a complete and accurate three-dimensional model of the CTM and presents the results of a broad range of simulations assessing the impact on mixing of several geometrical and operating parameters. Among them are the number of cavities per row, the number of rows, the size of the mixer, the rheology of the fluid and the ratio between the rotation speed and the fluid throughput. The model is composed of a flow part and a mixing part: a finite element solver computes the transient velocity field, which is used in the mapping method implementation in order to simulate the evolution of the concentration field. The results of the simulations are summarized in guidelines for the optimization of the device.
Keywords: mixing, non-Newtonian fluids, polymers, rheology
Procedia PDF Downloads 379
548 A Benchmark System for Testing Medium Voltage Direct Current (MVDC-CB) Robustness Utilizing Real Time Digital Simulation and Hardware-In-Loop Theory
Authors: Ali Kadivar, Kaveh Niayesh
Abstract:
The integration of green energy resources is a major focus, and the role of Medium Voltage Direct Current (MVDC) systems is expanding exponentially. However, the protection of MVDC systems against DC faults is a challenge that can have consequences for reliable and safe grid operation. This challenge reveals the need for MVDC circuit breakers (MVDC CBs), which are still in the infancy of their development. Consequently, there is a lack of MVDC CB standards, including thresholds for acceptable power losses and operation speed. To establish a baseline for comparison purposes, a benchmark system for testing future MVDC CBs is vital. The literature gives only the timing sequence of each switch, and the emphasis is on the topology, without an in-depth study of the control algorithm of the DCCB, as the circuit breaker control system is not yet systematic. A digital testing benchmark is designed for the proof-of-concept of simulation studies using software models. It can validate studies based on real-time digital simulators and Transient Network Analyzer (TNA) models. The proposed experimental setup utilizes data acquisition from accurate sensors installed on the tested MVDC CB and, through general purpose inputs/outputs (GPIO) from the microcontroller and PC, achieves prototype studies in laboratory-based models utilizing Hardware-in-the-Loop (HIL) equipment connected to real-time digital simulators. The improved control algorithm of the circuit breaker can reduce the peak fault current and avoid arc reignition, helping the coordination of the DCCB in relay protection. Moreover, several research gaps are identified regarding case studies and evaluation approaches.
Keywords: DC circuit breaker, hardware-in-the-loop, real time digital simulation, testing benchmark
Procedia PDF Downloads 79
547 Swift Rising Pattern of Emerging Construction Technology Trends in the Construction Management
Authors: Gayatri Mahajan
Abstract:
Modern Construction Technology (CT) includes a broad range of advanced techniques and practices spanning the recent developments in material technology, design methods, quantity surveying, facility management, services, structural analysis and design, and other management education. Adoption of recent digital transformation technology is a present-day need to speed up the business and is also the basis of construction improvement. Incorporating and practicing technologies such as cloud-based communication and collaboration solutions, mobile apps and 5G, 3D printing, BIM and digital twins, CAD/CAM, AR/VR, big data, IoT, wearables, blockchain, modular construction, offsite manufacturing, prefabrication, robotics, drones and GPS-controlled equipment expedites progress in the construction industry (CI). The resources used are journal research articles, web searches, books, theses, reports/surveys, magazines, etc. The outline of the research organization for this study is framed at four distinct levels relating to conceptualization, resources, innovative and emerging trends in the CI, and better methods for the completion of construction projects. The present study, conducted during 2020-2022, reveals that implementing these technologies improves the level of standards, planning, security, well-being, sustainability, and economics. Application uses, benefits, impact, advantages/disadvantages, limitations and challenges, and policies are dealt with to provide information to architects and builders for the smooth completion of projects. The results show that construction technology trends vary from 4 to 15 for the CI and eventually reach 27 for Civil Engineering (CE). The perspective of the most recent innovations, trends, tools, challenges, and solutions is highly embraced in the field of construction. The incorporation of the above-mentioned technologies during the Covid-19 pandemic and post-pandemic period might lead to a focus on finding effective ways to adopt new-age technologies for the CI.
Keywords: BIM, drones, GPS, mobile apps, 5G, modular construction, robotics, 3D printing
Procedia PDF Downloads 105
546 Training for Digital Manufacturing: A Multilevel Teaching Model
Authors: Luís Rocha, Adam Gąska, Enrico Savio, Michael Marxer, Christoph Battaglia
Abstract:
The changes observed in recent years in the field of manufacturing and production engineering, popularly known as the "Fourth Industrial Revolution", utilize the achievements of different areas of computer science, introducing new solutions at almost every stage of the production process, to mention just such concepts as mass customization, cloud computing, knowledge-based engineering, virtual reality, rapid prototyping, or virtual models of measuring systems. To effectively speed up the production process and make it more flexible, it is necessary to tighten the bonds connecting the individual stages of the production process and to raise the awareness and knowledge of employees of individual sectors about the nature and specificity of work in other stages. Discovering and developing a suitable education method adapted to the specificities of each stage of the production process becomes a crucial issue in properly exploiting the potential of the fourth industrial revolution. For this reason, the project "Train4Dim" (T4D) intends to develop comprehensive training material for digital manufacturing, including content for design, manufacturing, and quality control, with a focus on coordinate metrology and portable measuring systems. In this paper, the authors present an approach to using an active learning methodology for digital manufacturing. T4D's main objective is to develop a multi-degree (apprenticeship up to master's degree studies) educational approach that can be adapted to different teaching levels. The process of creating the underlying methodology is also described. The paper shares the steps taken to achieve the aims of the project (a training model for digital manufacturing): 1) surveying the stakeholders, 2) defining the learning aims, 3) producing all contents and the curriculum, 4) training the tutors, and 5) piloting the courses, with testing and improvements.
Keywords: learning, Industry 4.0, active learning, digital manufacturing
Procedia PDF Downloads 97
545 Chemical Warfare Agent Simulant by Photocatalytic Filtering Reactor: Effect of Operating Parameters
Authors: Youcef Serhane, Abdelkrim Bouzaza, Dominique Wolbert, Aymen Amin Assadi
Abstract:
Throughout history, the use of chemical weapons has not been exclusive to combat between army corps; some of these weapons are also found in highly targeted intelligence operations (political assassinations), organized crime, and terrorist organizations. To improve the speed of action, important technological devices have been developed in recent years, in particular in the field of protection and decontamination techniques, to better protect against and neutralize a chemical threat. In order to assess certain protective and decontaminating technologies or to improve medical countermeasures, tests must be conducted. In view of the great toxicity of real chemical warfare agents, simulants can be used, chosen according to the desired application. Here, we present an investigation into the use of a photocatalytic filtering reactor (PFR) for highly contaminated environments containing diethyl sulfide (DES). This target pollutant is used as a simulant of a CWA, namely Yperite (mustard gas). The influence of the inlet concentration, up to high concentrations of DES (1200 ppmv, i.e., 5 g/m³ of air), has been studied. The conversion rate was also monitored under different relative humidities and different flow rates (respiratory flow; standards ISO/DIS 8996 and NF EN 14387+A1). In order to understand the efficiency of pollutant neutralization by the PFR, a kinetic model based on the Langmuir–Hinshelwood (L–H) approach and taking into account the mass transfer step was developed. This allows us to determine the adsorption and kinetic degradation constants free of the influence of mass transfer. The obtained results confirm that this small reactor configuration presents an extremely promising way of using photocatalysis to treat highly contaminated environments containing real chemical warfare agents. The results can also give rise to an individual protection device (an autonomous cartridge for a gas mask).
Keywords: photocatalysis, photocatalytic filtering reactor, diethylsulfide, chemical warfare agents
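A hedged sketch of the kind of Langmuir–Hinshelwood rate expression referred to above, written here in a generic single-site form coupled to an external mass-transfer step (the paper's exact formulation is not given in the abstract):

\[ r = \frac{k\,K\,C_s}{1 + K\,C_s}, \qquad k_m\,(C_b - C_s) = r \ \ \text{(at steady state)} \]

where k is the kinetic degradation constant, K the adsorption constant, C_s the DES concentration at the photocatalyst surface, C_b the bulk concentration and k_m the mass-transfer coefficient; eliminating C_s between the two relations is what allows k and K to be estimated without the mass-transfer bias mentioned above.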
Procedia PDF Downloads 105
544 A Game-Based Methodology to Discriminate Executive Function – a Pilot Study With Institutionalized Elderly People
Authors: Marlene Rosa, Susana Lopes
Abstract:
There are few studies that explore the potential of board games as a performance measure, although they can be an interesting strategy in the context of frail populations. In fact, board games are immersive strategies that can reduce the pressure of being evaluated. This study aimed to test the ability of game-based strategies to assess executive function in the elderly population. Sixteen older participants were included: 10 with affected executive functions (G1 – 85.30±6.00 yrs old; 10 male) and 6 with executive functions showing no clinically important changes (G2 – 76.30±5.19 yrs old; 6 male). Executive function was assessed using the Frontal Assessment Battery (FAB), a quickly applicable cognitive screening test (a score below 12 indicates impairment). The board game used in this study was the TATI Hand Game, designed specifically for training rhythmic coordination of the upper limbs under multiple cognitive stimuli. This game features 1 table grid; 1 set of Single Game cards (to play with one hand); Double Game cards (to play simultaneously with two hands); 1 die to plan the Single Game mode; cards to plan the Double Game mode; 1 bell; and 2 cups. Each participant played 3 Single Game cards, and the following data were collected: (i) variability of execution time across the board game challenges (SD); (ii) number of errors; (iii) execution time (s). G1 demonstrated high variability of execution time across the board game challenges (G1 – 13.0 s vs G2 – 0.5 s), a higher number of errors (1.40 vs 0.67), and a longer execution time (607.80 s vs 281.83 s). These results demonstrate the potential of implementing board games as a functional assessment strategy in geriatric care. Future studies might include larger samples and statistical methodologies to find cut-off values for impairment in executive functions during performance in the TATI game.
Keywords: board game, aging, executive function, evaluation
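A minimal sketch of the three per-participant metrics described above (variability of card completion time, error count, total execution time). The numbers below are invented placeholders, not data from the study.

```python
# Compute (i) SD of card times, (ii) error count, (iii) total execution time.
from statistics import pstdev

# card_times: seconds to complete each of the 3 Single Game cards; errors per card
participants = {
    "P01": {"card_times": [210.0, 195.5, 202.3], "errors": [1, 0, 1]},
    "P02": {"card_times": [95.2, 93.1, 93.5],   "errors": [0, 0, 0]},
}

for pid, rec in participants.items():
    times = rec["card_times"]
    metrics = {
        "time_variability_sd_s": round(pstdev(times), 2),  # (i) variability (SD)
        "n_errors": sum(rec["errors"]),                     # (ii) number of errors
        "total_time_s": round(sum(times), 2),               # (iii) execution time
    }
    print(pid, metrics)
```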
Procedia PDF Downloads 142
543 Effect of Modulation Factors on Tomotherapy Plans and Their Quality Analysis
Authors: Asawari Alok Pawaskar
Abstract:
This study investigated the discrepancies observed in quality assurance (QA) performed with the IBA matrix detector for helical tomotherapy plans. A selection of tomotherapy plans that initially failed the matrix-based QA process was chosen for this investigation. These plans failed the fluence analysis as assessed using gamma criteria (3%, 3 mm). Each of these plans was modified (keeping the planning constraints the same), and the beamlets were rebatched and reoptimized. By increasing and decreasing the modulation factor, the fluence in a circumferential plane as measured with a diode array was assessed. A subset of these plans was investigated using varied pitch values. The factors examined for each plan were point doses, fluences, leaf opening times, planned leaf sinograms, and uniformity indices. In order to ensure that the treatment constraints remained the same, the dose-volume histograms (DVHs) of all the modulated plans were compared to the original plan. It was observed that a large increase in the modulation factor did not significantly improve DVH uniformity but reduced the gamma analysis pass rate. It also increased the treatment delivery time by slowing down the gantry rotation speed, which in turn increases the ratio of maximum to mean non-zero leaf open time. Increasing and decreasing the pitch value did not substantially change the treatment time, but the delivery accuracy was adversely affected. This may be due to many other factors, such as the complexity of the treatment plan and the site. Patient sites included in this study were head and neck, breast, and abdomen. The impact of leaf timing inaccuracies on plans was greater with higher modulation factors. Point-dose measurements were seen to be less susceptible to changes in pitch and modulation factors. An initial modulation factor used by the optimizer such that the TPS-generated 'actual' modulation factor lay within the range of 1.4 to 2.5 resulted in an improved deliverable plan.
Keywords: dose volume histogram, modulation factor, IBA matrix, tomotherapy
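For reference, the gamma criteria (3%, 3 mm) mentioned above are conventionally evaluated with the gamma index of Low et al.; a generic statement of the test (not specific to this study's software) is

\[ \Gamma(\mathbf{r}_m,\mathbf{r}_c) = \sqrt{\frac{|\mathbf{r}_c-\mathbf{r}_m|^2}{\Delta d_M^{\,2}} + \frac{\big(D_c(\mathbf{r}_c)-D_m(\mathbf{r}_m)\big)^2}{\Delta D_M^{\,2}}}, \qquad \gamma(\mathbf{r}_m) = \min_{\mathbf{r}_c}\,\Gamma(\mathbf{r}_m,\mathbf{r}_c) \]

with distance-to-agreement criterion Δd_M = 3 mm and dose-difference criterion ΔD_M = 3%; a measurement point passes when γ ≤ 1, and the pass rate quoted above is the fraction of points satisfying this condition.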
Procedia PDF Downloads 177
542 Destination Management Organization in the Digital Era: A Data Framework to Leverage Collective Intelligence
Authors: Alfredo Fortunato, Carmelofrancesco Origlia, Sara Laurita, Rossella Nicoletti
Abstract:
In the post-pandemic recovery phase of tourism, the role of a Destination Management Organization (DMO) as a coordinated management system of all the elements that make up a destination (attractions, access, marketing, human resources, brand, pricing, etc.) is also becoming relevant for local territories. The objective of a DMO is to maximize the visitor's perception of value and quality while ensuring the competitiveness and sustainability of the destination, as well as the long-term preservation of its natural and cultural assets, and to catalyze benefits for the local economy and residents. In carrying out the multiple functions to which it is called, the DMO can leverage a collective intelligence that comes from the ability to pool information, explicit and tacit knowledge, and the relationships of the various stakeholders: policymakers, public managers and officials, entrepreneurs in the tourism supply chain, researchers, data journalists, schools, associations and committees, citizens, etc. The DMO potentially has at its disposal large volumes of data, much of it at low cost, that need to be properly processed to produce value. Based on these assumptions, the paper presents a conceptual framework for building an information system to support the DMO in the intelligent management of a tourist destination, tested in an area of southern Italy. The approach adopted is data-informed and consists of four phases: (1) formulation of the knowledge problem (analysis of policy documents and industry reports; focus groups and co-design with stakeholders; definition of information needs and key questions); (2) research and metadata annotation of relevant sources (reconnaissance of official sources, administrative archives and internal DMO sources); (3) gap analysis and identification of unconventional information sources (evaluation of traditional sources with respect to their consistency with information needs, the freshness of information and the granularity of data; enrichment of the information base by identifying and studying web sources such as Wikipedia, Google Trends, Booking.com, Tripadvisor, websites of accommodation facilities and online newspapers); (4) definition of the set of indicators and construction of the information base (specific definition of indicators and procedures for data acquisition, transformation, and analysis). The resulting framework consists of 6 thematic areas (accommodation supply, cultural heritage, flows, value, sustainability, and enabling factors), each of which is divided into three domains that gather a specific information need, represented by a scheme of questions to be answered through the analysis of the available indicators. The framework is characterized by a high degree of flexibility in the European context, given that it can be customized for each destination by adapting the part related to internal sources. Application to the case study led to the creation of a decision support system that allows: • integration of data from heterogeneous sources, including the execution of automated web crawling procedures for the ingestion of social and web information; • reading and interpretation of data and metadata through guided navigation paths in the key of digital storytelling; • implementation of complex analysis capabilities through the use of data mining algorithms, for example for the prediction of tourist flows.
Keywords: collective intelligence, data framework, destination management, smart tourism
Procedia PDF Downloads 121
541 Research of the Load Bearing Capacity of Inserts Embedded in CFRP under Different Loading Conditions
Authors: F. Pottmeyer, M. Weispfenning, K. A. Weidenmann
Abstract:
Continuous carbon fiber reinforced plastics (CFRP) exhibit high application potential for lightweight structures due to their outstanding specific mechanical properties. Embedded metal elements, so-called inserts, can be used to join structural CFRP parts. Drilling of the components to be joined can be avoided by using inserts; in consequence, no bearing stress is anticipated. This is a distinctive benefit of embedded inserts, since continuous CFRP has low shear and bearing strength. This paper aims at investigating the load bearing capacity after damage pre-induced by impact tests and thermal cycling. In addition, the mechanical properties were characterized during dynamic high-speed pull-out testing under different loading velocities. It has been shown that the load bearing capacity increases by up to 100% for very high velocities (15 m/s) in comparison with quasi-static loading conditions (1.5 mm/min). Residual strength measurements identified the influence of thermal loading and pre-induced mechanical damage. For both, the residual strength was evaluated afterwards by quasi-static pull-out tests. Taking DIN EN 6038 into account, a large decrease in force occurs at an impact energy of 16 J with significant damage to the laminate. Lower impact energies of 6 J, 9 J, and 12 J do not decrease the measured residual strength, although the laminate is visibly damaged, as evidenced by cracks on the rear side. To evaluate the influence of thermal loading, the specimens were placed in a climate chamber and exposed to various numbers of temperature cycles. One cycle took 1.5 hours and ran from -40 °C to +80 °C. It could be shown that as few as 10 temperature cycles decrease the load bearing capacity by up to 20%. A further reduction of the residual strength with an increasing number of thermal cycles was not observed, which implies that the maximum damage to the composite is already induced after 10 temperature cycles.
Keywords: composite, joining, inserts, dynamic loading, thermal loading, residual strength, impact
Procedia PDF Downloads 280
540 Enhancement of Critical Current Density of Liquid Infiltration Processed Y-Ba-Cu-O Bulk Superconductors Used for Flywheel Energy Storage System
Authors: Asif Mahmood, Yousef Alzeghayer
Abstract:
The size effects of the precursor Y2BaCuO5 (Y211) powder on the microstructure and critical current density (Jc) of liquid infiltration growth (LIG)-processed YBa2Cu3O7-y (Y123) bulk superconductors were investigated in terms of milling time (t). YBCO bulk samples having high Jc values were selected for the flywheel energy storage system. Y211 powders were attrition-milled for 0-10 h in 2 h increments at a fixed rotation speed of 400 RPM. Y211 pre-forms were made by pelletizing the milled Y211 powders followed by sintering, after which an LIG process with top seeding was applied to the Y211/Ba3Cu5O8 (Y035) pre-forms. Spherical pores were observed in all LIG-processed Y123 samples, and the pore density gradually decreased as t increased from 0 h to 8 h. In addition to the reduced pore density, the Y211 particle size in the final Y123 products also decreased with increasing t. As t increased further to 10 h, unexpected Y211 coarsening and the evolution of large pores were observed. The magnetic susceptibility-temperature curves showed that the onset superconducting transition temperature (Tc, onset) of all samples was the same (91.5 K), but the transition width became greater as t increased. The Jc of the Y123 bulk superconductors fabricated in this study was observed to correlate well with the t of the Y211 precursor powder. The maximum Jc of 1.0×10⁵ A cm⁻² (at 77 K, 0 T) was achieved at t = 8 h, which is attributed to the reduction in pore density and Y211 particle size. The prolonged milling time of t = 10 h decreased the Jc of the LIG-processed Y123 superconductor owing to the evolution of large pores and exaggerated Y211 growth. The YBCO bulk samples having high Jc (samples prepared using the 8 h milled powders) have been used in the flywheel energy storage system.
Keywords: critical current, bulk superconductor, liquid infiltration, bioinformatics
Procedia PDF Downloads 212
539 Characterization of A390 Aluminum Alloy Produced at Different Slow Shot Speeds Using Assisted Vacuum High-Pressure Die Casting
Authors: Wenbo Yu, Zihao Yuan, Zhipeng Guo, Shoumei Xiong
Abstract:
Plate-shaped specimens of hypereutectic A390 aluminum alloy were produced at different slow shot speeds in the vacuum-assisted high pressure die casting (VHPDC) process. According to the results, the vacuum pressure inside the die cavity increased linearly with increasing slow shot speed at the beginning of mold filling. Meanwhile, it was found that the tensile properties of the vacuum die castings were deteriorated by the porosity content. In addition, the average primary Si size varies between 14 µm and 23 µm, following a binary functional relationship with the slow shot speed. Owing to the vacuum effect, the castings could be treated by T6 heat treatment. After heat treatment, the microstructural morphologies revealed that the needle-shaped and thin-flaked eutectic Si particles became rounded while Al2Cu dissolved into the α-Al matrix. In the in-situ tensile test of the as-received sample, microcracks first initiated at the primary Si particles and propagated through the Al matrix in a transgranular fracture mode. In contrast, for the treated sample, the cracks initiated at the Al2Cu particles and propagated along the Al grain boundaries in an intergranular fracture mode. In the in-situ three-point bending test, microcracks first formed in the primary Si particles for both samples. Subsequently, the cracks between primary Si particles linked along the Al grain boundaries in the as-received sample; in contrast, in the treated sample the cracks in the primary Si linked through the solid lines in the Al matrix. Furthermore, the fractography revealed that the fracture mechanism evolved from brittle transgranular fracture to a fracture mode with many dimples after heat treatment.
Keywords: A390 aluminum, vacuum assisted high pressure die casting, heat treatment, mechanical properties
Procedia PDF Downloads 248
538 High School Gain Analytics From National Assessment Program – Literacy and Numeracy and Australian Tertiary Admission Rank Linkage
Authors: Andrew Laming, John Hattie, Mark Wilson
Abstract:
Nine Queensland Independent high schools provided deidentified student-matched ATAR and NAPLAN data for all 1217 ATAR graduates since 2020 who also sat NAPLAN at the school. Graduating cohorts from the nine schools contained a mean of 100 ATAR graduates with previous NAPLAN data from their school. Excluded were vocational students (mean = 27) and any ATAR graduates without NAPLAN data (mean = 20). Based on Index of Community Socio-Educational Advantage (ICSEA) predictions, all schools had larger than predicted proportions of their students graduating with ATARs. There were an additional 173 students (14%) not releasing their ATARs to their school, requiring this data to be inferred by schools. Gain was established by first converting each student's strongest NAPLAN domain to a statewide percentile, then subtracting this result from the final ATAR. The resulting 'percentile shift' was corrected for plausible ATAR participation at each NAPLAN level. The strongest NAPLAN domain had the highest correlation with ATAR (R² = 0.58). RESULTS: School mean NAPLAN scores fitted ICSEA closely (R² = 0.97). Schools achieved a mean cohort gain of two ATAR rankings, but only 66% of students gained. This ranged from 46% of top NAPLAN decile students gaining, rising to 75% achieving gains outside the top decile. The 54% of top-decile students whose ATAR fell short of prediction lost a mean of 4.0 percentiles (or 6.2 percentiles before correction for regression to the mean). 71% of students in smaller schools gained, compared to 63% in larger schools. NAPLAN variability in each of the 13 ICSEA 1100 cohorts was 17%, with both intra-school and inter-school variation of these values extremely low (0.3% to 1.8%). The mean ATAR change between years in each school was just 1.1 ATAR ranks. This suggests consecutive school cohorts and ICSEA-similar schools share very similar distributions and outcomes over time. Quantile analysis of the NAPLAN/ATAR relationship revealed heteroscedasticity, but splines offered little additional benefit over simple linear regression. The NAPLAN/ATAR R² was 0.33. DISCUSSION: Standardised data like NAPLAN and ATAR offer educators a simple no-cost progression metric to analyse performance in conjunction with their internal test results. Change is expressed in percentiles, or ATAR shift per student, which is intuitive to laypeople. Findings may also reduce ATAR/vocational stream mismatch, reveal the proportions of cohorts meeting or falling short of expectation, and demonstrate by how much. Finally, 'crashed' ATARs well below expectation are revealed, which schools can reasonably work to minimise. The percentile shift method is neither a value-added measure nor a growth percentile. In the absence of exit NAPLAN testing, this metric is unable to discriminate academic gain from legitimate ATAR-maximising strategies. But by controlling for ICSEA, ATAR proportion variation and student mobility, it uncovers progression-to-ATAR metrics which are not currently publicly available. However achieved, ATAR maximisation is a sought-after private good. So long as standardised nationwide data is available, this analysis offers useful analytics for educators and reasonable predictivity when counselling subsequent cohorts about their ATAR prospects.
Keywords: NAPLAN, ATAR, analytics, measurement, gain, performance, data, percentile, value-added, high school, numeracy, reading comprehension, variability, regression to the mean
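A minimal sketch of the 'percentile shift' gain metric described above: the strongest NAPLAN domain is converted to a statewide percentile and subtracted from the final ATAR. The student records and the statewide score distribution below are invented placeholders, and the study's additional correction for ATAR participation at each NAPLAN level is omitted.

```python
# Illustrative percentile-shift calculation (assumed data only).
from bisect import bisect_right

# Hypothetical statewide distribution of strongest-domain NAPLAN scale scores
statewide_scores = sorted([480, 505, 520, 540, 555, 570, 590, 610, 630, 660])

def naplan_percentile(score: float) -> float:
    """Percentile rank of a score within the statewide distribution."""
    return 100.0 * bisect_right(statewide_scores, score) / len(statewide_scores)

students = [
    {"id": "A", "best_naplan": 610, "atar": 88.0},
    {"id": "B", "best_naplan": 540, "atar": 61.5},
]

for s in students:
    shift = s["atar"] - naplan_percentile(s["best_naplan"])  # percentile shift
    verdict = "gained" if shift > 0 else "fell short"
    print(s["id"], f"gain = {shift:+.1f} percentiles ({verdict})")
```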
Procedia PDF Downloads 68
537 Design and Fabrication of Piezoelectric Tactile Sensor by Deposition of PVDF-TrFE with Spin-Coating Method for Minimally Invasive Surgery
Authors: Saman Namvarrechi, Armin A. Dormeny, Javad Dargahi, Mojtaba Kahrizi
Abstract:
Over the last two decades, minimally invasive surgery (MIS) has grown significantly due to its advantages compared to traditional open surgery, such as less physical pain, faster recovery time and better healing around the incision regions; however, one of the important challenges in MIS is obtaining effective sensing feedback from within the patient's body during operations. Therefore, surgeons need efficient tactile sensing, such as determining the hardness of the contacted tissue, for investigating the patient's health condition. In such a case, MIS tactile sensors should preferably provide force/pressure sensing, force position, lump detection, and softness sensing. Among the different pressure sensor technologies, the piezoelectric operating principle is the best fit for MIS instruments such as catheters. Using PVDF with its copolymer TrFE as the piezoelectric material is a common approach to the design and fabrication of a tactile sensor due to its ease of implantation and biocompatibility. In this research, PVDF-TrFE polymer is deposited via the spin-coating method and treated with various post-deposition processes to investigate its piezoelectricity and the amount of electroactive β phase. These processes include different post-deposition thermal annealing treatments, the effect of spin-coating speed, different numbers of deposited layers, and the presence of an additional hydrated salt. From FTIR spectroscopy and SEM images, the amount of the β phase and the porosity of each sample are determined. In addition, the optimum experimental procedure is established by considering every aspect of the fabrication process. This study clearly shows an effective way of depositing and fabricating a tactile PVDF-TrFE based sensor and an enhancement methodology to obtain a higher β phase and piezoelectric constant in order to achieve a better sense of touch at the end effector of biomedical devices.
Keywords: β phase, minimally invasive surgery, piezoelectricity, PVDF-TrFE, tactile sensor
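The abstract does not state how the β-phase content is quantified from the FTIR spectra; a commonly used estimate (an assumption here, based on the Beer–Lambert approach widely attributed to Gregorio) uses the absorbances of the α- and β-characteristic bands:

\[ F(\beta) = \frac{A_\beta}{(K_\beta/K_\alpha)\,A_\alpha + A_\beta} \approx \frac{A_\beta}{1.26\,A_\alpha + A_\beta} \]

where A_α and A_β are the absorbances of the α- and β-phase bands (often taken near 766 cm⁻¹ and 840 cm⁻¹, respectively) and K_α, K_β their absorption coefficients; the 1.26 ratio is the commonly quoted value, not one reported in this paper.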
Procedia PDF Downloads 123
536 Climate Change Results in Increased Accessibility of Offshore Wind Farms for Installation and Maintenance
Authors: Victoria Bessonova, Robert Dorrell, Nina Dethlefs, Evdokia Tapoglou, Katharine York
Abstract:
As the global pursuit of renewable energy intensifies, offshore wind farms have emerged as a promising solution to combat climate change. The global offshore wind installed capacity is projected to increase 56-fold by 2055. However, the impacts of climate change, particularly changes in wave climate, are not widely understood. Offshore wind installation and maintenance activities often require specific weather windows, characterized by calm seas and low wave heights, to ensure safe and efficient operations. However, climate change-induced alterations in wave characteristics can reduce the availability of suitable weather windows, leading to delays and disruptions in project timelines. The operational limits of installation and maintenance vessels were applied to past and future wave climate projections. This revealed changes in the annual and monthly accessibility of offshore wind farms at key global development locations. When accessibility is defined only by significant wave height, the spatial patterns in annual accessibility roughly follow the changes in significant wave height, with increased availability where significant wave height is decreasing. This resulted in a 1-6% increase in Europe and North America and a similar decrease in South America, Australia and Asia. Monthly changes suggest unchanged or slightly decreased (1-2%) accessibility in summer months and increased (2-6%) accessibility in winter. Further assessment considers the sensitivity of accessibility to operational limits defined by wave height combined with wave period, and by wave height combined with wind speed. The results of this assessment will be included in the presentation. These findings will help stakeholders inform climate change adaptations in installation and maintenance planning practices.
Keywords: climate change, offshore wind, offshore wind installation, operations and maintenance, wave climate, wind farm accessibility
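A minimal sketch of an accessibility metric along the lines described above: the fraction of time steps in which significant wave height (and, optionally, wind speed) stays below a vessel's operational limit. The hourly series and limits below are invented placeholders, and a full analysis would also require weather windows of minimum continuous duration rather than single time steps.

```python
# Illustrative weather-window accessibility calculation (synthetic data).
import numpy as np

rng = np.random.default_rng(0)
hs = rng.gamma(shape=2.0, scale=0.8, size=24 * 365)   # hourly significant wave height [m]
wind = rng.weibull(2.0, size=24 * 365) * 8.0           # hourly wind speed [m/s]

HS_LIMIT = 1.5     # example crew-transfer wave-height limit [m] (assumed)
WIND_LIMIT = 12.0  # example lifting-operation wind limit [m/s] (assumed)

accessible_hs_only = np.mean(hs <= HS_LIMIT)
accessible_combined = np.mean((hs <= HS_LIMIT) & (wind <= WIND_LIMIT))

print(f"Accessibility (Hs only):     {accessible_hs_only:.1%}")
print(f"Accessibility (Hs and wind): {accessible_combined:.1%}")
```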
Procedia PDF Downloads 83535 Constructing the Joint Mean-Variance Regions for Univariate and Bivariate Normal Distributions: Approach Based on the Measure of Cumulative Distribution Functions
Authors: Valerii Dashuk
Abstract:
The use of confidence intervals in economics and econometrics is widespread. To investigate a random variable more thoroughly, joint tests are applied; one such example is the joint mean-variance test. A new approach for testing such hypotheses and constructing confidence sets is introduced. Exploring both the value of the random variable and its deviation with this technique allows checking simultaneously the shift and the probability of that shift (i.e., portfolio risks). Another application is based on the normal distribution, which is fully defined by its mean and variance and can therefore be tested using the introduced approach. The method is based on the difference of probability density functions. The starting point is two sets of normal distribution parameters that should be compared (whether they may be considered identical at a given significance level). Then the absolute difference in probabilities at each ‘point’ of the domain of these distributions is calculated. This measure is transformed into a function of cumulative distribution functions and compared to the critical values. The critical values table was designed from simulations. The approach was compared with other techniques for the univariate case; it differs qualitatively and quantitatively in ease of implementation, computation speed, and accuracy of the critical region (theoretical vs. real significance level). Stable results when working with outliers and non-normal distributions, as well as scaling possibilities, are also strong points of the method. The main advantage of this approach is the possibility of extending it to the infinite-dimensional case, which was not possible in most previous work. At the moment, the expansion to the two-dimensional case is done, and it allows testing jointly up to five parameters. The derived technique is therefore equivalent to classic tests in standard situations but gives more efficient alternatives in nonstandard problems and on large amounts of data.Keywords: confidence set, cumulative distribution function, hypotheses testing, normal distribution, probability density function
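The abstract does not give the exact form of the statistic, so the sketch below is only one plausible reading of it: the fitted normal (sample mean and standard deviation) is compared with the hypothesized normal through the integrated absolute difference of their densities, and the critical value is obtained by simulation under the null, echoing the simulated critical values table described above. Function names, the grid settings, and the example data are assumptions, not the author’s implementation.

import numpy as np
from scipy.stats import norm

def abs_density_diff(mu1, sd1, mu2, sd2, points=4001):
    """Integrated absolute difference between two normal densities."""
    lo = min(mu1 - 6 * sd1, mu2 - 6 * sd2)
    hi = max(mu1 + 6 * sd1, mu2 + 6 * sd2)
    grid = np.linspace(lo, hi, points)
    diff = np.abs(norm.pdf(grid, mu1, sd1) - norm.pdf(grid, mu2, sd2))
    return diff.sum() * (grid[1] - grid[0])  # simple Riemann sum

def simulated_critical_value(n, alpha=0.05, reps=2000, seed=0):
    """Critical value under H0: the sample really comes from the stated normal."""
    rng = np.random.default_rng(seed)
    stats = np.empty(reps)
    for i in range(reps):
        x = rng.standard_normal(n)
        stats[i] = abs_density_diff(x.mean(), x.std(ddof=1), 0.0, 1.0)
    return np.quantile(stats, 1 - alpha)

# Joint test that a sample has mean 0 and variance 1.
rng = np.random.default_rng(1)
x = rng.normal(0.2, 1.1, size=200)
observed = abs_density_diff(x.mean(), x.std(ddof=1), 0.0, 1.0)
critical = simulated_critical_value(n=len(x))
print(f"statistic = {observed:.3f}, critical = {critical:.3f}, reject H0: {observed > critical}")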
Procedia PDF Downloads 176534 Multi-Criterial Analysis: Potential Regions and Height of Wind Turbines, Rio de Janeiro, Brazil
Authors: Claudio L. M. Souza, Milton Erthal, Aldo Shimoya, Elias R. Goncalves, Igor C. Rangel, Allysson R. T. Tavares, Elias G. Figueira
Abstract:
The process of choosing a region for the implementation of wind farms involves factors such as the wind regime, economic viability, land value, topography, and accessibility. This work presents results obtained by multi-criteria decision analysis, and it establishes a hierarchy, regarding the installation of wind farms, among geopolitical regions in the state of ‘Rio de Janeiro’, Brazil: ‘Regiao Norte-RN’, ‘Regiao dos Lagos-RL’ and ‘Regiao Serrana-RS’. The wind regime map indicates only these three possible regions with an average annual wind speed above 6.0 m/s. The method applied was the Analytic Hierarchy Process (AHP), designed to prioritize and rank the three regions based on four criteria: 1) site potential, with average wind speeds above 6.0 m/s; 2) average land value; 3) distribution and interconnection to the electric network, with the highest number of electricity stations; and 4) accessibility, considering proximity and quality of highways and flat topography. The values of energy generation were calculated for wind turbines 50, 75, and 100 meters high, considering the site production density (GWh/km²) and annual production (GWh). The weight of each criterion was attributed by six engineers and by analysis of the Road Map, the Map of the Electric System, the Map of Wind Regime, and the Annual Land Value Report. The results indicated that in ‘RS’ the demand was estimated at 2,000 GWh, so a wind farm can operate efficiently with 50 m turbines. This region is mainly mountainous, with difficult access and lower land value. With respect to ‘RL’, the wind turbines have to be installed at a height of 75 m to meet a demand of 6,300 GWh. This region is very flat, with easy access and low land value. Finally, ‘RN’ was evaluated as very flat and with expensive land; in this case, 100 m wind turbines can reach an annual production of 19,000 GWh. In this region, the coastal area was classified as having the greatest logistic, productive, and economic potential.Keywords: AHP, renewable energy, wind energy
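A minimal sketch of the AHP step described here is given below: a pairwise comparison matrix over the four criteria yields priority weights from its principal eigenvector, together with Saaty’s consistency ratio. The comparison values are hypothetical, since the actual judgments of the six engineers are not reported in the abstract.

import numpy as np

# Hypothetical pairwise comparisons (Saaty's 1-9 scale) for the criteria:
# wind potential, land value, grid interconnection, accessibility.
A = np.array([
    [1.0, 5.0, 3.0, 3.0],
    [1/5, 1.0, 1/2, 1/2],
    [1/3, 2.0, 1.0, 1.0],
    [1/3, 2.0, 1.0, 1.0],
])

# Priority weights: principal right eigenvector, normalized to sum to one.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

# Consistency check: CI = (lambda_max - n) / (n - 1), CR = CI / RI.
n = A.shape[0]
CI = (eigvals.real[k] - n) / (n - 1)
RI = 0.90                      # Saaty's random index for n = 4
CR = CI / RI                   # judgments are usually accepted if CR < 0.10

print("criterion weights:", np.round(weights, 3))
print("consistency ratio:", round(CR, 3))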
Procedia PDF Downloads 151533 The Impact of the Media in the Implementation of Qatar’s Foreign Policy on the Public Opinion of the People of the Middle East (2011-2023)
Authors: Negar Vkilbashi, Hassan Kabiri
Abstract:
Modern diplomacy, in its general form, addresses peoples rather than governments, and diplomatic tactics are directed more at peoples than at governments. Media diplomacy and cyber diplomacy are sub-branches of public diplomacy and reflect, in effect, the role of the media in influencing public opinion and directing foreign policy. Mass media, including print, radio and television, theater, satellite, the internet, and news agencies, transmit information and demands. What the Qatari government sought to implement in the countries of the region during and after the Arab Spring was pursued through its most important media outlet, Al Jazeera. The embargo on Qatar began in 2017, when Saudi Arabia, the United Arab Emirates, Bahrain, and Egypt imposed a land, sea, and air blockade against the country. The media constitute the cornerstone of soft power in the field of foreign policy, to which Qatari leaders have consistently resorted over the past two decades. Undoubtedly, the role Al Jazeera played in covering the events of the Arab Spring created geopolitical tensions. The United Arab Emirates and other neighboring countries sometimes criticize Al Jazeera for providing a platform for the Muslim Brotherhood, Hamas, and other Islamists to promote their ideology. In 2011, at the time of the Arab Spring, Al Jazeera reached the peak of its popularity. Al Jazeera's live coverage of protests in Tunisia, Egypt, Yemen, Libya, and Syria helped create a unified narrative of the Arab Spring, with audiences tuning in every Friday to watch simultaneous protests across the Middle East. Al Jazeera operates in three ways. First, it is a powerful base in the hands of the government, enabling it to direct and influence Arab public opinion; the network has been able to draw on the unlimited financial support of the Qatari government to promote its desired policies and culture. Second, it has provided an attractive platform for politicians and scientific and intellectual elites, thus attracting their support for and defense of the government and its rulers. Third, during the last years of Prince Hamad's reign, the Al Jazeera network formed a deterrent weapon to counter media and political campaigns. The importance of this research is that the network reaches a wide range of people in the Middle East and therefore has a strong influence on the decision-making of countries. Moreover, Al Jazeera is influential as a tool of public diplomacy and soft power in Qatar's foreign policy, and studying it allows the results of its effectiveness in past years to be examined. Using a qualitative method, this research analyzes the impact of the media, in the implementation of Qatar's foreign policy, on the public opinion of the people of the Middle East. Data were collected by the secondary method, that is, by reading related books, magazine articles, newspaper reports and articles, and analytical reports of think tanks. The most important finding of the research is that Al Jazeera plays an important role in Qatar's public diplomacy; in 2011, 2017, and 2023, it played an important part in Qatar's foreign policy during various crises. The people of Arab countries also use Al Jazeera as their first reference.Keywords: Al Jazeera, Qatar, media, diplomacy
Procedia PDF Downloads 78532 Multi-Dimensional Experience of Processing Textual and Visual Information: Case Study of Allocations to Places in the Mind’s Eye Based on Individual’s Semantic Knowledge Base
Authors: Joanna Wielochowska, Aneta Wielochowska
Abstract:
Whilst the relationship between scientific areas such as cognitive psychology, neurobiology, and the philosophy of mind has been emphasized in recent decades of scientific research, concepts and discoveries made in these fields overlap and complement each other in their quest for answers to similar questions. The object of the following case study is to describe, analyze, and illustrate the nature and characteristics of a certain cognitive experience which appears to display features of synaesthesia, or rather high-level synaesthesia (ideasthesia). The research was conducted on the two authors, monozygotic twins (both polysynaesthetes) experiencing involuntary associations of identical nature. The authors attempted to identify which cognitive and conceptual dependencies may guide this experience. Operating on self-introduced nomenclature, the described phenomenon, multi-dimensional processing of textual and visual information, refers to a relationship that involuntarily and immediately couples the content introduced by means of text or image with a sensation of appearing in a certain place in the mind’s eye. More precisely: (I) defining a concept introduced by means of textual content during the activity of reading or writing, or (II) defining a concept introduced by means of visual content during the activity of looking at image(s), occurs with a simultaneous sensation of being allocated to a given place in the mind’s eye. A place can then be defined as a cognitive representation of a certain concept. During the activity of processing information, a person has an immediate and involuntary feeling of appearing in a certain place themselves, just like a character in a story, ‘observing’ a venue or scenery from one or more perspectives and angles. This forms a unique and unified experience, constituting a background mental landscape of the text or image being looked at. We came to the conclusion that semantic allocations to a given place can be divided and classified into categories and subcategories and are naturally linked with an individual’s semantic knowledge base. A place can be defined as a representation of one’s unique idea of a given concept that has been established in their semantic knowledge base. A multi-level structure of selectivity of places in the mind’s eye, as a reaction to a given piece of information (a single stimulus), invites comparisons with structures and patterns found in botany. Double-flowered varieties of flowers and the whorl arrangement characteristic of the components of some flower species were given as illustrative examples. A composition of petals that fan out from a single point and wrap around a stem inspired the idea that, just as in nature, in the philosophy of mind there are patterns driven by a logic specific to a given phenomenon. The study intertwines terms perceived through a philosophical lens, such as the definition of meaning, the subjectivity of meaning, the mental atmosphere of places, and others. Analysis of this rare experience aims to contribute to the constantly developing theoretical framework of the philosophy of mind and to influence how the human semantic knowledge base, and the processing of content in terms of distinguishing between information and meaning, are researched.Keywords: information and meaning, information processing, mental atmosphere of places, patterns in nature, philosophy of mind, selectivity, semantic knowledge base, senses, synaesthesia
Procedia PDF Downloads 124531 Comparison of Catalyst Support for High Pressure Reductive Amination
Authors: Tz-Bang Du, Cheng-Han Hsieh, Li-Ping Ju, Hung-Jie Liou
Abstract:
Polyether amines synthesized from secondary hydroxyl polyether diols play an important role as epoxy hardeners. The low molecular weight products are used in low-viscosity, highly transparent polyamine products for logos and ground coverings, and especially for wind turbine blades, while the high molecular weight products are used in advanced applications such as high-speed railways. A high-pressure reductive amination process is required to produce these amines. At pressures above 150 atm and temperatures of 200 degrees Celsius, supercritical ammonia is used as both a reactant and a solvent. Selecting a catalyst support for such a high-temperature, alkaline environment is a great challenge. In this study, we established a six-autoclave-type (SAT) high-pressure reactor for amination catalyst screening, in which six experimental conditions with different temperatures and pressures can be examined at the same time. We synthesized copper-nickel catalysts on differently shaped alumina supports and evaluated their activity for the high-pressure reductive amination of polypropylene glycol (PPG) in the SAT reactor. Ball-type gamma alumina, ball-type activated alumina, and pellet-type gamma alumina catalyst supports were evaluated in this study. The gamma alumina supports showed better activity for PPG reductive amination than the activated alumina support. In addition, the catalysts were evaluated in a fixed-bed reactor; the diamine product was successfully synthesized over these catalysts, and the strength of the catalysts was measured. The crush strength of the blank supports is about 13.5 lb for both gamma alumina and activated alumina, increasing to 20.3 lb after the copper-nickel catalyst is synthesized on them. After 100 hours of testing in the fixed-bed high-pressure reductive amination process, the crush strength of the used catalyst is 3.7 lb for the activated alumina support and 12.0 lb for the gamma alumina support. Gamma alumina is therefore better than activated alumina as a catalyst support for the high-pressure reductive amination process.Keywords: high pressure reductive amination, copper nickel catalyst, polyether amine, alumina
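As a small arithmetic illustration of the reported crush strengths, the sketch below expresses the strength of each used catalyst as a percentage of the fresh copper-nickel catalyst value; the 20.3 lb figure is assumed here to apply to catalysts on both supports, which the abstract does not state explicitly.

# Reported crush strengths (lb): fresh Cu-Ni catalyst and the catalysts
# after 100 h in the fixed-bed reductive amination process.
fresh = 20.3
used = {"gamma alumina": 12.0, "activated alumina": 3.7}

for support, strength in used.items():
    retained = 100 * strength / fresh
    print(f"{support}: {strength} lb after 100 h ({retained:.0f}% of fresh strength)")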
Procedia PDF Downloads 229