Search results for: societal complexity
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1633

733 Imaginations of the Silk Road in Sven Hedin’s Travel Writings: 1900-1936

Authors: Kexin Tan

Abstract:

The Silk Road is an idiosyncratic concept. Western scholars co-created and conceptualized it in its early days; it was then transliterated into the countries along the Silk Road, and redefined, reimagined, and reconfigured by the public in the second half of the twentieth century. The image is therefore a mirror of the discursive interactions not only between East and West but also between Self and Other. The travel narrative of Sven Hedin, through which the Silk Road was enriched in meaning and popularized, is the focus of this study. This article examines how the Silk Road was imagined in three key texts by Sven Hedin: The Silk Road, The Wandering Lake, and The Flight of “Big Horse”. Three recurring themes are extracted and analyzed: the Silk Road as the land of enigmas, as the virgin land, and as the reconnecting road. Ideas about ethnotypes and images drawn from theorists such as Joep Leerssen are deployed in the analysis. The research tracks how these images were configured, concentrating on China's ethnotypes, travel writing tropes, and the Silk Road discourse that preceded Hedin. Hedin's role in his expeditions, his geopolitical viewpoints, and the commercial considerations behind his books are also discussed in relation to the intellectual construct of the Silk Road. The study finds that the images of the Silk Road and the discursive traditions behind them are mobile rather than static, inclusive rather than antithetical. The paradoxical character of the Silk Road reveals the complexity of the socio-historical background of Hedin's time, as well as the collision of discursive traditions and practical concerns. While Hedin's discursive construction of the Silk Road image embodies the bias of Self-West against Other-East, characteristics such as fluidity and openness may hint at its resurgence in the postcolonial era.

Keywords: the silk road, Sven Hedin, imagology, ethnotype, travelogue

Procedia PDF Downloads 187
732 Origamic Forms: A New Realm in Improving Acoustical Environment

Authors: Mostafa Refat Ismail, Hazem Eldaly

Abstract:

Adapting architectural design to building function is increasingly necessary in contemporary practice, especially given the great progress in design methods and tools. This, in turn, requires great flexibility in design strategies, as well as a wider spectrum of space settings, to achieve the environment that specific activities demand. Acoustics is an essential factor influencing cognitive acts and behavior as well as, at the extreme, physical well-being inside a space. The complexity of this constraint is compounded by the extended geometric dimensions of multipurpose halls, making acoustic adequacy a great concern that cannot easily be achieved for each purpose. To achieve a performance-oriented acoustic environment, various parametric false ceilings based on the origami folding notion are simulated. These parametric origami shapes can fold and unfold, forming an interactive structure that changes the mutual acoustic environment according to the geometric shapes' positions and their changing exposed surface areas. The mobility of the facets in the origami surface can stretch the range from a completely plain surface to an unfolded element in which a considerable amount of absorption is added to the space. The behavior of the parametric origami shapes is modeled with a ray-tracing computer simulation package for various shape topologies. The results show great variation in acoustical performance due to the variation in the folding faces of the origami surfaces, which causes different reflections and consequently large variations in decay curves.

Keywords: parametric, origami, acoustics, architecture

Procedia PDF Downloads 282
731 Impact of Emotional Intelligence of Principals in High Schools on Teachers Conflict Management: A Case Study on Secondary Schools, Tehran, Iran

Authors: Amir Ahmadi, Hossein Ahmadi, Alireza Ahmadi

Abstract:

Emotional Intelligence (EI) has been defined as the ability to empathize, persevere, control impulses, communicate clearly, make thoughtful decisions, solve problems, and work with others in a way that earns friends and success. These abilities allow an individual to recognize and regulate emotion, develop self-control, set goals, develop empathy, resolve conflicts, and develop the skills needed for leadership and effective group participation. Due to the increasing complexity of organizations and the differing ways of thinking, attitudes, and beliefs of individuals, conflict, as an important part of organizational life, has been examined frequently. The main point is that conflict in an organization is not necessarily undesirable; it can foster creativity, promote innovation, and prevent the waste of organizational energy and resources. The purpose of this study was to investigate the relation between principals' emotional intelligence, as one of the factors affecting conflict management, and conflict management among teachers. This relation was analyzed through cluster sampling with a sample of 120 individuals. The results showed that, at the 95% level of confidence, two secondary hypotheses (the relations between principals' emotional intelligence and teachers' use of the competition and cooperation strategies of conflict management) were confirmed, while the other three secondary hypotheses (the relations between principals' emotional intelligence and teachers' use of the avoidance, adaptation, and adaptability strategies of conflict management) were rejected. The primary hypothesis (the relation between principals' emotional intelligence and conflict management among teachers) was supported.

Keywords: emotional intelligence, conflict, conflict management, strategies of conflict management

Procedia PDF Downloads 347
730 Optimization of the Mechanical Performance of Fused Filament Fabrication Parts

Authors: Iván Rivet, Narges Dialami, Miguel Cervera, Michele Chiumenti

Abstract:

Process parameters in Additive Manufacturing (AM) play a critical role in the mechanical performance of the final component. To find the input configuration that guarantees optimal performance of the printed part, the process-performance relationship must be established. Fused Filament Fabrication (FFF) is selected as the demonstrative AM technology due to its great popularity in industrial manufacturing. A material model that considers the different printing patterns present in an FFF part is used. A voxelized mesh is built from the manufacturing toolpaths described in the G-code file. An Adaptive Mesh Refinement (AMR) scheme based on the octree strategy is used to reduce the complexity of the mesh while maintaining its accuracy. High-fidelity and cost-efficient Finite Element (FE) simulations are performed, and the influence of key process parameters on the mechanical performance of the component is analyzed. A robust optimization process based on appropriate failure criteria is developed to find the printing direction that leads to the optimal mechanical performance of the component. The Tsai-Wu failure criterion is implemented because of the orthotropic and heterogeneous constitutive nature of FFF components and the differences between their strengths in tension and compression. The optimization loop implements a modified version of an Anomaly Detection (AD) algorithm and uses the computed metrics to obtain the optimal printing direction. The developed methodology is verified with a case study on an industrial demonstrator.
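
For a plane-stress state, the Tsai-Wu criterion reduces to a single polynomial failure index. The sketch below is a minimal illustration of that index, not the paper's implementation; the strength values are hypothetical placeholders rather than measured FFF material data.

```python
# Tsai-Wu failure index for a plane-stress state; index >= 1 predicts failure.
import math

def tsai_wu_index(s1, s2, t12, Xt, Xc, Yt, Yc, S):
    """s1, s2, t12: longitudinal, transverse and shear stresses (MPa).

    Xt, Xc / Yt, Yc: tensile and compressive strengths along / across the
    filament direction (all positive magnitudes); S: in-plane shear strength.
    """
    F1, F2 = 1 / Xt - 1 / Xc, 1 / Yt - 1 / Yc   # linear terms capture the
    F11, F22 = 1 / (Xt * Xc), 1 / (Yt * Yc)     # tension/compression asymmetry
    F66 = 1 / S ** 2
    F12 = -0.5 * math.sqrt(F11 * F22)           # common interaction estimate
    return (F1 * s1 + F2 * s2
            + F11 * s1 ** 2 + F22 * s2 ** 2 + F66 * t12 ** 2
            + 2 * F12 * s1 * s2)

# Hypothetical printed-polymer strengths (MPa): stronger along the raster.
Xt, Xc, Yt, Yc, S = 40.0, 50.0, 30.0, 40.0, 25.0
print(tsai_wu_index(20.0, 5.0, 3.0, Xt, Xc, Yt, Yc, S))  # mixed state, safe
print(tsai_wu_index(Xt, 0.0, 0.0, Xt, Xc, Yt, Yc, S))    # uniaxial at Xt -> 1.0
```

In an optimization loop over printing directions, the stresses (s1, s2, t12) would come from the FE solution rotated into each raster orientation, and the direction minimizing the maximum index over the part would be kept.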

Keywords: additive manufacturing, optimization, printing direction, mechanical performance, voxelization

Procedia PDF Downloads 56
729 Study and Simulation of the Thrust Vectoring in Supersonic Nozzles

Authors: Kbab H, Hamitouche T

Abstract:

In recent years, significant progress has been made in the field of aerospace propulsion and propulsion systems. These developments are associated with efforts to enhance the accuracy of the analysis of aerothermodynamic phenomena in the engine, in particular the flow in the nozzles used. One of the most remarkable processes in this field is thrust vectoring, by means of devices able to orient the thrust vector and control the deflection of the exit jet of the engine nozzle. The proposed study considers fluidic thrust vectoring using a secondary injection in the nozzle divergent. This fluid injection causes complex phenomena, such as boundary layer separation, which generates a shock wave in the primary jet upstream of the interaction zone between the primary and secondary jets. This deviates the main flow, and therefore the thrust vector, with respect to the nozzle axis. In modeling the fluidic thrust vector, various parameters can be used: the Mach numbers of the primary jet and the injected fluid, the total pressure ratio, the injection rate, the thickness of the upstream boundary layer, the injector position in the divergent part, and the nozzle geometry are decisive factors in this type of phenomenon. The complexity of the latter challenges researchers to understand the physical phenomena of the turbulent boundary layer encountered in supersonic nozzles, as well as the calculation of its thickness and the friction forces induced on the walls. The present study aims to numerically simulate thrust vectoring by secondary injection using ANSYS Fluent, and then to analyze and validate the results and performances obtained (deflection angle, efficiency...), which are then compared with those obtained by other authors.

Keywords: CD Nozzle, TVC, SVC, NPR, CFD, SPR

Procedia PDF Downloads 129
728 4D Modelling of Low Visibility Underwater Archaeological Excavations Using Multi-Source Photogrammetry in the Bulgarian Black Sea

Authors: Rodrigo Pacheco-Ruiz, Jonathan Adams, Felix Pedrotti

Abstract:

This paper introduces the applicability of underwater photogrammetric survey under challenging conditions as the main tool to enhance and enrich the documentation of archaeological excavation through the creation of 4D models. Photogrammetry has been attempted on underwater archaeological sites since at least the 1970s, and today the production of traditional 3D models is becoming common practice within the discipline. Underwater photogrammetry is more often implemented to record exposed underwater archaeological remains and less so as a dynamic interpretative tool. It therefore tends to be applied in bright environments when underwater visibility is > 1 m, limiting its implementation on most submerged archaeological sites, which lie in more turbid conditions. Recent years have seen significant development of better digital photographic sensors and improvements in optical technology, ideal for darker environments. Such developments, in tandem with powerful computing systems for processing, have allowed this research to use underwater photogrammetry as a standard recording and interpretative tool. Using multi-source photogrammetry (five GoPro Hero5 Black cameras), this paper presents the accumulation of daily (4D) underwater surveys carried out at the Early Bronze Age (3,300 BC) to Late Ottoman (17th century AD) archaeological site of Ropotamo in the Bulgarian Black Sea under challenging conditions (< 0.5 m visibility). It shows that underwater photogrammetry can and should be used as one of the main recording methods, even in low light and poor underwater conditions, as a way to better understand the complexity of the underwater archaeological record.

Keywords: 4D modelling, Black Sea Maritime Archaeology Project, multi-source photogrammetry, low visibility underwater survey

Procedia PDF Downloads 234
727 A Comparative Analysis Approach Based on Fuzzy AHP, TOPSIS and PROMETHEE for the Selection Problem of GSCM Solutions

Authors: Omar Boutkhoum, Mohamed Hanine, Abdessadek Bendarag

Abstract:

Sustainable economic growth is nowadays driving firms to adopt green supply chain management (GSCM) solutions. However, the evaluation and selection of these solutions is a complex decision problem involving numerous associated factors. To address this problem, a comparative analysis approach based on multi-criteria decision-making methods is proposed for the adequate evaluation of sustainable supply chain management solutions. In the present paper, we propose an integrated decision-making model based on FAHP (Fuzzy Analytic Hierarchy Process), TOPSIS (Technique for Order of Preference by Similarity to Ideal Solution), and PROMETHEE (Preference Ranking Organisation METHod for Enrichment Evaluations) to contribute to a better understanding and development of new sustainable strategies for industrial organizations. Because the selected criteria vary in importance, FAHP is used to identify the evaluation criteria and assign an importance weight to each criterion, while the TOPSIS and PROMETHEE methods take these weighted criteria as inputs to evaluate and rank the alternatives. The main objective is to provide a comparative analysis of the TOPSIS and PROMETHEE processes to help make sound and reasoned decisions on the selection of GSCM solutions.
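
Of the methods combined above, the TOPSIS ranking step is the simplest to sketch. The toy below assumes the criterion weights have already been produced (by FAHP in the paper, not reproduced here) and that all criteria are benefit criteria; the decision matrix of three candidate GSCM solutions is hypothetical.

```python
# Minimal TOPSIS: vector-normalize, weight, then rank alternatives by
# closeness to the ideal solution (higher closeness coefficient = better).
import math

def topsis(matrix, weights):
    """matrix: rows = alternatives, columns = benefit criteria;
    weights: one weight per criterion. Returns closeness in [0, 1] per row."""
    ncols = len(weights)
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(ncols)]
    v = [[w * row[j] / n for j, (w, n) in enumerate(zip(weights, norms))]
         for row in matrix]
    ideal = [max(col) for col in zip(*v)]   # best weighted value per criterion
    anti = [min(col) for col in zip(*v)]    # worst weighted value per criterion
    scores = []
    for row in v:
        d_pos = math.sqrt(sum((x - i) ** 2 for x, i in zip(row, ideal)))
        d_neg = math.sqrt(sum((x - a) ** 2 for x, a in zip(row, anti)))
        scores.append(d_neg / (d_pos + d_neg))
    return scores

# Three hypothetical GSCM solutions scored on two equally weighted criteria.
scores = topsis([[7, 9], [8, 7], [9, 6]], [0.5, 0.5])
best = scores.index(max(scores))
```

PROMETHEE would take the same weighted criteria but rank via pairwise preference flows instead of distances to an ideal point, which is exactly the comparison the paper draws.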

Keywords: GSCM solutions, multi-criteria analysis, decision support system, TOPSIS, FAHP, PROMETHEE

Procedia PDF Downloads 157
726 Developing a Framework to Aid Sustainable Assessment in Indian Buildings

Authors: P. Amarnath, Albert Thomas

Abstract:

Buildings are among the largest consumers of energy and resources, urging designers, architects, and policy makers to put great effort into achieving and implementing sustainable building strategies in construction. Green building rating systems help considerably here by measuring the effectiveness of these strategies, along with the improvement of building performance from social, environmental, and economic perspectives, and by guiding the construction of new sustainable buildings. However, for a country like India, an enormous and rapidly growing population imposes an increasing burden on the country's limited and continuously degrading natural resource base, which includes the land available for construction. The number of sustainability-rated buildings in India is nevertheless very small, primarily due to the complexity and rigid nature of the assessment systems and regulations, which restrict stakeholders and designers from properly implementing and utilizing these rating systems. This paper introduces a data-driven and user-friendly framework that cross-compares the present prominent green building rating systems, such as LEED, BREEAM, and GRIHA, and subsequently helps users rate their proposed building design as per the regulations of these assessment frameworks. The framework is validated using input data collected from green buildings constructed globally. The proposed system has the potential to encourage users to test the efficiency of various sustainable construction practices and thereby promote more sustainable buildings in the country.

Keywords: BREEAM, GRIHA, green building rating systems, LEED, sustainable buildings

Procedia PDF Downloads 129
725 Distributed System Computing Resource Scheduling Algorithm Based on Deep Reinforcement Learning

Authors: Yitao Lei, Xingxiang Zhai, Burra Venkata Durga Kumar

Abstract:

As the quantity and complexity of computing in large-scale software systems increase, distributed computing becomes increasingly important. A distributed system realizes high-performance computing through collaboration between different computing resources. Without efficient resource scheduling, distributed computing can waste resources and incur high costs. Resource scheduling, however, is usually an NP-hard problem, so no general exact solution exists. Optimization algorithms such as genetic algorithms and ant colony optimization are available, but the large scale of distributed systems makes these traditional optimization algorithms difficult to apply, so heuristic and machine learning algorithms are usually employed to ease the computing load. We therefore review traditional resource scheduling optimization algorithms and introduce a deep reinforcement learning method that combines the perceptual ability of neural networks with the decision-making ability of reinforcement learning. Using this machine learning method, we try to find the important factors that influence the performance of distributed system computing and to help the distributed system schedule computing resources efficiently. This paper surveys the application of deep reinforcement learning to distributed system computing resource scheduling, proposes a deep reinforcement learning method that uses a recurrent neural network to optimize the resource scheduling, and outlines the challenges and improvement directions for DRL-based resource scheduling algorithms.
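
The RL framing behind such schedulers (a state observed by an agent, a scheduling action, a reward tied to performance) can be illustrated with a deliberately small tabular Q-learning toy, a stand-in for, not an instance of, the recurrent-network DRL the paper proposes: assign a fixed list of tasks to two machines so as to minimize makespan. All task durations are hypothetical.

```python
# Toy tabular Q-learning scheduler: state = (task index, load imbalance),
# action = which machine gets the task, reward = negative makespan growth.
import random

TASKS = [2, 3, 1, 4, 2]  # hypothetical task durations; balanced optimum = 6

def step(loads, action, duration):
    loads = list(loads)
    loads[action] += duration
    return tuple(loads)

def train(episodes=5000, alpha=0.5, gamma=1.0, eps=0.3, seed=0):
    rng = random.Random(seed)
    Q = {}
    for _ in range(episodes):
        loads = (0, 0)
        for i, t in enumerate(TASKS):
            s = (i, loads[0] - loads[1])
            a = rng.randrange(2) if rng.random() < eps else \
                max((0, 1), key=lambda x: Q.get((s, x), 0.0))
            nxt = step(loads, a, t)
            r = -(max(nxt) - max(loads))          # penalize makespan growth
            s2 = (i + 1, nxt[0] - nxt[1])
            future = 0.0 if i + 1 == len(TASKS) else \
                max(Q.get((s2, x), 0.0) for x in (0, 1))
            old = Q.get((s, a), 0.0)
            Q[(s, a)] = old + alpha * (r + gamma * future - old)
            loads = nxt
    return Q

def greedy_makespan(Q):
    loads = (0, 0)
    for i, t in enumerate(TASKS):
        s = (i, loads[0] - loads[1])
        a = max((0, 1), key=lambda x: Q.get((s, x), 0.0))
        loads = step(loads, a, t)
    return max(loads)

Q = train()
print(greedy_makespan(Q))  # makespan of the learned policy
```

A deep RL version replaces the Q table with a network (recurrent, in the paper's proposal) so the state can include job queues and heterogeneous resource descriptions too large to enumerate.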

Keywords: resource scheduling, deep reinforcement learning, distributed system, artificial intelligence

Procedia PDF Downloads 102
724 Towards Green(er) Cities: The Role of Spatial Planning in Realising the Green Agenda

Authors: Elizelle Juaneé Cilliers

Abstract:

The green hype is becoming stronger within various disciplines, modern practices, and academic thinking, reinforced by concepts such as eco-health, eco-tourism, eco-cities, and eco-engineering. There is also an expanding scientific understanding of the value and benefits of green infrastructure for both communities and their host cities, linked to broader sustainability and resilience thinking. The integration and implementation of green infrastructure as part of spatial planning approaches and municipal planning are, however, more complex, especially in South Africa, compounded by limited budgets and human resources, development pressures, inequities in green space availability, and the political legacies of the past. The prevailing approach to spatial planning contributes further to this complexity, linked to misguided perceptions of the function and value of green infrastructure. As such, green spaces are often considered a luxury, and green infrastructure a costly alternative, with the result that green networks are susceptible to land-use changes and under-prioritized in local authority decision-making. Spatial planning, in this sense, may well be a valuable tool to realise the green agenda, encapsulating various sustainability initiatives from a range of disciplines. This paper aims to clarify the importance and value of green infrastructure planning as a component of spatial planning approaches, in order to inform and encourage local authorities to embed sustainability thinking in city planning and decision-making. It reflects on the decisive role of land-use management in guiding the green agenda and refers to some recent planning initiatives. Lastly, it calls for trans-disciplinary planning approaches to build a case towards green(er) cities.

Keywords: green infrastructure, spatial planning, transdisciplinary, integrative

Procedia PDF Downloads 245
723 Sustainable Production of Tin Oxide Nanoparticles: Exploring Synthesis Techniques, Formation Mechanisms, and Versatile Applications

Authors: Yemane Tadesse Gebreslassie, Henok Gidey Gebretnsae

Abstract:

Nanotechnology has emerged as a highly promising field of research with wide-ranging applications across various scientific disciplines. In recent years, tin oxide has garnered significant attention due to its intriguing properties, particularly when synthesized in the nanoscale range. While numerous physical and chemical methods exist for producing tin oxide nanoparticles, these approaches tend to be costly, energy-intensive, and involve the use of toxic chemicals. Given the growing concerns regarding human health and environmental impact, there has been a shift towards developing cost-effective and environmentally friendly processes for tin oxide nanoparticle synthesis. Green synthesis methods utilizing biological entities such as plant extracts, bacteria, and natural biomolecules have shown promise in successfully producing tin oxide nanoparticles. However, scaling up the production to an industrial level using green synthesis approaches remains challenging due to the complexity of biological substrates, which hinders the elucidation of reaction mechanisms and formation processes. Thus, this review aims to provide an overview of the various sources of biological entities and methodologies employed in the green synthesis of tin oxide nanoparticles, as well as their impact on nanoparticle properties. Furthermore, this research delves into the strides made in comprehending the mechanisms behind the formation of nanoparticles as documented in existing literature. It also sheds light on the array of analytical techniques employed to investigate and elucidate the characteristics of these minuscule particles.

Keywords: nanotechnology, tin oxide, green synthesis, formation mechanisms

Procedia PDF Downloads 43
722 Foreign Languages and Employability in the European Union

Authors: Paulina Pietrzyk-Kowalec

Abstract:

This paper presents the phenomenon of multilingualism becoming the norm rather than the exception in the European Union. It also seeks to describe the correlation between the command of foreign languages and employability. The challenges that today's societies face in terms of employability and the realities of the current labor market are evidently more and more diversified. Thus, one of the crucial tasks of higher education is to prepare its students to face this kind of complexity, understand its nuances, and be able to adapt effectively to situations that are common in corporations based in EU countries. From this point of view, assessing the impact that European university students' command of foreign languages could have on the numerous business sectors becomes vital. It also involves raising the awareness of future professionals so that they understand the importance of mastering communicative skills in foreign languages that meet the requirements of their prospective employers. The direct connection between higher education institutions and the world of business also allows companies to realize that they should rethink their recruitment and human resources procedures to take the importance of foreign languages into account. This article focuses on the objective of the multilingualism policy developed by the European Commission, which is to enable young people to master at least two foreign languages, a skill that is crucial to their future careers. The article emphasizes the crucial connection between research conducted in higher education institutions and the business sector as a means of reducing current qualification gaps.

Keywords: cross-cultural communication, employability, human resources, language attitudes, multilingualism

Procedia PDF Downloads 128
721 Systems Engineering Management Using Transdisciplinary Quality System Development Lifecycle Model

Authors: Mohamed Asaad Abdelrazek, Amir Taher El-Sheikh, M. Zayan, A.M. Elhady

Abstract:

The successful realization of complex systems depends not only on technology issues and the process for implementing them but on management issues as well. Managing the systems development lifecycle requires technical management, and systems engineering management is that technical management. It is accomplished through many activities, of which the three major ones are development phasing, the systems engineering process, and lifecycle integration, performed across the system development lifecycle. Due to the ever-increasing complexity of systems, as well as the difficulty of managing and tracking development activities, new ways to achieve systems engineering management activities are required. This paper presents a systematic approach used as a design management tool applied across systems engineering management roles. In this approach, the Transdisciplinary System Development Lifecycle (TSDL) Model has been modified and integrated with Quality Function Deployment (QFD). Hereinafter, the systematic approach is called the Transdisciplinary Quality System Development Lifecycle (TQSDL) Model. QFD translates the voice of the customer (VOC) into measurable technical characteristics. The modified TSDL model is based on the Axiomatic Design developed by Suh, which is applicable to all designs: products, processes, systems, and organizations. The TQSDL model aims to provide a robust structure and systematic thinking to support the implementation of systems engineering management roles. This approach ensures that customer requirements are fulfilled and that all the systems engineering manager's roles and activities are satisfied.

Keywords: axiomatic design, quality function deployment, systems engineering management, system development lifecycle

Procedia PDF Downloads 354
720 Deployment of Beyond 4G Wireless Communication Networks with Carrier Aggregation

Authors: Bahram Khan, Anderson Rocha Ramos, Rui R. Paulo, Fernando J. Velez

Abstract:

With the growing demand for a new blend of applications, users' dependency on the internet is increasing day by day. Mobile internet users are paying more attention to their own experiences, especially in terms of communication reliability, high data rates, and service stability on the move. This increase in demand is causing saturation of the existing radio frequency bands. To address these challenges, researchers are investigating the best approaches; Carrier Aggregation (CA) is one of the newest innovations, which seems to fulfill the demands of the future spectrum, and it is one of the most important features of Long Term Evolution-Advanced (LTE-Advanced). To meet the International Mobile Telecommunications-Advanced (IMT-Advanced) requirements (1 Gb/s peak data rate), 3GPP introduced the CA scheme, which sustains high data rates over widespread frequency bandwidth of up to 100 MHz. Technical issues such as the aggregation structure, its implementations, deployment scenarios, control signaling techniques, and challenges for the CA technique in LTE-Advanced, with consideration of backward compatibility, are highlighted in this paper. A performance evaluation in macro-cellular scenarios through a simulation approach is also presented, which shows the benefits of applying CA and low-complexity multi-band schedulers in terms of service quality and system capacity enhancement, and concludes that the enhanced multi-band scheduler is less complex than the general multi-band scheduler while performing better for cell radii longer than 1800 m (at a PLR threshold of 2%).

Keywords: component carrier, carrier aggregation, LTE-advanced, scheduling

Procedia PDF Downloads 193
719 Soft Computing Employment to Optimize Safety Stock Levels in Supply Chain Dairy Product under Supply and Demand Uncertainty

Authors: Riyadh Jamegh, Alla Eldin Kassam, Sawsan Sabih

Abstract:

In order to overcome uncertainty and the resulting inability to meet customers' requests, organizations tend to reserve a certain safety stock level (SSL). This level must be chosen carefully in order to avoid increased holding cost due to an excessive SSL or shortage cost due to one that is too low. This paper uses fuzzy logic, a soft computing technique, to identify the optimal SSL; the fuzzy model is dynamic so that it can cope with a highly complex environment. The proposed model takes three input variables, i.e., demand stability level, raw material availability level, and on-hand inventory level, and uses dynamic fuzzy logic to produce the best SSL as output. In this model, demand stability, raw material availability, and on-hand inventory levels are described linguistically and then treated by the inference rules of the fuzzy model to extract the best safety stock level. The aim of this research is to provide a dynamic approach for identifying the safety stock level that can be implemented in different industries. A numerical case study in the dairy industry, for a 200 g yogurt cup product, is presented to validate the proposed model. The obtained results are compared with the current safety stock level, calculated using the traditional approach. The significance of the proposed model is demonstrated by the substantial reduction in safety stock level.
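
The three-input inference step described above can be sketched with a minimal Mamdani-style model in plain Python. The membership functions, rule base, and output levels (5/15/30% of cycle demand) below are illustrative assumptions, not the authors' calibrated model.

```python
# Minimal fuzzy inference for a safety stock level (SSL) from three inputs,
# each scaled to [0, 1]. Triangular/shoulder memberships, min/max rules,
# weighted-average defuzzification. All scales are hypothetical.
def low(x):    return max(0.0, 1.0 - x / 0.5)            # full at 0, gone at 0.5
def high(x):   return max(0.0, (x - 0.5) / 0.5)          # zero until 0.5, full at 1
def medium(x): return max(0.0, 1.0 - abs(x - 0.5) / 0.5)

def safety_stock_level(demand_stability, material_avail, on_hand):
    """Returns SSL as a percentage of cycle demand (illustrative scale)."""
    # Rule 1: any unstable/scarce/low signal pushes SSL high (max = fuzzy OR).
    r_high = max(low(demand_stability), low(material_avail), low(on_hand))
    # Rule 2: all signals comfortable pulls SSL low (min = fuzzy AND).
    r_low = min(high(demand_stability), high(material_avail), high(on_hand))
    # Rule 3: all signals middling keeps SSL moderate.
    r_med = min(medium(demand_stability), medium(material_avail), medium(on_hand))
    total = r_low + r_med + r_high
    if total == 0.0:
        return 15.0                                       # fall back to the middle level
    return (r_low * 5.0 + r_med * 15.0 + r_high * 30.0) / total

print(safety_stock_level(0.9, 0.9, 0.9))  # calm conditions -> small buffer
print(safety_stock_level(0.1, 0.9, 0.9))  # unstable demand -> larger buffer
```

The "dynamic" aspect of the paper's model amounts to re-running this inference whenever the three inputs are re-measured, so the SSL tracks the current environment instead of being a fixed constant.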

Keywords: inventory optimization, soft computing, safety stock optimization, dairy industries inventory optimization

Procedia PDF Downloads 118
718 Co-payment Strategies for Chronic Medications: A Qualitative and Comparative Analysis at European Level

Authors: Pedro M. Abreu, Bruno R. Mendes

Abstract:

The management of pharmacotherapy and the process of dispensing medicines are becoming critical in clinical pharmacy due to the increasing incidence and prevalence of chronic diseases, the complexity and customization of therapeutic regimens, the introduction of innovative and more expensive medicines, the unbalanced relation between expenditure and revenue, and the lack of rationalization associated with medication use. For these reasons, co-payments emerged in Europe in the 1970s and have been applied in healthcare over the past few years. Co-payments lead to a rationing and rationalization of users' access to healthcare services and products and, simultaneously, to a qualification and improvement of those services and products for the end-user. This analysis of hospital practices in particular and co-payment strategies in general covered all European regions and identified four reference countries that apply this tool repeatedly and with different approaches. The structure, content, and adaptation of European co-payments were analyzed through 7 qualitative attributes and 19 performance indicators, with the results expressed in a scorecard. The analysis shows that the German models (total scores of 68.2% and 63.6% for the two selected co-payments) achieve greater compliance and effectiveness, the English models (total score of 50%) are more accessible, and the French models (total score of 50%) are better suited to their socio-economic and legal framework. Other European models did not show the same quality and/or performance and were therefore not taken as standards for the future design of co-payment strategies. In this sense, co-payments can be seen not only as a strategy to moderate the consumption of healthcare products and services, but especially as one to improve them, as well as to increase the value that the end-user assigns to these services and products, such as medicines.

Keywords: clinical pharmacy, co-payments, healthcare, medicines

Procedia PDF Downloads 247
717 INRAM-3DCNN: Multi-Scale Convolutional Neural Network Based on Residual and Attention Module Combined with Multilayer Perceptron for Hyperspectral Image Classification

Authors: Jianhong Xiang, Rui Sun, Linyu Wang

Abstract:

In recent years, with the continuous improvement of deep learning theory, Convolutional Neural Networks (CNNs) have demonstrated superior performance in Hyperspectral Image (HSI) classification research. Since HSI contains rich spatial-spectral information, using only a single-dimension or single-size convolutional kernel limits the detailed feature information received by the CNN, which in turn limits the classification accuracy of HSI. In this paper, we design a multi-scale CNN with an MLP based on residual and attention modules (INRAM-3DCNN) for the HSI classification task. We propose using multiple 3D convolutional kernels to extract feature information and fully learn the spatial-spectral features of HSI, while designing residual 3D convolutional branches to avoid the decline in classification accuracy caused by network degradation. Secondly, we design a 2D Inception module with a joint channel attention mechanism to quickly extract key spatial feature information at different scales of the HSI and to reduce the complexity of the 3D model. Owing to the high parallel processing capability and nonlinear global action of the Multilayer Perceptron (MLP), we use it in combination with the preceding CNN structure for the final classification stage. Experimental results on two HSI datasets show that the proposed INRAM-3DCNN method has superior classification performance and performs the classification task excellently.

Keywords: INRAM-3DCNN, residual, channel attention, hyperspectral image classification

Procedia PDF Downloads 73
716 Characterization of Optical Systems for Intraocular Projection

Authors: Charles Q. Yu, Victoria H. Fan, Ahmed F. Al-Qahtani, Ibraim Viera

Abstract:

Introduction: Over 12 million people are blind due to opacity of the cornea, the clear tissue forming the front of the eye. Current methods use plastic implants to produce a clear optical pathway into the eye but are limited by a high rate of complications. New implants utilizing completely inside-the-eye projection technology can overcome blindness due to scarring of the eye by producing images on the retina without the need for a clear optical pathway, and they may be free of the complications of traditional treatments. However, the interior of the eye is a challenging location for the design of optical focusing systems that can produce a sufficiently high-quality image, and no optical focusing systems have previously been characterized for this purpose. Methods: Three optical focusing systems for intraocular (inside-the-eye) projection were designed and then modeled with ray-tracing software: a pinhole system, a planoconvex system, and an achromatic system. These were then constructed using off-the-shelf components and tested in the laboratory. Weight, size, magnification, depth of focus, image quality, and brightness were characterized. Results: Image quality increased with the complexity of the system design, as did weight and size. A dual achromatic doublet optical system produced the highest image quality; the visual acuity equivalent achieved with this system was better than 20/200, and its weight was less than that of the natural human crystalline lens. Conclusions: We demonstrate for the first time that high-quality images can be produced by optical systems sufficiently small and light to be implanted within the eye.

Keywords: focusing, projection, blindness, cornea, achromatic, pinhole

Procedia PDF Downloads 124
715 Embedded System of Signal Processing on FPGA: Underwater Application Architecture

Authors: Abdelkader Elhanaoui, Mhamed Hadji, Rachid Skouri, Said Agounad

Abstract:

The purpose of this paper is to study the phenomenon of acoustic scattering using a new method. Signal processing (Fast Fourier Transform (FFT), Inverse Fast Fourier Transform (IFFT), and Bessel functions) is widely applied to obtain information with high accuracy. Signal processing is usually implemented on general-purpose processors, but such processors are not efficient for this workload. Our interest was therefore focused on the use of FPGAs (Field-Programmable Gate Arrays) to minimize the computational complexity of a single-processor architecture, accelerate the processing on the FPGA, and meet real-time and energy-efficiency requirements. We implemented the acoustic backscattered signal processing model on the Altera DE1-SoC board and compared it to an Odroid XU4. By comparison, the computing latency of the Odroid XU4 and the FPGA is 60 seconds and 3 seconds, respectively. The detailed SoC FPGA-based system computes acoustic spectra up to 20 times faster than the Odroid XU4 implementation, and the FPGA-based implementation of the processing algorithms achieves an absolute error of about 10⁻³. This study underlines the increasing importance of embedded systems in underwater acoustics, especially in non-destructive testing, where it is possible to obtain information related to the detection and characterization of submerged targets. We thus achieved good experimental results in terms of real-time performance and energy efficiency.
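The FFT stage of such a pipeline is typically prototyped on a host processor before being ported to the FPGA. The sketch below builds a synthetic backscattered echo (the sampling rate and component frequencies are invented) and recovers its dominant spectral component:

```python
import numpy as np

fs = 100e3                        # sampling rate, Hz (invented)
n = 1000
t = np.arange(n) / fs             # 10 ms record
# synthetic backscattered echo with two spectral components
sig = np.sin(2 * np.pi * 12e3 * t) + 0.5 * np.sin(2 * np.pi * 30e3 * t)

spec = np.abs(np.fft.rfft(sig)) / n           # one-sided amplitude spectrum
freqs = np.fft.rfftfreq(n, d=1 / fs)
peak_hz = freqs[np.argmax(spec)]              # dominant component, ~12 kHz
```

On the FPGA, the same transform would be mapped to a pipelined FFT core; the host-side version serves as the numerical reference against which the hardware's absolute error (about 10⁻³ here) is measured.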

Keywords: DE1 FPGA, acoustic scattering, form function, signal processing, non-destructive testing

Procedia PDF Downloads 72
714 The Influence of Human Factors Education on the Irish Registered Pre-Hospital Practitioner within the National Ambulance Service

Authors: Desmond Wade, Alfredo Ormazabal

Abstract:

Background: Ever since it commenced its registration process for pre-hospital practitioners in the year 2000 through the Irish Government Statutory Instrument process (SI 109 of 2000), the approach to educating its professionals has changed drastically. The progression from the traditional behaviourist approach to the current constructivist one has been based on experience from other sectors and industries, nationally and internationally. Today, the delivery of a safe and efficient ambulance service depends heavily on its practitioners' range of technical skills, academic knowledge, and overall competences. As these increase, so does the level of complexity of paramedics' everyday practice. This has made it inevitable to consider the 'human factor' as a source of potential risk and has led formative institutions such as the National Ambulance Service College to include it in their curriculum. Methods: This paper used a mixed-methods approach in which both an online questionnaire and a set of semi-structured interviews were the sources of primary data, analysed using qualitative and quantitative techniques. Conclusions: The evidence presented leads to the conclusion that there is a considerable lack of human factors education in the National Ambulance Service, and that levels of understanding of how to manage human factors in practice vary across its spectrum. Paramedic practitioners in Ireland seem to understand that the responsibility for patient care lies with the team, rather than with the most hierarchically senior practitioner present at the scene.

Keywords: human factors, ergonomics, stress, decision making, pre-hospital care, paramedic, education

Procedia PDF Downloads 145
713 Gas Phase Extraction: An Environmentally Sustainable and Effective Method for The Extraction and Recovery of Metal from Ores

Authors: Kolela J Nyembwe, Darlington C. Ashiegbu, Herman J. Potgieter

Abstract:

Over the past few decades, the demand for metals has increased significantly. This has led to a steady decline in high-grade ore over time and an increase in mineral complexity and matrix heterogeneity. In addition, there are rising concerns about greener processes and a sustainable environment. Because of these challenges, the mining and metal industry has been forced to develop new technologies that can economically process and recover metallic values from low-grade ores, from materials whose metal content is locked up in industrially processed residues (tailings and slag), and from mineral deposits with complex matrices. Several methods have been developed to address these issues, among them ionic liquids (ILs), heap leaching, and bioleaching. Recently, the gas-phase extraction technique has been gaining interest because it eliminates many of the problems encountered in conventional mineral processing methods. The technique relies on the formation of volatile metal complexes, which can be removed from the residual solids by a carrier gas. The complexes can then be reduced using an appropriate method to obtain the metal and to regenerate and recover the organic extractant. Laboratory work on gas-phase extraction has been conducted for the recovery of aluminium (Al), iron (Fe), copper (Cu), chromium (Cr), nickel (Ni), lead (Pb), and vanadium (V). In all cases, extraction was found to depend on temperature and mineral surface area. The process technology appears very promising, offers the feasibility of recirculation and organic reagent regeneration, and has the potential to deliver on all the promises of a 'greener' process.

Keywords: gas-phase extraction, hydrometallurgy, low-grade ore, sustainable environment

Procedia PDF Downloads 122
712 A Key Parameter in Ocean Thermal Energy Conversion Plant Design and Operation

Authors: Yongjian Gu

Abstract:

Ocean thermal energy is one of the ocean energy sources. It is a renewable, sustainable, and green energy source. Ocean thermal energy conversion (OTEC) exploits the ocean temperature gradient between the warmer surface seawater and the cooler deep seawater to run a heat engine and produce a useful power output. Unfortunately, the ocean temperature gradient is small: even in tropical and equatorial regions, the surface water temperature only reaches about 28 °C, while the deep water temperature can be as low as 4 °C. The thermal efficiency of OTEC plants is therefore low. In order to improve plant thermal efficiency within this limited temperature gradient, some OTEC plants add equipment for better heat recovery, such as heat exchangers and pumps. Obviously, this increases the plant's complexity and cost. More importantly, the additional equipment consumes power itself, which may adversely affect the plant's net power output and, in turn, its thermal efficiency. In this paper, the author first describes various OTEC plants and the practice of adding equipment to improve plant thermal efficiency. The author then proposes a parameter, the plant back work ratio ϕ, for assessing whether the added equipment is appropriate for improving plant thermal efficiency. Finally, the author presents examples to illustrate the application of the back work ratio ϕ as a key parameter in OTEC plant design and operation.
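Under the usual definition of a back work ratio (assumed here, since the abstract does not state the paper's exact formulation), ϕ is the auxiliary/pumping power divided by the gross turbine power, so the net output scales as (1 − ϕ):

```python
def back_work_analysis(w_gross_kw, w_aux_kw):
    """Back work ratio phi = auxiliary power / gross turbine power,
    and the resulting net power output (assumed definition)."""
    phi = w_aux_kw / w_gross_kw
    w_net_kw = w_gross_kw * (1.0 - phi)
    return phi, w_net_kw

# placeholder numbers: 1 MW gross output, 300 kW consumed by pumps etc.
phi, w_net = back_work_analysis(1000.0, 300.0)
```

With ϕ = 0.3, 30% of the gross output is consumed internally, so added heat-recovery equipment only pays off if it raises gross output by more than its own power draw.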

Keywords: ocean thermal energy, ocean thermal energy conversion (OTEC), OTEC plant, plant back work ratio ϕ

Procedia PDF Downloads 189
711 Secure and Privacy-Enhanced Blockchain-Based Authentication System for University User Management

Authors: Ali El Ksimi

Abstract:

In today's digital academic environment, secure authentication methods are essential for managing sensitive user data, including that of students and faculty. The rise in cyber threats and data breaches has exposed the vulnerabilities of traditional authentication systems used in universities. Passwords, often the first line of defense, are particularly susceptible to hacking, phishing, and brute-force attacks. While multi-factor authentication (MFA) provides an additional layer of security, it can still be compromised and often adds complexity and inconvenience for users. As universities seek more robust security measures, blockchain technology emerges as a promising solution. Renowned for its decentralization, immutability, and transparency, blockchain has the potential to transform how user management is conducted in academic institutions. In this article, we explore a system that leverages blockchain technology specifically for managing user accounts within a university setting. The system enables the secure creation and management of accounts for different roles, such as administrators, teachers, and students. Each user is authenticated through a decentralized application (DApp) that ensures their data is securely stored and managed on the blockchain. By eliminating single points of failure and utilizing cryptographic techniques, the system enhances the security and integrity of user management processes. We will delve into the technical architecture, security benefits, and implementation considerations of this approach. By integrating blockchain into user management, we aim to address the limitations of traditional systems and pave the way for the future of digital security in education.
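A minimal sketch of the core idea, assuming a simple SHA-256 hash chain of user-management records (the record fields and roles are illustrative; a real DApp would run on an actual blockchain with signatures and consensus):

```python
import hashlib
import json

def make_block(prev_hash, record):
    """Append-only block: the hash covers the record and the previous hash."""
    body = {"prev": prev_hash, "record": record}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {"hash": digest, **body}

def chain_valid(chain):
    """Recompute every hash and check each link to the previous block."""
    for prev, cur in zip(chain, chain[1:]):
        body = {"prev": cur["prev"], "record": cur["record"]}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if cur["prev"] != prev["hash"] or cur["hash"] != expected:
            return False
    return True

genesis = make_block("0" * 64, {"action": "init"})
b1 = make_block(genesis["hash"], {"role": "student", "user": "alice", "action": "create"})
```

Because each block's hash covers the previous hash, tampering with any account record breaks every later link, which is the immutability property the system relies on.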

Keywords: blockchain, university, authentication, decentralization, cybersecurity, user management, privacy

Procedia PDF Downloads 9
710 Numerical Simulation of Free Surface Water Wave for the Flow Around NACA 0012 Hydrofoil and Wigley Hull Using VOF Method

Authors: Omar Imine, Mohammed Aounallah, Mustapha Belkadi

Abstract:

Steady three-dimensional flows and the free surface waves generated by two moving bodies are presented. The flow problem to be simulated is rich in complexity and poses many modeling challenges because of the breaking waves around the ship hull and the interaction of the two-phase flow with the turbulent boundary layer. The results of several simulations are reported. The first study was performed for a NACA0012 hydrofoil with different meshes; this section is analyzed at h/c = 1.0345 in 2D. In the second simulation, a mathematically defined Wigley hull form is used to investigate the application of a commercial CFD code to the prediction of the total resistance and its components from the tangential and normal forces on the wetted hull surface. The computed resistance and wave profiles are used to estimate the total resistance coefficient of the Wigley hull advancing in calm water under steady conditions. The commercial CFD software FLUENT version 12 is used for the computations in the present study, and the computational grid is generated with GAMBIT 2.3.26. The k-ω SST shear stress transport model is used for turbulence modeling, and the volume of fluid (VOF) technique is employed to simulate the free-surface motion. The second-order upwind scheme is used for discretizing the convection terms in the momentum transport equations, and the modified HRIC scheme is used for the VOF discretization. The results obtained compare well with the experimental data.
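The total resistance coefficient estimated from the computed resistance follows the standard non-dimensionalisation C_T = R_T / (0.5 ρ S V²); the numbers below are placeholders, not values from the paper:

```python
def total_resistance_coefficient(r_total, rho, wetted_area, speed):
    """C_T = R_T / (0.5 * rho * S * V^2), the standard non-dimensional form."""
    return r_total / (0.5 * rho * wetted_area * speed ** 2)

# placeholder inputs: resistance in N, fresh-water density in kg/m^3,
# wetted area in m^2, speed in m/s
c_t = total_resistance_coefficient(r_total=36.0, rho=998.0, wetted_area=1.5, speed=2.0)
```

In a VOF computation the pressure (normal) and viscous (tangential) force integrals over the wetted surface are non-dimensionalised the same way to split C_T into its components.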

Keywords: free surface flows, breaking waves, boundary layer, Wigley hull, volume of fluid

Procedia PDF Downloads 369
709 Design of a CO₂-Reduced 3D Concrete Mixture Using Circular (Clay-Based) Building Materials

Authors: N. Z. van Hierden, Q. Yu, F. Gauvin

Abstract:

Cement manufacturing is, because of its production process, among the highest contributors to CO₂ emissions worldwide. As cement is one of the major components in 3D printed concrete, achieving sustainability and carbon neutrality can be particularly challenging. To improve the sustainability of 3D printed materials, different CO₂-reducing strategies can be used, each one with a distinct level of impact and complexity. In this work, we focus on the development of these sustainable mixtures and finding alternatives. Promising alternatives for cement and clinker replacement include the use of recycled building materials, amongst which (calcined) bricks and roof tiles. To study the potential of recycled clay-based building materials, the application of calcinated clay itself is studied as well. Compared to cement, the calcination temperature of clay-based materials is significantly lower, resulting in reduced CO₂ output. Reusing these materials is therefore a promising solution for utilizing waste streams while simultaneously reducing the cement content in 3D concrete mixtures. In addition, waste streams can be locally sourced, thereby reducing the emitted CO₂ during transportation. In this research, various alternative binders are examined, such as calcined clay blends (LC3) from recycled tiles and bricks, or locally obtained clay resources. Using various experiments, a high potential for mix designs including these resources has been shown with respect to material strength, while sustaining decent printability and buildability. Therefore, the defined strategies are promising and can lead to a more sustainable, low-CO₂ mixture suitable for 3D printing while using accessible materials.

Keywords: cement replacement, 3DPC, circular building materials, calcined clay, CO₂ reduction

Procedia PDF Downloads 80
708 Applying Concurrent Development Process for the Web Using Aspect-Oriented Approach

Authors: Hiroaki Fukuda

Abstract:

This paper presents a concurrent development process for modern web applications, known as Rich Internet Applications (RIAs), and describes its effect using a non-trivial application development. In recent years, RIA technologies such as Ajax and Flex have become popular, driven mainly by high-speed networks. RIAs provide sophisticated interfaces and user experiences; therefore, RIA development requires two kinds of engineers: developers, who implement business logic, and designers, who design the interface and experience. Although collaborative work is becoming important for RIA development, shared resources such as source code make it difficult. For example, if an interface design is modified after developers have finished implementing the business logic, they need to repeat the same implementations, as well as the tests that verify the application's behavior. MVC architecture and object-oriented programming (OOP) enable dividing an application into modules such as interfaces and logic; however, developers and/or designers still have to write pieces of code (e.g., event handlers) that make these modules work together as an application. Aspect-oriented programming (AOP), on the other hand, is expected to address the complexity of modern application software development: it provides mechanisms to separate crosscutting concerns, which appear as scattered pieces of code, from primary concerns. In this paper, we propose a concurrent development process for RIAs that introduces the AOP concept. This process reduces the resources shared between developers and designers, so they can perform their tasks concurrently. In addition, we describe our experience developing a practical application using the proposed process to show its applicability.
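The separation AOP provides can be illustrated with a decorator acting as a tiny aspect weaver (Python here rather than the paper's RIA stack; all names are invented): the tracing advice is a crosscutting concern kept entirely out of the primary-concern function.

```python
import functools

def aspect(advice):
    """Weave an 'advice' callback (crosscutting concern) around a function."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            advice("before", fn.__name__)
            result = fn(*args, **kwargs)
            advice("after", fn.__name__)
            return result
        return wrapper
    return decorator

events = []
def trace(stage, name):               # the crosscutting concern, defined once
    events.append((stage, name))

@aspect(trace)
def update_view(total):               # the primary concern: business logic only
    return f"total: {total}"

out = update_view(42)
```

In the paper's setting, the glue code between a designer's interface and a developer's logic (e.g., event-handler wiring) would be factored into such aspects, so neither party edits the other's source files.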

Keywords: aspect-oriented programming, concurrent, development process, rich internet application

Procedia PDF Downloads 297
707 A Three-Dimensional (3D) Numerical Study of Roofs Shape Impact on Air Quality in Urban Street Canyons with Tree Planting

Authors: Bouabdellah Abed, Mohamed Bouzit, Lakhdar Bouarbi

Abstract:

The objective of this study is to investigate numerically the effect of roof shape on wind flow and pollutant dispersion in a street canyon with one row of trees of pore volume Pvol = 96%. A three-dimensional computational fluid dynamics (CFD) model is used to evaluate air flow and pollutant dispersion within an urban street canyon, using the Reynolds-averaged Navier-Stokes (RANS) equations with the k-Epsilon EARSM turbulence model as the closure of the equation system. The numerical model is implemented with the ANSYS-CFX code. Vehicle emissions were simulated as double line sources along the street, and the numerical model was validated against a wind tunnel experiment. Having established this, the wind flow and pollutant dispersion in urban street canyons with six roof shapes are simulated. The numerical simulation agrees reasonably well with the wind tunnel data. The results obtained in this work indicate that the flow in the 3D domain is more complicated, and that this complexity increases with the presence of trees and the variability of the roof shapes. The results also indicate that the largest pollutant concentration levels on the two walls (leeward and windward) are observed with the upwind wedge-shaped roof, while the smallest pollutant concentration level is observed with the dome-shaped roof. Finally, the corner eddies provide additional ventilation and lead to lower traffic pollutant concentrations at the street canyon ends.

Keywords: street canyon, pollutant dispersion, trees, building configuration, numerical simulation, k-Epsilon EARSM

Procedia PDF Downloads 353
706 The Effect of Sulfur and Calcium on the Formation of Dioxin in a Bubbling Fluidized Bed Incinerator

Authors: Chien-Song Chyang, Wei-Chih Wang

Abstract:

In the incineration process, the inhibition of dioxin formation is an important issue. Many investigations indicate that adding sulfur compounds to the combustion process can effectively inhibit dioxin formation; in this process, the sulfur-to-chlorine ratio plays an important role in the reduction efficiency. Ca-based sorbents are also commonly used for acid gas removal, which is an indirect way of inhibiting dioxin. Although sulfur and calcium can each reduce dioxin formation, some confusion still exists about their combined effect. This study aims to understand and clarify the relationship between dioxin formation and the simultaneous addition of sulfur and calcium. Experimental data obtained in a pilot-scale fluidized bed combustion system under various operating conditions are analyzed comprehensively, with a focus on the dioxin in the fly ash. The data show that the PCDD/F concentration in the fly ash collected from the baghouse increases slightly with the simultaneous addition of sulfur and calcium. This work describes the CO concentration with the addition of sulfur and calcium at freeboard temperatures from 800°C to 900°C, which is raised by the fuel complexity. A positive correlation exists between the dioxin concentration, the CO concentration, and the carbon content of the fly ash. At the same sulfur-to-chlorine ratio, the toxic equivalent quantity (TEQ) can be reduced by increasing the actual concentrations of sulfur and calcium. The homologue profiles show that P₅CDD and P₅CDF were the two major contributors to the dioxin toxicity, and that 2,3,7,8-TCDD and 2,3,7,8-TCDF were reduced by the addition of pyrite and hydrated lime. The experimental results also show that the trend of the PCDD/F concentration in the fly ash differed with the sulfur-to-chlorine ratio when sulfur was added at 800°C.

Keywords: reduction of dioxin emissions, sulfur-to-chlorine ratio, de-chlorination, Ca-based sorbent

Procedia PDF Downloads 143
705 A Standard Operating Procedure (SOP) for Forensic Soil Analysis: Tested Using a Simulated Crime Scene

Authors: Samara A. Testoni, Vander F. Melo, Lorna A. Dawson, Fabio A. S. Salvador

Abstract:

Soil traces are useful as forensic evidence due to their potential to transfer to, and adhere to, different types of surfaces on a range of objects or persons. The great variability expressed by soil physical, chemical, biological, and mineralogical properties makes soil traces complex mixtures. Soils are continuous and variable, and no two soil samples are identical; nevertheless, the complexity of soil characteristics can provide powerful evidence for comparative forensic purposes. This work aimed to establish a Standard Operating Procedure (SOP) for forensic soil analysis in Brazil. We carried out a simulated crime scene with double-blind sampling to calibrate the sampling procedures. Samples were collected at a range of locations covering a range of soil types found in the south of Brazil: Santa Candida and Boa Vista, neighbourhoods of Curitiba (State of Parana), and Guarani and Guaraituba, neighbourhoods of Colombo (Curitiba Metropolitan Region). A previously validated sequence of chemical, physical, and mineralogical analyses was performed on around 2 g of soil. The suggested SOP and the sequential range of analyses were effective in grouping together samples from the same place and the same parent material, and they successfully discriminated samples from different locations and from different parent rocks. In addition, modifications to the sample treatment and analytical protocol can be made depending on the context of the forensic work.

Keywords: clay mineralogy, forensic soils analysis, sequential analyses, kaolinite, gibbsite

Procedia PDF Downloads 246
704 Assessing Performance of Data Augmentation Techniques for a Convolutional Network Trained for Recognizing Humans in Drone Images

Authors: Masood Varshosaz, Kamyar Hasanpour

Abstract:

In recent years, there has been growing interest in recognizing humans in drone images for post-disaster search and rescue operations. Deep learning algorithms have shown great promise in this area, but they often require large amounts of labeled data to train the models. To keep data acquisition costs low, augmentation techniques can be used to create additional data from existing images. Many such techniques can generate variations of an original image and thereby improve the performance of deep learning algorithms. While data augmentation is generally assumed to improve the accuracy and robustness of the models, it is important to ensure that the performance gains are not outweighed by the additional computational cost or the complexity of implementing the techniques. To this end, it is important to evaluate the impact of data augmentation on the performance of deep learning models. In this paper, we evaluated the most commonly available 2D data augmentation techniques on a standard convolutional network trained to recognize humans in drone images. The techniques include rotation, scaling, random cropping, flipping, shifting, and their combinations. The results showed that the augmented models perform 1-3% better than the base network. However, since the augmented images contain only the human parts already visible in the original images, a new data augmentation approach is needed to include the invisible parts of the human body. We therefore suggest a new method that employs simulated 3D human models to generate new training data for the network.
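The 2D techniques evaluated (flips, rotations, random crops) are straightforward array operations; a minimal NumPy sketch, with an invented toy image size and crop ratio:

```python
import numpy as np

def augment(img, rng):
    """Yield simple 2-D augmentations: flips, a 90-degree rotation, a random crop."""
    yield np.fliplr(img)
    yield np.flipud(img)
    yield np.rot90(img)
    h, w = img.shape[:2]
    top = int(rng.integers(0, h // 4))
    left = int(rng.integers(0, w // 4))
    yield img[top:top + 3 * h // 4, left:left + 3 * w // 4]   # keep 3/4 of each side

rng = np.random.default_rng(1)
img = np.arange(64, dtype=np.uint8).reshape(8, 8)             # toy 8x8 'image'
variants = list(augment(img, rng))
```

Each variant is a label-preserving transform of the original, which is why such augmentation only recombines pixels already present; generating the occluded parts of a body, as the paper proposes, requires an external 3D model instead.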

Keywords: human recognition, deep learning, drones, disaster mitigation

Procedia PDF Downloads 86