Search results for: aesthetic computing
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1380

270 Constructions of Linear and Robust Codes Based on Wavelet Decompositions

Authors: Alla Levina, Sergey Taranov

Abstract:

The classical approach to providing noise immunity and integrity for information processed in computing devices and communication channels is to use linear codes. Linear codes have fast and efficient algorithms for encoding and decoding information, but they concentrate their detection and correction abilities on certain error configurations. Robust codes, by contrast, can protect against any configuration of errors at a predetermined probability. This is accomplished by using perfect nonlinear and almost perfect nonlinear functions to calculate the code redundancy. The paper presents an error-correcting coding scheme using the biorthogonal wavelet transform. Wavelet transforms are applied in various fields of science; applications include cleaning signals of noise, data compression, and spectral analysis of signal components. The article suggests methods for constructing linear codes based on wavelet decomposition. For the developed constructions, we build generator and check matrices that contain the scaling function coefficients of the wavelet. Based on the linear wavelet codes, we develop robust codes that provide uniform protection against all errors. We propose two constructions of robust codes: the first is based on the multiplicative inverse in a finite field; in the second, the redundancy part is the cube of the information part. The paper also investigates the characteristics of the proposed robust and linear codes.
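
For illustration only, a minimal sketch of the second construction (redundancy equal to the cube of the information part), assuming a small prime field and a simple additive error model; the paper's actual field, code length, and wavelet-based matrices are not reproduced here.

```python
# Toy "cubic" robust code over a prime field GF(p); the prime field and
# error model are assumptions chosen purely for demonstration.
P = 257  # illustrative prime field size

def encode(x: int) -> tuple[int, int]:
    """Codeword = (information part x, redundancy part x^3 mod p)."""
    return x, pow(x, 3, P)

def check(x: int, r: int) -> bool:
    """A received pair passes the check iff the redundancy matches."""
    return pow(x, 3, P) == r

# An additive error (e_x, e_r) is masked only when (x+e_x)^3 = x^3 + e_r;
# for a nonlinear redundancy this holds for few x, so the detection
# probability is nearly uniform across error patterns.
x, r = encode(42)
assert check(x, r)
assert not check((x + 5) % P, r)  # this corrupted word is detected
```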

Keywords: robust code, linear code, wavelet decomposition, scaling function, error masking probability

Procedia PDF Downloads 489
269 The Time-Frequency Domain Reflection Method for Aircraft Cable Defects Localization

Authors: Reza Rezaeipour Honarmandzad

Abstract:

This paper introduces an aircraft cable fault detection and location method based on time-frequency domain reflectometry (TFDR), intended to recognize intermittent faults effectively and to cope with serial and after-connector faults, which are hard to distinguish with time-domain reflectometry. In this method, the correlation function of the reflected and reference signals is used to detect and locate the cable fault according to the characteristics of both signals in the time-frequency domain, so the hit rate for detecting and locating intermittent faults can be improved considerably. In practice, the reflected signal is corrupted by noise and false alarms occur frequently, so a threshold de-noising technique based on wavelet decomposition is used to reduce the noise interference and lower the false alarm rate. The time-frequency cross-correlation function of the reference and reflected signals, based on the Wigner-Ville distribution, is then computed to locate the fault position. Finally, LabVIEW is used to implement the operation and control interface, whose main function is to connect and control MATLAB and LABSQL. Using the strong computing capacity and extensive function library of MATLAB, the signal processing is easily realized, while LabVIEW makes the system more reliable and easy to upgrade.
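
As a toy illustration of reflectometry-based localization (a plain cross-correlation stand-in, not the paper's Wigner-Ville TFDR pipeline), the sketch below correlates a reference chirp with a simulated noisy reflection and converts the correlation peak into a fault distance; the sample rate, cable velocity, and signal shapes are assumptions.

```python
# Locate a reflection by cross-correlating a reference chirp with the
# measured signal, then convert the delay to a one-way cable distance.
import numpy as np

fs = 1e9                    # sample rate [Hz] (assumed)
v = 2e8                     # propagation velocity in the cable [m/s] (assumed)
t = np.arange(0, 1e-6, 1/fs)
ref = np.sin(2*np.pi*(50e6 + 0.5*200e12*t)*t)    # linear chirp reference

delay_samples = 400         # simulated round-trip delay to the fault
measured = np.zeros(len(ref) + 1000)
measured[delay_samples:delay_samples+len(ref)] += 0.3*ref   # reflection
measured += 0.05*np.random.randn(len(measured))             # noise

corr = np.correlate(measured, ref, mode="valid")
lag = np.argmax(np.abs(corr))          # delay with maximum correlation
distance = v * (lag/fs) / 2            # round trip -> one-way distance
print(f"estimated fault at {distance:.2f} m")   # ~40 m here
```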

Keywords: aircraft cable, fault location, TFDR, LabVIEW

Procedia PDF Downloads 476
268 Performance Analysis of High Temperature Heat Pump Cycle for Industrial Process

Authors: Seon Tae Kim, Robert Hegner, Goksel Ozuylasi, Panagiotis Stathopoulos, Eberhard Nicke

Abstract:

High-temperature heat pumps (HTHP) that can supply heat at temperatures above 200°C can enhance the energy efficiency of industrial processes and reduce the CO₂ emissions connected with the heat supply of these processes. In the current work, the thermodynamic performance of three different vapor compression cycles, all using R-718 (water) as the working medium, has been evaluated with a commercial process simulation tool (EBSILON Professional). All considered cycles use two-stage vapor compression with intercooling between stages. The main aim of the study is to compare different intercooling strategies and to study possible heat recovery scenarios within the intercooling process. The comparison has been carried out by computing the coefficient of performance (COP), the heat supply temperature level, and the respective mass flow rate of water for all cycle architectures. With increasing temperature difference between the heat source and heat sink, ∆T, the COP values decreased as expected, and the highest COP value was found for the cycle configurations where both compressors have the same pressure ratio (PR). An investigation of the HTHP capacities with optimized PR, together with an exergy analysis, has also been carried out. The internal heat exchanger cycle with the inward direction of secondary flow (IHX-in) showed a higher temperature level and exergy efficiency than the other cycles. Moreover, the available operating range was estimated by considering mechanical limitations.
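
For reference, the COP comparison reduces to heat delivered at the sink divided by total compressor work; a minimal sketch with invented placeholder numbers (not values from the study):

```python
# COP of a two-stage vapor compression cycle: heat delivered at the sink
# divided by the combined work of the low- and high-pressure compressors.
def cop_two_stage(q_sink_kw: float, w_lp_kw: float, w_hp_kw: float) -> float:
    return q_sink_kw / (w_lp_kw + w_hp_kw)

# Equal pressure ratio per stage means PR_stage = sqrt(PR_total).
total_pr = 9.0
pr_per_stage = total_pr ** 0.5   # 3.0 for both compressors

print(cop_two_stage(q_sink_kw=500.0, w_lp_kw=80.0, w_hp_kw=95.0))  # ~2.86
```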

Keywords: high temperature heat pump, industrial process, vapor compression cycle, R-718 (water), thermodynamic analysis

Procedia PDF Downloads 149
267 The Perception on 21st Century Skills of Nursing Instructors and Nursing Students at Boromarajonani College of Nursing, Chonburi

Authors: Kamolrat Turner, Somporn Rakkwamsuk, Ladda Leungratanamart

Abstract:

The aim of this descriptive study was to determine the perception of 21st century skills among nursing instructors and nursing students at Boromarajonani College of Nursing, Chonburi. A total of 38 nursing instructors and 75 second-year nursing students took part in the study. Data were collected with a 21st century skills questionnaire comprising 63 items. Descriptive statistics were used to describe the findings. The results showed that the overall mean scores of nursing instructors' perception of 21st century skills were at a high level. The highest mean scores were recorded for computing and ICT literacy, and for career and learning skills. The lowest mean scores were recorded for reading, writing, and mathematics. The overall mean scores of nursing students' perception of 21st century skills were also at a high level. The highest mean scores were recorded for computing and ICT literacy, within which the highest item mean score was competency with computer programs. The lowest mean scores were recorded for the reading, writing, and mathematics component, in which the highest item mean score was reading Thai correctly, and the lowest was reading English and translating it for others correctly. The findings show that the perceptions of nursing instructors were consistent with those of nursing students. Moreover, activities aimed at raising capacity in reading English and translating information for others should be taken into consideration.

Keywords: 21st century skills, perception, nursing instructor, nursing student

Procedia PDF Downloads 316
266 Characteristic Sentence Stems in Academic English Texts: Definition, Identification, and Extraction

Authors: Jingjie Li, Wenjie Hu

Abstract:

Phraseological units in academic English texts have been a central focus of recent corpus linguistic research. A wide variety of phraseological units have been explored, including collocations, chunks, lexical bundles, patterns, semantic sequences, etc. This paper describes a special category of clause-level phraseological units, namely Characteristic Sentence Stems (CSSs), with a view to describing their defining criteria and extraction method. CSSs are contiguous lexico-grammatical sequences which contain a subject-predicate structure and which are frame expressions characteristic of academic writing. The extraction of CSSs consists of six steps: part-of-speech tagging, n-gram segmentation, structure identification, significance-of-occurrence calculation, text range calculation, and overlapping sequence reduction. The significance-of-occurrence calculation is the crux of this study: it includes computing both the internal association and the boundary independence of a CSS, testing the occurring significance of the CSS from both inside and outside perspectives. A new normalization algorithm is also introduced into the calculation of LocalMaxs for reducing overlapping sequences. It is argued that many sentence stems are so recurrent in academic texts that the most typical of them have become the habitual ways of making meaning in academic writing. Therefore, studies of CSSs could have potential implications and reference value for academic discourse analysis, English for Academic Purposes (EAP) teaching, and writing.
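
As a rough sketch of the internal-association side of such a calculation (the paper's exact measures and normalization are not reproduced here), the snippet below scores candidate n-grams with the symmetric conditional probability (SCP_f) glue commonly used with LocalMaxs, assuming simple whitespace tokens:

```python
# Internal-association ("glue") score for candidate n-grams, in the
# spirit of LocalMaxs; SCP_f is one standard choice of glue measure.
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i+n]) for i in range(len(tokens) - n + 1)]

def scp_f(ngram, freq):
    """f(w1..wn)^2 divided by the average over all binary splits of
    f(prefix)*f(suffix): high glue = the parts rarely occur apart."""
    n = len(ngram)
    denom = sum(freq[ngram[:i]] * freq[ngram[i:]] for i in range(1, n))
    return (n - 1) * freq[ngram] ** 2 / denom

corpus = "the results suggest that the results suggest that results vary".split()
freq = Counter()
for n in range(1, 4):
    freq.update(ngrams(corpus, n))
print(scp_f(("results", "suggest", "that"), freq))   # 0.8 on this toy corpus
```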

Keywords: characteristic sentence stem, extraction method, phraseological unit, statistical measure

Procedia PDF Downloads 166
265 Cloud Enterprise Application Provider Selection Model for the Small and Medium Enterprise: A Pilot Study

Authors: Rowland R. Ogunrinde, Yusmadi Y. Jusoh, Noraini Che Pa, Wan Nurhayati W. Rahman, Azizol B. Abdullah

Abstract:

Enterprise Applications (EAs) help organizations achieve operational excellence and competitive advantage. Over time, most Small and Medium Enterprises (SMEs), which are known to be the major drivers of most thriving global economies, have used the costly on-premise versions of these applications, making it difficult for them to thrive competitively in the same market environment as their large enterprise counterparts. The advent of cloud computing presents SMEs with an affordable offer and great opportunities, as such EAs can be cloud-hosted and rented on a pay-per-use basis that does not require huge initial capital. However, as there are numerous Cloud Service Providers (CSPs) offering EAs as Software-as-a-Service (SaaS), there is the challenge of choosing a suitable provider with a Quality of Service (QoS) that meets the organization's customized requirements. The proposed model addresses this and goes a step further to select the most affordable among a shortlisted few of the CSPs. At an earlier stage, before developing the instrument and conducting the pilot test, the researchers conducted a structured interview with three experts to validate the proposed model. In conclusion, the validity and reliability of the instrument were tested through experts and typical respondents, and analyzed with SPSS 22. The results confirmed the validity of the proposed model and the validity and reliability of the instrument.

Keywords: cloud service provider, enterprise application, quality of service, selection criteria, small and medium enterprise

Procedia PDF Downloads 179
264 Theoretical Analysis of the Optical and Solid State Properties of Thin Film

Authors: E. I. Ugwu

Abstract:

This work presents a theoretical analysis of the optical and solid-state properties of ZnS thin film using a beam propagation technique in which a scalar wave is propagated through a thin film deposited on a substrate, assuming that the dielectric medium is sectioned into a homogeneous reference dielectric term and a perturbed dielectric term representing the deposited thin film. These two terms constitute an arbitrary complex dielectric function that describes the dielectric perturbation imposed by the medium on the system. This function is substituted into a defined scalar wave equation, on which the appropriate Green's function is defined and solved using a series technique. The Green's values so obtained were used in the Dyson and Lippmann-Schwinger equations, in conjunction with the Born approximation method, to compute the propagated field for different input wavelengths; the computations show the influence of the dielectric constants and the mesh size of the thin film on the propagating field. The computed field was used in turn to generate data from which the band gap and the solid-state and optical properties of the thin film, such as reflectance and transmittance, were obtained; the band gap obtained was found to be a close approximation to the experimental value.
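
A minimal one-dimensional sketch of the first Born term of the Lippmann-Schwinger equation, with the outgoing 1D Helmholtz Green's function standing in for the machinery described above; the geometry, wavelength, and perturbation values are assumptions, not the paper's ZnS model:

```python
# First-order Born approximation for a scalar wave scattering off a
# dielectric perturbation (a thin-film "slab") in 1D.
import numpy as np

k0 = 2*np.pi/0.5          # free-space wavenumber for a 0.5 um wavelength
x = np.linspace(0.0, 10.0, 2000)
dx = x[1] - x[0]

eps_pert = np.where((x > 4.0) & (x < 5.0), 0.8, 0.0)  # perturbed region
V = k0**2 * eps_pert                                  # scattering potential
E0 = np.exp(1j*k0*x)                                  # incident plane wave

def G(xa, xb):
    """Outgoing Green's function of the 1D Helmholtz operator."""
    return np.exp(1j*k0*np.abs(xa - xb)) / (2j*k0)

# E = E0 + integral G(x, x') V(x') E0(x') dx'   (first Born term)
E = E0 + np.array([np.sum(G(xi, x) * V * E0) * dx for xi in x])
print(abs(E[0])**2, abs(E[-1])**2)   # field intensities at the two ends
```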

Keywords: scalar wave, optical and solid state properties, thin film, dielectric medium, perturbation, Lippmann Schwinger equations, Green’s Function, propagation

Procedia PDF Downloads 438
263 Decision Making System for Clinical Datasets

Authors: P. Bharathiraja

Abstract:

Computer-aided decision-making systems are used to enhance the diagnosis and prognosis of diseases and to assist clinicians and junior doctors in clinical decision making. Medical data used for decision making should be definite and consistent. Data mining and soft computing techniques are used for cleaning the data and for incorporating human reasoning into decision-making systems. A fuzzy rule-based inference technique can be used for classification in order to incorporate human reasoning into the decision-making process. In this work, missing values are imputed using the mean or mode of the attribute. The data are normalized using min-max normalization to improve the design and efficiency of the fuzzy inference system. The fuzzy inference system is used to handle the uncertainties that exist in the medical data. Equal-width partitioning is used to partition the attribute values into appropriate fuzzy intervals. Fuzzy rules are generated using a class-based associative rule mining algorithm. The system is trained and tested using the heart disease data set from the University of California at Irvine (UCI) Machine Learning Repository. The data were split using a hold-out approach into training and testing sets. From the experimental results, it can be inferred that classification using the fuzzy inference system performs better than trivial IF-THEN rule-based classification approaches. Furthermore, it is observed that the use of fuzzy logic and the fuzzy inference mechanism handles uncertainty and also resembles human decision making. The system can be used in the absence of a clinical expert to assist junior doctors and clinicians in clinical decision making.
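
A minimal sketch of the named preprocessing steps (mean imputation, min-max normalization, equal-width partitioning); the attribute values are invented, not taken from the UCI data set:

```python
# Preprocessing steps for a fuzzy rule-based classifier.
import numpy as np

def impute_mean(col):
    col = np.asarray(col, dtype=float)
    col[np.isnan(col)] = np.nanmean(col)   # replace missing with the mean
    return col

def min_max(col):
    return (col - col.min()) / (col.max() - col.min())

def equal_width_bins(col, k=3):
    """Partition [min, max] into k equal-width intervals (bin indices)."""
    edges = np.linspace(col.min(), col.max(), k + 1)
    return np.clip(np.digitize(col, edges[1:-1]), 0, k - 1)

chol = impute_mean([233, np.nan, 250, 204, 286])   # e.g. a cholesterol column
scaled = min_max(chol)
print(equal_width_bins(scaled))   # 0=low, 1=medium, 2=high fuzzy interval
```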

Keywords: decision making, data mining, normalization, fuzzy rule, classification

Procedia PDF Downloads 517
262 A Comparative Soft Computing Approach to Supplier Performance Prediction Using GEP and ANN Models: An Automotive Case Study

Authors: Seyed Esmail Seyedi Bariran, Khairul Salleh Mohamed Sahari

Abstract:

In multi-echelon supply chain networks, optimal supplier selection depends significantly on the accuracy of suppliers' performance prediction. Different multi-criteria decision-making methods such as ANN, GA, fuzzy logic, AHP, etc., have previously been used to predict supplier performance, but the 'black-box' characteristic of these methods remains a major concern to be resolved. Therefore, the primary objective of this paper is to implement an artificial intelligence-based gene expression programming (GEP) model and compare its prediction accuracy with that of an ANN. A full factorial design with a 95% confidence interval is initially applied to determine the appropriate set of criteria for supplier performance evaluation. A test-train approach is then used for the ANN and GEP separately. The training results are used to find the optimal network architecture, and the testing data determine the prediction accuracy of each method based on the root mean square error (RMSE) and the coefficient of determination (R²). The results of a case study conducted at Supplying Automotive Parts Co. (SAPCO), with more than 100 local and foreign supply chain members, revealed that, in comparison with the ANN, gene expression programming has a significant advantage in predicting supplier performance, judging by the respective RMSE and R² values. Moreover, using GEP, a mathematical function was also derived, addressing the black-box structure of the ANN in modeling the performance prediction.
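
For reference, the two comparison metrics are straightforward to compute; a minimal sketch with placeholder arrays, not SAPCO data:

```python
# RMSE and R^2, the two accuracy measures used to compare GEP and ANN.
import numpy as np

def rmse(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def r_squared(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return float(1 - ss_res / ss_tot)

y_true = [0.71, 0.83, 0.64, 0.90, 0.77]   # observed supplier scores
y_pred = [0.69, 0.85, 0.66, 0.87, 0.75]   # model predictions
print(rmse(y_true, y_pred), r_squared(y_true, y_pred))
```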

Keywords: supplier performance prediction, ANN, GEP, automotive, SAPCO

Procedia PDF Downloads 419
261 Wind Load Reduction Effect of Exterior Porous Skin on Facade Performance

Authors: Ying-Chang Yu, Yuan-Lung Lo

Abstract:

Building envelope design is nowadays one of the most popular fields of the architectural profession. The main design trend for such systems is to highlight the designer's aesthetic intention in the outlook of the building project. Following this trend, building envelopes contain more and more layers of components, such as double-skin façades, photovoltaic panels, solar control systems, or even ornamental components. These exterior components are designed for various functional purposes. Most researchers focus on how these exterior elements should be structurally secured; however, few consider that these elements could help improve the performance of the façade system. When the exterior elements are deployed at a large scale, they create an additional layer outside the original façade system and act like a porous interface that interferes with the aerodynamics of the façade surface at the micro scale. Standard façade performance consists of water penetration, air infiltration rate, operation force, and component deflection ratio, and these key performances are largely driven by the 'design wind load' coded in local regulation. A design wind load is usually determined by the maximum wind pressure that occurs on the surface due to the geometry or location of the building in extreme conditions. This research was designed to identify the air damping phenomenon of micro turbulence caused by a porous exterior layer, which leads to a reduction of the surface wind load and thus an improvement of façade system performance. A series of wind tunnel tests on a dynamic pressure sensor array covered by porous exterior skins of various scales was conducted to verify the wind pressure reduction effect. The test specimens were designed to simulate a typical building with a two-meter extension offset from the building surface. Multiple porous exterior skins were prepared to replicate various surface opening ratios, which may cause different levels of damping effect. The research used Pitot static tubes, thermal anemometers, and hot film probes to collect surface dynamic pressure data behind the porous skin. Turbulence and distributed resistance are the two main aerodynamic factors that reduce the actual wind pressure. From initial observations, the surface wind pressure readings were effectively reduced behind the porous media. In such cases, an actual building envelope system may benefit from a porous skin through the reduction of surface wind pressure, which may consequently improve the performance of the envelope system.

Keywords: multi-layer facade, porous media, facade performance, turbulence and distributed resistance, wind tunnel test

Procedia PDF Downloads 217
260 Decision Support: How Explainable A.I. Can Improve Transparency and Trust with Human Users

Authors: Devon Brown, Liu Chunmei

Abstract:

This paper presents an analysis, as part of the researcher's dissertation topic, focusing on the intersection of affective and analytical directed acyclic graphs (DAGs) in the context of Decision Support Systems (DSS). The work involves analyzing decision theory models, such as affective and Bayesian decision theory models, and how they could be implemented under an affective computing framework using information fusion and human-centered design. Additionally, the researcher is beginning work on an Affective-Analytic Decision Framework (AADF) for the dissertation and is looking to merge logic and analytic models with empathetic insights into affective DAGs. Data collection begins in Fall 2024; in preparation, this paper analyzes previous research in the area, introduces the AADF, and proposes conceptual models for consideration. The emphasis here is placed on analyzing Bayesian networks and Markov models, which offer probabilistic techniques for decision-making under uncertainty. Ideally, including affect in analytic models will allow algorithms to increase user trust by taking emotional states and the user's experience into account, with the goal of developing emotionally intelligent A.I. systems that can begin to navigate the complex fabric of human emotion during decision-making.
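
A minimal sketch of the probabilistic core the paper leans on, a single Bayesian update, with an affective weight added as a placeholder for the AADF's affect channel; the weighting scheme and all numbers are assumptions for illustration, not the proposed framework:

```python
# One Bayesian update over a decision-relevant hypothesis.
def bayes_update(prior: float, likelihood: float, evidence: float) -> float:
    """P(H|E) = P(E|H) * P(H) / P(E)."""
    return likelihood * prior / evidence

p_h = 0.30            # prior belief that the option is the right choice
p_e_given_h = 0.80    # chance of the observed signal if it is
p_e = 0.50            # overall chance of the observed signal
posterior = bayes_update(p_h, p_e_given_h, p_e)   # 0.48

affect_weight = 0.9   # hypothetical discount from a negative user state
print(posterior * affect_weight)
```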

Keywords: decision support systems, explainable AI, HCAI techniques, affective-analytical decision framework

Procedia PDF Downloads 20
259 Fragile Mires as Living Heritage: Human-Nature Relations in Contemporary Digital Life

Authors: Kirsi Laurén, Tiina Seppä

Abstract:

This study focuses on human-mire relations in the context of digital aestheticization and the long-standing tradition of folklore concerning mires. The study concentrates on the Patvinsuo mire in Eastern Finland and the Viiankiaapa mire in Finnish Lapland. Patvinsuo is a national park, and Viiankiaapa is a protected mire area with hiking trails and other recreational infrastructure. Perceiving the environment through digital technology can help to notice aesthetic details in nature. In addition, sharing images and texts digitally through social media adds a sense of community to the relationship with nature and, at the same time, creates a different kind of living heritage where old and new traditions meet and mingle. People visiting and camping in these areas 'self-care' themselves through recreation in nature. However, these practices and digital aestheticization can sometimes lead to the erosion of fragile mires. The research focuses on understanding the impact of digital aestheticization, such as taking digital photos, on the relationship with nature for individuals moving and working in mires. Additionally, the study aims to explore the contemporary perception of the water environment in mires and its cultural heritage, including mythical and folkloric elements. The research material consists of senso-digital walking interviews and digital recordings (audio recordings, photographs, videos) made during the mire walks, as well as archival material from the Finnish Literature Society’s Archives on mire folklore. The analysis of the material relies centrally on theories from sensory anthropology on the relationship between sensory perception and culture. The modern-day interviewees include outdoor enthusiasts spending their leisure time in mires, artists treating mires in their art, and nature experts (scientists, civil servants, and nature guides). The senso-digital walking interviews were conducted in Patvinsuo and Viiankiaapa mires on a trail chosen by the interviewees themselves. The material selected from the archive consists mainly of folk beliefs and folk poetry from the 19th and 20th centuries that express the relationship of the narrator to the mires. The interview and archival materials date from different periods and are different in character, which has to be taken into account in the analysis. However, in the analysis of both materials, particular attention is paid to the descriptions of sensations that appear in them. Analyzing the materials in parallel is limited by the fact that they date from different periods, but on the other hand, it is their different ages that make it possible to perceive the changes in the cultural heritage of mires.

Keywords: mires, living heritage, digital aestheticization, folklore, sensory anthropology

Procedia PDF Downloads 99
258 Genomic Sequence Representation Learning: An Analysis of K-Mer Vector Embedding Dimensionality

Authors: James Jr. Mashiyane, Risuna Nkolele, Stephanie J. Müller, Gciniwe S. Dlamini, Rebone L. Meraba, Darlington S. Mapiye

Abstract:

When performing language tasks in natural language processing (NLP), the dimensionality of word embeddings is chosen either ad hoc or by optimizing the Pairwise Inner Product (PIP) loss. The PIP loss is a metric that measures the dissimilarity between word embeddings; it is obtained through matrix perturbation theory by exploiting the unitary invariance of word embeddings. In genomics, especially in genome sequence processing, unlike in natural language processing, there is no notion of a "word"; rather, there are sequence substrings of length k called k-mers. K-mer sizes matter, and they vary depending on the goal of the task at hand. The dimensionality of word embeddings in NLP has been studied using matrix perturbation theory and the PIP loss. In this paper, the sufficiency and reliability of applying word-embedding algorithms to various genomic sequence datasets are investigated to understand the relationship between the k-mer size and the embedding dimension. This is done by studying the scaling capability of three embedding algorithms, namely Latent Semantic Analysis (LSA), Word2Vec, and Global Vectors (GloVe), with respect to the k-mer size. Using the PIP loss as a metric to train embeddings on different datasets, we also show that Word2Vec outperforms LSA and GloVe in accurately computing embeddings as both the k-mer size and the vocabulary increase. Finally, the shortcomings of natural language processing embedding algorithms in performing genomic tasks are discussed.
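
A minimal sketch of k-mer tokenization and of the PIP loss between two embedding matrices; random matrices stand in here for trained embeddings, and the sequence is invented:

```python
# k-mer "tokens" and the PIP loss || E1 E1^T - E2 E2^T ||_F, which is
# invariant to unitary rotations of the embedding space.
import numpy as np

def kmers(seq: str, k: int):
    return [seq[i:i+k] for i in range(len(seq) - k + 1)]

def pip_loss(E1: np.ndarray, E2: np.ndarray) -> float:
    """Frobenius norm of the difference of the two PIP matrices E E^T."""
    return float(np.linalg.norm(E1 @ E1.T - E2 @ E2.T, ord="fro"))

print(kmers("ATGCGT", k=3))          # ['ATG', 'TGC', 'GCG', 'CGT']
rng = np.random.default_rng(0)
E1 = rng.normal(size=(100, 64))      # vocabulary x embedding dimension 64
E2 = rng.normal(size=(100, 32))      # same vocabulary, dimension 32
print(pip_loss(E1, E2))
```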

Keywords: word embeddings, k-mer embedding, dimensionality reduction

Procedia PDF Downloads 137
257 Enhanced Model for Risk-Based Assessment of Employee Security with Bring Your Own Device Using Cyber Hygiene

Authors: Saidu I. R., Shittu S. S.

Abstract:

As the trend of personal devices accessing corporate data continues to rise through Bring Your Own Device (BYOD) practices, organizations recognize the potential cost reduction and productivity gains. However, the associated security risks pose a significant threat to these benefits. Often, organizations adopt BYOD environments without fully considering the vulnerabilities introduced by human factors in this context. This study presents an enhanced assessment model that evaluates the security posture of employees in BYOD environments using cyber hygiene principles. The framework assesses users' adherence to best practices and guidelines for maintaining a secure computing environment, employing scales and the Euclidean distance formula. By utilizing this algorithm, the study measures the distance between users' security practices and the organization's optimal security policies. To facilitate user evaluation, a simple and intuitive interface for automated assessment is developed. To validate the effectiveness of the proposed framework, design science research methods are employed, and empirical assessments are conducted using five artifacts to analyze user suitability in BYOD environments. By addressing the human factor vulnerabilities through the assessment of cyber hygiene practices, this study aims to enhance the overall security of BYOD environments and enable organizations to leverage the advantages of this evolving trend while mitigating potential risks.
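
A minimal sketch of the distance-based scoring described above: each user's cyber-hygiene responses on a scale are compared against the organization's optimal policy vector with the Euclidean distance formula; the items, scale, and values are invented for illustration:

```python
# Distance between a user's security practices and the optimal policy.
import math

optimal = [5, 5, 5, 5, 5]          # policy-ideal answer per hygiene item
user =    [4, 2, 5, 3, 4]          # one employee's scaled responses

distance = math.dist(user, optimal)            # Euclidean distance
worst = math.dist([1] * len(optimal), optimal) # farthest possible user
print(f"risk score: {distance / worst:.2f}")   # 0 = compliant, 1 = worst
```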

Keywords: security, BYOD, vulnerability, risk, cyber hygiene

Procedia PDF Downloads 76
256 Criticism and Theorizing of Architecture and Urbanism in the Creativity Cinematographic Film

Authors: Wafeek Mohamed Ibrahim Mohamed

Abstract:

In the era of globalization, the camera of the cinematographic film plays a very important role in monitoring and documenting what the built architectural and urban environment once was and what distinguished it. The cinematographic film and its screen carry the audience back in time; on screen the picture appears at its best, and its presence has now reached its third dimension. The camera has captured the city's shape with its paths, alleys, and buildings and their architectural styles. We have seen architectural styles in cinematic scenes that remain a remembrance in history, even though some have since disappeared, as happened to the 'Boulak Bridge' in Cairo, built by Eiffel and later demolished; it remains a remembrance we can see in the films 'Usta Hassan' and 'A Crime in the Quiet Neighborhood'. The purpose of this research is to reach a critical view of the criticism and theorizing of architecture and urbanism in the cinematographic film, and of their relationship to, and reflection on, the audience's understanding of public opinion about our built environment, with its problems and hardships. It is a trial to study the architecture and urbanism of the built environment in the cinematographic film and to link a realistic view with the governing conceptual significance thereof. The aesthetic thought of our traditional environment, in a psychological and anthropological framework, derives from the cinematic concept of the architecture and urbanism of the place and the dynamics of the space. The architectural space is considered the foundation stone of the cinematic story and the main background of its events, integrating the audience into a romantic trip to the city through its symbolized image of spaces, alleys, etc. This will be done through two main branches: firstly, a chronological review of architecture and urbanism in Egyptian cinema since the thirties [from the film 'Bab El Hadid' to 'Saidi at the American University']. The research concludes with the importance of studying cinematic films that deal with our societies and their architectural and urban concerns, whether traditional or contemporary, and their crises (such as the housing crisis in the film 'Krakoun in the Street', etc.), in order to examine the built environment with its dynamic architectural spaces through a modernist view. In addition, it points to using cinema as an important medium for spreading ideas and for documenting and monitoring current changes in the built environment through its various dramas, comedies, etc. The cinema is considered a mirror of society and its built environment over the epochs. It affirms the unique case that cinema constitutes with the audience (public opinion) through the sense of space and the formation of the mental image related to the city and the built environment.

Keywords: architectural and urbanism, cinematographic architectural, film, space in the film, media

Procedia PDF Downloads 237
255 Review of Theories and Applications of Genetic Programming in Sediment Yield Modeling

Authors: Adesoji Tunbosun Jaiyeola, Josiah Adeyemo

Abstract:

Sediment yield can be considered the total sediment load that leaves a drainage basin. Knowledge of the quantity of sediment present in a river at a particular time can lead to better flood capacity planning in reservoirs and consequently help to control overbank flooding. Furthermore, as sediment accumulates in a reservoir, the reservoir gradually loses its ability to store water for the purposes for which it was built. The development of hydrological models to forecast the quantity of sediment present in a reservoir helps planners and managers of water resources systems to understand the system better in terms of its problems and alternative ways to address them. The application of artificial intelligence models and techniques to such real-life situations has proven to be an effective approach to solving complex problems. This paper makes an extensive review of the literature relevant to the theories and applications of evolutionary algorithms, especially genetic programming. Successful applications of genetic programming as a soft computing technique are reviewed in sediment modelling and other branches of knowledge. Some fundamental issues such as benchmarking, generalization ability, bloat, over-fitting, and other open issues relating to the working principles of GP, which need to be addressed by the GP community, are also highlighted. This review aims to give GP theoreticians, researchers, and the general GP community sufficient research direction and a valuable guide, and to keep all stakeholders abreast of the issues which need attention during the next decade for the advancement of GP.

Keywords: benchmark, bloat, generalization, genetic programming, over-fitting, sediment yield

Procedia PDF Downloads 446
254 An Investigation of Performance Versus Security in Cognitive Radio Networks with Supporting Cloud Platforms

Authors: Kurniawan D. Irianto, Demetres D. Kouvatsos

Abstract:

The growth of wireless devices affects the availability of limited frequencies or spectrum bands, as spectrum is a natural resource that cannot be expanded. Many studies of spectrum availability have shown that licensed frequencies are idle most of the time. Cognitive radio is one of the solutions to this problem. Cognitive radio is a promising technology that allows unlicensed users, known as secondary users (SUs), to access licensed bands without interfering with licensed users, or primary users (PUs). As cloud computing has become popular in recent years, cognitive radio networks (CRNs) can be integrated with cloud platforms. One important issue in CRNs is security. It is a problem because CRNs use radio frequencies as the transmission medium and thus share the same issues as other wireless communication systems. Another critical issue in CRNs is performance. Security has an adverse effect on performance, and there are trade-offs between the two. The goal of this paper is to investigate the performance-security trade-off in CRNs with supporting cloud platforms. Queueing network models with preemptive resume and preemptive repeat identical priority are applied in this project to measure the impact of security on performance in CRNs with and without a cloud platform. The generalized exponential (GE) type distribution is used to reflect the bursty inter-arrival and service times at the servers. The results show that the best performance is obtained when security is disabled and the cloud platform is enabled.
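
A minimal sketch of sampling from a GE-type distribution as commonly defined in the queueing literature this work draws on: with probability 1 - tau the delay is zero, otherwise it is exponential with rate tau*nu, where tau = 2/(scv + 1); this parameterization is an assumption based on that standard definition, not code from the paper:

```python
# GE (generalized exponential) sampling with mean 1/nu and squared
# coefficient of variation scv, capturing bursty arrivals/services.
import random

def ge_sample(nu: float, scv: float) -> float:
    tau = 2.0 / (scv + 1.0)
    if random.random() > tau:
        return 0.0                       # batch effect: zero inter-arrival
    return random.expovariate(tau * nu)  # otherwise exponential tail

samples = [ge_sample(nu=1.0, scv=4.0) for _ in range(100_000)]
print(sum(samples) / len(samples))   # ~1.0, while burstiness (scv=4)
                                     # exceeds the exponential's scv of 1
```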

Keywords: performance vs. security, cognitive radio networks, cloud platforms, GE-type distribution

Procedia PDF Downloads 346
253 Extended Kalman Filter and Markov Chain Monte Carlo Method for Uncertainty Estimation: Application to X-Ray Fluorescence Machine Calibration and Metal Testing

Authors: S. Bouhouche, R. Drai, J. Bast

Abstract:

This paper is concerned with a method for evaluating the uncertainty of steel sample content measured using the X-Ray Fluorescence method. The considered method of analysis is a comparative technique based on X-Ray Fluorescence; the calibration step assumes an adequate chemical composition of the analyzed metallic sample. This work proposes a new combined approach using the Kalman filter and Markov Chain Monte Carlo (MCMC) for uncertainty estimation in steel content analysis. The Kalman filter algorithm is extended to model identification of the chemical analysis process using the main factors affecting the analysis results; in this case, the estimated states are reduced to the model parameters. MCMC is a stochastic method that computes the statistical properties of the considered states, such as the probability distribution function (PDF), according to the initial state and the target distribution, using a Monte Carlo simulation algorithm. The conventional approach is based on linear correlation; the uncertainty budget is established for steel Mn (wt%), Cr (wt%), Ni (wt%), and Mo (wt%) content, respectively. A comparative study between the conventional procedure and the proposed method is given. This kind of approach is applied to construct an accurate computing procedure for uncertainty measurement.
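
A minimal sketch of the scalar Kalman filter recursion for tracking a slowly varying parameter observed with noise, as a stand-in for the extended model-identification filter described; the noise levels and readings are illustrative, not the XRF calibration values:

```python
# One-dimensional Kalman filter: random-walk parameter, noisy readings.
def kalman_step(x_est, p_est, z, q=1e-4, r=0.04):
    """One predict/update cycle for x_k = x_{k-1} + w,  z_k = x_k + v."""
    p_pred = p_est + q                  # predict: variance grows by q
    k_gain = p_pred / (p_pred + r)      # Kalman gain
    x_new = x_est + k_gain * (z - x_est)
    p_new = (1 - k_gain) * p_pred
    return x_new, p_new

x, p = 0.5, 1.0                         # initial Mn wt% guess and variance
for z in [0.62, 0.60, 0.63, 0.61, 0.62]:    # repeated instrument readings
    x, p = kalman_step(x, p, z)
print(x, p)   # estimate settles near 0.615 with shrinking variance
```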

Keywords: Kalman filter, Markov chain Monte Carlo, x-ray fluorescence calibration and testing, steel content measurement, uncertainty measurement

Procedia PDF Downloads 283
252 Flexible Design Solutions for Complex Free form Geometries Aimed to Optimize Performances and Resources Consumption

Authors: Vlad Andrei Raducanu, Mariana Lucia Angelescu, Ion Cinca, Vasile Danut Cojocaru, Doina Raducanu

Abstract:

By using smart digital tools, such as generative design (GD) and digital fabrication (DF), highly topical problems concerning resource optimization (materials, energy, time) can be solved, and free-form applications or products can be created. In the new digital technology, materials are active, designed in response to a set of performance requirements, which imposes a total rethinking of old material practices. The article presents the key steps of the design procedure for a free-form architectural object, a column-type object with connections forming an adaptive 3D surface, using the parametric design methodology and exploiting the properties of conventional metallic materials. In parametric design, the form of the created object or space is shaped by varying parameter values, and relationships between forms are described by mathematical equations. Digital parametric design is based on specific procedures, such as shape grammars, Lindenmayer systems, cellular automata, genetic algorithms, or swarm intelligence, each of which has limitations that make it applicable only in certain cases. The paper presents the design process stages and a shape-grammar-type algorithm. The generative design process relies on two basic principles: the modeling principle and the generative principle. The generative method is based on a form-finding process that creates many 3D spatial forms, using an algorithm conceived to apply its generating logic to different input geometry. Once the algorithm is realized, it can be applied repeatedly to generate geometry for a number of different input surfaces. The generated configurations are then analyzed through a technical or aesthetic selection criterion, and finally the optimal solution is selected. The endless generative capacity of the codes and algorithms used in digital design offers various conceptual possibilities and optimal solutions for the increasing technical and environmental demands of the building industry and architecture. Constructions or spaces generated by parametric design can be specifically tuned to meet certain technical or aesthetic requirements. The proposed approach has direct applicability in sustainable architecture, offering important potential economic advantages, a flexible design (which can be changed until the end of the design process), and unique geometric models of high performance.
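
A minimal sketch of the parametric idea for a column-type surface whose form is regenerated by changing a few parameter values; the specific shape rule below is invented for illustration, not the paper's algorithm:

```python
# Parametric column surface: radius varies with angle and height,
# r = r0 * (1 + amp * sin(lobes*u + twist*v)).
import numpy as np

def column_surface(r0=0.5, amp=0.1, lobes=6, twist=0.8, h=3.0, nu=64, nv=48):
    u = np.linspace(0, 2*np.pi, nu)      # angle around the column
    v = np.linspace(0, h, nv)            # height along the column
    U, V = np.meshgrid(u, v)
    R = r0 * (1 + amp * np.sin(lobes*U + twist*V))
    return R*np.cos(U), R*np.sin(U), V   # x, y, z point grids

# Re-running with different parameters yields a new candidate form,
# which is then scored against technical or aesthetic criteria.
x, y, z = column_surface(amp=0.15, lobes=8)
print(x.shape)   # (48, 64) grid of surface points
```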

Keywords: parametric design, algorithmic procedures, free-form architectural object, sustainable architecture

Procedia PDF Downloads 377
251 Results and Insights from a Developmental Psychology Study on the Presentation of Juvenility in Punk Fanzines

Authors: Marc Dietrich

Abstract:

Youth cultures like punk, as well as the media relevant to the scenes associated with them, offer ample opportunity for young people and juvenile adults to construct their personal identities. However, developmental psychology has largely neglected such identity construction processes in recent decades. Such was not always the case: early developmental psychologists intensely studied youth cultures and their meaningful objects and media in the early 20th century but lost interest when cultural studies and the social sciences occupied the field after World War II. Our project Constructions of Juvenility and Generation(ality), funded by the German Federal Ministry for Education and Research, reintegrates the study of youth cultures and their meaningful objects and media into a developmental psychology perspective. We present an empirical study of the ways in which youth, juvenility, and generation(ality) are constructed and negotiated in underground media like punk fanzines (a portmanteau of fan and magazine), including both the semantic and aesthetic aspects of these construction processes within punk culture. The fanzine sample was accessed through the theoretical sampling strategy typical of GTM studies. Acknowledging fanzines as artful self-produced media by scene members for scene members, we conceptualize them as authentic documents of scene norms and values. Drawing on an analysis of both text and (cover) images in punk fanzines published in Germany (within a sample dating from 1981 to 2015) using a novel visual Grounded Theory approach, we found that: a) juvenility is a highly contested concept in punk culture, whose semantic quality and valuation vary with the perspectives present within the culture (e.g., embryo punks versus older punks); b) juvenility is constructed as having energy and being socio-critical, qualities that do not depend on biological age; c) juvenility is not regarded as an ideal per se in German punk culture, which constructs old age in a largely positive way (e.g., as a marker of being real and a historical innovator); d) juvenility is constructed as a habit that should be kept for life as it is constantly adapted to individual biographical trajectories, such as specific job situations or having a family. Consequently, identity negotiation as documented in the zines attempts to balance subculturally driven perspectives on life and society with the pragmatic requirements of a bourgeois life. The proposed paper will present the main results of this large-scale study of punk fanzines and show how developmental psychology perspectives, as represented in the novel methodology applied in it, can advance the study of youth cultures.

Keywords: construction of juvenility, developmental psychology, visual GTM, youth culture, fanzines

Procedia PDF Downloads 292
250 Seismic Reinforcement of Existing Japanese Wooden Houses Using Folded Exterior Thin Steel Plates

Authors: Jiro Takagi

Abstract:

Approximately 90 percent of the casualties in the near-fault-type Kobe earthquake of 1995 resulted from the collapse of wooden houses, although a limited number of collapses of this building type were reported in the more recent offshore-type Tohoku earthquake of 2011 (excluding direct damage by the tsunami). The Kumamoto earthquake of 2016 also revealed the vulnerability of old wooden houses in Japan. There are approximately 24.5 million wooden houses in Japan, and roughly 40 percent of them are considered to have inadequate seismic-resisting capacity. Seismic strengthening of these wooden houses is therefore an urgent task. However, it has not been carried out quickly, for various reasons, including cost and inconvenience during the reinforcing work. Residents typically spend their money on improvements that more directly affect their daily housing environment (such as interior renovation, equipment renewal, and placement of thermal insulation) rather than on strengthening against extremely rare events such as large earthquakes. Considering this tendency of residents, a new approach to seismic strengthening methods for wooden houses is needed. The seismic reinforcement method developed in this research uses folded galvanized thin steel plates as both shear walls and the new exterior architectural finish. The existing finish is not removed. Because galvanized steel plates are aesthetic and durable, they are commonly used in modern Japanese buildings on roofs and walls. Residents can feel a physical change through the reinforcement, which covers the existing exterior walls with steel plates. This exterior reinforcement can also be installed with outdoor work only, reducing inconvenience for residents since they are not required to move out temporarily during construction. The durability of the exterior is enhanced, and the reinforcing work can be done efficiently since perfect water protection is not required for the new finish. In this method, the entire exterior surface functions as shear walls, and thus the pull-out force induced by seismic lateral load is significantly reduced compared with a typical reinforcement scheme of adding braces in selected frames. Consequently, the reinforcing details of anchors to the foundations become less difficult. To attach the exterior galvanized thin steel plates to the houses, new wooden beams are placed next to the existing beams. In this research, steel connections between the existing and new beams are developed, which contain a gap between the two beams for the existing finish. The thin steel plates are screwed to the new beams and the connecting vertical members. The seismic-resisting performance of the shear walls with thin steel plates is experimentally verified for both the frames and the connections. It is confirmed that the performance is high enough for bracing typical wooden houses.

Keywords: experiment, seismic reinforcement, thin steel plates, wooden houses

Procedia PDF Downloads 226
249 Improving Cheon-Kim-Kim-Song (CKKS) Performance with Vector Computation and GPU Acceleration

Authors: Smaran Manchala

Abstract:

Homomorphic Encryption (HE) enables computation on encrypted data without requiring decryption, mitigating data vulnerability during processing. Usable Fully Homomorphic Encryption (FHE) could revolutionize secure data operations across cloud computing, AI training, and healthcare, providing both privacy and functionality; however, the computational inefficiency of schemes like Cheon-Kim-Kim-Song (CKKS) hinders their widespread practical use. This study focuses on optimizing CKKS for faster matrix operations through vector computation parallelization and GPU acceleration. The variable effects of vector parallelization on GPUs were explored, recognizing that while parallelization typically accelerates operations, it can introduce overhead that results in slower runtimes, especially in smaller, less computationally demanding operations. To assess performance, two neural network models, an MLPN and a CNN, were tested on the MNIST dataset using both ARM and x86-64 architectures, with the CNN chosen for its higher computational demands. Each test was repeated 1,000 times, and outliers were removed via Z-score analysis to measure the effect of vector parallelization on CKKS performance. Model accuracy was also evaluated under CKKS encryption to ensure the optimizations did not compromise results. According to the results of the trial runs, applying vector parallelization yielded a 2.63x efficiency increase overall, with a 1.83x performance increase for the x86-64 over the ARM architecture. Overall, these results suggest that vector parallelization in tandem with GPU acceleration significantly improves the efficiency of CKKS, even after accounting for vector parallelization overhead, with potential impact on future zero-trust operations.
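
A minimal sketch of CKKS packed vector arithmetic, assuming the open-source TenSEAL library; this illustrates the scheme's vectorized operations in general, not the study's GPU-accelerated pipeline or parameters:

```python
# CKKS encrypted vector arithmetic with TenSEAL (assumed dependency:
# pip install tenseal). Parameters below are common illustrative defaults.
import tenseal as ts

ctx = ts.context(
    ts.SCHEME_TYPE.CKKS,
    poly_modulus_degree=8192,
    coeff_mod_bit_sizes=[60, 40, 40, 60],
)
ctx.global_scale = 2**40
ctx.generate_galois_keys()

v1 = ts.ckks_vector(ctx, [1.0, 2.0, 3.0, 4.0])   # encrypts the whole vector
v2 = ts.ckks_vector(ctx, [0.5, 0.5, 0.5, 0.5])

prod = v1 * v2           # element-wise multiply on ciphertexts
dot = v1.dot(v2)         # encrypted dot product
print(prod.decrypt())    # ~[0.5, 1.0, 1.5, 2.0], up to small CKKS noise
print(dot.decrypt())     # ~[5.0]
```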

Keywords: CKKS scheme, runtime efficiency, fully homomorphic encryption (FHE), GPU acceleration, vector parallelization

Procedia PDF Downloads 23
248 Bi-Criteria Vehicle Routing Problem for Possibility Environment

Authors: Bezhan Ghvaberidze

Abstract:

A multiple-criteria optimization approach for the solution of the Fuzzy Vehicle Routing Problem (FVRP) is proposed. For the possibility environment, the levels of movement between customers are calculated by a constructed interactive simulation algorithm. The first criterion of the bi-criteria optimization problem, minimization of the expectation of total fuzzy travel time on closed routes, is constructed for the FVRP. A new, second criterion, maximization of the feasibility of movement on the closed routes, is constructed with the Choquet finite averaging operator. The FVRP is reduced to a bi-criteria partitioning problem over the so-called "promising" routes, which are selected from all admissible closed routes. A convenient selection of the "promising" routes allows us to solve the reduced problem in real-time computing. For the numerical solution of the bi-criteria partitioning problem, the ε-constraint approach is used. An exact algorithm is implemented based on D. Knuth's Dancing Links technique and the algorithm DLX. The main objective was to present a new approach for the FVRP when there are difficulties in moving on the roads; this approach is called the FVRP for extreme conditions (FVRP-EC). A further aim of this paper was to construct the solution model for the formulated FVRP. Results are illustrated on a numerical example in which all Pareto-optimal solutions are found. An approach for the more complex FVRP model with time windows was also developed, and a numerical example is presented in which optimal routes are constructed for extreme conditions on the roads.
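
A minimal sketch of the ε-constraint idea reduced to its simplest form, choosing among candidate routes rather than solving the full partitioning problem; the routes, scores, and thresholds are invented for illustration:

```python
# Epsilon-constraint scan: minimize expected travel time subject to a
# lower bound on the feasibility-of-movement criterion.
routes = [  # (name, expected fuzzy travel time, feasibility of movement)
    ("r1", 42.0, 0.95), ("r2", 37.5, 0.80),
    ("r3", 35.0, 0.60), ("r4", 40.0, 0.90),
]

def eps_constraint(routes, eps):
    feasible = [r for r in routes if r[2] >= eps]
    return min(feasible, key=lambda r: r[1]) if feasible else None

# Sweeping eps traces out Pareto-optimal candidates.
for eps in (0.6, 0.8, 0.9):
    print(eps, eps_constraint(routes, eps))
```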

Keywords: combinatorial optimization, Fuzzy Vehicle routing problem, multiple objective programming, possibility theory

Procedia PDF Downloads 485
247 Bending the Consciousnesses: Uncovering Environmental Issues Through Circuit Bending

Authors: Enrico Dorigatti

Abstract:

The growing pile of hazardous e-waste produced especially by developed and wealthy countries gets relentlessly bigger, composed of electric and electronic devices (EEDs) that are often thrown away although still functioning well, mainly due to (programmed) obsolescence. As a consequence, e-waste has taken, over recent years, the shape of a frightful, uncontrollable, and unstoppable phenomenon, fuelled mainly by market policies aiming to maximize sales, and thus profits, at any cost. Against it, governments and organizations have put some effort into developing ambitious frameworks and policies aiming to regulate, in some cases, the whole lifecycle of EEDs, from design to recycling. Incidentally, however, such regulations sometimes make the disposal of devices economically unprofitable, which often translates into growing illegal e-waste trafficking, an activity usually undertaken by criminal organizations. It seems that nothing, at least in the near future, can stop the phenomenon of e-waste production and accumulation. But while, from a practical standpoint, a solution seems hard to find, much can be done regarding people's education, which translates into informing and promoting good practices such as reusing and repurposing. This research argues that circuit bending, an activity rooted in neo-materialist philosophy and post-digital aesthetics and based on repurposing EEDs into novel music instruments and sound generators, could have great potential in this respect. In particular, it asserts that circuit bending can expose ecological, environmental, and social criticalities of current market policies and the economic model, thanks not only to its practical side (e.g., sourcing and repurposing devices) but also to its artistic one (e.g., employing bent instruments for ecologically aware installations and performances). Currently, the relevant literature and debate lack interest in, and information about, the ecological aspects and implications of the practical and artistic sides of circuit bending. This research, therefore, although still at an early stage, aims to fill this gap by investigating, on the one hand, the ecological potential of circuit bending and, on the other, its capacity to sensitize people, through artistic practice, to e-waste-related issues. The methodology articulates in three main steps. First, field research will be undertaken to understand where and how to source, in an ecological and sustainable way, (discarded) EEDs for circuit bending. Second, artistic installations and performances will be organized to sensitize the audience to environmental concerns through sound art and music derived from bent instruments; data, such as audience feedback, will be collected at this stage. The last step will consist of realizing workshops to spread ecologically aware circuit bending practice. Additionally, all the data and findings collected will be made available and disseminated as resources.

Keywords: circuit bending, ecology, sound art, sustainability

Procedia PDF Downloads 171
246 Semiotics of the New Commercial Music Paradigm

Authors: Mladen Milicevic

Abstract:

This presentation addresses how the statistical analysis of digitized popular music influences music creation and emotionally manipulates consumers. Furthermore, it deals with the semiological aspect of the uniformization of musical taste in order to predict the potential revenues generated by popular music sales. In the USA, we live in an age where most popular music (i.e., music that generates substantial revenue) has been digitized. It is safe to say that almost everything produced in the last 10 years is already digitized (available on iTunes, Spotify, YouTube, or some other platform). Depending on marketing viability and its potential to generate additional revenue, most of the "older" music is still being digitized. Once music is turned into a digital audio file, it can be computer-analyzed in all kinds of respects, and the same goes for the lyrics, which also exist as a digital text file to which any kind of NCapture-style analysis may be applied. So, by statistical examination of different popular music metrics such as tempo, form, pronouns, introduction length, song length, archetypes, subject matter, and repetition of the title, the commercial result may be predicted. Polyphonic HMI (Human Media Interface) introduced the concept of the hit song science computer program in 2003. The company asserted that machine learning could create a music profile to predict hit songs from their audio features. Thus, it has been established that a successful pop song must include: 100 bpm or more; an 8-second intro; use of the pronoun 'you' within 20 seconds of the start of the song; a middle-8 bridge between 2 minutes and 2 minutes 30 seconds; an average of 7 repetitions of the title; and the creation of some expectation that is then fulfilled in the title. For a country song: 100 bpm or less for a male artist; a 14-second intro; use of the pronoun 'you' within the first 20 seconds of the intro; a middle-8 bridge between 2 minutes and 2 minutes 30 seconds; 7 repetitions of the title; and the creation of an expectation that is fulfilled within 60 seconds. This approach to commercial popular music minimizes human influence when it comes to which "artist" a record label is going to sign and market. Twenty years ago, music experts in the A&R (Artists and Repertoire) departments of record labels made personal aesthetic judgments based on their extensive experience in the music industry. Now, computer music-analyzing programs are replacing them in an attempt to minimize the investment risk of panicking record labels, in an environment where nobody can predict the future of the recording industry. The impact on consumers' taste, passing through the narrow bottleneck of the music selection described above, has created some very peculiar effects, not only on the taste of popular music consumers but also on the creative chops of music artists. What this semiological shift means is the main focus of this research and presentation.
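
For illustration, the pop-song heuristics quoted above can be turned into a simple checklist score; the field names and the scoring scheme below are invented, only the thresholds follow the text:

```python
# Checklist score against the quoted pop "hit song science" heuristics.
def pop_hit_score(song: dict) -> float:
    checks = [
        song["bpm"] >= 100,
        song["intro_sec"] <= 8,
        song["first_you_sec"] <= 20,        # 'you' within 20 s of the start
        120 <= song["bridge_sec"] <= 150,   # middle 8 between 2:00 and 2:30
        song["title_repeats"] >= 7,
    ]
    return sum(checks) / len(checks)        # fraction of heuristics matched

demo = dict(bpm=104, intro_sec=7, first_you_sec=12,
            bridge_sec=136, title_repeats=8)
print(pop_hit_score(demo))   # 1.0 -> matches every listed heuristic
```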

Keywords: music, semiology, commercial, taste

Procedia PDF Downloads 393
245 4D Modelling of Low Visibility Underwater Archaeological Excavations Using Multi-Source Photogrammetry in the Bulgarian Black Sea

Authors: Rodrigo Pacheco-Ruiz, Jonathan Adams, Felix Pedrotti

Abstract:

This paper introduces the applicability of underwater photogrammetric survey within challenging conditions as the main tool to enhance and enrich the process of documenting archaeological excavation through the creation of 4D models. Photogrammetry was attempted on underwater archaeological sites at least as early as the 1970s, and today the production of traditional 3D models is becoming common practice within the discipline. Underwater photogrammetry is more often implemented to record exposed underwater archaeological remains and less so as a dynamic interpretative tool. It therefore tends to be applied in bright environments when underwater visibility is greater than 1 m, which limits its use on most submerged archaeological sites in more turbid conditions. Recent years have seen significant development of better digital photographic sensors and improvements in optical technology ideal for darker environments. Such developments, in tandem with powerful computing systems for processing, have allowed this research to use underwater photogrammetry as a standard recording and interpretative tool. Using multi-source photogrammetry (five GoPro Hero5 Black cameras), this paper presents the accumulation of daily (4D) underwater surveys carried out at the Early Bronze Age (3,300 BC) to Late Ottoman (17th century AD) archaeological site of Ropotamo in the Bulgarian Black Sea under challenging conditions (< 0.5 m visibility). It proves that underwater photogrammetry can and should be used as one of the main recording methods, even in low light and poor underwater conditions, as a way to better understand the complexity of the underwater archaeological record.

Keywords: 4D modelling, Black Sea Maritime Archaeology Project, multi-source photogrammetry, low visibility underwater survey

Procedia PDF Downloads 236
244 Change Detection Analysis on Support Vector Machine Classifier of Land Use and Land Cover Changes: Case Study on Yangon

Authors: Khin Mar Yee, Mu Mu Than, Kyi Lint, Aye Aye Oo, Chan Mya Hmway, Khin Zar Chi Winn

Abstract:

The dynamic changes of Land Use and Land Cover (LULC) in Yangon have generally resulted in the improvement of human welfare and economic development over the last twenty years. Mapping LULC is crucially important for the sustainable development of the environment. However, exact data on how environmental factors influence the LULC situation at various scales are hard to obtain, because the natural environment is composed of non-homogeneous surface features, so the satellite data also contain mixed pixels. The main objective of this study is the calculation of accuracy based on change detection of LULC changes by Support Vector Machines (SVMs). For this research work, the main data were satellite images from 1996, 2006 and 2015. Change detection statistics were computed to compile a detailed tabulation of changes between two classification images, and the SVM process was applied with a soft approach at the allocation as well as at the testing stage to achieve higher accuracy. The results of this paper showed that vegetation and cultivated areas decreased (an average total of 29% from 1996 to 2015) because of conversion to built-up area, which more than doubled (an average total of 30% from 1996 to 2015). The error matrix and confidence limits led to the validation of the result for LULC mapping.
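As a concrete illustration of the workflow, the sketch below classifies two dates of imagery with a soft (probabilistic) SVM and tabulates the from/to change matrix. It uses scikit-learn with randomly generated stand-in data; the band counts, class labels, and variable names are all hypothetical, since the abstract does not specify the software used.

```python
# A minimal sketch of SVM-based LULC change detection, assuming each scene
# is available as an (n_pixels, n_bands) array with labels for a training
# subset; all data below are random stand-ins for the real imagery.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

x_train = rng.random((500, 4))      # band values of labelled training pixels
y_train = rng.integers(0, 3, 500)   # 0 = vegetation, 1 = cultivated, 2 = built-up
scene_1996 = rng.random((10_000, 4))
scene_2015 = rng.random((10_000, 4))

# Soft (probabilistic) SVM, echoing the "soft approach at allocation".
clf = SVC(kernel="rbf", probability=True).fit(x_train, y_train)
labels_1996 = clf.predict(scene_1996)
labels_2015 = clf.predict(scene_2015)

# Change-detection statistics: a from/to tabulation between the two dates.
n_classes = 3
change_matrix = np.zeros((n_classes, n_classes), dtype=int)
np.add.at(change_matrix, (labels_1996, labels_2015), 1)
print(change_matrix)  # rows: 1996 class, columns: 2015 class
```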

Keywords: land use and land cover change, change detection, image processing, support vector machines

Procedia PDF Downloads 138
243 Gulfnet: The Advent of Computer Networking in Saudi Arabia and Its Social Impact

Authors: Abdullah Almowanes

Abstract:

The speed of adoption of new information and communication technologies is often seen as an indicator of the growth of knowledge- and technological innovation-based regional economies. Indeed, technological progress and scientific inquiry in any society have undergone a particularly profound transformation with the introduction of computer networks. In the spring of 1981, the Bitnet network was launched to link thousands of nodes all over the world. In 1985, as one of the first adopters of Bitnet, Saudi Arabia launched a Bitnet-based network named Gulfnet that linked the computer centers, universities, and libraries of Saudi Arabia and other Gulf countries through high-speed communication lines. In this paper, the origins and the deployment of Gulfnet are discussed, as well as the social, economic, political, and cultural ramifications of the new information reality created by the network. Despite its significance, the social and cultural aspects of Gulfnet have not previously been investigated to a satisfactory degree in the history of science and technology literature. The presented research is based on extensive archival research aimed at seeking out and analyzing primary evidence from archival sources and records. During its decade-and-a-half-long existence, Gulfnet demonstrated that the scope and functionality of public computer networks in Saudi Arabia had to be fine-tuned for compliance with the Islamic culture and political system of the country. It also helped lay the groundwork for the subsequent introduction of the Internet. Since the 1980s, in just a few decades, the proliferation of computer networks has transformed communications worldwide.

Keywords: Bitnet, computer networks, computing and culture, Gulfnet, Saudi Arabia

Procedia PDF Downloads 245
242 Managed Aquifer Recharge (MAR) for the Management of Stormwater on the Cape Flats, Cape Town

Authors: Benjamin Mauck, Kevin Winter

Abstract:

The city of Cape Town in South Africa has shown consistent economic and population growth in the last few decades, and that growth is expected to continue into the future. These projected economic and population growth rates are set to place additional pressure on the city’s already strained water supply system. Thus, given Cape Town’s water scarcity, increasing water demands and stressed water supply system, coupled with global awareness around the issues of sustainable development, environmental protection and climate change, alternative water management strategies are required to ensure water is sustainably managed. Water Sensitive Urban Design (WSUD) is an approach to sustainable urban water management that attempts to assign a resource value to all forms of water in the urban context, viz. stormwater, wastewater, potable water and groundwater. WSUD employs a wide range of strategies to improve the sustainable management of urban water, such as water reuse, the development of alternative supply sources, sustainable stormwater management, and the enhancement of the aesthetic and recreational value of urban water. Managed Aquifer Recharge (MAR) is one WSUD strategy which has proven successful in a number of places around the world. MAR is the process whereby an aquifer is intentionally or artificially recharged, which provides a valuable means of water storage while enhancing the aquifer’s supply potential. This paper investigates the feasibility of implementing MAR in the sandy, unconfined Cape Flats Aquifer (CFA) in Cape Town. The main objective of the study is to assess whether MAR is a viable strategy for stormwater management on the Cape Flats, aiding the prevention or mitigation of the seasonal flooding that occurs there, while also improving the supply potential of the aquifer. This involves infiltrating stormwater into the CFA during the wet winter months and, in turn, abstracting from the CFA during the dry summer months for fit-for-purpose uses, in order to optimise the recharge and storage capacity of the CFA. The fully integrated MIKE SHE model is used in this study to simulate both surface water and groundwater hydrology. This modelling approach enables the testing of the various potential recharge and abstraction scenarios required for implementation of MAR on the Cape Flats. Further MIKE SHE scenario analysis under projected future climate scenarios provides insight into the performance of MAR as a stormwater management strategy under climate change conditions. Scenario analysis using an integrated model such as MIKE SHE is a valuable tool for evaluating the feasibility of MAR as a stormwater management strategy and its potential to contribute towards improving Cape Town’s water security into the future.
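To make the seasonal recharge-abstraction logic concrete, here is a toy monthly water balance in Python: stormwater is infiltrated during the wet winter months and abstracted for fit-for-purpose use in summer. All volumes, months, and the storage-capacity cap are hypothetical illustrations; they are not outputs of, or a substitute for, the MIKE SHE modelling used in the study.

```python
# A toy monthly water balance for a MAR scheme: recharge the aquifer with
# winter stormwater and abstract in summer. All numbers are hypothetical
# (Cape Town's wet season is roughly May-August).
WINTER = {5, 6, 7, 8}

def simulate_mar(months, recharge_mm, abstraction_mm,
                 initial_storage_mm=0.0, capacity_mm=500.0):
    """Track aquifer storage (mm over the recharge area) month by month."""
    storage = initial_storage_mm
    trace = []
    for m in months:
        if m in WINTER:
            storage = min(storage + recharge_mm, capacity_mm)  # infiltrate stormwater
        else:
            storage = max(storage - abstraction_mm, 0.0)       # fit-for-purpose supply
        trace.append((m, round(storage, 1)))
    return trace

for month, level in simulate_mar(range(1, 13), recharge_mm=80, abstraction_mm=35):
    print(f"month {month:2d}: storage {level} mm")
```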

Keywords: managed aquifer recharge, stormwater management, Cape Flats Aquifer, MIKE SHE

Procedia PDF Downloads 248
241 Field-Free Orbital Hall Current-Induced Deterministic Switching in the Mo/Co₇₁Gd₂₉/Ru Structure

Authors: Zelalem Abebe Bekele, Kun Lei, Xiukai Lan, Xiangyu Liu, Hui Wen, Kaiyou Wang

Abstract:

Spin-polarized currents offer an efficient means of manipulating the magnetization of a ferromagnetic layer for big data and neuromorphic computing. Research has shown that the orbital Hall effect (OHE) can produce orbital currents that potentially surpass the counterpart spin currents induced by the spin Hall effect. However, it is essential to note that orbital currents alone cannot exert torque directly on a ferromagnetic layer, necessitating a conversion process from orbital to spin currents. Here, we present an efficient method for achieving perpendicularly magnetized spin-orbit torque (SOT) switching by harnessing the localized orbital Hall current generated from a Mo layer within a Mo/CoGd device. Our investigation reveals a remarkable enhancement in the interface-induced planar Hall effect (PHE) within the Mo/CoGd bilayer, resulting in the generation of a z-polarized planar current for manipulating the magnetization of the CoGd layer without the need for an in-plane magnetic field. Furthermore, the Mo layer induces an out-of-plane orbital current, boosting the in-plane and out-of-plane spin polarization by converting the orbital current into spin current within the dual-property CoGd layer. At the optimal Mo layer thickness, a low critical magnetization switching current density of 2.51×10⁶ A cm⁻² is achieved. This breakthrough opens avenues for all-electrical, energy-efficient control of magnetization switching through orbital currents, advancing the field of spin-orbitronics.
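For a sense of scale, the reported critical current density can be converted into an absolute switching current once a device cross-section is assumed. The short sketch below does this arithmetic for a hypothetical Hall-bar geometry; the width and thickness are illustrative assumptions, as the abstract does not give device dimensions.

```python
# Convert the reported critical switching current density into an absolute
# current for a hypothetical device cross-section; the Hall-bar dimensions
# below are illustrative assumptions, not values from the abstract.
J_C = 2.51e6            # A/cm^2, critical current density from the abstract

width_um = 5.0          # hypothetical Hall-bar width
thickness_nm = 10.0     # hypothetical total conducting-layer thickness

# Cross-sectional area in cm^2: 5 um = 5e-4 cm, 10 nm = 1e-6 cm.
area_cm2 = (width_um * 1e-4) * (thickness_nm * 1e-7)
I_c = J_C * area_cm2
print(f"critical switching current ~ {I_c * 1e3:.2f} mA")  # ~ 1.26 mA
```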

Keywords: spin-orbit torque, orbital Hall effect, spin Hall current, orbital Hall current, interface-generated planar Hall current, anisotropic magnetoresistance

Procedia PDF Downloads 55