Search results for: architectures
206 Implementation and Modeling of a Quadrotor
Authors: Ersan Aktas, Eren Turanoğuz
Abstract:
In this study, a quad-rotor electrically driven unmanned aerial vehicle system is designed and modeled using fundamental dynamic equations. The mechanical, electronic, and control systems of the air vehicle are then designed and implemented. Brushless motor speeds are adjusted via electronic speed controllers in order to achieve the desired controllability. The vehicle's fundamental Euler angles (i.e., roll, pitch, and yaw) are obtained from an AHRS sensor. These angles are provided as inputs to the control algorithm that runs on the soft processor of the electronic card, where the vehicle control algorithm is implemented. A controller is designed and tuned for each Euler angle. Finally, flight tests have been performed to observe and improve the flight characteristics.
Keywords: quadrotor, UAS applications, control architectures, PID
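The abstract mentions a PID controller tuned per Euler angle but gives no implementation details; below is a minimal, hypothetical sketch of such an attitude loop in Python. The gains and loop rate are assumptions for illustration, not values from the paper.

```python
class PID:
    """Simple single-axis PID controller (illustrative gains only)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# One controller per Euler angle, run at a fixed loop rate (assumed 100 Hz).
dt = 0.01
roll_pid  = PID(kp=4.5, ki=0.02, kd=1.2, dt=dt)
pitch_pid = PID(kp=4.5, ki=0.02, kd=1.2, dt=dt)
yaw_pid   = PID(kp=2.0, ki=0.01, kd=0.5, dt=dt)

def attitude_step(setpoints, ahrs_angles):
    """Map AHRS angles (roll, pitch, yaw) to corrective commands."""
    return tuple(pid.update(sp, angle)
                 for pid, sp, angle in zip((roll_pid, pitch_pid, yaw_pid),
                                           setpoints, ahrs_angles))
```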
Procedia PDF Downloads 362
205 Learning to Transform, Transforming to Learn: An Exploration of Teacher Professional Learning in the 4Cs (Communication, Collaboration, Creativity and Critical Reflection) in the Primary (K-6) Setting
Authors: Susan E Orlovich
Abstract:
Ongoing, effective teacher professional learning is acknowledged as a critical influence on teacher practice. However, it is unclear whether the elements of effective professional learning result in transformed teacher practice in the classroom. This research project is interested in 4C teacher professional learning. The professional learning practices that assist teachers in transforming their practice to integrate the 4C capabilities seldom feature in the academic literature. The 4Cs are a shorthand way of representing the concepts of communication, collaboration, creativity, and critical reflection and refer to the capabilities needed for deeper learning, personal growth, and effective participation in society. The New South Wales curriculum review (2020) acknowledges that identifying, teaching, and assessing the 4C capabilities are areas of challenge for teachers. However, it also recognises that it is essential for teachers to build the confidence and capacity to understand, teach and assess the capabilities necessary for learners to thrive in the 21st century. This qualitative research project explores the professional learning experiences of sixteen teachers in four different primary (K-6) settings in Sydney, Australia, who are learning to integrate, teach and assess the 4Cs. The project draws on the Theory of Practice Architectures as a framework to analyse and interpret teachers' experiences in each site. The sixteen participants in the study are teachers from four primary settings and include early career teachers, experienced teachers, and teachers in leadership roles (including the principal). In addition, some of the participants are teachers who are learning within a Community of Practice (CoP), as their school setting is engaged in a 4C professional learning Community of Practice. Qualitative and arts-informed research methods are utilised to examine the cultural-discursive, social-political, and material-economic practice arrangements of each site, explore how these arrangements may have shaped the professional learning experiences of teachers, and, in turn, influenced the teaching of the 4Cs in the setting. The research is in the data analysis stage (October 2022), with preliminary findings pending. The research objective is to investigate the elements of the professional learning experiences undertaken by teachers to teach the 4Cs in the primary setting. The lens of practice architectures theory is used to identify the influence of the practice architectures on critical praxis in each site and to examine how the practice arrangements enable or constrain the teaching of the 4C capabilities. This research aims to offer deep insight into the practice arrangements which may enable or constrain teacher professional learning in the 4Cs. Such insight may contribute to a better understanding of the practices that enable teachers to transform their practice to achieve the integration, teaching, and assessment of the 4C capabilities.
Keywords: 4Cs, communication, collaboration, creativity, critical reflection, teacher professional learning
Procedia PDF Downloads 107
204 MULTI-FLGANs: Multi-Distributed Adversarial Networks for Non-Independent and Identically Distributed Distribution
Authors: Akash Amalan, Rui Wang, Yanqi Qiao, Emmanouil Panaousis, Kaitai Liang
Abstract:
Federated learning is an emerging concept in the domain of distributed machine learning. This concept has enabled Generative Adversarial Networks (GANs) to benefit from the rich distributed training data while preserving privacy. However, in a non-IID setting, current federated GAN architectures are unstable, struggle to learn the distinct features, and are vulnerable to mode collapse. In this paper, we propose MULTI-FLGAN, an architecture that addresses the problems of low-quality images, mode collapse, and instability for non-IID datasets. Our results show that MULTI-FLGAN is four times as stable and performant (i.e., achieves a high inception score) on average over 20 clients compared to the baseline FLGAN.
Keywords: federated learning, generative adversarial network, inference attack, non-IID data distribution
Procedia PDF Downloads 158
203 Bioclimatic Devices in the Historical Rural Building: A Carried out Analysis on Some Rural Architectures in Puglia
Authors: Valentina Adduci
Abstract:
This ongoing research aims to define the general criteria of environmental sustainability of rural buildings in Puglia, and of the manor farm in particular. The main part of the study analyzes the relationship of dependence between the rural building and the landscape which, after many stratifications, remains clearly identifiable and is sometimes even characterized in a positive way. The location of the manor farm, in fact, is often conditioned by the infrastructural network and by the structure of the agricultural landscape. Free from the constraints imposed by dense urban patterns, the manor farm developed according to a settlement logic that gives priority to environmental aspects. These vernacular architectures are the most valuable example of how our ancestors planned their dwellings in accordance with nature. The 237 farms that are the object of this analysis have been mapped in a GIS; each has been assigned a symbol identifying its architectural typology and a color identifying its historical period of construction. A datasheet template has been drawn up, which has made possible a deeper understanding of each manor farm. This method allows a faster comparison of the most recurrent characteristics across all the buildings considered, except for those farms that benefited from special geographical conditions, such as proximity to the road network or to waterways. Some of the most frequently recurring constants derived from the statistical study of the examined buildings are: southeast orientation of the main facade; placement of the sheep pen on sloping ground exposed to the south; larger windowed surface on the south elevation; smaller windowed surface on the north elevation; presence of shielding vegetation near the elevations most exposed to solar radiation; food storage rooms located on the ground floor or in the basement; animal shelters located on the north side of the farm; presence of tanks and wells, sometimes combined with a very accurate stormwater channeling system; thick masonry walls, inside which hollow spaces were often obtained to house stairwells or food storage depots; exclusive use of local building materials. The research aims to trace the ancient use of bioclimatic construction techniques in Apulian rural architecture and to distinguish those that derive from empirical knowledge from those that respond to an already codified design. These constructive expedients are especially useful for obtaining effective passive cooling, promoting natural ventilation, and building ingenious systems for the recovery and preservation of rainwater; they are still found in some of the manor farms analyzed, most of which are today in a serious state of neglect.
Keywords: bioclimatic devices, farmstead, rural landscape, sustainability
Procedia PDF Downloads 381
202 Trusted Neural Network: Reversibility in Neural Networks for Network Integrity Verification
Authors: Malgorzata Schwab, Ashis Kumer Biswas
Abstract:
In this concept paper, we explore the topic of reversibility in neural networks leveraged for network integrity verification and craft the term "Trusted Neural Network" (TNN), paired with an API abstraction around it, to embrace the idea formally. This newly proposed, high-level, generalizable TNN model builds upon the Invertible Neural Network architecture, trained simultaneously in both forward and reverse directions. This allows the original system inputs to be compared with the ones reconstructed from the outputs in the reversed flow to assess the integrity of the end-to-end inference flow. The outcome of that assessment is captured as an Integrity Score. Concrete implementations reflecting the needs of specific problem domains can be derived from this general approach, as demonstrated in the experiments. The model aspires to become a useful practice in drafting high-level system architectures which incorporate AI capabilities.
Keywords: trusted, neural, invertible, API
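As a rough illustration of the integrity check described above, the sketch below reconstructs inputs from the outputs of an invertible mapping and reduces the discrepancy to a single score. The toy orthogonal layer, the similarity measure, and the interpretation are assumptions for demonstration, not the authors' implementation.

```python
import numpy as np

def forward(x, W):
    """Toy invertible layer: an orthogonal linear map stands in for a trained INN."""
    return x @ W

def inverse(y, W):
    """Reverse pass: for an orthogonal W the inverse is simply the transpose."""
    return y @ W.T

def integrity_score(x, y, W):
    """Compare original inputs with inputs reconstructed from the outputs."""
    x_rec = inverse(y, W)
    err = np.linalg.norm(x - x_rec) / (np.linalg.norm(x) + 1e-12)
    return 1.0 - err  # 1.0 means perfect reconstruction

rng = np.random.default_rng(0)
W, _ = np.linalg.qr(rng.normal(size=(8, 8)))  # orthogonal, hence invertible
x = rng.normal(size=(4, 8))
y = forward(x, W)
print(integrity_score(x, y, W))        # ~1.0: inference flow intact
print(integrity_score(x, y + 0.5, W))  # degraded score: tampered outputs
```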
Procedia PDF Downloads 145
201 Developing a Leukemia Diagnostic System Based on Hybrid Deep Learning Architectures in Actual Clinical Environments
Authors: Skyler Kim
Abstract:
An early diagnosis of leukemia has always been a challenge to doctors and hematologists. On a worldwide basis, it was reported that there were approximately 350,000 new cases in 2012, and diagnosing leukemia was time-consuming and inefficient because of an endemic shortage of flow cytometry equipment in current clinical practice. As the number of medical diagnosis tools increased and a large volume of high-quality data was produced, there was an urgent need for more advanced data analysis methods. One of these methods was the AI approach. This approach has become a major trend in recent years, and several research groups have been working on developing these diagnostic models. However, designing and implementing a leukemia diagnostic system in real clinical environments based on a deep learning approach with larger sets remains complex. Leukemia is a major hematological malignancy that results in mortality and morbidity throughout different ages. We decided to select acute lymphocytic leukemia to develop our diagnostic system since acute lymphocytic leukemia is the most common type of leukemia, accounting for 74% of all children diagnosed with leukemia. The results from this development work can be applied to all other types of leukemia. To develop our model, the Kaggle dataset was used, which consists of 15135 total images, 8491 of these are images of abnormal cells, and 5398 images are normal. In this paper, we design and implement a leukemia diagnostic system in a real clinical environment based on deep learning approaches with larger sets. The proposed diagnostic system has the function of detecting and classifying leukemia. Different from other AI approaches, we explore hybrid architectures to improve the current performance. First, we developed two independent convolutional neural network models: VGG19 and ResNet50. Then, using both VGG19 and ResNet50, we developed a hybrid deep learning architecture employing transfer learning techniques to extract features from each input image. In our approach, fusing the features from specific abstraction layers can be deemed as auxiliary features and lead to further improvement of the classification accuracy. In this approach, features extracted from the lower levels are combined into higher dimension feature maps to help improve the discriminative capability of intermediate features and also overcome the problem of network gradient vanishing or exploding. By comparing VGG19 and ResNet50 and the proposed hybrid model, we concluded that the hybrid model had a significant advantage in accuracy. The detailed results of each model’s performance and their pros and cons will be presented in the conference.Keywords: acute lymphoblastic leukemia, hybrid model, leukemia diagnostic system, machine learning
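The abstract outlines a hybrid architecture that fuses features extracted by VGG19 and ResNet50. The PyTorch sketch below shows one plausible way to concatenate the two feature streams into a joint classifier; it is a simplified variant that fuses only the final pooled features of each backbone (the paper also combines lower-level feature maps), and the head sizes and two-class output are assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

class HybridLeukemiaNet(nn.Module):
    """Illustrative fusion of VGG19 and ResNet50 features (assumed design)."""
    def __init__(self, num_classes=2):
        super().__init__()
        vgg = models.vgg19(weights=None)      # use pretrained weights for transfer learning
        resnet = models.resnet50(weights=None)
        self.vgg_features = nn.Sequential(vgg.features, nn.AdaptiveAvgPool2d(1))
        self.resnet_features = nn.Sequential(*list(resnet.children())[:-1])
        self.classifier = nn.Sequential(
            nn.Linear(512 + 2048, 256), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(256, num_classes),
        )

    def forward(self, x):
        f1 = torch.flatten(self.vgg_features(x), 1)     # (B, 512)
        f2 = torch.flatten(self.resnet_features(x), 1)  # (B, 2048)
        return self.classifier(torch.cat([f1, f2], dim=1))

model = HybridLeukemiaNet()
logits = model(torch.randn(2, 3, 224, 224))  # two cell images: normal vs. abnormal
print(logits.shape)  # torch.Size([2, 2])
```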
Procedia PDF Downloads 186
200 Soybean Seed Composition Prediction From Standing Crops Using Planet Scope Satellite Imagery and Machine Learning
Authors: Supria Sarkar, Vasit Sagan, Sourav Bhadra, Meghnath Pokharel, Felix B.Fritschi
Abstract:
Soybeans and their derivatives are very important agricultural commodities around the world because of their wide applicability in human food, animal feed, biofuel, and industry. However, the significance of soybean production depends on the quality of the soybean seeds rather than the yield alone. Seed composition is widely dependent on plant physiological properties, aerobic and anaerobic environmental conditions, nutrient content, and plant phenological characteristics, which can be captured by high temporal resolution remote sensing datasets. PlanetScope (PS) satellite images have high potential for capturing sequential information on crop growth due to their frequent revisits throughout the world. In this study, we estimate soybean seed composition while the plants are in the field by utilizing PS satellite images and different machine learning algorithms. Several experimental fields were established with varying genotypes, and different seed compositions were measured from the samples as ground truth data. The PS images were processed to extract 462 hand-crafted vegetative and textural features. Four machine learning algorithms, i.e., partial least squares regression (PLSR), random forest regression (RFR), gradient boosting machine (GBM), and support vector regression (SVR), and two recurrent neural network architectures, i.e., long short-term memory (LSTM) and gated recurrent unit (GRU), were used in this study to predict the oil, protein, sucrose, ash, starch, and fiber content of soybean seed samples. The GRU and LSTM architectures had two separate branches, one for vegetative features and the other for textural features, which were later concatenated together to predict seed composition. The results show that sucrose, ash, protein, and oil yielded comparable prediction results. The machine learning algorithms that best predicted the six seed composition traits differed. GRU worked well for oil (R-squared: 0.53) and protein (R-squared: 0.36), whereas SVR and PLSR showed the best results for sucrose (R-squared: 0.74) and ash (R-squared: 0.60), respectively. Although RFR and GBM provided comparable performance, these models tended to overfit severely. Among the features, vegetative features were found to be the most important variables compared to texture features. It is suggested to utilize many vegetation indices for machine learning training and to select the best ones by using feature selection methods. Overall, the study reveals the feasibility and efficiency of PS images and machine learning for plot-level seed composition estimation. However, special care should be given to designing the plot size in the experiments to avoid mixed pixel issues.
Keywords: agriculture, computer vision, data science, geospatial technology
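The two-branch recurrent model described above (one branch for vegetative features, one for textural features, concatenated before the regression head) could look roughly like the following PyTorch sketch. The split of the 462 features into 300 vegetative and 162 textural, the hidden sizes, and the sequence length are assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

class TwoBranchGRU(nn.Module):
    """Illustrative two-branch GRU for seed-composition regression (assumed sizes)."""
    def __init__(self, n_veg=300, n_tex=162, hidden=64, n_traits=6):
        super().__init__()
        self.veg_gru = nn.GRU(n_veg, hidden, batch_first=True)
        self.tex_gru = nn.GRU(n_tex, hidden, batch_first=True)
        self.head = nn.Sequential(nn.Linear(2 * hidden, 64), nn.ReLU(),
                                  nn.Linear(64, n_traits))

    def forward(self, veg_seq, tex_seq):
        # Each input: (batch, time steps of PlanetScope acquisitions, features)
        _, h_veg = self.veg_gru(veg_seq)
        _, h_tex = self.tex_gru(tex_seq)
        fused = torch.cat([h_veg[-1], h_tex[-1]], dim=1)  # concatenate branch states
        return self.head(fused)  # oil, protein, sucrose, ash, starch, fiber

model = TwoBranchGRU()
pred = model(torch.randn(8, 12, 300), torch.randn(8, 12, 162))
print(pred.shape)  # torch.Size([8, 6])
```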
Procedia PDF Downloads 136
199 USE-Net: SE-Block Enhanced U-Net Architecture for Robust Speaker Identification
Authors: Kilari Nikhil, Ankur Tibrewal, Srinivas Kruthiventi S. S.
Abstract:
Conventional speaker identification systems often fall short of capturing the diverse variations present in speech data due to fixed-scale architectures. In this research, we propose a CNN-based architecture, USENet, designed to overcome these limitations. Leveraging two key techniques, our approach achieves superior performance on the VoxCeleb 1 Dataset without any pre-training. Firstly, we adopt a U-net-inspired design to extract features at multiple scales, empowering our model to capture speech characteristics effectively. Secondly, we introduce the squeeze and excitation block to enhance spatial feature learning. The proposed architecture showcases significant advancements in speaker identification, outperforming existing methods, and holds promise for future research in this domain.Keywords: multi-scale feature extraction, squeeze and excitation, VoxCeleb1 speaker identification, mel-spectrograms, USENet
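A squeeze-and-excitation block of the kind referenced above can be written in a few lines of PyTorch. This is the standard formulation (Hu et al.), shown for illustration rather than the exact USE-Net configuration; the channel count and reduction ratio are placeholders.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation: channel-wise reweighting of feature maps."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # squeeze: global spatial average
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)  # excitation weights
        return x * w                                            # rescale channels

# e.g., applied to a mel-spectrogram feature map inside a U-Net encoder stage
feats = torch.randn(4, 64, 40, 100)
print(SEBlock(64)(feats).shape)  # torch.Size([4, 64, 40, 100])
```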
Procedia PDF Downloads 72
198 Assessment of High Frequency Solidly Mounted Resonator as Viscosity Sensor
Authors: Vinita Choudhary
Abstract:
Solidly mounted resonators (SMRs) based on ZnO piezoelectric material, operating at a frequency of 3.96 GHz with a 6.49% coupling factor, are used to characterize liquids with different viscosities. The behavior of the sensor is analyzed using finite element modeling. The device architecture encapsulates a bulk acoustic wave resonator with a Mo/SiO₂ Bragg mirror reflector and the silicon substrate. The proposed SMR sensor is based on the mass-loading effect: the resonant frequency of the resonator shifts because of the increased density due to the absorbed liquids (water, acetone, olive oil) used in the theoretical calculation. The sensitivity of the sensor ranges from 0.238 MHz/mPa.s to 83.33 MHz/mPa.s, as supported by the Kanazawa model. The obtained results are also compared with previous work on BAW viscosity sensors.
Keywords: solidly mounted resonator, Bragg mirror, Kanazawa model, finite element model
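The Kanazawa model invoked above relates the resonant-frequency shift to the density-viscosity product of the liquid load. The short script below evaluates that relation for the three test liquids; the liquid properties and the resonator-layer constants (quartz values used as placeholders, not ZnO or the paper's FEM parameters) are assumed figures for illustration only.

```python
from math import pi, sqrt

f0 = 3.96e9                      # resonant frequency of the SMR, Hz (from the abstract)
rho_q, mu_q = 2648.0, 2.947e10   # density (kg/m^3) and shear modulus (Pa) of the
                                 # resonator layer: quartz placeholders, not ZnO values

def kanazawa_shift(rho_liquid, eta_liquid):
    """Kanazawa-Gordon frequency shift for a Newtonian liquid load (Hz)."""
    return -f0**1.5 * sqrt(rho_liquid * eta_liquid / (pi * rho_q * mu_q))

liquids = {                      # density kg/m^3, viscosity Pa.s (approximate values)
    "water":     (998.0, 1.0e-3),
    "acetone":   (784.0, 0.32e-3),
    "olive oil": (911.0, 84e-3),
}
for name, (rho, eta) in liquids.items():
    print(f"{name:9s}: df = {kanazawa_shift(rho, eta) / 1e6:8.2f} MHz")
```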
Procedia PDF Downloads 80
197 Performance Analysis and Optimization for Diagonal Sparse Matrix-Vector Multiplication on Machine Learning Unit
Authors: Qiuyu Dai, Haochong Zhang, Xiangrong Liu
Abstract:
Diagonal sparse matrix-vector multiplication is a well-studied topic in the fields of scientific computing and big data processing. However, when diagonal sparse matrices are stored in DIA format, there can be a significant number of padded zero elements and scattered points, which can lead to a degradation in the performance of the current DIA kernel. This can also lead to excessive consumption of computational and memory resources. In order to address these issues, the authors propose the DIA-Adaptive scheme and its kernel, which leverages the parallel instruction sets on MLU. The researchers analyze the effect of allocating a varying number of threads, clusters, and hardware architectures on the performance of SpMV using different formats. The experimental results indicate that the proposed DIA-Adaptive scheme performs well and offers excellent parallelism.Keywords: adaptive method, DIA, diagonal sparse matrices, MLU, sparse matrix-vector multiplication
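For readers unfamiliar with the DIA storage scheme discussed above, the following NumPy sketch shows a reference diagonal-format sparse matrix-vector product. It illustrates the padded zeros that the DIA-Adaptive kernel tries to avoid; it is a plain CPU reference, not the authors' MLU implementation, and the storage convention used here is an assumption.

```python
import numpy as np

def dia_spmv(data, offsets, x):
    """y = A @ x where A is stored in DIA format.
    data[k, i] holds A[i, i + offsets[k]]; out-of-range slots are padded zeros."""
    n = x.size
    y = np.zeros(n)
    for k, off in enumerate(offsets):
        rows = np.arange(max(0, -off), min(n, n - off))
        y[rows] += data[k, rows] * x[rows + off]
    return y

# Tridiagonal example with offsets -1, 0, +1
n = 5
offsets = np.array([-1, 0, 1])
data = np.zeros((3, n))
data[0, 1:] = -1.0   # sub-diagonal (slot 0 is padding)
data[1, :] = 2.0     # main diagonal
data[2, :-1] = -1.0  # super-diagonal (last slot is padding)
x = np.arange(1.0, n + 1)
print(dia_spmv(data, offsets, x))  # [0. 0. 0. 0. 6.]
```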
Procedia PDF Downloads 132
196 Visual Odometry and Trajectory Reconstruction for UAVs
Authors: Sandro Bartolini, Alessandro Mecocci, Alessio Medaglini
Abstract:
The growing popularity of systems based on unmanned aerial vehicles (UAVs) is highlighting their vulnerability, particularly in relation to the positioning system used. Typically, UAV architectures use the civilian GPS, which is exposed to a number of different attacks, such as jamming or spoofing. This is why it is important to develop alternative methodologies to accurately estimate the actual UAV position without relying on GPS measurements only. In this paper, we propose a position estimate method for UAVs based on monocular visual odometry. We have developed a flight control system capable of keeping track of the entire trajectory travelled, with a reduced dependency on the availability of GPS signals. Moreover, the simplicity of the developed solution makes it applicable to a wide range of commercial drones. The final goal is to allow for safer flights in all conditions, even under cyber-attacks trying to deceive the drone.Keywords: visual odometry, autonomous uav, position measurement, autonomous outdoor flight
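A minimal monocular visual-odometry step of the kind the abstract relies on can be assembled from standard OpenCV primitives, as sketched below (ORB matching, essential-matrix estimation, pose recovery). The camera intrinsics and the feature/matching choices are assumptions, and monocular VO recovers translation only up to scale; the authors' pipeline is not necessarily this one.

```python
import cv2
import numpy as np

K = np.array([[700.0, 0, 320.0],   # assumed camera intrinsics
              [0, 700.0, 240.0],
              [0, 0, 1.0]])

orb = cv2.ORB_create(2000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def relative_pose(frame_prev, frame_curr):
    """Estimate rotation R and unit-scale translation t between two grayscale frames."""
    kp1, des1 = orb.detectAndCompute(frame_prev, None)
    kp2, des2 = orb.detectAndCompute(frame_curr, None)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:500]
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t

# Trajectory reconstruction: chain the relative poses frame by frame,
# e.g. pose_t = pose_t + pose_R @ t; pose_R = pose_R @ R
# (the scale would be fixed externally, for instance by an altimeter).
```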
Procedia PDF Downloads 215
195 IACOP - Route Optimization in Wireless Networks Using Improved Ant Colony Optimization Protocol
Authors: S. Vasundra, D. Venkatesh
Abstract:
Wireless networks have gone through extraordinary growth in the past few years and will keep playing a crucial role in future data communication. Present wireless networks aim to make communication possible anywhere and anytime. With the convergence of mobile and wireless communications with Internet services, the boundary between mobile personal telecommunications and wireless computer networks is disappearing. Wireless networks of the next generation need the support of all the advances in new architectures, standards, and protocols. Since an ad hoc network may consist of a large number of mobile hosts, this imposes a significant challenge on the design of an effective and efficient routing protocol that can work well in an environment with frequent topological changes. This paper proposes an improved ant colony optimization (IACO) technique that also maintains load balancing in wireless networks. The simulation results show that the proposed IACO performs better than existing routing techniques.
Keywords: wireless networks, ant colony optimization, load balancing, architecture
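The core of any ant-colony routing scheme such as the IACO described above is a probabilistic next-hop choice driven by pheromone and heuristic desirability, followed by evaporation and reinforcement. The generic sketch below illustrates that loop; the parameter values and the load-balancing heuristic are illustrative assumptions, not the authors' protocol.

```python
import random

alpha, beta, rho, Q = 1.0, 2.0, 0.1, 1.0   # pheromone weight, heuristic weight,
                                           # evaporation rate, deposit constant

def choose_next_hop(current, neighbors, pheromone, heuristic):
    """Pick the next node with probability proportional to tau^alpha * eta^beta."""
    weights = [(pheromone[(current, n)] ** alpha) * (heuristic[(current, n)] ** beta)
               for n in neighbors]
    total = sum(weights)
    return random.choices(neighbors, weights=[w / total for w in weights])[0]

def update_pheromone(pheromone, path, path_cost):
    """Evaporate everywhere, then reinforce the links of a completed route."""
    for link in pheromone:
        pheromone[link] *= (1.0 - rho)
    for u, v in zip(path, path[1:]):
        pheromone[(u, v)] += Q / path_cost   # shorter / less-loaded routes get more

# Tiny demo: heuristic could combine inverse delay with residual queue capacity
# to provide the load balancing emphasised in the abstract.
pheromone = {("A", "B"): 1.0, ("A", "C"): 1.0}
heuristic = {("A", "B"): 1 / 2.0, ("A", "C"): 1 / 5.0}
print(choose_next_hop("A", ["B", "C"], pheromone, heuristic))
```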
Procedia PDF Downloads 420
194 Understanding and Improving Neural Network Weight Initialization
Authors: Diego Aguirre, Olac Fuentes
Abstract:
In this paper, we present a taxonomy of weight initialization schemes used in deep learning. We survey the most representative techniques in each class and compare them in terms of overhead cost, convergence rate, and applicability. We also introduce a new weight initialization scheme. In this technique, we perform an initial feedforward pass through the network using an initialization mini-batch. Using statistics obtained from this pass, we initialize the weights of the network so that the following properties are met: 1) weight matrices are orthogonal; 2) ReLU layers produce a predetermined number of non-zero activations; 3) the output produced by each internal layer has unit variance; 4) weights in the last layer are chosen to minimize the error on the initial mini-batch. We evaluate our method on three popular architectures, and faster convergence rates are achieved on the MNIST, CIFAR-10/100, and ImageNet datasets when compared to state-of-the-art initialization techniques.
Keywords: deep learning, image classification, supervised learning, weight initialization
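The scheme summarised above is data-dependent: weights start orthogonal and are then rescaled using statistics from an initialization mini-batch. The sketch below captures only those two properties (in the spirit of LSUV-style initialization) and omits the non-zero-activation and last-layer least-squares steps; layer sizes and the mini-batch are arbitrary, so it is not the authors' exact procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def orthogonal(shape):
    """Orthogonal weight matrix via QR decomposition."""
    q, _ = np.linalg.qr(rng.normal(size=shape))
    return q

def init_network(layer_sizes, x_init):
    """Initialize each layer orthogonally, then rescale using mini-batch statistics
    so that pre-activations have (approximately) unit variance."""
    weights, h = [], x_init
    for fan_in, fan_out in zip(layer_sizes[:-1], layer_sizes[1:]):
        W = orthogonal((fan_in, fan_out))
        pre = h @ W
        W /= pre.std() + 1e-8          # data-dependent rescaling
        h = np.maximum(h @ W, 0.0)     # ReLU activations feed the next layer
        weights.append(W)
    return weights

x_batch = rng.normal(size=(128, 784))            # initialization mini-batch
ws = init_network([784, 256, 128, 10], x_batch)
print([w.shape for w in ws])
```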
Procedia PDF Downloads 134
193 HD-WSComp: Hypergraph Decomposition for Web Services Composition Based on QoS
Authors: Samah Benmerbi, Kamal Amroun, Abdelkamel Tari
Abstract:
The increasing number of Web service (WS) providers throughout the globe has produced numerous Web services providing the same or similar functionality. There is therefore a need for tools that develop the best answer to a query by selecting and composing services with total transparency. This paper reviews various QoS-based Web service selection mechanisms and architectures that facilitate qualitatively optimal selection. Web service composition is required when a request cannot be fulfilled by a single web service; in such cases, it is preferable to integrate existing web services to satisfy the user's request. We introduce an automatic Web service composition method based on hypergraph decomposition using the hypertree decomposition method. The problem of selecting and composing web services is transformed into a resolution in a hypertree by exploring the dependency relations between web services to obtain a composite web service via an execution order of WSs satisfying the global request.
Keywords: web service, web service selection, web service composition, QoS, hypergraph decomposition, BE hypergraph decomposition, hypertree resolution
Procedia PDF Downloads 507
192 MarginDistillation: Distillation for Face Recognition Neural Networks with Margin-Based Softmax
Authors: Svitov David, Alyamkin Sergey
Abstract:
The usage of convolutional neural networks (CNNs) in conjunction with the margin-based softmax approach demonstrates the state-of-the-art performance for the face recognition problem. Recently, lightweight neural network models trained with the margin-based softmax have been introduced for the face identification task for edge devices. In this paper, we propose a distillation method for lightweight neural network architectures that outperforms other known methods for the face recognition task on LFW, AgeDB-30 and Megaface datasets. The idea of the proposed method is to use class centers from the teacher network for the student network. Then the student network is trained to get the same angles between the class centers and face embeddings predicted by the teacher network.Keywords: ArcFace, distillation, face recognition, margin-based softmax
Procedia PDF Downloads 146
191 Numerical Solution Speedup of the Laplace Equation Using FPGA Hardware
Authors: Abbas Ebrahimi, Mohammad Zandsalimy
Abstract:
The main purpose of this study is to investigate the feasibility of using FPGA (Field Programmable Gate Arrays) chips as alternatives for the conventional CPUs to accelerate the numerical solution of the Laplace equation. FPGA is an integrated circuit that contains an array of logic blocks, and its architecture can be reprogrammed and reconfigured after manufacturing. Complex circuits for various applications can be designed and implemented using FPGA hardware. The reconfigurable hardware used in this paper is an SoC (System on a Chip) FPGA type that integrates both microprocessor and FPGA architectures into a single device. In the present study the Laplace equation is implemented and solved numerically on both reconfigurable hardware and CPU. The precision of results and speedups of the calculations are compared together. The computational process on FPGA, is up to 20 times faster than a conventional CPU, with the same data precision. An analytical solution is used to validate the results.Keywords: accelerating numerical solutions, CFD, FPGA, hardware definition language, numerical solutions, reconfigurable hardware
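For reference, the numerical scheme being accelerated is an iterative solution of the Laplace equation. A plain CPU version using Jacobi iteration is sketched below; the grid size, boundary values, and tolerance are arbitrary assumptions, but this is the kind of stencil kernel that would be mapped onto the FPGA fabric and validated against an analytical solution.

```python
import numpy as np

def solve_laplace(n=64, tol=1e-5, max_iter=20000):
    """Jacobi iteration for the 2-D Laplace equation on an n x n grid."""
    u = np.zeros((n, n))
    u[0, :] = 1.0                     # assumed Dirichlet boundary: fixed value on one edge
    for it in range(max_iter):
        u_new = u.copy()
        u_new[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                                    u[1:-1, :-2] + u[1:-1, 2:])
        if np.max(np.abs(u_new - u)) < tol:
            return u_new, it
        u = u_new
    return u, max_iter

u, iters = solve_laplace()
print(f"converged in {iters} iterations, centre value {u[32, 32]:.4f}")
```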
Procedia PDF Downloads 379
190 An Ensemble-based Method for Vehicle Color Recognition
Authors: Saeedeh Barzegar Khalilsaraei, Manoocheher Kelarestaghi, Farshad Eshghi
Abstract:
The vehicle color, as a prominent and stable feature, helps to identify a vehicle more accurately. As a result, vehicle color recognition is of great importance in intelligent transportation systems. Unlike conventional methods which use only a single Convolutional Neural Network (CNN) for feature extraction or classification, in this paper, four CNNs, with different architectures well-performing in different classes, are trained to extract various features from the input image. To take advantage of the distinct capability of each network, the multiple outputs are combined using a stack generalization algorithm as an ensemble technique. As a result, the final model performs better than each CNN individually in vehicle color identification. The evaluation results in terms of overall average accuracy and accuracy variance show the proposed method’s outperformance compared to the state-of-the-art rivals.Keywords: Vehicle Color Recognition, Ensemble Algorithm, Stack Generalization, Convolutional Neural Network
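Stacked generalization, as used above, combines the per-class strengths of several base models by training a meta-learner on their outputs. The scikit-learn sketch below shows the idea with small stand-in classifiers and synthetic features, since the four CNN backbones themselves are not specified here; it illustrates the ensemble technique, not the authors' exact pipeline.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# Stand-ins for the colour-feature vectors produced by the different CNN backbones.
X, y = make_classification(n_samples=600, n_features=64, n_classes=3,
                           n_informative=16, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

base_learners = [                       # each base model tends to favour different classes
    ("mlp_a", MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=1)),
    ("mlp_b", MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=2)),
    ("rf",    RandomForestClassifier(n_estimators=100, random_state=3)),
]
stack = StackingClassifier(estimators=base_learners,
                           final_estimator=LogisticRegression(max_iter=1000),
                           stack_method="predict_proba")
stack.fit(X_tr, y_tr)
print("stacked accuracy:", stack.score(X_te, y_te))
```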
Procedia PDF Downloads 81
189 Software Evolution Based Activity Diagrams
Authors: Zine-Eddine Bouras, Abdelouaheb Talai
Abstract:
During the last two decades, the software evolution community has intensively tackled the software merging issue whose main objective is to merge in a consistent way different versions of software in order to obtain a new version. Well-established approaches, mainly based on the dependence analysis techniques, have been used to bring suitable solutions. These approaches concern the source code or software architectures. However, these solutions are more expensive due to the complexity and size. In this paper, we overcome this problem by operating at a high level of abstraction. The objective of this paper is to investigate the software merging at the level of UML activity diagrams, which is a new interesting issue. Its purpose is to merge activity diagrams instead of source code. The proposed approach, based on dependence analysis techniques, is illustrated through an appropriate case study.Keywords: activity diagram, activity diagram slicing, dependency analysis, software merging
Procedia PDF Downloads 325
188 Contextual Toxicity Detection with Data Augmentation
Authors: Julia Ive, Lucia Specia
Abstract:
Understanding and detecting toxicity is an important problem in supporting safer human interactions online. Our work focuses on the important problem of contextual toxicity detection, where automated classifiers are tasked with determining whether a short textual segment (usually a sentence) is toxic within its conversational context. We use “toxicity” as an umbrella term to denote a number of variants commonly named in the literature, including hate, abuse, and offence, among others. Detecting toxicity in context is a non-trivial problem and has been addressed by very few previous studies. These previous studies have analysed the influence of conversational context on human perception of toxicity in controlled experiments and concluded that humans rarely change their judgements in the presence of context. They have also evaluated contextual detection models based on state-of-the-art Deep Learning and Natural Language Processing (NLP) techniques. Counterintuitively, they reached the general conclusion that computational models tend to suffer performance degradation in the presence of context. We challenge these empirical observations by devising better contextual predictive models that also rely on NLP data augmentation techniques to create larger and better data. In our study, we start by further analysing the human perception of toxicity in conversational data (i.e., tweets), in the absence versus presence of context, in this case, previous tweets in the same conversational thread. We observed that the conclusions of previous work on human perception are mainly due to data issues: the contextual data available does not provide sufficient evidence that context is indeed important (even for humans). The data problem is common in current toxicity datasets: cases labelled as toxic are either obviously toxic (i.e., overt toxicity with swear words, racist terms, etc.), and thus context is not needed for a decision, or are ambiguous, vague or unclear even in the presence of context; in addition, the data contains labelling inconsistencies. To address this problem, we propose to automatically generate contextual samples where toxicity is not obvious (i.e., covert cases) without context or where different contexts can lead to different toxicity judgements for the same tweet. We generate toxic and non-toxic utterances conditioned on the context or on target tweets using a range of techniques for controlled text generation (e.g., Generative Adversarial Networks and steering techniques). Regarding the contextual detection models, we posit that their poor performance is due to limitations of both the data they are trained on (the same problems stated above) and the architectures they use, which are not able to leverage context in effective ways. To improve on that, we propose text classification architectures that take the hierarchy of conversational utterances into account. In experiments benchmarking our models against previous ones on existing and automatically generated data, we show that both data and architectural choices are very important. Our model achieves substantial performance improvements compared to baselines that are non-contextual, or contextual but agnostic of the conversation structure.
Keywords: contextual toxicity detection, data augmentation, hierarchical text classification models, natural language processing
Procedia PDF Downloads 169
187 Big Brain: A Single Database System for a Federated Data Warehouse Architecture
Authors: X. Gumara Rigol, I. Martínez de Apellaniz Anzuola, A. Garcia Serrano, A. Franzi Cros, O. Vidal Calbet, A. Al Maruf
Abstract:
Traditional federated architectures for data warehousing work well when corporations have existing regional data warehouses and there is a need to aggregate data at a global level. Schibsted Media Group has been maturing from a decentralised organisation into a more globalised one and needed to build some of the regional data warehouses for individual brands at the same time as the global one. In this paper, we present the architectural alternatives studied and why a custom federated approach was the recommendation for moving forward with the implementation. Although the data warehouses are logically federated, the implementation uses a single database system, which presented many advantages: cost reduction and improved data access for global users, allowing consumers of the data to have a common data model for detailed analysis across different geographies and a flexible layer for local, specific needs in the same place.
Keywords: data integration, data warehousing, federated architecture, Online Analytical Processing (OLAP)
Procedia PDF Downloads 235
186 Nonparametric Sieve Estimation with Dependent Data: Application to Deep Neural Networks
Authors: Chad Brown
Abstract:
This paper establishes general conditions for the convergence rates of nonparametric sieve estimators with dependent data. We present two key results: one for nonstationary data and another for stationary mixing data. Previous theoretical results often lack practical applicability to deep neural networks (DNNs). Using these conditions, we derive convergence rates for DNN sieve estimators in nonparametric regression settings with both nonstationary and stationary mixing data. The DNN architectures considered adhere to current industry standards, featuring fully connected feedforward networks with rectified linear unit activation functions, unbounded weights, and a width and depth that grows with sample size.Keywords: sieve extremum estimates, nonparametric estimation, deep learning, neural networks, rectified linear unit, nonstationary processes
Procedia PDF Downloads 40
185 Bridging the Data Gap for Sexism Detection in Twitter: A Semi-Supervised Approach
Authors: Adeep Hande, Shubham Agarwal
Abstract:
This paper presents a study on identifying sexism in online texts using various state-of-the-art deep learning models based on BERT. We experimented with different feature sets and model architectures and evaluated their performance using precision, recall, F1 score, and accuracy metrics. We also explored the use of pseudolabeling technique to improve model performance. Our experiments show that the best-performing models were based on BERT, and their multilingual model achieved an F1 score of 0.83. Furthermore, the use of pseudolabeling significantly improved the performance of the BERT-based models, with the best results achieved using the pseudolabeling technique. Our findings suggest that BERT-based models with pseudolabeling hold great promise for identifying sexism in online texts with high accuracy.Keywords: large language models, semi-supervised learning, sexism detection, data sparsity
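The pseudolabeling step described above can be reduced to a simple loop: train on the labelled data, predict on unlabelled examples, keep only high-confidence predictions as new labels, and retrain. The sketch below shows that loop with a generic scikit-learn classifier standing in for the fine-tuned BERT models; the confidence threshold, number of rounds, and synthetic data are assumptions for illustration.

```python
import numpy as np
from sklearn.base import clone
from sklearn.linear_model import LogisticRegression

def pseudolabel_loop(model, X_lab, y_lab, X_unlab, threshold=0.95, rounds=3):
    """Semi-supervised self-training with a confidence cutoff."""
    X_train, y_train = X_lab.copy(), y_lab.copy()
    for _ in range(rounds):
        clf = clone(model).fit(X_train, y_train)
        proba = clf.predict_proba(X_unlab)
        confident = proba.max(axis=1) >= threshold        # keep sure predictions only
        if not confident.any():
            break
        X_train = np.vstack([X_train, X_unlab[confident]])
        y_train = np.concatenate([y_train, proba[confident].argmax(axis=1)])
        X_unlab = X_unlab[~confident]                     # shrink the unlabelled pool
    return clf

rng = np.random.default_rng(0)
X_lab, y_lab = rng.normal(size=(100, 16)), rng.integers(0, 2, 100)
X_unlab = rng.normal(size=(1000, 16))                     # stand-in for unlabelled tweets
model = pseudolabel_loop(LogisticRegression(max_iter=1000), X_lab, y_lab, X_unlab)
```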
Procedia PDF Downloads 68
184 Dynamic Bandwidth Allocation in Fiber-Wireless (FiWi) Networks
Authors: Eman I. Raslan, Haitham S. Hamza, Reda A. El-Khoribi
Abstract:
Fiber-Wireless (FiWi) networks are a promising candidate for future broadband access networks. These networks combine an optical network as the back end, where different passive optical network (PON) technologies are realized, and a wireless network as the front end, where different wireless technologies are adopted, e.g., LTE, WiMAX, Wi-Fi, and Wireless Mesh Networks (WMNs). The convergence of optical and wireless technologies requires designing architectures with robust, efficient, and effective bandwidth allocation schemes. Different bandwidth allocation algorithms have been proposed for FiWi networks, aiming to enhance the different segments of these networks, including the wireless and optical subnetworks. In this survey, we focus on differentiating between the bandwidth allocation algorithms according to the segment of the FiWi network that they enhance. We classify these techniques into wireless, optical, and hybrid bandwidth allocation techniques.
Keywords: fiber-wireless (FiWi), dynamic bandwidth allocation (DBA), passive optical networks (PON), media access control (MAC)
Procedia PDF Downloads 530
183 A New Design Methodology for Partially Reconfigurable Systems-on-Chip
Authors: Roukaya Dalbouchi, Abdelkrin Zitouni
Abstract:
In this paper, we propose a novel design methodology for Dynamic Partial Reconfiguration (DPR) systems. This type of system has the property of being modifiable after its design and during its execution. The suggested design methodology is generic in terms of granularity, number of modules, and reconfigurable regions, and is suitable for any type of modern application. It is based on the interconnection of several design stages. The recommended methodology represents a guide for the design of DPR architectures that meet the reconfiguration/performance compromise. To validate the proposed methodology, we use video watermarking as an application. The comparison results show that the proposed methodology supports all stages of DPR architecture design and is characterized by a high abstraction level. It provides a dynamically and partially reconfigurable architecture and guarantees hardware efficiency, reconfiguration flexibility, and superior performance in terms of frequency and power consumption.
Keywords: dynamically reconfigurable system, block matching algorithm, partial reconfiguration, motion vectors, video watermarking
Procedia PDF Downloads 93
182 An Embedded High Speed Adder for Arithmetic Computations
Authors: Kala Bharathan, R. Seshasayanan
Abstract:
In this paper, a 1-bit Embedded Logic Full Adder (EFA) circuit is proposed at the transistor level, which reduces logic complexity and gives low power and high speed. The design is further extended to 64 bits. To evaluate the performance of the EFA, 16-, 32-, and 64-bit Linear and Square-root Carry Select Adder/Subtractor (CSLAS) structures are also proposed. Realistic testing of the proposed circuits is done on an 8 x 8 Modified Booth multiplier, and a comparison in terms of power and delay is made. The EFA is implemented for different multiplier architectures for performance parameter comparison. The overall delay for CSLAS is reduced to 78% when compared to the conventional one. The circuit implementations are done on TSMC 28nm CMOS technology using the Cadence Virtuoso tool. The EFA offers power savings of up to 14% when compared to the conventional adder. The present implementation was found to offer significant improvement in terms of power and speed in comparison to other full adder circuits.
Keywords: embedded logic, full adder, PDP, XOR gate
Procedia PDF Downloads 447
181 An Adiabatic Quantum Optimization Approach for the Mixed Integer Nonlinear Programming Problem
Authors: Maxwell Henderson, Tristan Cook, Justin Chan Jin Le, Mark Hodson, YoungJung Chang, John Novak, Daniel Padilha, Nishan Kulatilaka, Ansu Bagchi, Sanjoy Ray, John Kelly
Abstract:
We present a method of using adiabatic quantum optimization (AQO) to solve a mixed integer nonlinear programming (MINLP) problem instance. The MINLP problem is a general form of a set of NP-hard optimization problems that are critical to many business applications. It requires optimizing a set of discrete and continuous variables with nonlinear and potentially nonconvex constraints. Obtaining an exact, optimal solution for MINLP problem instances of non-trivial size using classical computation methods is currently intractable. Current leading algorithms leverage heuristic and divide-and-conquer methods to determine approximate solutions. Creating more accurate and efficient algorithms is an active area of research. Quantum computing (QC) has several theoretical benefits compared to classical computing, through which QC algorithms could obtain MINLP solutions that are superior to current algorithms. AQO is a particular form of QC that could offer more near-term benefits compared to other forms of QC, as hardware development is in a more mature state and devices are currently commercially available from D-Wave Systems Inc. It is also designed for optimization problems: it uses an effect called quantum tunneling to explore all lowest points of an energy landscape where classical approaches could become stuck in local minima. Our work used a novel algorithm formulated for AQO to solve a special type of MINLP problem. The research focused on determining: 1) if the problem is possible to solve using AQO, 2) if it can be solved by current hardware, 3) what the currently achievable performance is, 4) what the performance will be on projected future hardware, and 5) when AQO is likely to provide a benefit over classical computing methods. Two different methods, integer range and 1-hot encoding, were investigated for transforming the MINLP problem instance constraints into a mathematical structure that can be embedded directly onto the current D-Wave architecture. For testing and validation a D-Wave 2X device was used, as well as QxBranch’s QxLib software library, which includes a QC simulator based on simulated annealing. Our results indicate that it is mathematically possible to formulate the MINLP problem for AQO, but that currently available hardware is unable to solve problems of useful size. Classical general-purpose simulated annealing is currently able to solve larger problem sizes, but does not scale well and such methods would likely be outperformed in the future by improved AQO hardware with higher qubit connectivity and lower temperatures. If larger AQO devices are able to show improvements that trend in this direction, commercially viable solutions to the MINLP for particular applications could be implemented on hardware projected to be available in 5-10 years. Continued investigation into optimal AQO hardware architectures and novel methods for embedding MINLP problem constraints on to those architectures is needed to realize those commercial benefits.Keywords: adiabatic quantum optimization, mixed integer nonlinear programming, quantum computing, NP-hard
Procedia PDF Downloads 524
180 Evaluating the Impact of Replacement Policies on the Cache Performance and Energy Consumption in Different Multicore Embedded Systems
Authors: Sajjad Rostami-Sani, Mojtaba Valinataj, Amir-Hossein Khojir-Angasi
Abstract:
The cache has an important role in the reduction of access delay between a processor and memory in high-performance embedded systems. In these systems, the energy consumption is one of the most important concerns, and it will become more important with smaller processor feature sizes and higher frequencies. Meanwhile, the cache system dissipates a significant portion of energy compared to the other components of a processor. There are some elements that can affect the energy consumption of the cache such as replacement policy and degree of associativity. Due to these points, it can be inferred that selecting an appropriate configuration for the cache is a crucial part of designing a system. In this paper, we investigate the effect of different cache replacement policies on both cache’s performance and energy consumption. Furthermore, the impact of different Instruction Set Architectures (ISAs) on cache’s performance and energy consumption has been investigated.Keywords: energy consumption, replacement policy, instruction set architecture, multicore processor
Procedia PDF Downloads 152
179 Gene Names Identity Recognition Using Siamese Network for Biomedical Publications
Authors: Micheal Olaolu Arowolo, Muhammad Azam, Fei He, Mihail Popescu, Dong Xu
Abstract:
As the quantity of biological articles rises, so does the number of biological pathway figures. Each pathway figure shows gene names and relationships. Annotating pathway diagrams manually is time-consuming. Advanced image understanding models could speed up curation, but they must become more precise. There is rich information in biological pathway figures, and the first step in performing image understanding of these figures is to recognize gene names automatically. Classical optical character recognition methods have been employed for gene name recognition, but they are not optimized for literature mining data. This study devised a method to recognize the image bounding box of a gene name using deep Siamese neural network models, outperforming the existing methods that use ResNet, DenseNet, and Inception architectures; the results reached about 84% accuracy.
Keywords: biological pathway, gene identification, object detection, Siamese network
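A Siamese model of the kind referenced above scores whether two image crops show the same gene name by embedding both through shared weights and comparing the embeddings. The PyTorch sketch below uses a small shared CNN and a contrastive loss; all sizes, the backbone, and the margin are arbitrary illustrative choices rather than the study's configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseEncoder(nn.Module):
    """Shared-weight encoder applied to both gene-name image crops."""
    def __init__(self, emb_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
            nn.Flatten(), nn.Linear(32 * 4 * 4, emb_dim),
        )

    def forward(self, a, b):
        return self.net(a), self.net(b)

def contrastive_loss(za, zb, same, margin=1.0):
    """same=1 pulls embeddings together; same=0 pushes them beyond the margin."""
    d = F.pairwise_distance(za, zb)
    return (same * d.pow(2) + (1 - same) * F.relu(margin - d).pow(2)).mean()

model = SiameseEncoder()
a, b = torch.randn(8, 1, 32, 96), torch.randn(8, 1, 32, 96)  # cropped name boxes
same = torch.randint(0, 2, (8,)).float()
za, zb = model(a, b)
print(contrastive_loss(za, zb, same).item())
```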
Procedia PDF Downloads 288
178 Challenge Response-Based Authentication for a Mobile Voting System
Authors: Tohari Ahmad, Hudan Studiawan, Iwang Aryadinata, Royyana M. Ijtihadie, Waskitho Wibisono
Abstract:
Manual voting systems have been implemented worldwide. They have some weaknesses which may decrease the legitimacy of the voting result. An electronic voting system is introduced to minimize these weaknesses. It has been able to provide better results in terms of the total time taken in the voting process and accuracy. Nevertheless, people may be reluctant to go to the polling location for reasons such as distance and time. In order to solve this problem, mobile voting is implemented by utilizing mobile devices. There are many mobile voting architectures available. Overall, the authenticity of the users is a common problem of all voting systems. There must be a mechanism which can verify the users' authenticity such that only verified users can vote, and each only once. In this paper, a challenge response-based authentication is proposed that utilizes properties of the users, for example, something they have and something they know. In terms of speed, the proposed system provides good results, in addition to the other capabilities offered by the system.
Keywords: authentication, data protection, mobile voting, security
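The challenge-response exchange outlined above can be illustrated with a keyed HMAC: the server issues a random nonce, and the voter's device answers with an HMAC computed over the nonce using a secret derived from something the voter has and something they know. This is a generic sketch of the pattern under assumed key-derivation choices, not the paper's protocol.

```python
import hashlib
import hmac
import secrets

def derive_key(device_secret: bytes, pin: str) -> bytes:
    """Combine something the voter has (device secret) and knows (PIN)."""
    return hashlib.pbkdf2_hmac("sha256", pin.encode(), device_secret, 100_000)

def issue_challenge() -> bytes:
    return secrets.token_bytes(32)                  # server-side random nonce

def respond(challenge: bytes, key: bytes) -> bytes:
    return hmac.new(key, challenge, hashlib.sha256).digest()

def verify(challenge: bytes, response: bytes, key: bytes) -> bool:
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)  # constant-time comparison

# One authentication round
device_secret = secrets.token_bytes(16)
key = derive_key(device_secret, pin="4821")
challenge = issue_challenge()
print(verify(challenge, respond(challenge, key), key))  # True for the legitimate voter
```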
Procedia PDF Downloads 415
177 Exploring Wheel-Motion Energy Sources for Energy Harvesting Based on Electromagnetic Effect: Experimental and Numerical Investigation
Authors: Mohammed Alaa Alwafaie, Bela Kovacs
Abstract:
With the rapid emergence and evolution of renewable energy sources like wind and solar power, there is an increasing demand for effective energy harvester architectures. This paper focuses on investigating the concept of energy harvesting from a wheel-motion energy source. The proposed method involves the placement of magnets and copper coils inside the hubcap rod of a wheel. When the wheel is set in motion, the movement of the magnet within the coil induces an electric current, in accordance with Faraday's Law. The paper includes an experiment to measure the output voltage of the electromagnetic harvester, as well as a numerical simulation to further explore the potential of this energy harvesting approach. By harnessing the rotational motion of wheels, this research aims to contribute to the development of innovative techniques for generating electrical power in a sustainable and efficient manner.
Keywords: harvesting energy, electromagnetic, hubcap rod wheel, magnet movement inside coil, Faraday law
Procedia PDF Downloads 73