Search results for: edge computing
1101 Comparison of Flow and Mixing Characteristics between Non-Oscillating and Transversely Oscillating Jet
Authors: Dinku Seyoum Zeleke, Rong Fung Huang, Ching Min Hsu
Abstract:
Comparison of the flow and mixing characteristics of a non-oscillating jet and a transversely oscillating jet was investigated experimentally. The flow evolution process was detected using a high-speed digital camera, and the jet spread width was calculated from long-exposure images using binary edge detection techniques. The velocity characteristics of the transversely oscillating jet, induced by a V-shaped fluidic oscillator, were measured using a single-component hot-wire anemometer. The jet spread width of the non-oscillating jet was much smaller than the jet exit gap because it behaved like a natural jet. The transversely oscillating jet, however, had a larger jet spread width, which was associated with the excitation of the flow by self-induced oscillation. As a result, the flow mixing characteristics improved markedly in both the near field and the far field. The transversely oscillating jet therefore offers better turbulence intensity, entrainment, and spreading width, which together augment the flow-mixing characteristics considerably.
Keywords: flow mixing, transversely oscillating, spreading width, velocity characteristics
Procedia PDF Downloads 250
1100 MarginDistillation: Distillation for Face Recognition Neural Networks with Margin-Based Softmax
Authors: Svitov David, Alyamkin Sergey
Abstract:
The usage of convolutional neural networks (CNNs) in conjunction with the margin-based softmax approach demonstrates state-of-the-art performance for the face recognition problem. Recently, lightweight neural network models trained with the margin-based softmax have been introduced for the face identification task on edge devices. In this paper, we propose a distillation method for lightweight neural network architectures that outperforms other known methods for the face recognition task on the LFW, AgeDB-30 and MegaFace datasets. The idea of the proposed method is to use class centers from the teacher network for the student network. The student network is then trained to reproduce the same angles between the class centers and the face embeddings predicted by the teacher network.
Keywords: ArcFace, distillation, face recognition, margin-based softmax
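The core idea of the abstract above — training the student so that the angle between a class center and a face embedding matches the teacher's — reduces to comparing cosine similarities. A minimal sketch follows; the loss term, vectors, and names here are invented for illustration and are not the authors' implementation:

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors (cosine of their angle)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def angle_match_loss(teacher_center, student_center, embedding):
    """One term of a hypothetical distillation loss: penalize the student
    when its center/embedding angle deviates from the teacher's."""
    return (cosine(teacher_center, embedding)
            - cosine(student_center, embedding)) ** 2

emb = [0.6, 0.8]                 # a face embedding (made up)
teacher_c = [1.0, 0.0]           # teacher's class center
aligned_student_c = [2.0, 0.0]   # same direction as the teacher's center
rotated_student_c = [0.0, 1.0]   # a misaligned student center

print(angle_match_loss(teacher_c, aligned_student_c, emb))      # ~0.0
print(angle_match_loss(teacher_c, rotated_student_c, emb) > 0)  # True
```

Only the direction of the student's center matters, not its scale, which is the point of an angle-based (margin-softmax-style) objective.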
Procedia PDF Downloads 148
1099 'Utsadhara': Rejuvenating the Dead River Edge into an Urban Activity Space along the Banks of River Hooghly
Authors: Aparna Saha, Tuhin Ahmed
Abstract:
West Bengal has a number of important rivers, each with its distinctive character and story. Traditionally, cities have opened onto rivers at their edges, and rivers have been an inseparable part of the urban experience. The study area is in Barrackpore, a small but important outgrowth of the Kolkata Municipal Association, West Bengal. Barrackpore at present lacks adequate public open spaces at the neighborhood level where people of different socio-cultural, economic, and religious backgrounds can come together and engage in various leisure activities, nor is there any opportunity for people to learn about and explore the rich history of the settlement. These issues form the backdrop of this research paper, which conceptualizes a place, rather than a mere space, that will bring people back to the river, increase community interaction, and celebrate and commemorate the historical importance of the river and its edges. The entire precinct bordering the river represents the transition from the pre-independence (Raj) era to the Sepoy (Swaraj) phase, culminating in the Gandhian philosophy embodied in the existing Gandhi Ghat. The ultimate aim of the paper, entitled 'Utsadhara: Rejuvenating the Dead River Edge into an Urban Activity Space along the Banks of River Hooghly', is to create a socio-cultural space that keeps the heritage identity intact through judicious use of the water body, while maintaining a balance between the natural ecosystem and the cosmetic development of the surrounding open spaces. This is achieved by the methodology provided in the document, which focuses on preserving the historic ethnicity of the place and holding its character through various facts, figures, and features; most importantly, the natural topography of the place is left intact.
The second priority is a hierarchy of well-connected public plazas and podiums where people from different socio-economic backgrounds, irrespective of age and sex, can socialize and build cordial relationships with one another. The third priority is to provide a platform where the common mass can showcase their skills and talents through different art and craft forms, which in turn would uplift individuals and the community as a whole through economic gain. In addition, some spaces are created for different age groups and classes of people. The paper intends to treat the river as a major multifunctional public space that attracts people for different activities and re-establishes the relationship of the river with the settlement. Hence, the paper does not propose a simple riverfront conservation project but a place created for the people, by the people, and of the people, aimed at holistic community development through a sustainable approach.
Keywords: holistic community development, public activity space, river-urban precinct, urban dead space
Procedia PDF Downloads 137
1098 Data Confidentiality in Public Cloud: A Method for Inclusion of ID-PKC Schemes in OpenStack Cloud
Authors: N. Nalini, Bhanu Prakash Gopularam
Abstract:
The term data security refers to the degree of resistance or protection given to information against unintended or unauthorized access. The core principles of information security are confidentiality, integrity and availability, also referred to as the CIA triad. Cloud computing services are classified as SaaS, IaaS and PaaS. With cloud adoption, confidential enterprise data are moved from organization premises to an untrusted public network, and the attack surface has therefore increased manifold. Several cloud computing platforms, such as OpenStack, Eucalyptus and Amazon EC2, allow users to build and configure public, hybrid and private clouds. While traditional encryption based on a PKI infrastructure still works in the cloud scenario, the management of public-private keys and trust certificates is difficult. Identity-based Public Key Cryptography (ID-PKC) overcomes this problem by using publicly identifiable information for generating the keys, and it works well with decentralized systems. Users can exchange information securely without having to manage any trust information. Another advantage is that access control information (e.g., a role-based access control policy) can be embedded into the data, unlike in PKI, where it is handled by a separate component or system. In the OpenStack cloud platform, the Keystone service acts as the identity service for authentication and authorization and has support for a public key infrastructure for auto services. In this paper, we explain the OpenStack security architecture and evaluate the PKI infrastructure piece for data confidentiality. We provide a method to integrate ID-PKC schemes for securing data in transit and at rest, and explain the key measures for safeguarding data against security attacks.
The proposed approach uses the JPBC crypto library for key-pair generation based on the IEEE P1636.3 standard and for secure communication with other cloud services.
Keywords: data confidentiality, identity-based cryptography, secure communication, OpenStack Keystone, token scoping
Procedia PDF Downloads 385
1097 Mixed Mode Fracture Analyses Using Finite Element Method of Edge Cracked Heavy Annulus Pulley
Authors: Bijit Kalita, K. V. N. Surendra
Abstract:
The pulley works under both compressive loading, due to the contacting belt in tension, and a central torque that causes rotation. In a power transmission system, the belt pulley assembly presents a contact problem in the form of two mating cylindrical parts. In this work, we modeled a pulley as a heavy two-dimensional circular disk. Stress analysis due to contact loading in the pulley mechanism is performed, and finite element analysis (FEA) is conducted to investigate the stresses experienced on the pulley's inner and outer periphery. Belt drives are among the mechanisms most frequently used to transmit power in heavy-duty applications such as automotive engines and industrial machines, and very heavy circular disks are usually used as pulleys. A pulley may be described as a drum and may have a groove between two flanges around its circumference; a rope, belt, cable or chain can be the driving element of a pulley system that runs over the pulley inside the groove. In the process of motion transmission, a pulley experiences normal and shear tractions on its contact regions, namely the belt-pulley contact surface and the pulley-shaft contact surface. In 1895, Hertz solved the elastic contact problem for point contact and line contact of ideally smooth objects, and this theory is generally used for computing the actual contact zone. Detailed stress analysis in the contact regions of such pulleys is necessary to prevent early failure. In this paper, the results of finite element analyses carried out on the compressed disk of a belt pulley arrangement using fracture mechanics concepts are shown. Building on the literature on contact stress problems across a wide field of applications, the stress distribution generated on the shaft-pulley and belt-pulley interfaces due to the application of high tension and torque was evaluated using FEA.
Finally, the results obtained from ANSYS (APDL) were compared with Hertzian contact theory. The study is mainly focused on the fatigue life estimation of a rotating part, as a component of an engine assembly, using the well-known Paris equation. Digital Image Correlation (DIC) analyses were performed using open-source software. From the displacements computed from images acquired at minimum and maximum force, the displacement field amplitude is computed; from these fields, the crack path is defined, and the stress intensity factors and crack tip position are extracted. A non-linear least-squares projection is used to estimate fatigue crack growth. Further study will extend to various rotating machinery applications such as rotating flywheel disks, jet engines, compressor disks and roller disk cutters, where Stress Intensity Factor (SIF) calculation plays a significant role in the accuracy and reliability of a safe design. Additionally, this study will progress to predicting crack propagation in the pulley using the maximum tangential stress (MTS) criterion for mixed mode fracture.
Keywords: crack-tip deformations, contact stress, stress concentration, stress intensity factor
Procedia PDF Downloads 125
1096 Testing Chat-GPT: An AI Application
Authors: Jana Ismail, Layla Fallatah, Maha Alshmaisi
Abstract:
ChatGPT, a cutting-edge language model built on the GPT-3.5 architecture, has garnered attention for its profound natural language processing capabilities, holding promise for transformative applications in customer service and content creation. This study delves into ChatGPT's architecture, aiming to comprehensively understand its strengths and potential limitations. Through systematic experiments across diverse domains, such as general knowledge and creative writing, we evaluated the model's coherence, context retention, and task-specific accuracy. While ChatGPT excels in generating human-like responses and demonstrates adaptability, occasional inaccuracies and sensitivity to input phrasing were observed. The study emphasizes the impact of prompt design on output quality, providing valuable insights for the nuanced deployment of ChatGPT in conversational AI and contributing to the ongoing discourse on the evolving landscape of natural language processing in artificial intelligence.
Keywords: artificial intelligence, ChatGPT, OpenAI, NLP
Procedia PDF Downloads 78
1095 Post Growth Annealing Effect on Deep Level Emission and Raman Spectra of Hydrothermally Grown ZnO Nanorods Assisted by KMnO4
Authors: Ashish Kumar, Tejendra Dixit, I. A. Palani, Vipul Singh
Abstract:
Zinc oxide, with interesting properties such as a large band gap (3.37 eV), high exciton binding energy (60 meV) and intense UV absorption, has been studied in the literature for various applications, viz. optoelectronics, biosensors, UV photodetectors, etc. The performance of ZnO devices is highly influenced by the morphology, size and crystallinity of the ZnO active layer and by the processing conditions. Recently, our group has shown the influence of the in situ addition of KMnO4 to the precursor solution during the hydrothermal growth of ZnO nanorods (NRs) on their near-band-edge (NBE) emission. In this paper, we have investigated the effect of post-growth annealing on the variations in the NBE and deep level (DL) emissions of as-grown ZnO nanorods. The observed results have been explained on the basis of X-ray diffraction (XRD) and Raman spectroscopic analysis, which clearly show improved crystallinity and quantum confinement in the ZnO nanorods.
Keywords: ZnO, nanorods, hydrothermal, KMnO4
Procedia PDF Downloads 402
1094 GPU-Accelerated Triangle Mesh Simplification Using Parallel Vertex Removal
Authors: Thomas Odaker, Dieter Kranzlmueller, Jens Volkert
Abstract:
We present an approach to triangle mesh simplification designed to be executed on the GPU. We use a quadric error metric to calculate an error value for each vertex of the mesh and order all vertices based on this value. This step is followed by the parallel removal of a number of vertices with the lowest calculated error values. To allow the parallel removal of multiple vertices, we use a set of per-vertex boundaries that prevent mesh foldovers even when simplification operations are performed on neighbouring vertices. We execute multiple iterations of calculating the vertex errors, ordering the error values and removing vertices, until either a desired number of vertices remains in the mesh or a minimum error value is reached. This parallel approach speeds up the simplification process while maintaining mesh topology and avoiding foldovers at every step of the simplification.
Keywords: computer graphics, half edge collapse, mesh simplification, precomputed simplification, topology preserving
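The per-vertex error in a quadric error metric, as used in the abstract above, can be sketched in a few lines. This is a generic QEM illustration, not the authors' GPU code; the planes and vertices below are invented for the example:

```python
# Quadric error metric (QEM) sketch: a plane ax + by + cz + d = 0
# contributes the outer product p pT (p = [a, b, c, d]) to the quadric
# of each vertex it touches; the error of placing a vertex at
# v = [x, y, z, 1] is vT Q v. Vertices with the lowest error are
# the first candidates for removal.

def plane_quadric(a, b, c, d):
    """4x4 outer product p pT for the plane [a, b, c, d]."""
    p = [a, b, c, d]
    return [[p[i] * p[j] for j in range(4)] for i in range(4)]

def add_quadrics(q1, q2):
    """Sum two 4x4 quadrics elementwise."""
    return [[q1[i][j] + q2[i][j] for j in range(4)] for i in range(4)]

def vertex_error(q, x, y, z):
    """Evaluate vT Q v for the homogeneous point v = [x, y, z, 1]."""
    v = [x, y, z, 1.0]
    return sum(v[i] * q[i][j] * v[j] for i in range(4) for j in range(4))

# A vertex lying on all of its adjacent planes has zero error:
q = add_quadrics(plane_quadric(0, 0, 1, 0),   # plane z = 0
                 plane_quadric(0, 1, 0, 0))   # plane y = 0
print(vertex_error(q, 5.0, 0.0, 0.0))  # on both planes -> 0.0
print(vertex_error(q, 0.0, 0.0, 2.0))  # distance 2 from z = 0 -> 4.0
```

In the parallel scheme described, each GPU thread would evaluate such an error for one vertex; the ordering and per-vertex boundary checks then decide which vertices may be removed in the same iteration.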
Procedia PDF Downloads 367
1093 Developing a Framework for Open Source Software Adoption in a Higher Education Institution in Uganda: A Case of Kyambogo University
Authors: Kafeero Frank
Abstract:
This study aimed at developing a framework for open source software adoption in an institution of higher learning in Uganda, with Kyambogo University as the study area. There were four main research questions based on individual staff interaction with the open source software forum, perceived FOSS characteristics, organizational characteristics and external characteristics as factors that affect open source software adoption. The researcher used a causal-correlational research design to study the effects of these variables on open source software adoption. A quantitative approach was used, with a self-administered questionnaire given to a purposively and randomly sampled group of university ICT staff. The resultant data were analyzed using means, correlation coefficients and multivariate multiple regression analysis as statistical tools. The study reveals that individual staff interaction with the open source software forum and perceived FOSS characteristics were the primary factors that significantly affect FOSS adoption, while organizational and external factors were secondary, with no significant effect but a significant correlation to open source software adoption. It was concluded that for effective open source software adoption to occur, more effort must go into the primary factors, with subsequent reinforcement of the secondary factors. Lastly, recommendations were made, in line with the conclusions, for a Kyambogo University framework for open source software adoption in institutions of higher learning. Recommended areas of further research include: a stakeholders' analysis of open source software adoption in Uganda, its challenges and the way forward; an evaluation of the Kyambogo University framework for open source software adoption in institutions of higher learning; framework development for cloud computing adoption in Ugandan universities; and a framework for FOSS development in the Ugandan IT industry.
Keywords: open source software, organisational characteristics, external characteristics, cloud computing adoption
Procedia PDF Downloads 72
1092 Advancements in Electronic Sensor Technologies for Tea Quality Evaluation
Authors: Raana Babadi Fathipour
Abstract:
Tea, second only to water in global consumption, holds a significant place as the beverage of choice for many around the world. The fermentation of tea leaves plays a crucial role in determining the final quality, traditionally assessed through meticulous observation by tea tasters and laboratory analysis. Advances in technology have, however, paved the way for innovative electronic sensing platforms such as the electronic nose (e-nose), electronic tongue (e-tongue), and electronic eye (e-eye). These tools, coupled with sophisticated data processing algorithms, not only expedite the assessment of tea's sensory qualities based on consumer preferences but also establish new benchmarks for this esteemed bioactive product to meet growing market demands worldwide. By harnessing the data sets derived from electronic signals and deploying multivariate statistical techniques, these systems can improve the accuracy of predicting and distinguishing tea quality. This work provides a comprehensive overview of the most recent breakthroughs and viable solutions aimed at addressing forthcoming challenges in tea analysis. Using bio-mimicking Electronic Sensory Perception systems (ESPs), researchers have developed technologies that enable precise and instantaneous evaluation of the sensory-chemical attributes of tea and its derivatives. These sensing mechanisms decipher key elements such as aroma, taste, and color profiles, translating the measurements into mathematical features for classification.
Such devices can discern various teas with respect to price, geographic origin, harvest period, fermentation process, storage duration, quality classification, and potential adulteration. While voltammetric and fluorescent sensor arrays have emerged as promising tools for constructing electronic tongue systems for scrutinizing tea composition, potentiometric electrodes continue to serve as reliable instruments for monitoring taste dynamics across different tea varieties. Implementing a feature-level fusion strategy in predictive models can markedly enhance efficiency and accuracy. Moreover, establishing links between sensory traits and the biochemical makeup of tea samples through pattern recognition methodologies further advances our understanding of this complex beverage.
Keywords: classifier system, tea, polyphenol, sensor, taste sensor
Procedia PDF Downloads 2
1091 Teachers and Innovations in Information and Communication Technology
Authors: Martina Manenova, Lukas Cirus
Abstract:
This article introduces research focused on elementary school teachers' approach to innovations in ICT. E. M. Rogers' diffusion of innovations theory captures the processes of innovation adoption, and a research method derived from this theory, Rogers' questionnaire on the diffusion of innovations, was used as the basic research method. The research sample consisted of elementary school teachers. Comparison with Rogers' results shows that the so-called early majority prevailed among the teachers in the research sample, and the overall distribution of the data was rather central (early adopters, early majority, and late majority); the teachers very rarely appeared in the edge categories (innovators, laggards). The results can be applied to teaching practice, especially in the implementation of new technologies and techniques into the educational process.
Keywords: innovation, diffusion of innovation, information and communication technology, teachers
Procedia PDF Downloads 293
1090 Liver Tumor Detection by Classification through FD Enhancement of CT Image
Authors: N. Ghatwary, A. Ahmed, H. Jalab
Abstract:
In this paper, an approach for liver tumor detection in computed tomography (CT) images is presented. The detection process is based on classifying the features of target liver cells as either tumor or non-tumor. Fractional differential (FD) enhancement is applied to the liver CT images, with the aim of enhancing texture and edge features. A fusion method is then applied to merge the various enhanced images and produce a variety of feature improvements, which increases the accuracy of classification. Each image is divided into NxN non-overlapping blocks to extract the desired features. A support vector machine (SVM) classifier is then trained on a supplied dataset different from the tested one. Finally, each block is identified as tumor or non-tumor. Our approach is validated on a group of patients' CT liver tumor datasets, and the experimental results demonstrate the efficiency of detection of the proposed technique.
Keywords: fractional differential (FD), computed tomography (CT), fusion, alpha, texture features
Procedia PDF Downloads 359
1089 A Survey on Constraint Solving Approaches Using Parallel Architectures
Authors: Nebras Gharbi, Itebeddine Ghorbel
Abstract:
In recent years, with the advancement of the multicore computing world, the constraint programming community has tried to benefit from the capacity of new machines and make the best use of them through several parallel schemes for constraint solving. In this paper, we propose a survey of the different approaches to solving constraint satisfaction problems using parallel architectures. These approaches use a parallel architecture in different ways: the problem itself could be solved differently by several solvers, or it could be split across solvers.
Keywords: constraint programming, parallel programming, constraint satisfaction problem, speed-up
Procedia PDF Downloads 320
1088 E-Learning in Life-Long Learning: Best Practices from the University of the Aegean
Authors: Chryssi Vitsilaki, Apostolos Kostas, Ilias Efthymiou
Abstract:
This paper presents selected best practices in online learning and teaching derived from a novel and innovative lifelong learning program through e-Learning, which has been set up at the University of the Aegean in Greece during the last five years. The university, capitalizing on an award-winning, decade-long experience in e-learning and blended learning in undergraduate and postgraduate studies, recently expanded into continuous education and vocational training programs in various cutting-edge fields. In this article we present: (a) the academic structure/infrastructure which has been developed for the administrative, organizational and educational support of the e-Learning process, including training the trainers, (b) the mode of design and implementation based on a sound pedagogical framework of open and distance education, and (c) the key results of the assessment of the e-learning process by the participants, as they are used to feed back into continuous organizational and teaching improvement and quality control.
Keywords: distance education, e-learning, life-long programs, synchronous/asynchronous learning
Procedia PDF Downloads 334
1087 Discerning Divergent Nodes in Social Networks
Authors: Mehran Asadi, Afrand Agah
Abstract:
In data mining, partitioning is used as a fundamental tool for classification. With the help of partitioning, we study the structure of data, which allows us to envision decision rules that can be applied to classification trees. In this research, we used an online social network dataset and all of its attributes (e.g., node features, labels, etc.) to determine what constitutes an above-average chance of being a divergent node. We used the R statistical computing language to conduct the analyses in this report; the data were found in the UC Irvine Machine Learning Repository. This research introduces the basic concepts of classification in online social networks. In classification, the main objective is to categorize different items and assign them to different groups based on their properties and similarities; here, recursive partitioning is used to probe the structure of the data set. We also address overfitting and describe different approaches for evaluation and performance comparison of classification methods. Estimating densities is hard, especially in high dimensions with limited data; we do not know the densities, but we can estimate them using classical techniques. First, we calculated the correlation matrix of the dataset to see whether any predictors are highly correlated with one another. The correlation coefficients for the predictor variables show that density is strongly correlated with transitivity. We initialized a data frame to easily compare the quality of the resulting classification methods and utilized decision trees (with k-fold cross-validation to prune the tree).
A decision tree is a non-parametric classification method that uses a set of rules to predict that each observation belongs to the most commonly occurring class label of the training data. Our method aggregates many decision trees to create an optimized model that is not susceptible to overfitting. When using a decision tree, however, it is important to use cross-validation to prune the tree in order to narrow it down to the most important variables.
Keywords: online social networks, data mining, social cloud computing, interaction and collaboration
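The first analysis step described above — checking predictor variables for strong pairwise correlation — can be sketched as follows. The authors worked in R; this Python sketch only illustrates the logic, and the feature values below are made up (the real ones come from the UC Irvine social-network dataset):

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-node features:
density      = [0.10, 0.20, 0.30, 0.40, 0.50]
transitivity = [0.12, 0.19, 0.33, 0.41, 0.48]  # tracks density closely
degree       = [5, 2, 9, 1, 7]                 # unrelated to density

print(round(pearson(density, transitivity), 3))  # close to 1, highly correlated
print(round(pearson(density, degree), 3))        # near 0, weak correlation
```

A highly correlated pair (like density and transitivity in the abstract) signals redundant predictors, which matters when pruning a decision tree down to the most important variables.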
Procedia PDF Downloads 160
1086 Simulation of the FDA Centrifugal Blood Pump Using High Performance Computing
Authors: Mehdi Behbahani, Sebastian Rible, Charles Moulinec, Yvan Fournier, Mike Nicolai, Paolo Crosetto
Abstract:
Computational fluid dynamics blood-flow simulations are increasingly used to develop and validate blood-contacting medical devices. This study shows that numerical simulations can provide additional and accurate estimates of relevant hemodynamic indicators (e.g., recirculation zones or wall shear stresses), which may be difficult and expensive to obtain from in-vivo or in-vitro experiments. The most recent FDA (Food and Drug Administration) benchmark consists of a simplified centrifugal blood pump model that contains fluid flow features as they are commonly found in these devices, with a clear focus on highly turbulent phenomena. The FDA centrifugal blood pump study comprises six test cases with different volumetric flow rates ranging from 2.5 to 7.0 liters per minute, pump speeds, and Reynolds numbers ranging from 210,000 to 293,000. Within the frame of this study, different turbulence models were tested, including RANS models (e.g., k-omega, k-epsilon and a Reynolds Stress Model, RSM) and LES. The partitioners Hilbert, METIS, ParMETIS and SCOTCH were used to create an unstructured mesh of 76 million elements and were compared in their efficiency. Computations were performed on the JUQUEEN BG/Q architecture using the highly parallel flow solver Code_Saturne, typically with 32,768 or more processors in parallel. Visualizations were performed with ParaView. The different turbulence models for all six flow situations were successfully analysed and validated against analytical considerations and by comparison to other databases. The results show that an RSM is an appropriate choice for modeling high-Reynolds-number flow cases; the Rij-SSG (Speziale, Sarkar, Gatski) variant in particular turned out to be a good approach. Visualizations of complex flow features were obtained, and the flow situation inside the pump was characterized.
Keywords: blood flow, centrifugal blood pump, high performance computing, scalability, turbulence
Procedia PDF Downloads 382
1085 Authentic Leadership, Task Performance, and Organizational Citizenship Behavior
Authors: C. V. Chen, Y. H. Jeng, S. J. Wang
Abstract:
Leadership is essential to enhancing followers' psychological empowerment and affects their willingness to take on extra-role behavior and aim for greater performance. Authentic leadership is confirmed to promote employees' positive affect, psychological empowerment, well-being, and performance. Employees' spontaneous undertaking of organizationally desired behaviors allows organizations to gain an edge in the fiercely competitive business environment. Apart from the contextual factor of leadership, an individual's goal orientation is found to be highly related to his or her performance. To better understand the psychological process and the potential moderation of personal goal orientation, this study investigates the effect of authentic leadership on employees' task performance and organizational citizenship behavior, with psychological empowerment as the mediating factor and goal orientation as the moderating factor.
Keywords: authentic leadership, task performance, organizational citizenship behavior, goal orientation
Procedia PDF Downloads 792
1084 Landslide Susceptibility Mapping Using Soft Computing in Amhara Saint
Authors: Semachew M. Kassa, Africa M Geremew, Tezera F. Azmatch, Nandyala Darga Kumar
Abstract:
Frequency ratio (FR) and analytical hierarchy process (AHP) methods are developed based on past landslide failure points to identify the landslide susceptibility mapping because landslides can seriously harm both the environment and society. However, it is still difficult to select the most efficient method and correctly identify the main driving factors for particular regions. In this study, we used fourteen landslide conditioning factors (LCFs) and five soft computing algorithms, including Random Forest (RF), Support Vector Machine (SVM), Logistic Regression (LR), Artificial Neural Network (ANN), and Naïve Bayes (NB), to predict the landslide susceptibility at 12.5 m spatial scale. The performance of the RF (F1-score: 0.88, AUC: 0.94), ANN (F1-score: 0.85, AUC: 0.92), and SVM (F1-score: 0.82, AUC: 0.86) methods was significantly better than the LR (F1-score: 0.75, AUC: 0.76) and NB (F1-score: 0.73, AUC: 0.75) method, according to the classification results based on inventory landslide points. The findings also showed that around 35% of the study region was made up of places with high and very high landslide risk (susceptibility greater than 0.5). The very high-risk locations were primarily found in the western and southeastern regions, and all five models showed good agreement and similar geographic distribution patterns in landslide susceptibility. The towns with the highest landslide risk include Amhara Saint Town's western part, the Northern part, and St. Gebreal Church villages, with mean susceptibility values greater than 0.5. However, rainfall, distance to road, and slope were typically among the top leading factors for most villages. The primary contributing factors to landslide vulnerability were slightly varied for the five models. Decision-makers and policy planners can use the information from our study to make informed decisions and establish policies. 
It also suggests that various places should take different safeguards to reduce or prevent serious damage from landslide events.
Keywords: artificial neural network, logistic regression, landslide susceptibility, naïve Bayes, random forest, support vector machine
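The two evaluation metrics quoted above, F1-score and AUC, can be computed from first principles without any ML framework. The plain-Python sketch below uses invented scores and counts rather than the study's data, and computes AUC as the fraction of correctly ordered positive/negative pairs:

```python
from itertools import product

def f1_score(tp, fp, fn):
    """F1 = harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def roc_auc(pos_scores, neg_scores):
    """AUC as the probability that a random landslide (positive) point
    outscores a random stable (negative) point; ties count 0.5."""
    wins = 0.0
    for p, n in product(pos_scores, neg_scores):
        if p > n:
            wins += 1.0
        elif p == n:
            wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Illustrative susceptibility scores for landslide and stable
# inventory points -- hypothetical, not the paper's data.
pos = [0.91, 0.84, 0.67, 0.55]
neg = [0.40, 0.31, 0.62, 0.12]
print(round(roc_auc(pos, neg), 3))              # 0.938
print(round(f1_score(tp=88, fp=12, fn=12), 2))  # 0.88
```

With scores like these, a single mis-ordered pair (0.55 vs 0.62) is enough to pull the AUC below 1.0, which is why AUC is a useful complement to threshold-based metrics such as F1.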
Procedia PDF Downloads 84
1083 Intelligent Grading System of Apple Using Neural Network Arbitration
Authors: Ebenezer Obaloluwa Olaniyi
Abstract:
In this paper, an intelligent system has been designed to grade apples as either defective or healthy for use in food processing. The work is divided into two phases. In the first phase, image processing techniques were employed to extract the necessary features from the apple images. These techniques include grayscale conversion and segmentation, in which a threshold value is chosen to separate the foreground of the images from the background; edge detection was then employed to bring out the features in the images. In the second phase, the extracted features were fed into a neural network for classification, which distinguishes defective apples from healthy ones. In this phase, a feed-forward network was trained with back-propagation and then tested. The recognition rate obtained shows that our system is more accurate and faster than previous work.
Keywords: image processing, neural network, apple, intelligent system
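The first-phase pipeline (grayscale conversion, threshold segmentation, edge detection) can be sketched in plain Python on a toy image. The image, threshold value, and choice of Sobel kernels here are illustrative assumptions, not the paper's implementation:

```python
def to_gray(rgb):
    """Luminance grayscale conversion of an RGB image (nested lists)."""
    return [[0.299*r + 0.587*g + 0.114*b for (r, g, b) in row] for row in rgb]

def threshold(gray, t):
    """Binary segmentation: foreground (apple) = 1, background = 0."""
    return [[1 if px > t else 0 for px in row] for row in gray]

def sobel_magnitude(gray):
    """Gradient magnitude via 3x3 Sobel kernels (interior pixels only)."""
    h, w = len(gray), len(gray[0])
    out = [[0.0]*w for _ in range(h)]
    for y in range(1, h-1):
        for x in range(1, w-1):
            gx = (gray[y-1][x+1] + 2*gray[y][x+1] + gray[y+1][x+1]
                  - gray[y-1][x-1] - 2*gray[y][x-1] - gray[y+1][x-1])
            gy = (gray[y+1][x-1] + 2*gray[y+1][x] + gray[y+1][x+1]
                  - gray[y-1][x-1] - 2*gray[y-1][x] - gray[y-1][x+1])
            out[y][x] = (gx*gx + gy*gy) ** 0.5
    return out

# Toy 4x4 image: a bright reddish "apple" patch on a dark background.
img = [[(200, 30, 30) if 1 <= x <= 2 and 1 <= y <= 2 else (10, 10, 10)
        for x in range(4)] for y in range(4)]
mask = threshold(to_gray(img), 50)      # foreground mask of the patch
edges = sobel_magnitude(to_gray(img))   # strong response at patch border
```

The row of pixel values the edge detector produces at the patch boundary is exactly the kind of feature vector that would be fed to the classifier in the second phase.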
Procedia PDF Downloads 399
1082 Gender Norms and Psychological Mechanisms that Make Sexual Assault Possible
Authors: Moor Avigail
Abstract:
This research examines the gender norms that underlie the propensity to commit sexual assault and to carry it out. Factors that have been shown to relate to such propensity are enumerated and tied to their ramifications. These include sexual objectification of women, endorsement of gender-based rape myths blaming the victim, masculine entitlement, and low empathy towards victims, along with elevated empathy towards rapists. Heavy use of pornography, as well as misconstruing the meaning of a refusal of sex, has also been implicated. Additionally, a cutting-edge investigation, which we have just completed, examined what seems to occur in the perpetrator's mind during the assault. No research to date has ventured to uncover what essentially allows the rape to be carried out in real time, in the sense of what mental mechanisms go into operation in rapists during the assault itself. Our findings demonstrate that dehumanization and rationalization are pivotal: the perpetrator apparently disregards the victim's humanity while simultaneously justifying his actions in relation to the victim's behavior.
Keywords: gender norms, gender psychology, sexual assault, gender
Procedia PDF Downloads 72
1081 Torsional Design Method of Asymmetric and Irregular Building under Horizontal Earthquake Action
Authors: Radhwane Boudjelthia
Abstract:
Based on an elaborate analysis of torsional design methods for asymmetric and irregular structures under horizontal earthquake action, this paper points out that the main design principles for an asymmetric building subjected to horizontal earthquake are: the torsion of vertical members induced by the torsion angle of the floor (rigid diaphragm) cannot exceed the allowable value; the inter-story displacement at the outermost frame or shear wall should be less than that required by the design code; and stresses in the plane of the slab should be controlled within an acceptable extent under earthquakes of different intensities. The current seismic design code utilizes only the torsion displacement ratio to control floor torsion, which seems insufficient, since its connotation is the product of the floor torsion angle and the distance from the floor mass center to the edge frame or shear wall.
Keywords: earthquake, building, seismic forces, displacement, resonance, response
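As a minimal illustration of the torsion displacement ratio discussed above, the sketch below computes the ratio of the maximum edge displacement of a floor diaphragm to the average edge displacement. The displacement values and the 1.2 limit are hypothetical; the numerical limit differs between design codes:

```python
def torsion_displacement_ratio(edge_displacements):
    """Ratio of the maximum edge displacement of a floor diaphragm
    to the average of the edge displacements under lateral load."""
    avg = sum(edge_displacements) / len(edge_displacements)
    return max(edge_displacements) / avg

# Hypothetical inter-story displacements (mm) at the two outermost
# frames of one floor; 1.2 is a commonly cited regularity limit.
ratio = torsion_displacement_ratio([10.0, 14.0])
print(ratio <= 1.2)   # True: this floor passes the torsional check
```

A ratio near 1.0 indicates pure translation; the further it rises above 1.0, the more the floor rotates, which is exactly the behavior the code limit is meant to cap.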
Procedia PDF Downloads 347
1080 Single Crystal Growth in Floating-Zone Method and Properties of Spin Ladders: Quantum Magnets
Authors: Rabindranath Bag, Surjeet Singh
Abstract:
Materials in which the electrons are strongly correlated provide some of the most challenging and exciting problems in condensed matter physics today. After the discovery of high-critical-temperature superconductivity in layered, two-dimensional copper oxides, many physicists turned their attention to cuprates, leading to an upsurge of interest in the synthesis and physical properties of copper-oxide-based materials. The quest to understand the superconducting mechanism in high-temperature cuprates drew attention to somewhat simpler compounds consisting of spin chains, i.e., one-dimensional lattices of coupled spins. Low-dimensional quantum magnets are of huge contemporary interest in basic science as well as in emerging technologies such as quantum computing, quantum information theory, and heat management in microelectronic devices. The spin ladder is an example of a quasi-one-dimensional quantum magnet that provides a bridge between one- and two-dimensional materials. One example of a quasi-one-dimensional spin-ladder compound is Sr14Cu24O41, which exhibits many interesting and exciting physical phenomena of low-dimensional systems. Very recently, the ladder compound Sr14Cu24O41 was shown to exhibit long-distance quantum entanglement, crucial to quantum information theory. It is also well known that hole compensation in this material results in very high (metal-like) anisotropic thermal conductivity at room temperature. These observations suggest that Sr14Cu24O41 is a potential multifunctional material that invites further detailed investigation. To investigate these properties, one needs large, high-quality single crystals. But these systems melt incongruently, which makes it very difficult to grow large, high-quality single crystals. Hence, we use the TSFZ (Travelling Solvent Floating Zone) method to grow high-quality single crystals of these low-dimensional magnets.
Apart from this, Sr14Cu24O41 has a unique crystal structure (alternating stacks of planes containing edge-sharing CuO2 chains and planes containing two-leg Cu2O3 ladders, with intermediate Sr layers along the b-axis), which is also incommensurate in nature. It exhibits abundant physical phenomena such as spin dimerization, crystallization of charge holes, and charge density waves. Most research so far has focused on introducing defects at the A-site (Sr). There are only a few studies of B-site (Cu) doping of polycrystalline Sr14Cu24O41, the reason being the existence of two possible doping sites for Cu (the CuO2 chain and the Cu2O3 ladder). Therefore, in the present work, the crystals (pristine and Cu-site doped) were grown by the TSFZ method by tuning the growth parameters. Laue diffraction images, optical polarized microscopy, and Scanning Electron Microscopy (SEM) images confirm the quality of the grown crystals. Here, we report the single crystal growth and the magnetic and transport properties of Sr14Cu24O41 and its lightly doped variants (magnetic and non-magnetic) containing less than 1% of Co, Ni, Al, and Zn impurities. Since any real system will have some amount of weak disorder, our studies on these ladder compounds with controlled dilute disorder are significant in the present context.
Keywords: low-dimensional quantum magnets, single crystal, spin-ladder, TSFZ technique
Procedia PDF Downloads 275
1079 A Design of Beam-Steerable Antenna Array for Use in Future Mobile Handsets
Authors: Naser Ojaroudi Parchin, Atta Ullah, Haleh Jahanbakhsh Basherlou, Raed A. Abd-Alhameed, Peter S. Excell
Abstract:
A design of a beam-steerable antenna array for future cellular communications (5G) is presented. The proposed design contains eight compact end-fire antenna elements arranged on the top edge of the smartphone printed circuit board (PCB). The antenna element consists of conductive patterns on the top and bottom copper foil layers and a substrate layer with a via-hole. Simulated results, including the input impedance and the fundamental radiation properties, are presented and discussed. The impedance bandwidth (S11 ≤ -10 dB) of the antenna spans from 17.5 to 21 GHz (more than 3 GHz of bandwidth) with a resonance at 19 GHz. The antenna exhibits end-fire (directional) radiation beams with a wide-angle scanning property and could be used for future 5G beam-forming. Furthermore, the characteristics of the array design in the vicinity of the user's hand are studied.
Keywords: beam-steering, end-fire radiation mode, mobile-phone antenna, phased array
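The beam-steering principle behind such a phased array can be illustrated with the standard uniform-linear-array factor. The element count matches the eight-element design above, but the half-wavelength spacing and the angles are assumptions for illustration, not values taken from the paper's simulations:

```python
import cmath
import math

def array_factor(theta_deg, n=8, d_over_lambda=0.5, steer_deg=0.0):
    """Magnitude of the array factor of an n-element uniform linear
    array whose progressive phase shifts steer the beam to steer_deg."""
    theta = math.radians(theta_deg)
    steer = math.radians(steer_deg)
    # Inter-element phase difference for a plane wave from theta,
    # minus the applied steering phase.
    psi = 2 * math.pi * d_over_lambda * (math.sin(theta) - math.sin(steer))
    return abs(sum(cmath.exp(1j * k * psi) for k in range(n)))

# At the steering angle all eight elements add in phase: |AF| = 8.
print(round(array_factor(30.0, steer_deg=30.0), 6))  # 8.0
```

Sweeping `steer_deg` across the scan range and plotting `array_factor` over `theta_deg` reproduces the wide-angle scanning behavior the abstract describes.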
Procedia PDF Downloads 158
1078 Advancements in Truss Design for High-Performance Facades and Roof System: A Structural Analysis
Authors: Milind Anurag
Abstract:
This study investigates cutting-edge improvements in truss design that are specifically adapted to satisfy the structural demands and difficulties associated with high-performance facades and roofs in modern architectural environments. With a growing emphasis on sustainability, energy efficiency, and eye-catching architectural aesthetics, the structural components that support these characteristics play an important part in attaining the right balance of form and function. The paper seeks to contribute to the evolution of truss design methods by combining data from these investigations, giving significant insights for architects, engineers, and researchers interested in the creation of high-performance building envelopes. The findings of this study are meant to inform future design standards and practices, promoting the development of structures that seamlessly integrate architectural innovation with structural robustness and environmental responsibility.
Keywords: truss design, high-performance, facades, finite element analysis, structural efficiency
Procedia PDF Downloads 56
1077 Analysis of Three-Dimensional Cracks in an Isotropic Medium by the Semi-Analytical Method
Authors: Abdoulnabi Tavangari, Nasim Salehzadeh
Abstract:
We consider a cylindrical medium under uniform loading with a penny-shaped crack located at the center of the cylinder. In crack growth analysis, the stress intensity factor (SIF) is a fundamental prerequisite. In the present study, following the Ritz method and using a cylindrical coordinate system as the main coordinate system together with a local polar coordinate system, the mode-I SIF of a three-dimensional penny-shaped crack is obtained. In this method, the unknown coefficients are obtained by minimizing the potential energy, which comprises the strain energy and the work of the external forces. Using Hooke's law, the stress fields are obtained, and then, using the Irwin equations, the SIF is evaluated near the edge of the crack. This problem has been solved for an infinite medium in the Tada handbook, and the result of the present research is compared with that solution.
Keywords: three-dimensional cracks, penny-shaped crack, stress intensity factor, fracture mechanics, Ritz method
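The infinite-medium benchmark against which the Ritz solution is compared has a well-known closed form, K_I = 2σ√(a/π), for a penny-shaped crack of radius a under remote uniform tension σ. The snippet below evaluates it; the numerical inputs are arbitrary illustration values:

```python
import math

def sif_penny_crack(sigma, a):
    """Mode-I stress intensity factor of a penny-shaped crack of
    radius a under remote uniform tension sigma in an infinite body:
    K_I = 2*sigma*sqrt(a/pi) (the handbook closed-form benchmark)."""
    return 2.0 * sigma * math.sqrt(a / math.pi)

# sigma in MPa, a in m  ->  K_I in MPa*sqrt(m)
print(round(sif_penny_crack(sigma=100.0, a=0.01), 3))  # 11.284
```

A numerical Ritz solution for the finite cylinder should approach this value as the cylinder radius grows large relative to the crack radius, which makes the closed form a convenient convergence check.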
Procedia PDF Downloads 397
1076 Kinematic of Thrusts and Tectonic Vergence in the Paleogene Orogen of Eastern Iran, Sechangi Area
Authors: Shahriyar Keshtgar, Mahmoud Reza Heyhat, Sasan Bagheri, Ebrahim Gholami, Seyed Naser Raiisosadat
Abstract:
The ranges of eastern Iran form a Z-shaped sigmoidal outcrop with an overall N-S strike on satellite images; they have long been known as the Sistan suture zone and have recently been identified as the product of an orogenic event referred to as the Paleogene or Sistan orogen. The flysch sedimentary basin of eastern Iran was filled by a huge volume of fine-grained Eocene turbiditic sediments, smaller amounts of pelagic deposits, and Cretaceous ophiolitic slices, which are all remnants of older accretionary prisms exposed in a fold-thrust belt developed above a subduction zone beneath the Lut/Afghan block, portions of the Cimmerian superterrane. In these ranges, there are Triassic sedimentary and carbonate sequences (equivalent to the Nayband and Shotori Formations), along with scattered outcrops of Permian limestones (equivalent to the Jamal limestone) and greenschist-facies metamorphic rocks, probably belonging to the basement of the Lut block, which are in tectonic contact with younger rocks. Moreover, the younger Eocene detrital-volcanic rocks were also thrusted onto the Cretaceous or younger turbiditic deposits. The first-generation folds (parallel folds) and thrusts with slaty cleavage appeared parallel to the NE edge of the Lut block. Structural analysis shows that the dominant vergence of the thrusts is toward the southeast, so that the Permo-Triassic units of the Lut block have been thrusted onto younger rocks, including older (probably Jurassic) granites. Additional structural studies show that the regional transport direction in this deformation event was from northwest to southeast, i.e., from the outside to the inside of the orogen in the Sechangi area. The younger thrusts of the second deformation event were either formed directly during that event, or they are older thrusts that were reactivated and folded, so that often two or more sets of slickenlines can be recognized on the thrust planes.
These younger thrusts are redistributed in directions nearly perpendicular to the edge of the Lut block and parallel to the axial surfaces of the northwest-trending second-generation large-scale folds (radial folds). Some of them follow the out-of-the-syncline thrust system. Both the axial planes of these folds and the associated penetrative shear cleavage extend towards the northwest, dipping both northeast and southwest, parallel to the younger thrusts. Large-scale buckling under a layer-parallel stress field created this deformation event. Such consecutive, mutually perpendicular deformation events cannot be explained by the simple linear orogen models presented for eastern Iran so far and are more consistent with the oroclinal buckling model.
Keywords: thrust, tectonic vergence, orocline buckling, Sechangi, eastern Iranian ranges
Procedia PDF Downloads 80
1075 Morphological Features Fusion for Identifying INBREAST-Database Masses Using Neural Networks and Support Vector Machines
Authors: Nadia el Atlas, Mohammed el Aroussi, Mohammed Wahbi
Abstract:
In this paper, a novel technique for mass characterization based on robust feature fusion is presented. The proposed method consists of three main stages: (a) segmenting the masses using edge information; (b) calculating and fusing the most relevant morphological features; and (c) the classification step, which allows us to classify the images into benign and malignant masses. In this step, we implemented Support Vector Machines (SVM) and Artificial Neural Networks (ANN), which were evaluated with the following performance criteria: confusion matrix, accuracy, sensitivity, specificity, receiver operating characteristic (ROC) curve, and error histogram. The effectiveness of this new approach was evaluated on a recently developed database: the INBREAST database. The fusion of the most appropriate morphological features provided very good results. The SVM gives an accuracy of 64.3%, whereas the ANN classifier gives better results, with an accuracy of 97.5%.
Keywords: breast cancer, mammography, CAD system, features, fusion
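The performance criteria listed above all follow directly from the 2x2 confusion matrix of a benign/malignant classifier. A minimal sketch, with hypothetical counts rather than the INBREAST results reported above:

```python
def metrics_from_confusion(tp, fn, fp, tn):
    """Accuracy, sensitivity (recall on malignant cases) and
    specificity (recall on benign cases) from confusion-matrix counts:
    tp/fn refer to malignant masses, tn/fp to benign ones."""
    total = tp + fn + fp + tn
    return {
        "accuracy": (tp + tn) / total,
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }

# Hypothetical counts for an 80-image test set.
m = metrics_from_confusion(tp=38, fn=2, fp=1, tn=39)
print(m["accuracy"])  # 0.9625
```

Reporting sensitivity and specificity alongside accuracy matters here because a screening classifier that misses malignant masses (low sensitivity) can still post a high accuracy on an imbalanced test set.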
Procedia PDF Downloads 601
1074 Creating Smart and Healthy Cities by Exploring the Potentials of Emerging Technologies and Social Innovation for Urban Efficiency: Lessons from the Innovative City of Boston
Authors: Mohammed Agbali, Claudia Trillo, Yusuf Arayici, Terrence Fernando
Abstract:
The widespread adoption of the Smart City concept has introduced a new era of computing paradigms, with opportunities for city administrators and stakeholders in various sectors to rethink the concepts of urbanization and the development of healthy cities. With the world population rapidly becoming urban-centric, especially in the emerging economies, social innovation will assist greatly in deploying emerging technologies to address the development challenges in core sectors of future cities. In this context, sustainable healthcare delivery and improved quality of life are considered to be at the heart of the healthy city agenda. This paper examines the Boston innovation landscape from the perspective of smart services and the innovation ecosystem for sustainable development, especially in transportation and healthcare. It investigates the policy implementation process of the Healthy City agenda and eHealth economy innovation based on the experience of the City of Boston's initiatives in Massachusetts. For this purpose, three emerging areas are emphasized, namely the eHealth concept, the innovation hubs, and the emerging technologies that drive innovation. This was carried out through empirical analysis of the results of public-sector and industry-wide interviews and surveys about Boston's current initiatives and the enabling environment. The paper highlights a few potential research directions for service integration and social innovation in deploying emerging technologies in the healthy city agenda. The study therefore suggests the need to prioritize social innovation as an overarching strategy to build sustainable Smart Cities in order to avoid technology lock-in.
Finally, it concludes that the Boston example of an innovation economy is unique in view of its existing platforms for innovation and a proper understanding of their dynamics, which is imperative in building smart and healthy cities where the quality of life of the citizenry can be improved.
Keywords: computing paradigm, emerging technologies, equitable healthcare, healthy cities, open data, smart city, social innovation
Procedia PDF Downloads 337
1073 Determining Components of Deflection of the Vertical in Owerri West Local Government, Imo State Nigeria Using Least Square Method
Authors: Chukwu Fidelis Ndubuisi, Madufor Michael Ozims, Asogwa Vivian Ndidiamaka, Egenamba Juliet Ngozi, Okonkwo Stephen C., Kamah Chukwudi David
Abstract:
Deflection of the vertical is a quantity used in reducing geodetic measurements related to geoidal networks to the ellipsoidal plane; and it is essential in Geoid modeling processes. Computing the deflection of the vertical component of a point in a given area is necessary in evaluating the standard errors along north-south and east-west direction. Using combined approach for the determination of deflection of the vertical component provides improved result but labor intensive without appropriate method. Least square method is a method that makes use of redundant observation in modeling a given sets of problem that obeys certain geometric condition. This research work is aimed to computing the deflection of vertical component of Owerri West local government area of Imo State using geometric method as field technique. In this method combination of Global Positioning System on static mode and precise leveling observation were utilized in determination of geodetic coordinate of points established within the study area by GPS observation and the orthometric heights through precise leveling. By least square using Matlab programme; the estimated deflections of vertical component parameters for the common station were -0.0286 and -0.0001 arc seconds for the north-south and east-west components respectively. The associated standard errors of the processed vectors of the network were computed. The computed standard errors of the North-south and East-west components were 5.5911e-005 and 1.4965e-004 arc seconds, respectively. Therefore, including the derived component of deflection of the vertical to the ellipsoidal model will yield high observational accuracy since an ellipsoidal model is not tenable due to its far observational error in the determination of high quality job. 
It is therefore important to include the determined deflection-of-the-vertical components for Owerri West Local Government Area in Imo State, Nigeria.
Keywords: deflection of the vertical, ellipsoidal height, least squares, orthometric height
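The least-squares estimation step can be sketched in plain Python (rather than MATLAB) under the textbook astro-geodetic levelling model, which relates geoid-height differences observed over baselines to the two deflection components. The baseline lengths and azimuths below are invented; only the target deflection values echo the abstract:

```python
import math

ARCSEC = math.pi / (180.0 * 3600.0)  # arc seconds -> radians

def solve_deflections(baselines):
    """Least-squares estimate of the deflection components (xi: N-S,
    eta: E-W) from geoid-height differences dN observed over baselines
    of length s (m) and azimuth alpha (rad), using the model
    dN = -s*(xi*cos(alpha) + eta*sin(alpha)).
    Builds and solves the 2x2 normal equations directly."""
    a11 = a12 = a22 = b1 = b2 = 0.0
    for s, alpha, dn in baselines:
        c, sn = -s * math.cos(alpha), -s * math.sin(alpha)
        a11 += c * c; a12 += c * sn; a22 += sn * sn
        b1 += c * dn; b2 += sn * dn
    det = a11 * a22 - a12 * a12
    xi = (a22 * b1 - a12 * b2) / det
    eta = (a11 * b2 - a12 * b1) / det
    return xi / ARCSEC, eta / ARCSEC  # back to arc seconds

# Synthetic check: baselines generated from assumed deflections equal
# to the values reported above (-0.0286", -0.0001").
xi_t, eta_t = -0.0286 * ARCSEC, -0.0001 * ARCSEC
data = [(s, math.radians(az),
         -s * (xi_t * math.cos(math.radians(az))
               + eta_t * math.sin(math.radians(az))))
        for s, az in [(1200.0, 30.0), (900.0, 110.0), (1500.0, 250.0)]]
xi, eta = solve_deflections(data)
print(round(xi, 4), round(eta, 4))  # -0.0286 -0.0001
```

With noise-free synthetic observations the solver recovers the assumed components exactly; with real GPS/levelling data the residuals of the same normal equations yield the quoted standard errors.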
Procedia PDF Downloads 213
1072 Ensuring Uniform Energy Consumption in Non-Deterministic Wireless Sensor Network to Protract Networks Lifetime
Authors: Vrince Vimal, Madhav J. Nigam
Abstract:
Wireless sensor networks have attracted much of the spotlight from researchers all around the world, owing to their extensive applicability in agricultural, industrial, and military fields. Energy-conserving node deployment strategies play a notable role in the effective implementation of wireless sensor networks. Clustering is an approach in wireless sensor networks that improves the energy efficiency of the network. A clustering algorithm needs an optimum size and number of clusters, as clustering, if not implemented properly, cannot effectively increase the life of the network. In this paper, an algorithm is proposed to address connectivity issues with the aim of ensuring uniform energy consumption of nodes in every part of the network. The results obtained after simulation show that the proposed algorithm has an edge over existing algorithms in terms of throughput and network lifetime.
Keywords: wireless sensor network (WSN), random deployment, clustering, isolated nodes, network lifetime
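Why clustering with cluster-head rotation equalises energy consumption, and thereby protracts network lifetime, can be seen in a toy round-robin simulation. This is a deterministic caricature with invented per-round costs, not the algorithm proposed in the paper:

```python
# Per-round energy costs in arbitrary units: the cluster head pays
# extra for aggregation and the long-haul transmission to the sink.
CH_COST, MEMBER_COST = 2.0, 1.0

def run_rounds(n_nodes, n_rounds, rotate):
    """Residual-energy spread (max - min) after n_rounds, with the
    cluster-head role either fixed at node 0 or rotated round-robin."""
    energy = [10.0] * n_nodes
    for r in range(n_rounds):
        head = (r % n_nodes) if rotate else 0
        for i in range(n_nodes):
            energy[i] -= CH_COST if i == head else MEMBER_COST
    return max(energy) - min(energy)

spread_fixed = run_rounds(n_nodes=4, n_rounds=4, rotate=False)
spread_rotated = run_rounds(n_nodes=4, n_rounds=4, rotate=True)
print(spread_fixed, spread_rotated)  # 4.0 0.0
```

With a fixed head, node 0 drains twice as fast and dies first; rotating the role leaves every node with the same residual energy, which is the uniform-consumption property the proposed algorithm targets network-wide.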
Procedia PDF Downloads 337