Search results for: granular computing
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1198

718 Detailed Quantum Circuit Design and Evaluation of Grover's Algorithm for the Bounded Degree Traveling Salesman Problem Using the Q# Language

Authors: Wenjun Hou, Marek Perkowski

Abstract:

The Traveling Salesman Problem is famous in computing and graph theory. In short, it asks for the Hamiltonian cycle of the least total weight in a given graph with N nodes. All variations on this problem, such as those with K-bounded-degree nodes, are classified as NP-complete in classical computing. Although several papers propose theoretical high-level designs of quantum algorithms for the Traveling Salesman Problem, to the best of our knowledge no quantum circuit implementation of these algorithms has been created. In contrast to previous papers, the goal of this paper is not to optimize some abstract complexity measure based on the number of oracle iterations, but to evaluate the real circuit and time costs on a quantum computer. Using the emerging quantum programming language Q# developed by Microsoft, which runs quantum circuits in a quantum computer simulation, an implementation of the bounded-degree problem and its respective quantum circuit were created. To apply Grover’s algorithm to this problem, a quantum oracle was designed that evaluates the cost of a particular set of edges in the graph as well as its validity as a Hamiltonian cycle. Repeating Grover’s algorithm with an oracle that finds a successively lower cost each time transforms the decision problem into an optimization problem, finding the minimum cost of Hamiltonian cycles. N log₂ K qubits are put into an equiprobable superposition by applying the Hadamard gate to each qubit. Within these N log₂ K qubits, the method uses an encoding in which every node is mapped to a set of its encoded edges. The oracle consists of several blocks of circuits: a custom-written edge weight adder, node index calculator, uniqueness checker, and comparator, all built using only quantum Toffoli gates, including their special forms, the Feynman (CNOT) and Pauli-X gates.
The oracle begins by using the edge encodings specified by the qubits to calculate each node that the path visits, adding up the edge weights along the way. Next, the oracle uses the calculated nodes from the previous step and checks that all the nodes are unique. Finally, the oracle checks that the calculated cost is less than the previously calculated cost. By performing the oracle an optimal number of times, a correct answer can be generated with very high probability. The oracle of the Grover algorithm is then modified using the recalculated minimum cost value, and this procedure is repeated until the cost cannot be further reduced. This algorithm and circuit design have been verified, using several datasets, to generate correct outputs.
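
Although the paper's implementation is in Q#, the Grover iteration it describes (a phase-flip oracle followed by the diffusion operator) can be sketched classically with a flat state vector. The sketch below is illustrative only: the oracle is a trivial phase flip of known states, standing in for the paper's Toffoli-based cost, uniqueness, and comparator blocks, and all names here are assumptions.

```python
import math

def grover_search(n_states, marked, iterations=None):
    # Equiprobable superposition: the effect of a Hadamard on every qubit.
    amp = [1.0 / math.sqrt(n_states)] * n_states
    if iterations is None:
        # Near-optimal iteration count: (pi/4) * sqrt(N / M).
        iterations = round(math.pi / 4 * math.sqrt(n_states / len(marked)))
    for _ in range(iterations):
        for m in marked:               # Oracle: phase-flip the marked states.
            amp[m] = -amp[m]
        mean = sum(amp) / n_states
        amp = [2 * mean - a for a in amp]  # Diffusion: invert about the mean.
    return [a * a for a in amp]        # Measurement probabilities.

# 16 basis states (e.g., 4 qubits); state 3 plays the role of the
# lowest-cost valid cycle that the oracle would mark.
probs = grover_search(16, marked={3})
best = max(range(16), key=probs.__getitem__)
```

Repeating the search with the oracle's cost bound lowered each round, as the abstract describes, is what turns this decision procedure into minimum-cost optimization.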

Keywords: quantum computing, quantum circuit optimization, quantum algorithms, hybrid quantum algorithms, quantum programming, Grover’s algorithm, traveling salesman problem, bounded-degree TSP, minimal cost, Q# language

Procedia PDF Downloads 189
717 Case Study: Drilled Shafts Installation in Difficult Site Conditions: Loose Sand and High Water Table

Authors: Anthony El Hachem, Hosam Salman

Abstract:

Selecting the most effective construction method for drilled shafts below a high phreatic surface can be a challenging task that requires effective communication between the design and construction teams. Slurry placement, temporary casing, and permanent casing are the three most commonly used installation techniques to ensure the stability of the drilled hole before casting the concrete. Each of these methods has its implications for the installation and performance of the drilled piers. Drilled shafts were designed to support a fire wall for an energy project in Central Texas. The subsurface consisted of interlayers of sands and clays of varying shear strengths. The design recommended that the shafts be installed with temporary casing or slurry displacement due to the anticipated groundwater seepage through granular soils. During the foundation construction, it was very difficult to maintain the stability of the hole, and the contractor requested to install the shafts using permanent casings. Therefore, the foundation design was modified to ensure that the cased shafts achieve the required load capacity. Effective and continuous communication among the owner, contractor, and design team during field shaft installation helped the team mitigate the unforeseen challenges and successfully complete the project.

Keywords: construction challenges, deep foundations, drilled shafts, loose sands, high water table, permanent casing

Procedia PDF Downloads 191
716 Methods for Solving Identification Problems

Authors: Fadi Awawdeh

Abstract:

In this work, we highlight the key concepts in using semigroup theory as a methodology for constructing efficient formulas for solving inverse problems. The proposed method depends on some results concerning integral equations. The experimental results show the potential and limitations of the method and suggest directions for future work.

Keywords: identification problems, semigroup theory, methods for inverse problems, scientific computing

Procedia PDF Downloads 480
715 Comparison of Regime Transition between Ellipsoidal and Spherical Particle Assemblies in a Model Shear Cell

Authors: M. Hossain, H. P. Zhu, A. B. Yu

Abstract:

This paper presents a numerical investigation of the regime transition of a flow of ellipsoidal particles and a comparison with that of a spherical particle assembly. Particle assemblies consisting of spherical particles and of ellipsoidal particles with a 2.5:1 aspect ratio are examined separately under similar flow conditions in a shear cell model developed numerically on the basis of the discrete element method. Correlations among elastically scaled stress, kinetically scaled stress, coordination number, and volume fraction are investigated and show important similarities and differences between the spherical and ellipsoidal particle assemblies. In particular, the volume fractions at the points of regime transition are identified for both types of particles. It is found that, compared with the spherical particle assembly, the ellipsoidal particle assembly has a higher volume fraction at the quasistatic-to-intermediate regime transition and a lower volume fraction at the intermediate-to-inertial regime transition. Finally, the relationship between coordination number and volume fraction shows strikingly distinct features for the two cases, suggesting that, unlike for spherical particles, the effect of the shear rate on the coordination number is not significant for ellipsoidal particles. This work offers a glimpse of ongoing research in one of the most active areas of this field and holds wide prospects for understanding the rheology of more complex-shaped particles, building on the strong foundation of simpler spherical-particle rheology.
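
A standard way to place a sheared granular assembly in a regime is the dimensionless inertial number I = γ̇ d √(ρ/P). The paper identifies transitions via volume fraction rather than I, but the sketch below, whose threshold values are commonly quoted rough bounds assumed here for illustration, shows the kind of quasistatic/intermediate/inertial classification involved.

```python
import math

def inertial_number(shear_rate, diameter, pressure, density):
    # I = gamma_dot * d * sqrt(rho / P); shear_rate in 1/s, diameter in m,
    # pressure in Pa, particle density in kg/m^3.
    return shear_rate * diameter * math.sqrt(density / pressure)

def flow_regime(i_number, quasistatic_limit=1e-3, inertial_limit=1e-1):
    # The limits are illustrative assumptions, not the volume-fraction
    # transition points identified in the paper.
    if i_number < quasistatic_limit:
        return "quasistatic"
    if i_number > inertial_limit:
        return "inertial"
    return "intermediate"

# A slow shear of 2.5 mm particles under 1 kPa confinement.
regime = flow_regime(inertial_number(1e-4, 2.5e-3, 1e3, 2500))
```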

Keywords: DEM, granular rheology, non-spherical particles, regime transition

Procedia PDF Downloads 261
714 Internet of Things, Edge and Cloud Computing in Rock Mechanical Investigation for Underground Surveys

Authors: Esmael Makarian, Ayub Elyasi, Fatemeh Saberi, Olusegun Stanley Tomomewo

Abstract:

Rock mechanical investigation is one of the most crucial activities in underground operations, especially in surveys related to hydrocarbon exploration and production, geothermal reservoirs, energy storage, mining, and geotechnics. There is a wide range of traditional methods for deriving, collecting, and analyzing rock mechanics data. However, these approaches may not be suitable or work perfectly in some situations, such as fractured zones. Cutting-edge technologies have been developed to solve and optimize the mentioned issues. The Internet of Things (IoT), Edge Computing (ECt), and Cloud Computing (CCt) are among the most widely used new artificial intelligence methods employed for geomechanical studies. IoT devices act as sensors and cameras for real-time monitoring and mechanical-geological data collection of rocks, such as temperature, movement, pressure, or stress levels. Other benefits of IoT technologies include assessing structural integrity, especially for cap rocks within hydrocarbon systems, and rock mass behavior, supporting further activities such as enhanced oil recovery (EOR) and underground gas storage (UGS), and improving safety risk management (SRM) and potential hazard identification (PHI). ECt can process, aggregate, and analyze data collected by IoT on a real-time scale, providing detailed insights into the behavior of rocks in various situations (e.g., stress, temperature, and pressure), establishing patterns quickly, and detecting trends. Therefore, this state-of-the-art and useful technology can support autonomous systems in rock mechanical surveys, such as drilling and production (in hydrocarbon wells) or excavation (in the mining and geotechnics industries). Besides, ECt allows all rock-related operations to be controlled remotely and enables operators to apply changes or make adjustments. It must be mentioned that this feature is very important for environmental goals.
More often than not, rock mechanical studies draw on different data, such as laboratory tests, field operations, and indirect information like seismic or well-logging data. CCt provides a useful platform for storing and managing large volumes of heterogeneous information, which can be very useful in fractured zones. Additionally, CCt supplies powerful tools for predicting, modeling, and simulating rock mechanical information, especially in fractured zones within vast areas. It is also a suitable means for sharing extensive information on rock mechanics, such as the direction and size of fractures in a large oil field or mine. The comprehensive review findings demonstrate that digital transformation through integrated IoT, Edge, and Cloud solutions is revolutionizing traditional rock mechanical investigation. These advanced technologies have enabled real-time monitoring, predictive analysis, and data-driven decision-making, culminating in noteworthy enhancements in safety, efficiency, and sustainability. Therefore, by employing IoT, CCt, and ECt, underground operations have experienced a significant boost, allowing for timely and informed actions using real-time data insights. The successful implementation of IoT, CCt, and ECt has led to safer operations, optimized processes, and environmentally conscious approaches in underground geological endeavors.

Keywords: rock mechanical studies, internet of things, edge computing, cloud computing, underground surveys, geological operations

Procedia PDF Downloads 59
713 ABET Accreditation Process for Engineering and Technology Programs: Detailed Process Flow from Criteria 1 to Criteria 8

Authors: Amit Kumar, Rajdeep Chakrabarty, Ganesh Gupta

Abstract:

This paper illustrates the detailed accreditation process of the Accreditation Board for Engineering and Technology (ABET) for accrediting engineering and technology programs. ABET is a non-governmental agency that accredits engineering and technology, applied and natural sciences, and computing sciences programs. ABET was founded on 10 May 1932 by the Institute of Electrical and Electronics Engineers. International industries accept ABET-accredited institutes as having the highest standards in their academic programs. The accreditation comprises eight criteria: criterion 1 describes the student outcome evaluations, criterion 2 measures the program's educational objectives, criterion 3 is the student outcomes calculated from the marks obtained by students, criterion 4 establishes continuous improvement, criterion 5 focuses on the curriculum of the institute, criterion 6 is about the faculty of the institute, criterion 7 measures the facilities provided by the institute, and finally, criterion 8 focuses on institutional support towards the staff of the institute. In this paper, we focus on the calculative part of each criterion with equations and suitable examples, the files and documentation required for each criterion, and the total workflow of the process. The references and the values used to illustrate the calculations are all taken from the samples provided on ABET's official website. In the final section, we also discuss the criterion-wise score weightage, followed by evaluation with timeframes and deadlines.
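
As one example of the "calculative part" such a process involves, outcome attainment is often computed as the fraction of students meeting a target score on an assessment mapped to that outcome. The formula, threshold, and function name below are assumptions for illustration, not ABET's prescribed computation.

```python
def outcome_attainment(scores, max_mark, target_fraction=0.6):
    # Fraction of students scoring at or above target_fraction of the
    # maximum mark on assessments mapped to one student outcome.
    threshold = target_fraction * max_mark
    met = sum(1 for s in scores if s >= threshold)
    return met / len(scores)

# Five students, marked out of 100, with a 60% target: three of the
# five meet the threshold.
attainment = outcome_attainment([55, 72, 48, 90, 66], max_mark=100)
```

A program would typically compare such per-outcome attainment values against internally set benchmarks as evidence for continuous improvement.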

Keywords: Engineering Accreditation Committee, Computing Accreditation Committee, performance indicator, Program Educational Objective, ABET Criterion 1 to 7, IEEE, National Board of Accreditation, MOOCS, Board of Studies, stakeholders, course objective, program outcome, articulation, attainment, CO-PO mapping, CO-PO-SO mapping, PDCA cycle, degree certificates, course files, course catalogue

Procedia PDF Downloads 57
712 Strength of Soft Clay Reinforced with Polypropylene Column

Authors: Muzamir Hasan, Anas Bazirgan

Abstract:

Granular columns are a technique with the properties of improving bearing capacity, accelerating the dissipation of excess pore water pressure, and reducing settlement in weak, soft soil. This research aims to investigate the role of polypropylene (PP) columns in improving the shear strength and compressibility of soft reconstituted kaolin clay by determining the effects of the area replacement ratio, height penetration ratio, and volume replacement ratio of a singular polypropylene column on the strength characteristics. Reinforced kaolin samples were subjected to Unconfined Compression (UCT) and Unconsolidated Undrained (UU) triaxial tests. The kaolin samples were 50 mm in diameter and 100 mm in height. Using the PP column reinforcement with ratios of 0.8, 0.5 and 0.3, shear strength increased approximately 5.27%, 26.22% and 64.28%, and 37.14%, 42.33% and 51.17%, for area replacement ratios of 25% and 10.24%, respectively. Meanwhile, UU testing showed an increase in shear strength of 24.01%, 23.17% and 23.49%, and 28.79%, 27.29% and 30.81%, for the same ratios. Based on the UCT results, the undrained shear strength generally increased with decreasing height penetration ratio. However, based on the Mohr-Coulomb failure criteria applied to the UU test results, the installation of polypropylene columns did not show any significant difference in the effective friction angle, although there was an increase in the apparent cohesion and undrained shear strength of the kaolin clay. In conclusion, polypropylene columns greatly improved the shear strength and could therefore be implemented to reduce the cost of soil improvement as a replacement for non-renewable materials.

Keywords: polypropylene, UCT, UU test, Kaolin S300, ground improvement

Procedia PDF Downloads 328
711 Morphological, Mechanical, and Tribological Properties Investigations of CMTed Parts of Al-5356 Alloy

Authors: Antar Bouhank, Youcef Beellal, Samir Adjel, Abdelmadjid Ababsa

Abstract:

This paper investigates the impact of 3D printing parameters using the cold metal transfer (CMT) technique on the morphological, mechanical, and tribological properties of walls and massive parts made from an aluminum alloy. The parameters studied include current intensity, torch movement speed, printing increment, and the flow rate of the shielding gas. The manufactured parts are walls and massive parts with different filling strategies, using grid and zigzag patterns, produced at different current intensities. The main goal of the article is to identify welding parameters that yield parts with few defects and improved morphological, mechanical, and tribological properties. The results show that high current intensity causes rapid solidification, resulting in high porosity and low hardness values; very rapid solidification can also increase the melting point, so the part retains its most stable shape. Furthermore, the results show an evident relationship among hardness, coefficient of friction, and wear: the higher the current intensity, the lower the hardness, and the same holds for the coefficient of friction. Micrography of the walls shows a random granular structure with fine grain boundaries and varying grain sizes. Some interesting results are presented in this paper.

Keywords: aluminum alloy, porosity, microstructures, hardness

Procedia PDF Downloads 45
710 A Study on the Reinforced Earth Walls Using Sandwich Backfills under Seismic Loads

Authors: Kavitha A. S., L. Govindaraju

Abstract:

Reinforced earth walls offer an excellent solution to many problems associated with earth retaining structures, especially under seismic conditions. The use of cohesive soils as backfill material reduces the cost of reinforced soil walls if proper drainage measures are taken. This paper presents a numerical study on the application of a new technique, called the sandwich technique, in reinforced earth walls. In this technique, a thin layer of granular soil is placed above and below the reinforcement layer to mobilize interface friction, and the remaining portion of the backfill is filled using the existing in-situ cohesive soil. A 6 m high reinforced earth wall has been analysed as a two-dimensional plane-strain finite element model. Three types of reinforcing elements were used: geotextile, geogrid, and metallic strips. The horizontal wall displacements and the tensile loads in the reinforcement were used as the criteria to evaluate the results at the end of the construction and dynamic excitation phases. Also, to verify the effectiveness of the sandwich layer on the performance of the wall, the thickness of the sand fill surrounding the reinforcement was varied. At the end of the construction stage, the wall with sandwich-type backfill yielded lower displacements than the wall with cohesive soil as backfill. Also, with sandwich backfill, the reinforcement loads reduced substantially compared to the wall with cohesive soil as backfill. Further, it is found that the sandwich technique as backfill and geogrid as reinforcement is a good combination to reduce the deformations of geosynthetic-reinforced walls during seismic loading.

Keywords: geogrid, geotextile, reinforced earth, sandwich technique

Procedia PDF Downloads 285
709 Research and Implementation of Cross-domain Data Sharing System in Net-centric Environment

Authors: Xiaoqing Wang, Jianjian Zong, Li Li, Yanxing Zheng, Jinrong Tong, Mao Zhan

Abstract:

With the rapid development of network and communication technology, a great deal of data has been generated in different domains of a network. These data show a trend of increasing scale and more complex structure. Therefore, an effective and flexible cross-domain data-sharing system is needed. The Cross-domain Data Sharing System (CDSS) in a net-centric environment is composed of three sub-systems. The data distribution sub-system provides a data exchange service through publish-subscribe technology that supports asynchronism and multi-to-multi communication, which adapts to the needs of a dynamic and large-scale distributed computing environment. The access control sub-system adopts Attribute-Based Access Control (ABAC) technology to uniformly model various data attributes such as subject, object, permission, and environment, which effectively monitors the activities of users accessing resources and ensures that legitimate users get effective access control rights within a legal time. The cross-domain access security negotiation sub-system automatically determines the access rights between different security domains in the process of interactive disclosure of digital certificates and access control policies, through trust policy management and negotiation algorithms, which provides an effective means for cross-domain trust relationship establishment and access control in a distributed environment. The CDSS's asynchronous, multi-to-multi, and loosely-coupled communication features can adapt well to data exchange and sharing in dynamic, distributed, and large-scale network environments. In future work, we will extend CDSS with new features to support mobile computing environments.
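
The ABAC model the abstract describes, with subject, object, action, and environment attributes evaluated against policy rules, can be sketched as a simple policy-matching check. The attribute names and policy shape below are illustrative assumptions, not the CDSS implementation.

```python
def abac_permit(subject, obj, action, environment, policies):
    # A request is permitted if every attribute constraint in at least
    # one policy rule matches the corresponding request attributes.
    request = {"subject": subject, "object": obj,
               "action": action, "environment": environment}
    return any(
        all(request[part].get(attr) == value
            for part, attrs in rule.items()
            for attr, value in attrs.items())
        for rule in policies
    )

# Hypothetical cross-domain read policy: analysts in domain A may read
# internal objects within the legally permitted time window.
policies = [{
    "subject": {"domain": "A", "role": "analyst"},
    "object": {"classification": "internal"},
    "action": {"name": "read"},
    "environment": {"within_legal_time": True},
}]

allowed = abac_permit({"domain": "A", "role": "analyst"},
                      {"classification": "internal"},
                      {"name": "read"},
                      {"within_legal_time": True},
                      policies)
```

Modeling the environment as just another attribute set is what lets a system like this enforce time-bounded access uniformly with the other checks.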

Keywords: data sharing, cross-domain, data exchange, publish-subscribe

Procedia PDF Downloads 122
708 Enhancing Scalability in Ethereum Network Analysis: Methods and Techniques

Authors: Stefan K. Behfar

Abstract:

The rapid growth of the Ethereum network has brought forth the urgent need for scalable analysis methods to handle the increasing volume of blockchain data. In this research, we propose efficient methodologies for making Ethereum network analysis scalable. Our approach leverages a combination of graph-based data representation, probabilistic sampling, and parallel processing techniques to achieve unprecedented scalability while preserving critical network insights. Data Representation: We develop a graph-based data representation that captures the underlying structure of the Ethereum network. Each block transaction is represented as a node in the graph, while the edges signify temporal relationships. This representation ensures efficient querying and traversal of the blockchain data. Probabilistic Sampling: To cope with the vastness of the Ethereum blockchain, we introduce a probabilistic sampling technique. This method strategically selects a representative subset of transactions and blocks, allowing for concise yet statistically significant analysis. The sampling approach maintains the integrity of the network properties while significantly reducing the computational burden. Graph Convolutional Networks (GCNs): We incorporate GCNs to process the graph-based data representation efficiently. The GCN architecture enables the extraction of complex spatial and temporal patterns from the sampled data. This combination of graph representation and GCNs facilitates parallel processing and scalable analysis. Distributed Computing: To further enhance scalability, we adopt distributed computing frameworks such as Apache Hadoop and Apache Spark. By distributing computation across multiple nodes, we achieve a significant reduction in processing time and enhanced memory utilization. Our methodology harnesses the power of parallelism, making it well-suited for large-scale Ethereum network analysis. 
Evaluation and Results: We extensively evaluate our methodology on real-world Ethereum datasets covering diverse time periods and transaction volumes. The results demonstrate its superior scalability, outperforming traditional analysis methods. Our approach successfully handles the ever-growing Ethereum data, empowering researchers and developers with actionable insights from the blockchain. Case Studies: We apply our methodology to real-world Ethereum use cases, including detecting transaction patterns, analyzing smart contract interactions, and predicting network congestion. The results showcase the accuracy and efficiency of our approach, emphasizing its practical applicability in real-world scenarios. Security and Robustness: To ensure the reliability of our methodology, we conduct thorough security and robustness evaluations. Our approach demonstrates high resilience against adversarial attacks and perturbations, reaffirming its suitability for security-critical blockchain applications. Conclusion: By integrating graph-based data representation, GCNs, probabilistic sampling, and distributed computing, we achieve network scalability without compromising analytical precision. This approach addresses the pressing challenges posed by the expanding Ethereum network, opening new avenues for research and enabling real-time insights into decentralized ecosystems. Our work contributes to the development of scalable blockchain analytics, laying the foundation for sustainable growth and advancement in the domain of blockchain research and application.
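
The graph representation and probabilistic node sampling described above can be sketched in a few lines. The transaction fields and the consecutive-in-time edge encoding below are simplified assumptions for illustration, not the paper's exact construction.

```python
import random

def build_graph(transactions):
    # Each transaction is a node; edges join transactions that are
    # consecutive in time, a minimal encoding of temporal relationships.
    ordered = sorted(transactions, key=lambda t: t["timestamp"])
    adj = {t["hash"]: set() for t in ordered}
    for a, b in zip(ordered, ordered[1:]):
        adj[a["hash"]].add(b["hash"])
        adj[b["hash"]].add(a["hash"])
    return adj

def sample_subgraph(adj, keep_prob, seed=0):
    # Probabilistic sampling: keep each node independently with
    # probability keep_prob, then take the induced subgraph.
    rng = random.Random(seed)
    kept = {n for n in adj if rng.random() < keep_prob}
    return {n: adj[n] & kept for n in kept}

txs = [{"hash": h, "timestamp": t}
       for h, t in [("0xa", 3), ("0xb", 1), ("0xc", 2)]]
graph = build_graph(txs)
```

An induced subgraph produced this way can then be fed to a GCN or a distributed framework, with the sampling probability trading analysis cost against statistical fidelity.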

Keywords: Ethereum, scalable network, GCN, probabilistic sampling, distributed computing

Procedia PDF Downloads 75
707 Use Cloud-Based Watson Deep Learning Platform to Train Models Faster and More Accurate

Authors: Susan Diamond

Abstract:

Machine learning workloads have traditionally been run in high-performance computing (HPC) environments, where users log in to dedicated machines and utilize the attached GPUs to run training jobs on huge datasets. Training of large neural network models is very resource-intensive, and even after exploiting parallelism and accelerators such as GPUs, a single training job can still take days. Consequently, the cost of hardware is a barrier to entry. Even when upfront cost is not a concern, the lead time to set up such an HPC environment takes months, from acquiring hardware to setting it up with the right firmware and software installed and configured. Furthermore, scalability is hard to achieve in a rigid traditional lab environment, which is therefore slow to react to dynamic change in the artificial intelligence industry. Watson Deep Learning as a Service is a cloud-based deep learning platform that mitigates the long lead time and high upfront investment in hardware. It enables robust and scalable sharing of resources among the teams in an organization and is designed for on-demand cloud environments. Providing a similar user experience in a multi-tenant cloud environment comes with its own unique challenges regarding fault tolerance, performance, and security. Watson Deep Learning as a Service tackles these challenges and presents a deep learning stack for cloud environments in a secure, scalable, and fault-tolerant manner. It supports a wide range of deep learning frameworks, such as TensorFlow, PyTorch, Caffe, Torch, Theano, and MXNet. These frameworks reduce the effort and skillset required to design, train, and use deep learning models. Deep Learning as a Service is used at IBM by AI researchers in areas including machine translation, computer vision, and healthcare.

Keywords: deep learning, machine learning, cognitive computing, model training

Procedia PDF Downloads 207
706 Comparison of Several Diagnostic Methods for Detecting Bovine Viral Diarrhea Virus Infection in Cattle

Authors: Azizollah Khodakaram- Tafti, Ali Mohammadi, Ghasem Farjanikish

Abstract:

Bovine viral diarrhea virus (BVDV) is one of the most important viral pathogens of cattle worldwide, belonging to the genus Pestivirus, family Flaviviridae. The aim of the present study was to compare several diagnostic methods and to determine the prevalence of BVDV infection for the first time in dairy herds of Fars province, Iran. For initial screening, a total of 400 blood samples were randomly collected from 12 industrial dairy herds and analyzed using reverse transcription (RT)-PCR on the buffy coat. In the second step, blood samples and ear notch biopsies were collected from 100 cattle of the infected farms and tested by antigen capture ELISA (ACE), RT-PCR, and immunohistochemistry (IHC). Nested RT-PCR (outer primers 0I100/1400R and inner primers BD1/BD2) was positive in 16 of the 400 buffy coat samples (4%), indicating acute infection, in the initial screening. Also, 8 of the 100 samples (2%) were positive as persistent infection (PI) by all of the diagnostic tests similarly, including RT-PCR, ACE, and IHC on buffy coat, serum, and skin samples, respectively. Immunoreactivity for BVDV antigen, appearing as brown, coarsely to finely granular staining, was observed within the cytoplasm of epithelial cells of the epidermis and hair follicles and also in subcutaneous stromal cells. These findings confirm the importance of monitoring BVDV infection in cattle of this region and suggest detection and elimination of PI calves for controlling and eradicating this disease.

Keywords: antigen capture ELISA, bovine viral diarrhea virus, immunohistochemistry, RT-PCR, cattle

Procedia PDF Downloads 363
705 Computational Fluid Dynamics (CFD) Simulation Approach for Developing New Powder Dispensing Device

Authors: Revanth Rallapalli

Abstract:

Manually dispensing solids and powders can be difficult, as it requires gradually pouring while checking the dispensed amount on a scale. Current systems are manual and non-continuous in nature, are user-dependent, and make powder dispensation difficult to control. Recurrent dosing of powdered medicines in precise amounts quickly and accurately has been a long-standing challenge. Various new powder dispensing mechanisms are being designed to overcome these challenges. A battery-operated screw conveyor mechanism is being developed to overcome the problems described above. Such inventions are numerically evaluated at the concept-development stage by employing Computational Fluid Dynamics (CFD) simulation of gas-solids multiphase flow systems. CFD has been very helpful in the development of such devices, saving time and money by reducing the number of prototypes and tests. Furthermore, this paper describes a simulation of powder dispensation from the trocar's end: the powder, treated as a secondary phase in air, is simulated using the Dense Discrete Phase Model incorporated with the Kinetic Theory of Granular Flow (DDPM-KTGF). With the powder volume fraction taken as 50%, powder is transported from the inlet side to the trocar's end by rotation of the screw conveyor. The performance is calculated over a 1 s time frame in an unsteady computation. This methodology will help designers develop and refine design concepts to improve dispensation within a quick turnaround time.
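
Before any CFD run, the dispensing rate of such a screw conveyor can be estimated at concept level from its annular cross-section, pitch, and speed using the standard volumetric throughput estimate. The fill fraction and the example dimensions below are assumptions for illustration, not values from the paper.

```python
import math

def screw_dispense_rate(outer_d, shaft_d, pitch, rpm,
                        fill_fraction, bulk_density):
    # Mass flow rate (kg/s) from the conventional screw conveyor
    # throughput estimate: annular area * pitch * rev/s * fill * density.
    area = math.pi / 4 * (outer_d**2 - shaft_d**2)          # m^2
    volumetric = area * pitch * (rpm / 60.0) * fill_fraction  # m^3/s
    return volumetric * bulk_density

# A hypothetical miniature conveyor: 10 mm flight diameter, 4 mm shaft,
# 8 mm pitch, 60 rpm, 50% fill (matching the abstract's volume
# fraction), and an assumed powder bulk density of 600 kg/m^3.
rate = screw_dispense_rate(0.01, 0.004, 0.008, 60, 0.5, 600)
```

Because the estimate is linear in speed, dose size can in principle be metered by counting revolutions, which is what the CFD study would then verify against slip and partial filling effects.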

Keywords: DDPM-KTGF, gas-solids multiphase flow, screw conveyor, unsteady

Procedia PDF Downloads 180
704 A Review of the Factors That Influence on Nutrient Removal in Upflow Filters

Authors: Ali Alzeyadi, Edward Loffill, Rafid Alkhaddar, Ali Alattabi

Abstract:

Phosphate, ammonium, and nitrate are forms of nutrients released from different sources. High nutrient levels contribute to the eutrophication of water bodies by accelerating the extraordinary growth of algae. Recently, many filtration and treatment systems have been developed and used for different removal processes. Owing to the enhanced operational aspects of the up-flow, continuous, granular media filter, researchers have become interested in further developing this technology and its performance for nutrient removal from wastewater. Environmental factors significantly affect filtration performance, and understanding their impact will help to maintain the nutrient removal process. Phosphate removal by phosphate sorption materials (PSMs) and biological nitrogen removal are the nutrient removal methods discussed in this paper; hence, the factors that influence these processes are the scope of this work. The findings showed several factors affecting both removal processes: the size, shape, and roughness of the filter media particles play a crucial role in supporting biofilm formation and, at the same time, affect the surface reactivity between the media and phosphate. Many studies allude to factors that significantly influence biological nitrogen removal, such as dissolved oxygen, temperature, and pH, owing to the sensitivity of biological processes, whereas phosphate removal by PSMs is less affected by these factors. This review helps researchers build a comprehensive approach to studying nutrient removal in up-flow filtration systems.

Keywords: nitrogen biological treatment, nutrients, PSMs, upflow filter, wastewater treatment

Procedia PDF Downloads 321
703 Understanding Student Engagement through Sentiment Analytics of Response Times to Electronically Shared Feedback

Authors: Yaxin Bi, Peter Nicholl

Abstract:

The rapid advancement of information and communication technologies (ICT) is profoundly influencing every aspect of Higher Education. It has transformed traditional teaching, learning, assessment and feedback into a new era of Digital Education. This also introduces many challenges in capturing and understanding student engagement with their studies in Higher Education. The School of Computing at Ulster University has developed a Feedback And Notification (FAN) online tool that has been used to send students links to personalized feedback on their submitted assessments and to record how often students review the shared feedback as well as how quickly they collect it. Students initially receive the feedback via a personal email directing them to a URL link that maps to the feedback created by the academic marker. This feedback is typically a Word or PDF report including comments and the final mark for work submitted approximately three weeks before. When the student clicks on the link, their personal feedback is viewable in the browser. The FAN tool provides the academic marker with a report of when and how often a student viewed the feedback via the link. This paper presents an investigation into student engagement by analyzing the interaction timestamps and students' frequency of review. We propose an approach to modeling interaction timestamps and use sentiment classification techniques to analyze data collected over the last five years for a set of final-year and second-year modules in the School of Computing. The paper details the quantitative analysis methods and further describes students' interactions with the feedback over time on each module studied.
We group the students into different levels of engagement based on the sentiment analysis results and then suggest early targeted intervention for the set of students seen to be under-performing via our proposed model.
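The grouping step above can be sketched as follows. The thresholds and labels are hypothetical illustrations, not values taken from the FAN study:

```python
from datetime import datetime, timedelta

def engagement_group(sent_at, first_viewed_at):
    """Classify a student's engagement from their feedback response time.

    Hypothetical thresholds: viewing within a day -> "engaged",
    within a week -> "moderate", later -> "at-risk".
    """
    delay = first_viewed_at - sent_at
    if delay <= timedelta(days=1):
        return "engaged"
    if delay <= timedelta(days=7):
        return "moderate"
    return "at-risk"

# Example: feedback email sent one morning, viewed at various delays.
sent = datetime(2023, 3, 1, 9, 0)
print(engagement_group(sent, sent + timedelta(hours=5)))   # engaged
print(engagement_group(sent, sent + timedelta(days=3)))    # moderate
print(engagement_group(sent, sent + timedelta(days=12)))   # at-risk
```

In practice the groups would be derived from the sentiment classification over the full timestamp history rather than fixed cut-offs.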

Keywords: feedback, engagement, interaction modelling, sentiment analysis

Procedia PDF Downloads 102
702 Increasing Employee Productivity and Work Well-Being by Employing Affective Decision Support and a Knowledge-Based System

Authors: Loreta Kaklauskiene, Arturas Kaklauskas

Abstract:

This affective system for employee productivity and work well-being aims to maximise the work performance of personnel and boost well-being in offices. Affective computing, decision support, and knowledge-based systems were used in our research. The basis of this system is our European Patent application (No: EP 4 020 134 A1) and two Lithuanian patents (LT 6841, LT 6866). Our study examines ways to support efficient employee productivity and well-being by employing a mass-customised, personalised office environment. Efficient employee performance and well-being are managed by changing mass-customised office environment factors such as air pollution levels, humidity, temperature, data, information, knowledge, activities, lighting colours and intensity, scents, media, games, videos, music, and vibrations. These aspects of management generate a customised, adaptive environment for users, taking into account their emotional, affective, and physiological (MAP) states, which are measured and fed into the system. This research aims to develop an innovative method and system that analyses, customises and manages a personalised office environment according to a specific user’s MAP states in a cohesive manner. Various values of work spaces (e.g., employee utilitarian, hedonic, and perceived values) are also established throughout this process, based on the measurements that describe MAP states and other aspects related to the office environment. The main contribution of our research is the development of a real-time mass-customised office environment to boost employee performance and well-being. Acknowledgment: This work was supported by Project No. 2020-1-LT01-KA203-078100 “Minimizing the influence of coronavirus in a built environment” (MICROBE) from the European Union’s Erasmus+ program.

Keywords: affective decision support and a knowledge-based system, human resource management, employee productivity and work well-being, affective computing

Procedia PDF Downloads 106
701 Extra-Skeletal Manifestations of Histiocytosis in Pediatrics

Authors: Ayda Youssef, Mohammed Ali Khalaf, Tarek Rafaat

Abstract:

Background: Langerhans cell histiocytosis (LCH) is a rare multi-systemic disease that shows an abnormal proliferation of these cells associated with a granular infiltration affecting different structures of the human body, including the lung, liver, spleen, lymph nodes, brain, mucocutaneous tissue, soft tissue (head and neck), and salivary glands. Evaluation of the extent of disease is one of the major predictors of patient outcome. Objectives: To recognize the pathogenesis of Langerhans cell histiocytosis (LCH), describe the radiologic criteria suggestive of LCH in organs other than the bones, and illustrate the appropriate differential diagnoses for LCH in each of the common extra-osseous sites. Material and methods: A retrospective study was done on 150 biopsy-proven LCH patients from 2007 to 2012. All patients underwent imaging studies, mostly US, CT, and MRI. These patients were reviewed to assess the extra-skeletal manifestations of LCH. Results: Among the 150 patients with biopsy-proven LCH, there were 33 patients with liver involvement, 5 patients with splenic lesions, 55 patients with enlarged lymph nodes, 9 patients with CNS disease, and 11 patients with lung involvement. Conclusions: Because LCH is frequent in children and evaluation of the extent of disease is one of the major predictors of patient outcome, radiologists need to be familiar with its presentation in different organs and regions of the body outside the commonest site of involvement (the bones). A high index of suspicion should be maintained, and a biopsy is recommended in the presence of radiological suspicion. Chemotherapy is the preferred therapeutic modality.

Keywords: langerhans cell histiocytosis, extra-skeletal, pediatrics, radiology

Procedia PDF Downloads 436
700 Physical Tests on Localized Fluidization in Offshore Suction Bucket Foundations

Authors: Li-Hua Luu, Alexis Doghmane, Abbas Farhat, Mohammad Sanayei, Pierre Philippe, Pablo Cuellar

Abstract:

Suction buckets are promising innovative foundations for offshore wind turbines. They generally feature the shape of an inverted bucket and rely on a suction system as the driving agent for their installation into the seabed. Water is pumped out of the buckets, which are initially placed to rest on the seabed, creating a net pressure difference across the lid that generates a seepage flow, lowers the soil resistance below the foundation skirt, and drives them effectively into the seabed. The stability of the suction mechanism as well as the possibility of a piping failure (i.e., localized fluidization within the internal soil plug) during installation are some of the key questions that remain open. The present work deals with an experimental study of localized fluidization by suction within a fixed bucket partially embedded in a submerged artificial soil made of spherical beads. The transient process, from the onset of granular motion until a stationary regime of fluidization is reached at the embedded bucket wall, is recorded using the combined optical techniques of planar laser-induced fluorescence and refractive index matching. To conduct a systematic study of the piping threshold for the seepage flow, we vary the bead size, the suction pressure, and the initial embedment depth of the bucket. This experimental modelling, by dealing with erosion-related phenomena from a micromechanical perspective, shall provide qualitative scenarios for the local processes at work, which have so far been missing in offshore practice.

Keywords: fluidization, micromechanical approach, offshore foundations, suction bucket

Procedia PDF Downloads 181
699 Bilateral Thalamic Hypodense Lesions in Computed Tomography

Authors: Angelis P. Barlampas

Abstract:

Purpose of Learning Objective: This case illustrates the need for cooperation between the emergency department and the radiologist to achieve the best diagnostic result for the patient. The clinical picture must correlate well with the radiology report, and when it does not, this is not necessarily someone’s fault. Careful interpretation and good knowledge of the limitations, advantages and disadvantages of each imaging procedure are essential for the final diagnostic goal. Methods or Background: A patient was brought to the emergency department by his relatives. He was suddenly confused and his mental status was altered. He had no history of mental illness and was otherwise healthy. A computed tomography scan without contrast was done, but it was unremarkable. Because of high clinical suspicion of probable neurologic disease, he was admitted to the hospital. Results or Findings: Another CT was done after 48 hours. It showed a hypodense region in both thalamic areas. Taking into account that the first CT was normal but the initial clinical picture of the patient was alarming, the repeat CT exam is highly suggestive of bilateral thalamic infarctions. Differential diagnosis: primary bilateral thalamic glioma, Wernicke encephalopathy, osmotic myelinolysis, Fabry disease, Wilson disease, Leigh disease, West Nile encephalitis, Creutzfeldt-Jakob disease, top-of-the-basilar syndrome, deep venous thrombosis, mild to moderate cerebral hypotension, posterior reversible encephalopathy syndrome, and neurofibromatosis type 1. Conclusion: As with any imaging procedure, CT has its limitations. An acute ischemic attack cannot be depicted on CT; a period of 24 to 48 hours has to elapse before any abnormality can be seen.
So, despite the absence of obvious findings of an ischemic episode, such as paresis or hemiparesis, one must be careful not to attribute the patient’s clinical signs to other conditions, such as toxic effects, metabolic disorders, psychiatric symptoms, etc. Further investigation with MRI, or at least a repeated CT, must be done.

Keywords: CNS, CT, thalamus, emergency department

Procedia PDF Downloads 118
698 Evaluation of the Efficiency of Nanomaterials in the Consolidation of Limestone

Authors: Mohamed Saad Gad Elzoghby

Abstract:

Nanomaterials are widely used nowadays for the consolidation of degraded archaeological limestone, one of the most predominant stones in monumental buildings and statuary works. Limestone is exposed to different weathering processes that cause degradation and deterioration patterns such as cracks, fissures, and granular disintegration. Among the nanomaterials applied to limestone consolidation are nanolimes, i.e., dispersions of lime nanoparticles in alcohols, and nanosilica, i.e., dispersions of silica nanoparticles in water, both promising consolidating products for limestone. They have been investigated and applied to overcome the disadvantages of traditional consolidation materials such as lime water, water glass, and Paraloid. The present study evaluates some nanomaterials for consolidating limestone in comparison with traditional consolidants. These consolidation materials are nano calcium hydroxide (nanolime) and nanosilica, known commercially as Nanorestore and Nano Estel respectively, compared with the traditional consolidants Wacker OH (ethyl silicate) and Paraloid B72 (a copolymer of ethyl methacrylate and methyl acrylate). The study evaluated the consolidation effectiveness of nanomaterials and traditional consolidants using the following methods: characterization of the physical properties of the stone, scanning electron microscopy (SEM), X-ray diffractometry, Fourier transform infrared spectroscopy, and mechanical properties. The study confirmed that nanomaterials were better at distributing around and encapsulating the calcite grains in limestone, while traditional materials were better at improving the physical properties of limestone. It demonstrated that good results can be achieved through mixtures of nanomaterials and traditional consolidants.

Keywords: nanomaterials, limestone, consolidation, evaluation, weathering, nanolime, nanosilica, scanning electron microscope

Procedia PDF Downloads 79
697 Optimization of Multi Commodities Consumer Supply Chain: Part 1-Modelling

Authors: Zeinab Haji Abolhasani, Romeo Marian, Lee Luong

Abstract:

This paper and its companions (Part II, Part III) concentrate on optimizing a class of supply chain problems known as the Multi-Commodities Consumer Supply Chain (MCCSC) problem. The MCCSC problem belongs to the production-distribution (P-D) planning category. It aims to determine facility locations, consumer allocation, and facility configurations to minimize the total cost (CT) of the entire network. These facilities can be manufacturing units (MUs), distribution centres (DCs), and retailers/end-users (REs), but are not limited to them. To address this problem, three major tasks should be undertaken. First, a mixed-integer non-linear programming (MINLP) mathematical model is developed. Then, the system's behavior under different conditions is observed using a simulation modeling tool. Finally, the most optimal solution (minimum CT) of the system is obtained using a multi-objective optimization technique. Due to the large size of the problem and the uncertainty in finding the optimal solution, an integration of modeling and simulation methodologies is proposed, followed by the development of a new approach known as GASG, a genetic algorithm based on granular simulation, which is the subject of the methodology of this research. In Part II, the MCCSC is simulated using a discrete-event simulation (DES) device within an integrated environment of SimEvents and Simulink of the MATLAB® software package, followed by a comprehensive case study to examine the given strategy. The effect of genetic operators on the obtained optimal/near-optimal solution from the simulation model is discussed in Part III.
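The genetic-algorithm component can be illustrated with a toy sketch. The cost matrix, encoding, and operator rates below are invented for demonstration and do not represent the actual GASG model:

```python
import random

random.seed(0)
# Hypothetical cost of serving each of 6 consumers from each of 3 facilities.
COST = [[4, 9, 2], [8, 1, 5], [3, 7, 6], [2, 4, 9], [5, 5, 1], [9, 2, 3]]

def total_cost(assign):
    """CT: sum of consumer-to-facility allocation costs for one chromosome."""
    return sum(COST[c][f] for c, f in enumerate(assign))

def evolve(pop_size=20, generations=50):
    # A chromosome assigns each consumer (gene index) to a facility (gene value).
    pop = [[random.randrange(3) for _ in COST] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=total_cost)
        survivors = pop[: pop_size // 2]          # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(COST))  # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:             # point mutation
                child[random.randrange(len(COST))] = random.randrange(3)
            children.append(child)
        pop = survivors + children
    return min(pop, key=total_cost)

best = evolve()
print(best, total_cost(best))
```

The real MCCSC chromosome would additionally encode facility locations and configurations, with CT evaluated by the granular simulation rather than a static matrix.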

Keywords: supply chain, genetic algorithm, optimization, simulation, discrete event system

Procedia PDF Downloads 315
696 Evaluation of the Efficiency of Nanomaterials in Consolidation of Limestone

Authors: Mohamed Saad Gad Eloghby

Abstract:

Nanomaterials are widely used nowadays for the consolidation of degraded archaeological limestone, one of the most predominant stones in monumental buildings and statuary works. Exposure to different weathering processes causes degradation and deterioration patterns such as cracks, fissures, and granular disintegration. Among the nanomaterials applied to limestone consolidation are nanolimes, i.e., dispersions of lime nanoparticles in alcohols, and nanosilica, i.e., dispersions of silica nanoparticles in water, both promising consolidating products for limestone. They have been investigated and applied to overcome the disadvantages of traditional consolidation materials such as lime water, water glass, and Paraloid. The present study evaluates some nanomaterials for consolidating limestone in comparison with traditional consolidants. These consolidation materials are nano calcium hydroxide (nanolime) and nanosilica, known commercially as Nanorestore and Nano Estel respectively, compared with the traditional consolidants Wacker OH (ethyl silicate) and Paraloid B72 (a copolymer of ethyl methacrylate and methyl acrylate). The study evaluated the consolidation effectiveness of nanomaterials and traditional consolidants using the following methods: characterization of the physical properties of the stone, scanning electron microscopy (SEM), X-ray diffractometry, Fourier transform infrared spectroscopy, and mechanical properties. The study confirmed that nanomaterials were better at distributing around and encapsulating the calcite grains in limestone, while traditional materials were better at improving the physical properties of limestone. It demonstrated that good results can be achieved through mixtures of nanomaterials and traditional consolidants.

Keywords: nanomaterials, limestone, consolidation, evaluation, weathering, nanolime, nanosilica, scanning electron microscope

Procedia PDF Downloads 72
695 Failure of Agriculture Soil following the Passage of Tractors

Authors: Anis Eloud, Sayed Chehaibi

Abstract:

Compaction of agricultural soils as a result of the passage of heavy machinery over the fields is a problem that concerns many agronomists and farmers, since it results in a loss of yield of most crops. To remedy this, and to address the future food security challenge, we must study and understand the process of soil degradation. The present review is devoted to understanding the effect of repeated passages on agricultural land. The experiments were performed on a plot in the area of the ESIER, characterized by a clay texture, in order to quantify the soil compaction caused by the wheels of the tractor during repeated passages over agricultural land. The test tractor was a CASE with a power of 110 hp and a total mass of 5470 kg, of which 3500 kg rested on the rear axle and 1970 kg on the front axle. The state of soil compaction was characterized by measuring its resistance to penetration by means of a penetrometer with direct manual reading, along with the density and permeability of the soil. Soil moisture was measured jointly. The measurements were made in the initial state before the tractor's passage and after each of 1 to 7 passes along the wheel track, with the rear wheels inflated to 1.5 bar and water-ballasted to valve level, and the front wheels inflated to 4 bar. The passages were spaced on average one week apart. The results show that the passage of wheels over tilled farm soil leads to compaction, and that compaction increases with the number of passages, especially for the horizons above 15 cm depth. The first passage has the greatest effect. However, the effect of the subsequent passages does not follow a definite law, owing to the complex behavior of granular media and to the history of tillage and of the stresses the soil has undergone since its formation.

Keywords: wheel traffic, tractor, soil compaction, wheel

Procedia PDF Downloads 480
694 Off-Line Text-Independent Arabic Writer Identification Using Optimum Codebooks

Authors: Ahmed Abdullah Ahmed

Abstract:

The task of recognizing the writer of a handwritten text has been an attractive research problem in the document analysis and recognition community, with applications in handwriting forensics, paleography, document examination and handwriting recognition. This research presents an automatic method for writer recognition from digitized images of unconstrained writings. Although a great effort has been made by previous studies to come up with various methods, their performance, especially in terms of accuracy, falls short, and room for improvement is still wide open. The proposed technique employs optimal-codebook-based writer characterization, where each writing sample is represented by a set of features computed from two codebooks, beginning and ending. Unlike most of the classical codebook-based approaches, which segment the writing into graphemes, this study is based on fragmenting particular areas of the writing, namely the beginning and ending strokes. The proposed method starts with contour detection to extract significant information from the handwriting; curve fragmentation is then employed to divide the beginning and ending zones of the handwriting into small fragments. Similar fragments of beginning strokes are grouped together to create the beginning cluster, and the ending strokes are similarly grouped to create the ending cluster. These two clusters lead to the development of the two codebooks (beginning and ending) by choosing the center of each group of similar fragments. The writings under study are then represented by computing the probability of occurrence of codebook patterns, and this probability distribution is used to characterize each writer. Two writings are compared by computing distances between their respective probability distributions. Evaluations were carried out on the standard ICFHR dataset of 206 writers using the beginning and ending codebooks separately.
The ending codebook achieved the highest identification rate of 98.23%, which is the best result so far on the ICFHR dataset.
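The final comparison step can be sketched as follows. The chi-square-style distance and the tiny three-pattern codebooks are illustrative assumptions rather than the paper's exact metric:

```python
def chi_square_distance(p, q, eps=1e-12):
    """Distance between two occurrence-probability distributions."""
    return sum((a - b) ** 2 / (a + b + eps) for a, b in zip(p, q))

def identify(query, known_writers):
    """Return the known writer whose distribution is closest to the query."""
    return min(known_writers,
               key=lambda w: chi_square_distance(query, known_writers[w]))

# Hypothetical occurrence probabilities of three codebook patterns per writer.
writers = {
    "writer_A": [0.50, 0.30, 0.20],
    "writer_B": [0.10, 0.60, 0.30],
}
print(identify([0.45, 0.35, 0.20], writers))  # writer_A (closest distribution)
```

A real system would use histograms over hundreds of beginning- and ending-stroke patterns, but the matching principle is the same.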

Keywords: off-line text-independent writer identification, feature extraction, codebook, fragments

Procedia PDF Downloads 510
693 On Adaptive and Auto-Configurable Apps

Authors: Prisa Damrongsiri, Kittinan Pongpianskul, Mario Kubek, Herwig Unger

Abstract:

Apps are today the most important means of adapting mobile phones and computers to fulfill the special needs of their users. Location- and context-sensitive programs are the key to supporting the interaction of users with their environment and also to avoiding an overload of dispensable information. This contribution shows how trusted, secure, and truly bi-directional communication and interaction among users and their environment can be established and used, e.g., in the field of home automation.

Keywords: apps, context-sensitive, location-sensitive, self-configuration, mobile computing, smart home

Procedia PDF Downloads 393
692 Infrared Lightbox and iPhone App for Improving Detection Limit of Phosphate Detecting Dip Strips

Authors: H. Heidari-Bafroui, B. Ribeiro, A. Charbaji, C. Anagnostopoulos, M. Faghri

Abstract:

In this paper, we report the development of a portable and inexpensive infrared lightbox for improving the detection limits of paper-based phosphate devices. Commercial paper-based devices utilize the molybdenum blue protocol to detect phosphate in the environment. Although these devices are easy to use and have a long shelf life, their main deficiency is their low sensitivity based on the qualitative results obtained via a color chart. To improve the results, we constructed a compact infrared lightbox that communicates wirelessly with a smartphone. The system measures the absorbance of radiation for the molybdenum blue reaction in the infrared region of the spectrum. It consists of a lightbox illuminated by four infrared light-emitting diodes, an infrared digital camera, a Raspberry Pi microcontroller, a mini-router, and an iPhone to control the microcontroller. An iPhone application was also developed to analyze images captured by the infrared camera in order to quantify phosphate concentrations. Additionally, the app connects to an online data center to present a highly scalable worldwide system for tracking and analyzing field measurements. In this study, the detection limits for two popular commercial devices were improved by a factor of 4 for the Quantofix devices (from 1.3 ppm using visible light to 300 ppb using infrared illumination) and a factor of 6 for the Indigo units (from 9.2 ppm to 1.4 ppm) with repeatability of less than or equal to 1.2% relative standard deviation (RSD). The system also provides more granular concentration information compared to the discrete color chart used by commercial devices and it can be easily adapted for use in other applications.
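The colorimetric quantification behind such an app can be sketched with the Beer-Lambert relation. The intensity values below are hypothetical camera readings, not data from the study:

```python
import math

def absorbance(sample_intensity, blank_intensity):
    """Estimate absorbance from image intensities: A = -log10(I / I0).

    I0 is the intensity of a blank (no-phosphate) strip under IR light
    and I the intensity of the reacted sample region.
    """
    return -math.log10(sample_intensity / blank_intensity)

# Hypothetical mean 8-bit pixel intensities from the infrared camera:
A = absorbance(sample_intensity=120.0, blank_intensity=240.0)
print(round(A, 3))  # 0.301 (half the light transmitted)
```

Absorbance would then be mapped to a phosphate concentration via a calibration curve fitted on standards of known concentration.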

Keywords: infrared lightbox, paper-based device, phosphate detection, smartphone colorimetric analyzer

Procedia PDF Downloads 121
691 The Role of Mass Sport Guidance in the Health Service Industry of China

Authors: Qiu Jian-Rong, Li Qing-Hui, Zhan Dong, Zhang Lei

Abstract:

Facing the problem of the demand of economic restructuring and risk of social economy stagnation due to the ageing of population, the Health Service Industry will play a very important role in the structure of industry in the future. During the process, the orient of Chinese sports medicine as well as the joint with preventive medicine, and the integration with data bank and cloud computing will be involved.

Keywords: China, the health service industry, mass sport, data bank

Procedia PDF Downloads 626
690 Data Confidentiality in Public Cloud: A Method for Inclusion of ID-PKC Schemes in OpenStack Cloud

Authors: N. Nalini, Bhanu Prakash Gopularam

Abstract:

The term data security refers to the degree of resistance or protection given to information from unintended or unauthorized access. The core principles of information security are confidentiality, integrity and availability, also referred to as the CIA triad. Cloud computing services are classified as SaaS, IaaS and PaaS services. With cloud adoption, confidential enterprise data are moved from the organization's premises to an untrusted public network, and because of this the attack surface has increased manifold. Several cloud computing platforms like OpenStack, Eucalyptus, and Amazon EC2 allow users to build and configure public, hybrid and private clouds. While traditional encryption based on PKI infrastructure still works in the cloud scenario, the management of public-private keys and trust certificates is difficult. Identity-based Public Key Cryptography (also referred to as ID-PKC) overcomes this problem by using publicly identifiable information to generate the keys, and it works well with decentralized systems. Users can exchange information securely without having to manage any trust information. Another advantage is that access control information (a role-based access control policy) can be embedded into the data itself, unlike in PKI, where it is handled by a separate component or system. In the OpenStack cloud platform, the Keystone service acts as the identity service for authentication and authorization and has support for public key infrastructure for auto services. In this paper, we explain the OpenStack security architecture and evaluate its PKI infrastructure piece for data confidentiality. We provide a method to integrate ID-PKC schemes for securing data both in transit and at rest, and explain the key measures for safeguarding data against security attacks. The proposed approach uses the JPBC crypto library for key-pair generation based on the IEEE P1363.3 standard and secure communication to other cloud services.
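The defining property of ID-PKC, namely that anyone can derive a public key from the identity string itself while private keys come only from a trusted key-generation centre (KGC), can be illustrated with a toy hash-based sketch. This is not a real pairing-based scheme such as those in IEEE P1363.3, and it is not secure:

```python
import hashlib

MASTER_SECRET = b"kgc-master-secret"  # held only by the KGC (toy value)

def public_key_from_identity(identity: str) -> bytes:
    # Publicly computable from the identity alone: no certificate or
    # directory lookup is needed, which is the key operational advantage.
    return hashlib.sha256(identity.encode()).digest()

def extract_private_key(identity: str) -> bytes:
    # Only the KGC can run this step, since it requires the master secret.
    return hashlib.sha256(MASTER_SECRET + identity.encode()).digest()

alice_pub = public_key_from_identity("alice@example.org")
print(len(alice_pub))  # 32 (SHA-256 digest length)
```

In a real deployment the JPBC library's pairing-based constructions would replace these hashes, with Keystone identities serving as the key-derivation strings.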

Keywords: data confidentiality, identity based cryptography, secure communication, open stack key stone, token scoping

Procedia PDF Downloads 384
689 Numerical Study of Bubbling Fluidized Beds Operating at Sub-atmospheric Conditions

Authors: Lanka Dinushke Weerasiri, Subrat Das, Daniel Fabijanic, William Yang

Abstract:

Fluidization at vacuum pressure has been a topic of growing research interest. Several industrial applications (such as drying, extractive metallurgy, and chemical vapor deposition (CVD)) can potentially take advantage of vacuum-pressure fluidization. In particular, the fine chemical industry requires processing under safe conditions for thermolabile substances, and reduced-pressure fluidized beds offer an alternative. Fluidized beds under vacuum conditions provide optimal conditions for the treatment of granular materials, where the reduced gas pressure maintains an operational environment outside of flammability conditions. Fluidization at low pressure is markedly different from the usual gas flow patterns of atmospheric fluidization, and the different flow regimes can be characterized by the dimensionless Knudsen number. Nevertheless, the hydrodynamics of bubbling vacuum fluidized beds has not been investigated, to the authors' best knowledge. In this work, the two-fluid numerical method was used to determine the impact of reduced pressure on the fundamental properties of a fluidized bed. A slip flow model, implemented through Ansys Fluent User Defined Functions (UDFs), was used to determine the interphase momentum exchange coefficient. A wide range of operating pressures was investigated (1.01, 0.5, 0.25, 0.1 and 0.03 bar). The gas was supplied by a uniform inlet at 1.5Umf and 2Umf. The predicted minimum fluidization velocity (Umf) shows excellent agreement with the experimental data. The results show that the operating pressure has a notable impact on the bed properties and its hydrodynamics. Furthermore, they also show that the existing Goroshko correlation for predicting bed expansion is not applicable under reduced-pressure conditions.
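The Knudsen-number characterization mentioned above can be estimated from kinetic theory. The gas properties and particle diameter below are assumed illustrative values, not the paper's bed parameters:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def mean_free_path(pressure_pa, temperature_k=293.0, molecule_d=3.7e-10):
    """Hard-sphere mean free path of an air-like gas, in metres."""
    return K_B * temperature_k / (math.sqrt(2) * math.pi
                                  * molecule_d ** 2 * pressure_pa)

def knudsen(pressure_pa, particle_diameter):
    """Kn = lambda / L with the particle diameter as characteristic length.

    Slip effects on the gas-particle interaction become noticeable roughly
    for Kn above ~1e-3, which lowering the pressure pushes the bed toward.
    """
    return mean_free_path(pressure_pa) / particle_diameter

# Assumed 100-micron particles at the operating pressures studied:
for p_bar in (1.01, 0.1, 0.03):
    print(p_bar, knudsen(p_bar * 1e5, particle_diameter=100e-6))
```

At 1.01 bar the mean free path is a few tens of nanometres, so Kn stays near the continuum regime; at 0.03 bar it grows by more than an order of magnitude, motivating the slip flow treatment of the momentum exchange.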

Keywords: computational fluid dynamics, fluidized bed, gas-solid flow, vacuum pressure, slip flow, minimum fluidization velocity

Procedia PDF Downloads 138