Search results for: synchronous machine
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3047

467 Numerical Analysis of Solar Cooling System

Authors: Nadia Allouache, Mohamed Belmedani

Abstract:

Solar energy is a sustainable, virtually inexhaustible, and environmentally friendly alternative to the fossil fuels available today. It is a renewable and economical energy source that can be harnessed over the long term and thus stabilizes energy costs. Solar cooling technologies have been developed to curb the growing electricity consumption for air conditioning and to displace the peak load during hot summer days. A numerical analysis of the thermal and solar performance of an annular finned adsorber, the most important component of an adsorption solar refrigerating system, is considered in this work. Different adsorbent/adsorbate pairs, namely activated carbon AC35/methanol, activated carbon AC35/ethanol, and activated carbon BPL/ammonia, are investigated in this study. Modeling the adsorption cooling machine requires solving the equations describing energy and mass transfer in the tubular finned adsorber. The Wilson and Dubinin-Astakhov models of the solid-adsorbate equilibrium are used to calculate the adsorbed quantity. The porous medium and the fins are contained in the annular space, and the adsorber is heated by solar energy. Effects of key parameters on the adsorbed quantity and on the thermal and solar performances are analysed and discussed. The AC35/methanol pair outperforms the BPL/ammonia and AC35/ethanol pairs in terms of system performance, and the system performance is sensitive to the fin geometry. For data measured on clear days in July 2023 in Algeria and Morocco, the cooling system performs markedly better in Algeria.
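
For reference, a minimal numerical sketch of the Dubinin-Astakhov equilibrium model mentioned above is given below; the coefficients w0, E, and n are illustrative assumptions, not the study's fitted values for the AC35/methanol pair.

```python
import math

# Dubinin-Astakhov model: adsorbed mass per kg of adsorbent
#   w = w0 * exp(-(A / E)^n), with adsorption potential A = R * T * ln(Ps / P)
# The parameter values below are illustrative placeholders, not the fitted
# constants used in the study.
R = 8.314  # J/(mol*K), universal gas constant

def adsorption_potential(T, P, Ps):
    """Polanyi adsorption potential A in J/mol."""
    return R * T * math.log(Ps / P)

def dubinin_astakhov(T, P, Ps, w0=0.30, E=7000.0, n=2.0):
    """Adsorbed quantity w (kg adsorbate per kg adsorbent)."""
    A = adsorption_potential(T, P, Ps)
    return w0 * math.exp(-((A / E) ** n))

# Example: adsorbed quantity at 300 K and a relative pressure P/Ps = 0.2
print(dubinin_astakhov(T=300.0, P=0.2, Ps=1.0))
```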

Keywords: activated carbon AC35-methanol pair, activated carbon AC35-ethanol pair, activated carbon BPL-ammonia pair, annular finned adsorber, performance coefficients, numerical analysis, solar cooling system

Procedia PDF Downloads 55
466 A Novel Heuristic for Analysis of Large Datasets by Selecting Wrapper-Based Features

Authors: Bushra Zafar, Usman Qamar

Abstract:

Large sample sizes and high dimensionality limit the effectiveness of conventional data mining methodologies. Data mining techniques are important tools for extracting useful knowledge from a variety of databases; supervised learning, in the form of classification, is used to build models that describe important data classes, with the structure of the classifier based on the class attribute. Classification efficiency and accuracy are often influenced to a great extent by noisy and undesirable features in real application data sets. The inherent nature of a data set often masks its quality and leaves few practical approaches for analysing it. To our knowledge, we present for the first time an approach for investigating the structure and quality of datasets through a targeted analysis of the localization of noisy and irrelevant features. Feature selection is a key pre-processing step in machine learning: it selects a subset of features, reducing the feature space according to a certain evaluation criterion. The primary objective of this study is to trim down the scope of the given data sample by searching for a small set of important features that may yield good classification performance. For this purpose, a heuristic for wrapper-based feature selection using a genetic algorithm is employed, with an external classifier used for discriminative feature selection. Features are selected according to how often they occur in the chosen chromosomes. Sample datasets are used to demonstrate the proposed idea. The proposed method achieves an average accuracy of about 95% across different datasets, and experimental results illustrate that the proposed algorithm increases the accuracy of prediction for different disease datasets.
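
A minimal sketch of the wrapper-based selection loop described above follows: a genetic algorithm evolves binary feature masks whose fitness is the cross-validated accuracy of a KNN classifier, and the final subset is ranked by how often features occur in the surviving chromosomes. The synthetic dataset, population size, and mutation rate are assumptions for illustration, not the study's settings.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=30, n_informative=8, random_state=0)

def fitness(mask):
    # Wrapper evaluation: CV accuracy of KNN on the selected feature subset.
    if mask.sum() == 0:
        return 0.0
    clf = KNeighborsClassifier(n_neighbors=5)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

pop_size, n_gen, n_feat = 20, 15, X.shape[1]
pop = rng.integers(0, 2, size=(pop_size, n_feat))

for _ in range(n_gen):
    scores = np.array([fitness(ind) for ind in pop])
    # Keep the better half, then refill with single-point crossover + mutation.
    parents = pop[np.argsort(scores)[::-1][: pop_size // 2]]
    children = []
    while len(children) < pop_size - len(parents):
        a, b = parents[rng.integers(len(parents), size=2)]
        cut = rng.integers(1, n_feat)
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(n_feat) < 0.05  # bit-flip mutation
        child[flip] = 1 - child[flip]
        children.append(child)
    pop = np.vstack([parents] + children)

# Final selection: rank features by how often they occur in the chosen chromosomes.
occurrence = pop.sum(axis=0)
selected = np.argsort(occurrence)[::-1][:10]
print("selected feature indices:", selected)
```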

Keywords: data mining, genetic algorithm, KNN algorithms, wrapper-based feature selection

Procedia PDF Downloads 316
465 A Statistical Approach to Predict and Classify the Commercial Hatchability of Chickens Using Extrinsic Parameters of Breeders and Eggs

Authors: M. S. Wickramarachchi, L. S. Nawarathna, C. M. B. Dematawewa

Abstract:

Hatchery performance is critical for the profitability of poultry breeder operations. Several extrinsic parameters of eggs and breeders increase or decrease hatchability. This study aims to identify the extrinsic parameters affecting the commercial hatchability of local chickens' eggs and to determine the most efficient model for classifying hatchability rates greater than 90%. Seven extrinsic parameters were considered: egg weight, moisture loss, breeder age, number of fertilised eggs, shell width, shell length, and shell thickness. Multiple linear regression was performed to determine the variables most influential on hatchability. First, the correlation between each parameter and hatchability was checked. Then a multiple regression model was developed, and the accuracy of the fitted model was evaluated. Linear Discriminant Analysis (LDA), Classification and Regression Trees (CART), k-Nearest Neighbors (kNN), Support Vector Machines (SVM) with a linear kernel, and Random Forest (RF) algorithms were applied to classify the hatchability. This grouping process was conducted using binary classification techniques. Hatchability was negatively correlated with egg weight, breeder age, shell width, and shell length, while positive correlations were identified with moisture loss, number of fertilised eggs, and shell thickness. The multiple linear regression model was more accurate than single-variable linear models, with the highest coefficient of determination (R² = 94%) and the minimum AIC and BIC values. According to the classification results, RF, CART, and kNN achieved the highest accuracy values of 0.99, 0.975, and 0.972, respectively, for the commercial hatchery process. Therefore, RF is the most appropriate machine learning algorithm for classifying whether breeder outcomes are economically profitable in a commercial hatchery.
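
As a rough illustration of the classification step, the sketch below cross-validates the same five algorithms with scikit-learn; the column names and the synthetic records are assumptions standing in for the measured breeder and egg data.

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier

# Hypothetical extrinsic parameters; the real study used measured breeder/egg records.
rng = np.random.default_rng(1)
n = 200
df = pd.DataFrame({
    "egg_weight": rng.normal(55, 4, n),
    "moisture_loss": rng.normal(12, 2, n),
    "breeder_age": rng.normal(45, 10, n),
    "fertilised_eggs": rng.normal(90, 5, n),
    "shell_width": rng.normal(42, 1.5, n),
    "shell_length": rng.normal(55, 2, n),
    "shell_thickness": rng.normal(0.35, 0.03, n),
})
# Binary target: hatchability above 90% (synthetic rule, for illustration only).
y = ((df["moisture_loss"] + 100 * df["shell_thickness"] - 0.2 * df["breeder_age"]
      + rng.normal(0, 2, n)) > 35).astype(int)

models = {
    "LDA": LinearDiscriminantAnalysis(),
    "CART": DecisionTreeClassifier(random_state=0),
    "kNN": KNeighborsClassifier(),
    "SVM (linear)": SVC(kernel="linear"),
    "RF": RandomForestClassifier(random_state=0),
}
for name, model in models.items():
    acc = cross_val_score(model, df.values, y, cv=5).mean()
    print(f"{name}: {acc:.3f}")
```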

Keywords: classification models, egg weight, fertilised eggs, multiple linear regression

Procedia PDF Downloads 87
464 Local Directional Encoded Derivative Binary Pattern Based Coral Image Classification Using Weighted Distance Gray Wolf Optimization Algorithm

Authors: Annalakshmi G., Sakthivel Murugan S.

Abstract:

This paper presents a local directional encoded derivative binary pattern (LDEDBP) feature extraction method that can be applied to the classification of submarine coral reef images. Classifying coral reef images using texture features is difficult due to the dissimilarities among class samples. In coral reef image classification, texture features are extracted using the proposed LDEDBP method. The proposed approach extracts the complete structural arrangement of the local region using the local binary pattern (LBP) and also extracts edge information using the local directional pattern (LDP) from the edge responses available in a particular region, thereby achieving an extra discriminative feature value. Typically, the LDP extracts the edge details in all eight directions. Integrating the edge responses with the local binary pattern yields a texture descriptor more robust than those used in other texture feature extraction methods. Finally, the extracted features are fed to an extreme learning machine (ELM) combined with a meta-heuristic known as the weighted distance grey wolf optimizer (WDGWO), which optimizes the input weights and biases of the single-hidden-layer feed-forward neural network (SLFN). In the empirical results, ELM-WDGWO demonstrated better performance in terms of accuracy on all coral datasets, namely RSMAS, EILAT, EILAT2, and MLC, compared with other state-of-the-art algorithms. The proposed method achieves the highest overall classification accuracy of 94% compared to the other state-of-the-art methods.
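
The sketch below computes a plain 8-neighbour LBP histogram, the baseline descriptor on which LDEDBP builds; it does not reproduce the directional derivative encoding or the ELM-WDGWO classifier, and the random patch merely stands in for a coral image tile.

```python
import numpy as np

def lbp_histogram(img):
    """Plain 8-neighbour local binary pattern histogram for a grayscale image.

    This is the standard LBP baseline; the paper's LDEDBP additionally encodes
    directional derivative/edge responses (LDP), which is not reproduced here.
    """
    img = np.asarray(img, dtype=float)
    center = img[1:-1, 1:-1]
    # Offsets of the 8 neighbours, ordered clockwise from the top-left.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(center, dtype=np.int32)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code += (neighbour >= center).astype(np.int32) << bit
    hist, _ = np.histogram(code, bins=256, range=(0, 256))
    return hist / hist.sum()

# Example on a random 64x64 patch standing in for a coral reef image tile.
patch = np.random.default_rng(0).integers(0, 256, size=(64, 64))
features = lbp_histogram(patch)
print(features.shape)  # (256,) texture feature vector fed to the classifier
```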

Keywords: feature extraction, local directional pattern, ELM classifier, GWO optimization

Procedia PDF Downloads 163
463 Evaluation of Possible Application of Cold Energy in Liquefied Natural Gas Complexes

Authors: A. I. Dovgyalo, S. O. Nekrasova, D. V. Sarmin, A. A. Shimanov, D. A. Uglanov

Abstract:

Liquefied natural gas (LNG) gasification is usually performed using atmospheric heat. Producing liquefied gas requires a considerable amount of energy (about 1 kW∙h per 1 kg of LNG). This study offers a number of solutions that allow the cold energy of LNG to be used. The paper evaluates the use of turbines installed behind the evaporator in an LNG complex; from their expansion work, additional energy can be obtained and converted into electricity. At an LNG consumption of G = 1000 kg/h, an expansion work capacity of about 10 kW can be reached. Here an open Rankine cycle is realized, in which a low-capacity cryo-pump (about 500 W) performs its normal function of providing the cycle pressure. The application of a Stirling engine within the LNG complex, also discussed, gives another possibility for realizing the cold energy. Considering that the efficiency coefficient of a Stirling engine reaches 50%, an LNG consumption of G = 1000 kg/h may yield a capacity of about 142 kW from such a thermal machine. The capacity of the pump required to compensate for pressure losses as the LNG passes through the hydraulic channel will be about 500 W. Apart from the above-mentioned converters, widely used thermoelectric generating packages (TGP) can also be proposed. At present, the modern thermoelectric generator line provides electric capacity with a coefficient of efficiency of up to 15%. In the proposed complex, it is suggested that the thermoelectric generator be installed on the evaporator surface in such a way that the cold end contacts the evaporator surface and the hot end contacts the atmosphere. At an LNG consumption of G = 1000 kg/h and the specified coefficient of efficiency, the heat flow Qh will be about 32 kW. The derivable net electric power will be P = 4.2 kW, and the number of packages will amount to about 104 pieces. The calculations carried out demonstrate the promise of research in this field of power plant development and show how the energy-saving potential of liquefied natural gas and other cryogenic technologies can be realized.
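
The short sketch below restates the abstract's figures for the three recovery options and nets out the cryo-pump power where the abstract mentions it; all quantities are taken directly from the abstract, and the comparison against the quoted 1 kW∙h/kg liquefaction energy is only a rough reference point.

```python
# Cold-energy recovery options quoted in the abstract for G = 1000 kg/h of LNG.
G = 1000.0                 # kg/h LNG consumption
liquefaction_energy = 1.0  # kWh per kg spent on liquefaction (quoted reference)
ideal_rate = G * liquefaction_energy  # kW equivalent, i.e. 1000 kW

# (gross output in kW, auxiliary pump power in kW) per option, as quoted.
options = {
    "expansion turbine, open Rankine cycle": (10.0, 0.5),
    "Stirling engine (~50% efficiency)": (142.0, 0.5),
    "thermoelectric packages (~15%, Qh ~ 32 kW)": (4.2, 0.0),
}

for name, (gross, pump) in options.items():
    net = gross - pump
    print(f"{name}: net {net:.1f} kW "
          f"(~{100 * net / ideal_rate:.1f}% of the liquefaction energy rate)")
```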

Keywords: cold energy, gasification, liquefied natural gas, electricity

Procedia PDF Downloads 273
462 Effect of Tool Size and Cavity Depth on Response Characteristics during Electric Discharge Machining on Superalloy Metal - An Experimental Investigation

Authors: Sudhanshu Kumar

Abstract:

Electrical discharge machining (EDM) is one of the most applicable processes for removing material from hard-to-machine materials such as superalloys. The EDM process converts electrical energy into sparks that erode the metal in the presence of a dielectric medium. In the present investigation, the superalloy Inconel 718 was selected as the workpiece and electrolytic copper as the tool electrode. An attempt has been made to understand the effect of tool size at varying cavity depths during hole drilling by EDM. For a systematic investigation, tool size (in terms of tool diameter) and cavity depth, along with the important electrical parameters of peak current, pulse-on time, and servo voltage, were each varied over three levels, and the experiments were designed using the fractional factorial (Taguchi) method. Each experiment was repeated twice under the same conditions in order to understand the variability within the experiments. The effect of the parameter variations was evaluated in terms of material removal rate, tool wear rate, and surface roughness. Results reveal that a change in tool diameter during machining affects the response characteristics significantly; the larger tool diameter yielded a 13% higher material removal rate than the smaller one. The analysis of the effect of cavity depth is notable: cavity depth had no significant effect on material removal rate, tool wear rate, or surface quality. This indicates that experiments to analyze the effects of other parameters can be performed even at smaller cavity depths, which can reduce the cost and time of experimentation. Further, statistical analysis was carried out to identify interaction effects between parameters.
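
As a small worked example of how the two removal-rate responses are typically derived from weight loss, a sketch follows; the densities are standard handbook values, and the measured losses and machining time are hypothetical placeholders, not the study's data.

```python
# Typical response calculations for an EDM drilling trial.
# All measured values below are hypothetical placeholders, not the study's data.
rho_inconel718 = 8.19e-3  # g/mm^3, workpiece density
rho_copper     = 8.96e-3  # g/mm^3, tool electrode density

def removal_rate(weight_loss_g, density_g_mm3, time_min):
    """Volumetric removal rate in mm^3/min from weight loss."""
    return weight_loss_g / (density_g_mm3 * time_min)

machining_time = 15.0  # min per hole (assumed)
mrr = removal_rate(0.62, rho_inconel718, machining_time)  # workpiece loss 0.62 g
twr = removal_rate(0.04, rho_copper, machining_time)      # tool loss 0.04 g
print(f"MRR = {mrr:.2f} mm^3/min, TWR = {twr:.3f} mm^3/min, wear ratio = {twr/mrr:.3f}")
```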

Keywords: EDM, Inconel 718, material removal rate, roughness, tool wear, tool size

Procedia PDF Downloads 216
461 Comparison of Support Vector Machines and Artificial Neural Network Classifiers in Characterizing Threatened Tree Species Using Eight Bands of WorldView-2 Imagery in Dukuduku Landscape, South Africa

Authors: Galal Omer, Onisimo Mutanga, Elfatih M. Abdel-Rahman, Elhadi Adam

Abstract:

Threatened tree species (TTS) play a significant role in ecosystem functioning and services, land use dynamics, and other socio-economic aspects. Such aspects include ecological, economic, livelihood, security-based, and well-being benefits. The development of techniques for mapping and monitoring TTS is thus critical for understanding the functioning of ecosystems. The advent of advanced imaging systems and supervised learning algorithms has provided an opportunity to classify TTS over a fragmenting landscape. Recently, vegetation maps have been produced using advanced imaging systems such as WorldView-2 (WV-2) and robust classification algorithms such as support vector machines (SVM) and artificial neural networks (ANN). However, delineation of TTS in a fragmenting landscape using high-resolution imagery has remained largely elusive due to the complexity of the species structure and their distribution. Therefore, the objective of the current study was to examine the utility of the advanced WV-2 data for mapping TTS in the fragmenting Dukuduku indigenous forest of South Africa using the SVM and ANN classification algorithms. The results showed the robustness of the two machine learning algorithms, with an overall accuracy (OA) of 77.00% (total disagreement = 23.00%) for SVM and 75.00% (total disagreement = 25.00%) for ANN using all eight bands of WV-2 (8B). This study concludes that the SVM and ANN classification algorithms with WV-2 8B have the potential to classify TTS in the Dukuduku indigenous forest. The study offers relatively accurate information that is important for forest managers to make informed decisions regarding management and conservation protocols for TTS.
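
For clarity on how the reported overall accuracy and total disagreement relate, the brief sketch below derives both from a confusion matrix; the matrix values are made up and do not come from the study.

```python
import numpy as np

# Hypothetical confusion matrix for one classified map (rows: reference, cols: predicted),
# used only to show how overall accuracy and total disagreement are derived.
cm = np.array([
    [40,  5,  3],
    [ 6, 35,  4],
    [ 2,  3, 32],
])
overall_accuracy = np.trace(cm) / cm.sum()
total_disagreement = 1.0 - overall_accuracy
print(f"OA = {overall_accuracy:.2%}, total disagreement = {total_disagreement:.2%}")
```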

Keywords: artificial neural network, threatened tree species, indigenous forest, support vector machines

Procedia PDF Downloads 515
460 Domain-Specific Deep Neural Network Model for Classification of Abnormalities on Chest Radiographs

Authors: Nkechinyere Joy Olawuyi, Babajide Samuel Afolabi, Bola Ibitoye

Abstract:

This study collected and preprocessed a dataset of chest radiographs, formulated a deep neural network model for detecting abnormalities, evaluated the performance of the formulated model, and implemented a prototype of it, with the aim of developing a deep neural network model that automatically classifies abnormalities in chest radiographs. To achieve the overall purpose of this research, a large set of chest X-ray images was collected from the CheXpert dataset, an online repository of annotated chest radiographs compiled by the Machine Learning Research Group at Stanford University. The chest radiographs were preprocessed into a format that can be fed into a deep neural network; the preprocessing techniques used were standardization and normalization. The classification problem was formulated as a multi-label binary classification model, which used a convolutional neural network architecture to decide whether each abnormality was present or not in the chest radiographs. The classification model was evaluated using specificity, sensitivity, and Area Under Curve (AUC) score as the evaluation parameters. A prototype of the classification model was implemented using the Keras open-source deep learning framework in the Python programming language. Based on the AUC-ROC curves, the model was able to classify atelectasis, support devices, pleural effusion, pneumonia, normal CXR (no finding), pneumothorax, and consolidation. However, lung opacity and cardiomegaly had probabilities of less than 0.5 and were thus classified as absent. Precision, recall, and F1 score values were all 0.78; this implies that the numbers of false positives and false negatives are the same, revealing some measure of label imbalance in the dataset. The study concluded that the developed model is sufficient to classify abnormalities in chest radiographs as present or absent.
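
A minimal Keras sketch of a multi-label binary classifier of this kind follows; the architecture, input size, and label count are assumptions for illustration and do not reproduce the study's network.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_LABELS = 9          # e.g. atelectasis, cardiomegaly, consolidation, ... (assumed)
IMG_SIZE = (224, 224)   # assumed input size after preprocessing

# Multi-label binary classification: an independent sigmoid output per abnormality,
# trained with binary cross-entropy (each label is a separate present/absent decision).
model = models.Sequential([
    layers.Input(shape=IMG_SIZE + (1,)),
    layers.Rescaling(1.0 / 255),  # normalization
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation="relu"),
    layers.Dense(NUM_LABELS, activation="sigmoid"),
])
model.compile(
    optimizer="adam",
    loss="binary_crossentropy",
    metrics=[tf.keras.metrics.AUC(name="auc", multi_label=True)],
)
model.summary()
# model.fit(train_images, train_labels, validation_data=(val_images, val_labels), epochs=10)
```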

Keywords: transfer learning, convolutional neural network, radiograph, classification, multi-label

Procedia PDF Downloads 129
459 Mechanical and Physical Properties of Various Types of Dental Floss

Authors: Supanitayanon Lalita, Dechkunakorn Surachai, Anuwongnukroh Niwat, Srikhirin Toemsak, Roongrujimek Pitchaya, Tua-Ngam Peerapong

Abstract:

Objective: To compare the maximum load, percentage of elongation, and physical characteristics of four types of dental floss: (1) Thai Silk Floss (silk, waxed), (2) Oral B® Essential Floss (nylon, waxed), (3) Experimental Floss Xu (nylon, unwaxed), and (4) Experimental Floss Xw (nylon, waxed). Materials and methods: The four types of floss were tested (n = 30) with a universal testing machine (Instron®). Each sample (30 cm long, 5 cm test segment) was fixed and pulled apart with a 100 N load cell at a test speed of 100 mm/min. Physical characteristics were investigated by digital microscope under 2.5×10 magnification and by scanning electron microscope under 1×100 and 5×100 magnification. The size of the filaments was measured in microns (μm) and the fineness in denier. Statistical analysis: For the mechanical properties, the maximum load and the percentage of elongation are presented as mean ± SD. The distribution of the data was assessed by the Kolmogorov-Smirnov test. One-way ANOVA and multiple comparisons (Tukey HSD) were used to analyze the differences among the groups, with statistical significance set at p < 0.05. Results: The maximum loads of Floss Xu, Floss Xw, Oral B, and Thai Silk were 47.39, 46.46, 25.38, and 23.70 N, respectively. The percentages of elongation of Oral B, Floss Xw, Floss Xu, and Thai Silk were 72.43, 44.62, 31.25, and 16.44%, respectively. All four types of dental floss showed statistically significant differences in both maximum load and percentage of elongation at p < 0.05, except for the maximum load between Floss Xw and Floss Xu, which showed no statistically significant difference. The physical characteristics of Thai Silk revealed the most disintegrated, the smallest, and the least fine filaments. Conclusion: Floss Xu had the highest maximum load, and Oral B had the highest percentage of elongation. Wax coating on Floss X increased the elongation but had no significant effect on the maximum load. The physical characteristics of Thai Silk resulted in the lowest mechanical property values.
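
A minimal sketch of the one-way ANOVA and Tukey HSD comparison described above is given below; the normally distributed samples are generated around the reported group means purely for illustration and are not the measured data.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
# Hypothetical maximum-load samples (N) per floss type; the study used n = 30 each.
groups = {
    "Thai Silk": rng.normal(23.7, 2.0, 30),
    "Oral B":    rng.normal(25.4, 2.0, 30),
    "Floss Xw":  rng.normal(46.5, 2.0, 30),
    "Floss Xu":  rng.normal(47.4, 2.0, 30),
}

f_stat, p_value = stats.f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4g}")

values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```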

Keywords: dental floss, maximum load, mechanical property, percentage of elongation, physical property

Procedia PDF Downloads 280
458 Prostatic Cyst in Suprapubic Ultrasound Examination

Authors: Angelis P. Barlampas, Ghita Bianca-Andreea

Abstract:

A case of a prostatic midline cyst is presented, which was found during a routine general ultrasound examination in an otherwise healthy young man. The incidence of prostatic cysts discovered during suprapubic ultrasound examination has been rising constantly over recent decades. Although the majority of them are benign, a significant proportion is related to symptoms such as pain, dysuria, infertility, and even cancer. The wide use of ultrasound examination and the increasing availability of high-resolution ultrasound systems have created new diagnostic challenges. In the past, suprapubic ultrasound was useful only for measuring the size and dimensions of the prostate gland; it could not resolve structures such as cystic or solid nodules. Current equipment can depict the imaging characteristics of lesions with an acuity comparable to that of intrarectal ultrasound. However, the latter is a specialized examination that demands expertise and good knowledge. Perhaps the time has come for the general radiologist, especially one who uses suprapubic ultrasound, to pay more attention to the examination of the prostate gland and to take advantage of the superb capabilities and high resolution of the new ultrasound systems. That is exactly what this case emphasizes. The incidental discovery of prostatic cysts, and the relatively scarce literature on their management, make them an interesting subject for exploration and study. Prostatic cysts are further divided into midline and paramidline cysts, the former usually being utricle cysts. A more precise categorization is as follows: a midline cystic lesion usually represents a Mullerian duct cyst, a prostatic utricle cyst, an ejaculatory duct cyst, a prostatic cystadenoma, a ductus deferens cyst, or a post-TURP cavity. On the other hand, a lateral cystic lesion usually represents cystic degeneration of benign prostatic hyperplasia, a prostatic retention cyst, a seminal vesicle cyst, diverticular prostatitis, a prostatic abscess, cavitary prostatitis from chronic prostatitis, a parasitic prostatic cyst, a cystic prostatic carcinoma, etc.

Keywords: prostatic cyst, radiology, benign prostatic lesions, prostatic cancer, suprapubic prostatic ultrasound

Procedia PDF Downloads 58
457 Destination Decision Model for Cruising Taxis Based on Embedding Model

Authors: Kazuki Kamada, Haruka Yamashita

Abstract:

In Japan, taxis are a popular mode of transportation and the taxi industry is a large business. In recent years, however, the industry has faced the difficult problem of a declining number of taxi drivers. In the taxi business, three passenger-catching methods are mainly applied. The first is "cruising", in which drivers pick up passengers while driving along roads. The second is "waiting", in which drivers wait for passengers near places with high demand for taxis, such as the entrances of hospitals and train stations. The third is "dispatching", in which trips are allocated based on contact from the taxi company. Above all, cruising taxi drivers need experience and intuition to find passengers, and it is difficult to decide the destination for cruising. A strong recommendation system for cruising taxis would support new drivers in finding passengers and could be a solution to the decreasing number of drivers in the taxi industry. In this research, we propose a method for recommending a destination to cruising taxi drivers. As a machine learning technique, embedding models that map high-dimensional data into a low-dimensional space are widely used in data analysis to represent semantic relationships between data points clearly. Taxi drivers have favorite courses based on their experience, and these courses differ for each driver. We assume that the courses of cruising taxis carry meaning, such as courses for finding business passengers (circling the business areas of the city or going to main stations) and courses for finding traveler passengers (circling sightseeing spots or large hotels), and we extract the meaning of their destinations. We analyze the cruising history data of taxis with the embedding model and propose a recommendation system for finding passengers. Finally, we demonstrate the recommendation of destinations for cruising taxi drivers through real-world data analysis using the proposed method.
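
A minimal sketch of the embedding idea follows: each cruising course is treated as a sequence of visited zone IDs and embedded with a word2vec-style model, so zones that co-occur on similar courses end up close together. The zone names and toy courses are invented, and gensim's Word2Vec stands in for whatever embedding model the authors actually use.

```python
from gensim.models import Word2Vec

# Each "sentence" is one cruising course: the ordered zone IDs a driver passed through.
# The courses below are invented; the study uses real cruising history data.
courses = [
    ["station_A", "office_district", "office_district", "hotel_B"],
    ["hotel_B", "sightseeing_1", "sightseeing_2", "station_A"],
    ["office_district", "station_A", "office_district", "hospital_C"],
    ["sightseeing_1", "hotel_B", "sightseeing_2", "hotel_D"],
] * 50  # repeat so the toy model has something to learn from

model = Word2Vec(courses, vector_size=16, window=2, min_count=1, epochs=20, seed=0)

# Zones that co-occur on similar courses end up close in the embedding space,
# which is the basis for recommending a next cruising destination.
print(model.wv.most_similar("office_district", topn=3))
```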

Keywords: taxi industry, decision making, recommendation system, embedding model

Procedia PDF Downloads 138
456 Assessment of the Spatio-Temporal Distribution of Pteridium aquilinum (Bracken Fern) Invasion on the Grassland Plateau in Nyika National Park

Authors: Andrew Kanzunguze, Lusayo Mwabumba, Jason K. Gilbertson, Dominic B. Gondwe, George Z. Nxumayo

Abstract:

Knowledge about the spatio-temporal distribution of invasive plants in protected areas provides a basis for hypotheses explaining the proliferation of plant invasions, alongside the development of relevant invasive plant monitoring programs. The aim of this study was to investigate the spatio-temporal distribution of bracken fern on the grassland plateau of Nyika National Park over the past 30 years (1986-2016) as well as to determine the current extent of the invasion. Remote sensing, machine learning, and statistical modelling techniques (object-based image analysis, image classification, and linear regression analysis) in geographical information systems were used to determine both the spatial and temporal distribution of bracken fern in the study area. Results revealed that bracken fern has been increasing its coverage on the Nyika plateau at an estimated annual rate of 87.3 hectares since 1986. This translates to an estimated net increase of 2,573.1 hectares, recorded from 1,788.1 hectares (1986) to 4,361.9 hectares (2016). As of 2017, bracken fern covered 20,940.7 hectares, approximately 14.3% of the entire grassland plateau. Additionally, it was observed that the fern was distributed most densely around Chelinda camp (on the central plateau) as well as along forest verges and roadsides across the plateau. Based on these results, it is recommended that Ecological Niche Modelling approaches be employed to (i) isolate the most important factors influencing bracken fern proliferation and (ii) identify and prioritize areas requiring immediate control interventions so as to minimize bracken fern proliferation in Nyika National Park.
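
As a quick arithmetic check on the trend quoted above, the sketch below fits a line to the two end-point extents; using only the 1986 and 2016 figures gives roughly 86 ha/yr, while the reported 87.3 ha/yr presumably comes from a regression over the intermediate image dates as well.

```python
import numpy as np

# Bracken fern extent (hectares) quoted in the abstract for the two end years.
years = np.array([1986, 2016])
area_ha = np.array([1788.1, 4361.9])

slope, intercept = np.polyfit(years, area_ha, 1)
print(f"annual increase ~ {slope:.1f} ha/yr")  # ~85.8 ha/yr from the endpoints alone
print(f"net increase 1986-2016 ~ {area_ha[1] - area_ha[0]:.1f} ha")
```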

Keywords: bracken fern, image classification, Landsat-8, Nyika National Park, spatio-temporal distribution

Procedia PDF Downloads 180
455 Eosinopenia: Marker for Early Diagnosis of Enteric Fever

Authors: Swati Kapoor, Rajeev Upreti, Monica Mahajan, Abhaya Indrayan, Dinesh Srivastava

Abstract:

Enteric fever is caused by the gram-negative bacilli Salmonella typhi and paratyphi. It is associated with high morbidity and mortality worldwide, and timely initiation of treatment is crucial for the prevention of complications. Cultures of body fluids are diagnostic but not always conclusive or practically feasible in most centers; moreover, waiting for culture results delays treatment initiation, and serological tests lack diagnostic value. Blood counts can offer a promising alternative for diagnosis. A retrospective study to determine the relevance of leucopenia and eosinopenia was conducted on 203 culture-proven enteric fever patients and 159 culture-proven non-enteric fever patients in a tertiary care hospital in New Delhi. Patient details were retrieved from the electronic medical records section of the hospital. Absolute eosinopenia was defined as an absolute eosinophil count (AEC) of less than 40/mm³ (normal range: 40-400/mm³) using an LH-750 Beckman Coulter automated machine. Leucopenia was defined as a total leucocyte count (TLC) of less than 4 × 10⁹/l. Blood cultures were performed using the BacT/ALERT FA Plus automated blood culture system before the first antibiotic dose was given. Case and control groups were compared using Pearson's chi-square test. It was observed that an absolute eosinophil count (AEC) of 0-19/mm³ was a significant finding (p < 0.001) in enteric fever patients, whereas leucopenia was not (p = 0.096). Using Receiver Operating Characteristic (ROC) curves, it was observed that patients with both AEC < 14/mm³ and TLC < 8 × 10⁹/l had a 95.6% chance of being diagnosed with enteric fever and only a 4.4% chance of being diagnosed as non-enteric fever; this result was highly significant, with p < 0.001. This association of AEC and TLC in enteric fever patients is very useful and can support the early initiation of treatment in clinically suspected enteric fever patients.
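
A minimal sketch of the ROC analysis used to choose such cut-offs is given below; the simulated eosinophil counts and group sizes merely stand in for the study's records, so the resulting AUC and threshold are illustrative only.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
# Hypothetical absolute eosinophil counts (cells/mm^3): enteric fever cases tend
# to have lower AEC than non-enteric febrile controls (label 1 = enteric fever).
aec_cases    = rng.gamma(shape=1.2, scale=12, size=203)
aec_controls = rng.gamma(shape=3.0, scale=60, size=159)
aec   = np.concatenate([aec_cases, aec_controls])
label = np.concatenate([np.ones(203), np.zeros(159)])

# Lower AEC indicates disease, so the score is -AEC for the usual ROC orientation.
fpr, tpr, thresholds = roc_curve(label, -aec)
print("AUC:", roc_auc_score(label, -aec))

# The Youden index picks the cut-off that best separates the two groups.
best = np.argmax(tpr - fpr)
print("suggested AEC cut-off:", -thresholds[best])
```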

Keywords: absolute eosinopenia, absolute eosinophil count, enteric fever, leucopenia, total leucocyte count

Procedia PDF Downloads 172
454 Transformer-Driven Multi-Category Classification for an Automated Academic Strand Recommendation Framework

Authors: Ma Cecilia Siva

Abstract:

This study introduces a Bidirectional Encoder Representations from Transformers (BERT)-based machine learning model aimed at improving educational counseling by automating the process of recommending academic strands for students. The framework is designed to streamline and enhance the strand selection process by analyzing students' profiles and suggesting suitable academic paths based on their interests, strengths, and goals. Data was gathered from a sample of 200 grade 10 students, which included personal essays and survey responses relevant to strand alignment. After thorough preprocessing, the text data was tokenized, label-encoded, and input into a fine-tuned BERT model set up for multi-label classification. The model was optimized for balanced accuracy and computational efficiency, featuring a multi-category classification layer with sigmoid activation for independent strand predictions. Performance metrics showed an F1 score of 88%, indicating a well-balanced model with precision at 80% and recall at 100%, demonstrating its effectiveness in providing reliable recommendations while reducing irrelevant strand suggestions. To facilitate practical use, the final deployment phase created a recommendation framework that processes new student data through the trained model and generates personalized academic strand suggestions. This automated recommendation system presents a scalable solution for academic guidance, potentially enhancing student satisfaction and alignment with educational objectives. The study's findings indicate that expanding the data set, integrating additional features, and refining the model iteratively could improve the framework's accuracy and broaden its applicability in various educational contexts.
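
A minimal Hugging Face sketch of a BERT model set up for this kind of multi-label strand prediction follows; the strand names and the 0.5 decision threshold are assumptions, and the classification head shown here is untrained, so meaningful probabilities would only appear after fine-tuning on the labeled essays and survey responses.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Academic strands assumed for illustration.
STRANDS = ["STEM", "ABM", "HUMSS", "GAS", "TVL"]

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=len(STRANDS),
    problem_type="multi_label_classification",  # sigmoid + BCE loss during fine-tuning
)

essay = "I enjoy building small electronics projects and solving math problems."
inputs = tokenizer(essay, truncation=True, padding=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
probs = torch.sigmoid(logits)[0]  # independent probability per strand (untrained head here)

# Recommend every strand whose probability clears a threshold (0.5 assumed).
recommended = [s for s, p in zip(STRANDS, probs) if p > 0.5]
print({s: round(float(p), 2) for s, p in zip(STRANDS, probs)}, recommended)
```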

Keywords: tokenized, sigmoid activation, transformer, multi category classification

Procedia PDF Downloads 9
453 Saving Energy through Scalable Architecture

Authors: John Lamb, Robert Epstein, Vasundhara L. Bhupathi, Sanjeev Kumar Marimekala

Abstract:

In this paper, we focus on the importance of scalable architecture for data centers, and for buildings in general, in helping an enterprise achieve environmental sustainability. A scalable architecture helps in many ways: it adapts to business and user requirements and promotes high-availability and disaster-recovery solutions that are cost-effective and low-maintenance. Scalable architecture also plays a vital role in the three core areas of sustainability: economy, environment, and society, also known as the three pillars of a sustainability model. If the architecture is scalable, it has many advantages; for example, it helps businesses and industries adapt to changing technology, drive innovation, promote platform independence, and build resilience against natural disasters. Most importantly, a scalable architecture helps industries introduce cost-effective measures for energy consumption, reduce wastage, increase productivity, and enable a robust environment. It also helps reduce carbon emissions through advanced monitoring and metering capabilities. Scalable architectures help reduce waste by optimizing designs to utilize materials efficiently, minimize resources, and decrease carbon footprints by using low-impact, environmentally friendly materials. In this paper, we also emphasize the importance of a cultural shift towards the reuse and recycling of natural resources for a balanced ecosystem and the maintenance of a circular economy. Also, since all of us are involved in the use of computers, much of the scalable architecture we have studied relates to data centers.

Keywords: scalable architectures, sustainability, application design, disruptive technology, machine learning and natural language processing, AI, social media platform, cloud computing, advanced networking and storage devices, advanced monitoring and metering infrastructure, climate change

Procedia PDF Downloads 106
452 AI Peer Review Challenge: Standard Model of Physics vs 4D GEM EOS

Authors: David A. Harness

Abstract:

The natural evolution of automated theorem proving (ATP) cognitive systems is to meet AI peer-review standards. The ATP process of axiom selection from Mizar to prove a conjecture would be further refined, as in all human and machine learning, by solving the real-world problem of the proposed AI peer-review challenge: determine which conjecture forms the higher-confidence constructive proof between the Standard Model of physics SU(n) lattice gauge group operation and the present non-standard 4D GEM EOS SU(n) lattice gauge group spatially extended operation, in which the photon and electron are the first two trace angular momentum invariants of a gravitoelectromagnetic (GEM) energy momentum density tensor wavetrain integration spin-stress pressure-volume equation of state (EOS), initiated via 32 lines of Mathematica code. The resulting gravitoelectromagnetic spectrum ranges from compressive through rarefactive of the central cosmological constant vacuum energy density in units of pascals. The said self-adjoint group operation exclusively operates on the stress energy momentum tensor of the Einstein field equations, introducing quantization directly at the 4D spacetime level, essentially reformulating the Yang-Mills virtual superpositioned particle compounded lattice gauge group quantization of the vacuum into a single hyper-complex multi-valued GEM U(1) × SU(1,3) lattice gauge group Planck spacetime mesh quantization of the vacuum. Thus, the Mizar corpus already contains all of the axioms required for relevant DeepMath premise selection and unambiguous formal natural language parsing in context deep learning.

Keywords: automated theorem proving, constructive quantum field theory, information theory, neural networks

Procedia PDF Downloads 179
451 Preparation and Characterization of Calcium Phosphate Cement

Authors: W. Thepsuwan, N. Monmaturapoj

Abstract:

Calcium phosphate cement (CPC) is one of the most attractive bioceramics due to its mouldability and its ability to fill complicated bony cavities or small dental defects. In this study, CPCs were produced from equimolar (1/1) mixtures of tetracalcium phosphate (TTCP, Ca4O(PO4)2) and dicalcium phosphate anhydrous (DCPA, CaHPO4) with aqueous solutions of acetic acid (C2H4O2) or disodium hydrogen phosphate dihydrate (Na2HPO4.2H2O), in combination with sodium alginate in order to improve their mouldability. The concentrations of the aqueous solutions and of the sodium alginate were varied to investigate the effects of the different aqueous solutions and of the alginate on the properties of the cements. The cement paste was prepared by mixing the cement powder (P) with the aqueous solution (L) at a P/L ratio of 1.0 g / 0.35 ml. X-ray diffraction (XRD) was used to analyse the phase formation of the cements. Setting times and compressive strength of the set CPCs were measured using the Gillmore apparatus and a universal testing machine, respectively. The results showed that CPCs could be produced using both the basic (Na2HPO4.2H2O) and the acidic (C2H4O2) solutions. XRD results show the precipitation of hydroxyapatite in all cement samples, with no change in phase formation among cements using different concentrations of the Na2HPO4.2H2O solution. With increasing concentration of the acidic solution, samples contained less hydroxyapatite and more dicalcium phosphate dihydrate, which led to a shorter setting time. Samples with sodium alginate exhibited higher crystallization of hydroxyapatite than those without alginate, resulting in a shorter setting time in the basic solution but a longer setting time in the acidic solution. Stronger cement was attained from the samples using the acidic solution with sodium alginate; however, its strength was still lower than that obtained with the basic solution.

Keywords: calcium phosphate cements, TTCP, DCPA, hydroxyapatite, properties

Procedia PDF Downloads 390
450 X-Ray Diffraction and Crosslink Density Analysis of Starch/Natural Rubber Polymer Composites Prepared by Latex Compounding Method

Authors: Raymond Dominic Uzoh

Abstract:

Starch fillers were extracted from three plant sources, namely amora tuber (a wild variety of Irish potato), sweet potato, and yam, and their particle size, pH, and amylose and amylopectin percentage composition were determined by high-performance liquid chromatography (HPLC). The starch was introduced into natural rubber in the liquid phase (through gelatinization) by the latex compounding method and compounded according to a standard method. The prepared starch/natural rubber composites were characterized for tensile mechanical properties with an Instron universal testing machine (UTM). The composites were further characterized by X-ray diffraction and crosslink density analysis. The particle size determination showed that amora starch granules have the largest particle size (156 × 47 μm), followed by yam starch (155 × 40 μm) and then sweet potato starch (153 × 46 μm). The pH test revealed that amora starch has a near-neutral pH of 6.9, yam 6.8, and sweet potato 5.2. Amylose and amylopectin determination showed that yam starch has the highest percentage of amylose (29.68), followed by potato (22.34) and then amora starch with the lowest value (14.86). The tensile testing revealed that yam starch produced the best tensile mechanical properties, followed by amora starch and then sweet potato starch. The structure and the crystalline/amorphous nature of the composites were confirmed by X-ray diffraction, while the nature of the crosslinking was confirmed by a swelling test in toluene using the Flory-Rehner approach. This research has provided a workable strategy for enhancing the interfacial interaction between a hydrophilic filler (starch) and a hydrophobic polymeric matrix (natural rubber), yielding moderately good tensile mechanical properties for further development and application in the rubber-processing industry.
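
A short sketch of the Flory-Rehner crosslink density calculation from an equilibrium toluene swelling test follows; the sample masses are hypothetical, and the densities, toluene molar volume, and NR-toluene interaction parameter are typical literature values used here only as assumptions.

```python
import math

# Flory-Rehner estimate of crosslink density from an equilibrium swelling test
# in toluene. Masses and constants below are typical/illustrative values, not
# the study's measurements.
m_dry     = 0.50   # g, deswollen (dried) sample mass
m_swollen = 1.90   # g, sample mass at equilibrium swelling
rho_rubber  = 0.92  # g/cm^3, natural rubber
rho_toluene = 0.87  # g/cm^3
V_s  = 106.3        # cm^3/mol, molar volume of toluene
chi  = 0.39         # NR-toluene interaction parameter (literature value)

v_rubber  = m_dry / rho_rubber
v_solvent = (m_swollen - m_dry) / rho_toluene
Vr = v_rubber / (v_rubber + v_solvent)  # rubber volume fraction in the swollen gel

crosslink_density = -(math.log(1 - Vr) + Vr + chi * Vr**2) / (
    V_s * (Vr ** (1 / 3) - Vr / 2)
)
print(f"Vr = {Vr:.3f}, crosslink density ~ {crosslink_density:.2e} mol/cm^3")
```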

Keywords: natural rubber, fillers, starch, amylose, amylopectin, crosslink density

Procedia PDF Downloads 170
449 Hedgerow Detection and Characterization Using Very High Spatial Resolution SAR DATA

Authors: Saeid Gharechelou, Stuart Green, Fiona Cawkwell

Abstract:

Hedgerows play an important role in a wide range of ecological habitats, in landscape and agricultural management, in carbon sequestration, and in wood production. Detecting hedgerows accurately from satellite imagery is a challenging remote sensing problem because, spatially, a hedge is very similar to a linear object such as a road and, spectrally, it is very similar to a forest. Remote sensors with very high spatial resolution (VHR) have recently enabled the automatic detection of hedges by acquiring images with sufficient spectral and spatial resolution. Indeed, VHR remote sensing data have recently provided the opportunity to detect hedgerows as line features, but difficulties remain in monitoring their characterization at the landscape scale. In this research, TerraSAR-X Spotlight and Staring mode data with 3-5 m resolution, acquired in the wet and dry seasons of 2014-2015, are used to detect hedgerows at a test site in Fermoy, Ireland. Dual-polarization (HH/VV) Spotlight data are used for hedgerow detection. Various SAR image techniques, integrated by trial and error with classification algorithms such as texture analysis, support vector machines, k-means, and random forest, are used to detect hedgerows and characterize them. Shannon entropy (ShE) and single- and double-bounce backscattering analysis from the polarimetric data are applied for object-oriented classification and, finally, extraction of the hedgerow network. The results are still in progress, and other methods still need to be applied to find the best approach for the study area. This research is ongoing, and the preliminary work presented here indicates that polarimetric TSX imagery can potentially detect hedgerows.

Keywords: TerraSAR-X, hedgerow detection, high resolution SAR image, dual polarization, polarimetric analysis

Procedia PDF Downloads 230
448 The Structure and Function Investigation and Analysis of the Automatic Spin Regulator (ASR) in the Powertrain System of Construction and Mining Machines with the Focus on Dump Trucks

Authors: Amir Mirzaei

Abstract:

The powertrain system is one of the most basic and essential components of a machine; motion is practically impossible without it. Power generated by the engine is transmitted by the powertrain system to the wheels, which are the last parts of the system. The powertrain has different components according to the type of use and the design. When the force generated by the engine reaches the wheels, the frictional force between the tire and the ground determines the amount of traction or slip. On surfaces such as icy, muddy, and snow-covered ground, the coefficient of friction between the tire and the ground decreases dramatically, which in turn increases the force loss and drastically reduces vehicle traction. This condition is caused by the phenomenon of slipping, which, in addition to wasting the energy produced, causes premature wear of the driving tires. It also causes the transmission oil temperature to rise excessively, which degrades and contaminates the oil and reduces the useful life of the clutch disks and plates inside the transmission. This issue is much more significant in road construction and mining machinery than in passenger vehicles and is always one of the most important issues to overcome in design. Various methods exist to overcome it; one of them is the automatic spin regulator system, abbreviated ASR. Its structure and function have solved one of the biggest challenges of the powertrain system in the field of construction and mining machinery, and they are examined in this research.

Keywords: automatic spin regulator, ASR, methods of reducing slipping, methods of preventing the reduction of the useful life of clutches disk and plate, methods of preventing the premature dirtiness of transmission oil, method of preventing the reduction of the useful life of tires

Procedia PDF Downloads 79
447 The Importance of Artificial Intelligence in Various Healthcare Applications

Authors: Joshna Rani S., Ahmadi Banu

Abstract:

Artificial intelligence (AI) has a significant role to play in the healthcare offerings of the future. It is the primary capability behind the development of precision medicine, widely agreed to be a sorely needed advance in care. Although early efforts at providing diagnosis and treatment recommendations have proven challenging, we expect that AI will ultimately master that domain as well. Given the rapid advances in AI for imaging analysis, it seems likely that most radiology and pathology images will eventually be examined by a machine. Speech and text recognition are already employed for tasks such as patient communication and the capture of clinical notes, and their use will increase. The greatest challenge to AI in these healthcare domains is not whether the technologies will be capable enough to be useful, but rather ensuring their adoption in daily clinical practice. For widespread adoption to take place, AI systems must be approved by regulators, integrated with EHR systems, standardized to a sufficient degree that similar products work alike, taught to clinicians, paid for by public or private payer organizations, and updated over time in the field. These challenges will ultimately be overcome, but it will take much longer for that to happen than it will take for the technologies themselves to mature. As a result, we expect to see limited use of AI in clinical practice within 5 years and more extensive use within 10 years. It also seems increasingly clear that AI systems will not replace human clinicians on a large scale, but rather will augment their efforts to care for patients. Over time, human clinicians may move toward tasks and job designs that draw on uniquely human skills such as empathy, persuasion, and big-picture integration. Perhaps the only healthcare providers who will risk their careers over time may be those who refuse to work alongside AI.

Keywords: artificial intelligence, health care, breast cancer, AI applications

Procedia PDF Downloads 181
446 Tool for Maxillary Sinus Quantification in Computed Tomography Exams

Authors: Guilherme Giacomini, Ana Luiza Menegatti Pavan, Allan Felipe Fattori Alves, Marcela de Oliveira, Fernando Antonio Bacchim Neto, José Ricardo de Arruda Miranda, Seizo Yamashita, Diana Rodrigues de Pina

Abstract:

The maxillary sinus (MS), part of the paranasal sinus complex, is one of the most enigmatic structures in modern humans. The literature has suggested that MSs function as olfaction accessories, to heat or humidify inspired air, for thermoregulation, to impart resonance to the voice, among other roles; thus, the real function of the MS is still uncertain. Furthermore, MS anatomy is complex and varies from person to person, and many diseases may affect the development of the sinuses. The incidence of rhinosinusitis and other pathoses in the MS is comparatively high, so volume analysis has clinical value. Providing volume values for the MS could be helpful in evaluating the presence of any abnormality and could be used for treatment planning and evaluation of the outcome. Computed tomography (CT) has allowed a more exact assessment of this structure, which enables a quantitative analysis. However, this is not always possible in the clinical routine, and if possible, it involves much effort and/or time. Therefore, it is necessary to have a convenient, robust, and practical tool correlated with the MS volume, allowing clinical applicability. Nowadays, the available methods for MS segmentation are manual or semi-automatic, and manual methods present inter- and intra-individual variability. Thus, the aim of this study was to develop an automatic tool to quantify the MS volume in CT scans of the paranasal sinuses. This study was developed with ethical approval from the authors' institutions and national review panels. The research involved 30 retrospective exams from the University Hospital, Botucatu Medical School, São Paulo State University, Brazil. The tool for automatic MS quantification, developed in Matlab®, uses a hybrid method combining different image processing techniques. For MS detection, the algorithm uses a Support Vector Machine (SVM) with features such as pixel value, spatial distribution, and shape. The detected pixels are used as seed points for a region growing (RG) segmentation. Then, morphological operators are applied to reduce false-positive pixels, improving the segmentation accuracy. These steps are applied to all slices of the CT exam, yielding the MS volume. To evaluate the accuracy of the developed tool, the automatic method was compared with manual segmentation performed by an experienced radiologist. For comparison, we used Bland-Altman statistics, linear regression, and the Jaccard similarity coefficient. In the statistical analyses comparing the two methods, the linear regression showed a strong association and low dispersion between the variables, the Bland-Altman analyses showed no significant differences between the methods, and the Jaccard similarity coefficient was > 0.90 in all exams. In conclusion, the developed tool to quantify MS volume proved to be robust, fast, and efficient when compared with manual segmentation. Furthermore, it avoids the intra- and inter-observer variations caused by manual and semi-automatic methods. As future work, the tool will be applied in clinical practice; thus, it may be useful in the diagnosis and treatment determination of MS diseases.
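
A schematic sketch of the hybrid detection-plus-region-growing idea follows, written in Python rather than the study's Matlab implementation; the toy 2-D slice, the per-pixel features, and the threshold-derived training labels are all assumptions made purely to show the two stages working together.

```python
import numpy as np
from collections import deque
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Toy 2-D "CT slice": a dark air-filled sinus (low intensity) inside brighter tissue.
slice_img = rng.normal(40, 5, size=(64, 64))
slice_img[20:40, 22:44] = rng.normal(-900, 30, size=(20, 22))  # sinus region

# 1) SVM seed detection from simple per-pixel features (value + position).
ys, xs = np.indices(slice_img.shape)
features = np.column_stack([slice_img.ravel(), ys.ravel(), xs.ravel()])
labels = (slice_img.ravel() < -500).astype(int)  # stand-in training labels
svm = SVC(kernel="rbf", gamma="scale").fit(features, labels)
scores = svm.decision_function(features).reshape(slice_img.shape)
seed = np.unravel_index(np.argmax(scores), slice_img.shape)

# 2) Region growing from the seed: accept 4-connected pixels within a tolerance.
def region_grow(img, seed, tol=150.0):
    mask = np.zeros(img.shape, dtype=bool)
    queue, ref = deque([seed]), img[seed]
    mask[seed] = True
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < img.shape[0] and 0 <= nx < img.shape[1]
                    and not mask[ny, nx] and abs(img[ny, nx] - ref) < tol):
                mask[ny, nx] = True
                queue.append((ny, nx))
    return mask

mask = region_grow(slice_img, seed)
pixel_area_mm2 = 0.5 * 0.5  # assumed in-plane spacing
print("segmented area:", mask.sum() * pixel_area_mm2, "mm^2")
```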

Keywords: maxillary sinus, support vector machine, region growing, volume quantification

Procedia PDF Downloads 504
445 Improving Fingerprinting-Based Localization System Using Generative AI

Authors: Getaneh Berie Tarekegn

Abstract:

A precise localization system is crucial for many artificial intelligence Internet of Things (AI-IoT) applications in the era of smart cities. Such applications include traffic monitoring, emergency alarming, environmental monitoring, location-based advertising, intelligent transportation, and smart health care. The most common method for providing continuous positioning services in outdoor environments is a global navigation satellite system (GNSS). Due to non-line-of-sight propagation, multipath, and weather conditions, GNSS systems do not perform well in dense urban, urban, and suburban areas. This paper proposes a generative AI-based positioning scheme for large-scale wireless settings using fingerprinting techniques. We present a semi-supervised deep convolutional generative adversarial network (S-DCGAN)-based radio map construction method for real-time device localization. It also employs a reliable signal fingerprint feature extraction method with t-distributed stochastic neighbor embedding (t-SNE), which extracts dominant features while eliminating noise from hybrid WLAN and long-term evolution (LTE) fingerprints. The proposed scheme reduced the workload of the site surveying required to build the fingerprint database by up to 78.5% and significantly improved positioning accuracy. The results show that the average positioning error of GAILoc is less than 0.39 m, and more than 90% of the errors are less than 0.82 m. According to the numerical results, SRCLoc improves positioning performance and reduces radio map construction costs significantly compared to traditional methods.
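
A minimal sketch of the t-SNE step, reducing hybrid WLAN/LTE received-signal-strength fingerprints to a low-dimensional feature, is shown below; the random fingerprints and their dimensions are placeholders standing in for the site survey data.

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)

# Hypothetical hybrid fingerprints: RSS (dBm) from 20 WLAN APs + 6 LTE cells,
# collected at 300 reference points; real data would come from the site survey.
n_points, n_wlan, n_lte = 300, 20, 6
fingerprints = np.hstack([
    rng.normal(-70, 8, size=(n_points, n_wlan)),
    rng.normal(-95, 5, size=(n_points, n_lte)),
])

# t-SNE projects each 26-D fingerprint to a low-dimensional feature that keeps
# similar fingerprints close together while suppressing noise.
embedded = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(fingerprints)
print(embedded.shape)  # (300, 2) features for the downstream localization model
```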

Keywords: location-aware services, feature extraction technique, generative adversarial network, long short-term memory, support vector machine

Procedia PDF Downloads 60
444 Interactive IoT-Blockchain System for Big Data Processing

Authors: Abdallah Al-ZoubI, Mamoun Dmour

Abstract:

The spectrum of IoT devices is becoming widely diversified, entering almost all possible fields and finding applications in industry, health, finance, logistics, and education, to name a few. Active IoT endpoint sensors and devices exceeded the 12 billion mark in 2021 and are expected to reach 27 billion in 2025, with over $34 billion in total market value. This sheer rise in the number and use of IoT devices brings with it considerable concerns regarding data storage, analysis, manipulation, and protection. IoT blockchain-based systems have recently been proposed as a decentralized solution for large-scale data storage and protection. COVID-19 has actually accelerated the desire to utilize IoT devices, as it impacted both demand and supply and significantly affected several regions for logistical reasons such as supply chain interruptions, a shortage of shipping containers, and port congestion. An IoT-blockchain system is proposed to handle, in an interactive manner, the big data generated by a distributed network of sensors and controllers. The system is designed on the Ethereum platform and utilizes smart contracts, programmed in Solidity, to execute and manage the data generated by IoT sensors and devices such as the Raspberry Pi 4 running Raspbian with add-on hardware security modules. The proposed system runs a number of applications hosted by a local machine used to validate transactions. It then sends data to the rest of the network through the InterPlanetary File System (IPFS) and Ethereum Swarm, forming a closed IoT ecosystem run by blockchain in which a number of distributed IoT devices can communicate and interact, thus forming a closed, controlled environment. A prototype has been deployed with three IoT handling units distributed over a wide geographical area in order to examine its feasibility, performance, and costs. Initial results indicated that big IoT data retrieval and storage is feasible and interactivity is possible, provided that certain conditions of cost, speed, and throughput are met.

Keywords: IoT devices, blockchain, Ethereum, big data

Procedia PDF Downloads 150
443 Short Text Classification Using Part of Speech Feature to Analyze Students' Feedback of Assessment Components

Authors: Zainab Mutlaq Ibrahim, Mohamed Bader-El-Den, Mihaela Cocea

Abstract:

Students' textual feedback can hold unique patterns and useful information about the learning process: it can reveal the advantages and disadvantages of teaching methods, assessment components, facilities, and other aspects of teaching. The results of analysing such feedback can form a key input for institutions' decision makers to advance and update their systems accordingly. This paper proposes a data mining framework for analysing end-of-unit general textual feedback using a part-of-speech (PoS) feature with four machine learning algorithms: support vector machines, decision trees, random forest, and naive Bayes. The proposed framework has two tasks. The first is to use the above algorithms to build an optimal model that automatically classifies the whole data set into two subsets: one tailored to assessment practices (assessment related) and the other containing the non-assessment-related data. The second is to use the same algorithms to build optimal models for the whole data set and for the new data subsets to automatically detect their sentiment. The significance of this paper is to compare the performance of the above four algorithms using the part-of-speech feature with the performance of the same algorithms using n-gram features. The paper follows the Knowledge Discovery and Data Mining (KDDM) framework to construct the classification and sentiment analysis models: understanding the assessment domain, cleaning and pre-processing the data set, selecting and running the data mining algorithms, interpreting the mined patterns, and consolidating the discovered knowledge. The experimental results show that models using either feature performed very well on the first task. On the second task, however, models that used the part-of-speech feature underperformed in comparison with models that used unigrams and bigrams.
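
A minimal sketch of deriving the part-of-speech feature and feeding it to the four classifiers follows; the NLTK tagger stands in for whichever tagger the authors used, and the feedback snippets and labels are invented for illustration.

```python
import nltk
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

# Invented end-of-unit feedback snippets with assessment-related labels (1 = assessment).
texts = [
    "The coursework deadlines were too close together.",
    "Lectures were engaging and well organised.",
    "The exam did not reflect the material taught in class.",
    "The library facilities are excellent.",
]
labels = [1, 0, 1, 0]

def pos_sequence(text):
    # Replace each word by its part-of-speech tag, e.g. "DT NN VBD RB JJ".
    tags = nltk.pos_tag(nltk.word_tokenize(text))
    return " ".join(tag for _, tag in tags)

pos_texts = [pos_sequence(t) for t in texts]
X = CountVectorizer(ngram_range=(1, 2)).fit_transform(pos_texts)  # PoS uni/bi-grams

for clf in (LinearSVC(), DecisionTreeClassifier(), RandomForestClassifier(), MultinomialNB()):
    clf.fit(X, labels)
    print(type(clf).__name__, clf.predict(X))
```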

Keywords: assessment, part of speech, sentiment analysis, student feedback

Procedia PDF Downloads 142
442 Chemical and Physical Properties and Biocompatibility of Ti–6Al–4V Produced by Electron Beam Rapid Manufacturing and Selective Laser Melting for Biomedical Applications

Authors: Bing–Jing Zhao, Chang-Kui Liu, Hong Wang, Min Hu

Abstract:

Electron beam rapid manufacturing (EBRM) and selective laser melting (SLM) are additive manufacturing processes that use 3D CAD data as a digital information source and energy in the form of a high-power electron beam or laser beam to create three-dimensional metal parts by fusing fine metallic powders together. Objective: The present study was conducted to evaluate the mechanical properties, phase transformation, corrosion behavior, and biocompatibility of Ti-6Al-4V produced by EBRM, SLM, and forging. Method: Ti-6Al-4V alloy standard test pieces were manufactured by EBRM, SLM, and forging according to AMS 4999, GB/T 228, and ISO 10993. The mechanical properties were measured with a universal testing machine. The phase transformation was analyzed by X-ray diffraction and scanning electron microscopy. The corrosion behavior was analyzed by electrochemical methods. The biocompatibility was analyzed by co-culturing with mesenchymal stem cells; scanning electron microscopy (SEM) and an alkaline phosphatase (ALP) assay were used to evaluate cell adhesion and differentiation, respectively. Results: The mechanical properties, phase transformation, corrosion behavior, and biocompatibility of Ti-6Al-4V produced by EBRM and SLM were similar to those of the forged alloy and met the mechanical property requirements of the AMS 4999 standard. The EBRM product showed an α-phase microstructure, in contrast to the α′-phase microstructure of the SLM product. Mesenchymal stem cell adhesion and differentiation were good. Conclusion: This study indicates that Ti-6Al-4V alloy manufactured by the EBRM and SLM techniques can meet the relevant medical standards, but further study is needed before the material can be applied with confidence in clinical practice.

Keywords: 3D printing, Electron Beam Rapid Manufacturing (EBRM), Selective Laser Melting (SLM), Computer Aided Design (CAD)

Procedia PDF Downloads 454
441 Revolutionizing Autonomous Trucking Logistics with Customer Relationship Management Cloud

Authors: Sharda Kumari, Saiman Shetty

Abstract:

Autonomous trucking is just one of the numerous significant shifts impacting fleet management services. The Society of Automotive Engineers (SAE) has defined six levels of vehicle automation that have been adopted internationally, including by the United States Department of Transportation. On public highways in the United States, organizations are testing driverless vehicles with at least Level 4 automation; a human remains present in the vehicle and can disable the automation, usually while the trucks are not engaged in highway driving. Completely driverless vehicles, however, are presently being tested in the state of California. While autonomous trucking can increase safety, decrease trucking costs, provide solutions to trucker shortages, and improve efficiencies, logistics, too, requires advancements to keep up with trucking innovations. Given that artificial intelligence, machine learning, and automated procedures enable people in other sectors to do their duties with fewer resources, customer relationship management (CRM) can be applied to the autonomous trucking business to provide the same level of efficiency. In a society witnessing significant digital disruptions, fleet management is likewise being transformed by technology. Utilizing strategic alliances to enhance core services is an effective technique for capitalizing on innovations and delivering enhanced services. Applying analytics to CRM systems improves cost control of fuel strategy, fleet maintenance, driver behavior, route planning, road safety compliance, and capacity utilization. Integration of autonomous trucks with automated fleet management, yard/terminal management, and customer service is possible, giving CRM significant power to redraw the lines between the public and private spheres in autonomous trucking logistics.

Keywords: autonomous vehicles, customer relationship management, customer experience, autonomous trucking, digital transformation

Procedia PDF Downloads 108
440 Optimizing Energy Efficiency: Leveraging Big Data Analytics and AWS Services for Buildings and Industries

Authors: Gaurav Kumar Sinha

Abstract:

In an era marked by increasing concerns about energy sustainability, this research endeavors to address the pressing challenge of energy consumption in buildings and industries. This study delves into the transformative potential of AWS services in optimizing energy efficiency. The research is founded on the recognition that effective management of energy consumption is imperative for both environmental conservation and economic viability. Buildings and industries account for a substantial portion of global energy use, making it crucial to develop advanced techniques for analysis and reduction. This study sets out to explore the integration of AWS services with big data analytics to provide innovative solutions for energy consumption analysis. Leveraging AWS's cloud computing capabilities, scalable infrastructure, and data analytics tools, the research aims to develop efficient methods for collecting, processing, and analyzing energy data from diverse sources. The core focus is on creating predictive models and real-time monitoring systems that enable proactive energy management. By harnessing AWS's machine learning and data analytics capabilities, the research seeks to identify patterns, anomalies, and optimization opportunities within energy consumption data. Furthermore, this study aims to propose actionable recommendations for reducing energy consumption in buildings and industries. By combining AWS services with metrics-driven insights, the research strives to facilitate the implementation of energy-efficient practices, ultimately leading to reduced carbon emissions and cost savings. The integration of AWS services not only enhances the analytical capabilities but also offers scalable solutions that can be customized for different building and industrial contexts. The research also recognizes the potential for AWS-powered solutions to promote sustainable practices and support environmental stewardship.
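One concrete building block of such a system is anomaly detection over metered consumption. The sketch below stands in for the abstract's AWS-hosted pipeline: it uses synthetic hourly readings and scikit-learn's Isolation Forest, whereas in the described architecture the data would arrive from managed AWS storage or streaming services rather than being generated locally.

```python
"""Illustrative sketch: flagging anomalous energy readings with an Isolation
Forest. The hourly data are synthetic stand-ins for metered consumption."""
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic hourly consumption for one building: a daily cycle plus noise,
# with a few injected spikes standing in for faulty equipment.
hours = pd.date_range("2024-01-01", periods=24 * 14, freq="h")
kwh = 50 + 20 * np.sin(2 * np.pi * hours.hour / 24) + rng.normal(0, 2, len(hours))
kwh[[40, 170, 250]] += 60  # anomalous spikes

df = pd.DataFrame({"timestamp": hours, "kwh": kwh})
df["hour"] = df["timestamp"].dt.hour  # encode time of day as a feature

# Fit the detector on consumption plus hour-of-day, so a high reading at
# night is more suspicious than the same reading at midday.
model = IsolationForest(contamination=0.01, random_state=0)
df["anomaly"] = model.fit_predict(df[["kwh", "hour"]]) == -1

# Readings flagged for review by facility managers.
print(df.loc[df["anomaly"], ["timestamp", "kwh"]])
```

The flagged timestamps are the kind of actionable output that a monitoring dashboard or alerting rule could surface to building operators.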

Keywords: energy consumption analysis, big data analytics, AWS services, energy efficiency

Procedia PDF Downloads 64
439 Low Enrollment in Civil Engineering Departments: Challenges and Opportunities

Authors: Alaa Yehia, Ayatollah Yehia, Sherif Yehia

Abstract:

There is a recurring issue of low enrollment across many civil engineering departments in postsecondary institutions. While there have been moments when enrollments began to increase, civil engineering departments across the Middle East have faced low enrollment, at around 60%, over the last five years. Many reasons could be attributed to this decline, such as low entry-level salaries, over-saturation of civil engineering graduates in the job market, and a lack of construction projects due to an impending or current recession. However, this recurring problem points to an intrinsic issue with the curriculum. The societal shift toward high technology such as machine learning (ML) and artificial intelligence (AI) demands individuals who are proficient at utilizing it, so existing curricula must adapt to this change in order to provide an education that is suitable for potential and current students. To provide potential solutions for this issue, the analysis in this paper considers two possible ways of implementing high technology in the civil engineering curriculum. The first approach is to introduce a dedicated course on applications of high technology in civil engineering contexts, while the other is to intertwine applications of high technology throughout the degree. Both approaches, however, should meet the requirements of accreditation agencies. In addition to the proposed improvement in the civil engineering curriculum, different pedagogical practices must be adopted as well. The passive learning approach might not be appropriate for Gen Z students; current students, now more than ever, need to be introduced to engineering topics and practice through different learning methods to ensure they will have the necessary skills for the job market. Learning methods that incorporate high technology applications, like AI, must be integrated throughout the curriculum to make the civil engineering degree more attractive to prospective students. Moreover, the paper provides insight into the importance of, and an approach to, adapting the civil engineering curriculum to address the low enrollment crisis that civil engineering departments globally, but specifically in the Middle East, are facing.

Keywords: artificial intelligence (AI), civil engineering curriculum, high technology, low enrollment, pedagogy

Procedia PDF Downloads 167
438 Cold Formed Steel Sections: Analysis, Design and Applications

Authors: A. Saha Chaudhuri, D. Sarkar

Abstract:

In steel construction, there are two families of structural members: hot rolled steel and cold formed steel. Cold formed steel sections are produced from steel sheet, strip, plate, or flat bar and are manufactured in roll forming machines or by press brake or bending operations. Cold formed steel (CFS) is also known as light gauge steel (LGS). As cold formed steel is a sustainable material, it is widely used in green building: it can be recycled and reused with no degradation in structural properties, and cold formed steel structures can earn credits for green building ratings such as LEED and similar programs. Cold formed steel construction satisfies the international demand for better, more efficient, and more affordable buildings. Cold formed steel sections are used in buildings, car bodies, railway coaches, various types of equipment, storage racks, grain bins, highway products, transmission towers, transmission poles, drainage facilities, bridge construction, etc. Various shapes of cold formed steel sections are available, such as C sections, Z sections, I sections, T sections, angle sections, hat sections, box sections, square hollow sections (SHS), rectangular hollow sections (RHS), and circular hollow sections (CHS). In building construction, cold formed steel is used for eave struts, purlins, girts, studs, headers, floor joists, braces, diaphragms, and covering for roofs, walls, and floors. Cold formed steel has a high strength-to-weight ratio and high stiffness. It is non-shrinking and non-creeping at ambient temperature, termite proof and rot proof, durable, dimensionally stable, and non-combustible. CFS is economical in transportation and handling. At present, cold formed steel has become a competitive building material. In this paper, present research work related to these applications is reviewed, and the use of CFS as a blast resistant structural system is examined.

Keywords: cold formed steel sections, applications, present research review, blast resistant design

Procedia PDF Downloads 150