Search results for: day night task modification
1046 Life Stage Customer Segmentation by Fine-Tuning Large Language Models
Authors: Nikita Katyal, Shaurya Uppal
Abstract:
This paper tackles the significant challenge of accurately classifying customers within a retailer’s customer base. Accurate classification is essential for developing targeted marketing strategies that effectively engage each customer segment. To address this issue, we propose a method that utilizes Large Language Models (LLMs). By employing LLMs, we analyze the metadata associated with product purchases derived from historical data to identify key product categories that act as distinguishing factors. These categories, such as baby food, eldercare products, or family-sized packages, offer valuable insights into the likely household composition of customers, including families with babies, families with kids/teenagers, families with pets, households caring for elders, or mixed households. We segment high-confidence customers into distinct categories by integrating historical purchase behavior with LLM-powered product classification. This paper asserts that life stage segmentation can significantly enhance e-commerce businesses’ ability to target the appropriate customers with tailored products and campaigns, thereby augmenting sales and improving customer retention. Additionally, the paper details the data sources, model architecture, and evaluation metrics employed for the segmentation task.
Keywords: LLMs, segmentation, product tags, fine-tuning, target segments, marketing communication
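The downstream step of turning product tags into life-stage segments can be illustrated with a minimal sketch. The tag vocabulary, segment names, and confidence threshold below are all hypothetical; in the paper, the tags come from a fine-tuned LLM rather than a fixed dictionary.

```python
from collections import Counter

# Hypothetical mapping from LLM-derived product tags to life-stage signals.
TAG_TO_SEGMENT = {
    "baby food": "family_with_baby",
    "diapers": "family_with_baby",
    "pet food": "family_with_pets",
    "eldercare products": "household_with_elders",
    "family-sized packages": "family_with_kids_teens",
}

def segment_customer(purchase_tags, min_share=0.3):
    """Assign high-confidence life-stage segments from tagged purchases."""
    hits = Counter(TAG_TO_SEGMENT[t] for t in purchase_tags if t in TAG_TO_SEGMENT)
    total = sum(hits.values())
    if total == 0:
        return []
    # Keep only segments supported by a sufficient share of tagged purchases.
    return [seg for seg, n in hits.items() if n / total >= min_share]

print(segment_customer(["baby food", "diapers", "pet food", "milk"]))
```

Untagged items ("milk") simply do not contribute evidence, so a household is labelled only when enough of its basket points the same way.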
Procedia PDF Downloads: 23
1045 Sediment Transport Monitoring in the Port of Veracruz Expansion Project
Authors: Francisco Liaño-Carrera, José Isaac Ramírez-Macías, David Salas-Monreal, Mayra Lorena Riveron-Enzastiga, Marcos Rangel-Avalos, Adriana Andrea Roldán-Ubando
Abstract:
The construction of most coastal infrastructure developments around the world is usually planned considering wave height, current velocities and river discharges; however, little attention has been paid to surveying sediment transport during dredging, or to the modification of currents outside ports and marinas during and after construction. This study presents a complete survey during the construction of one of the largest ports of the Gulf of Mexico. An anchored Acoustic Doppler Current Profiler (ADCP), a towed ADCP and a combination of model outputs were used at the Veracruz port construction site in order to describe the hourly sediment transport and current modifications in and out of the new port. Owing to the stability of the system, the new port was constructed inside Vergara Bay, a low-wave-energy system with a tidal range of up to 0.40 m. The results show a two-current system pattern within the bay: the north side of the bay has an anticyclonic gyre, while the southern part shows a cyclonic gyre. Sediment transport trajectories were computed every hour using the anchored ADCP, a numerical model and the weekly data obtained from the towed ADCP within the entire bay. The trajectories were carefully tracked since the bay is surrounded by coral reef structures, which are sensitive to sedimentation rate and water turbidity. The survey shows that during dredging and the rock input used to build the breakwater, sediments were added locally (< 2500 m²) and local currents dispersed them in less than 4 h, whereas the river input located in the middle of the bay and the sewage treatment plant may add more than 10 times this amount during a rainy day or during the tourist season.
Finally, the coastline surveyed seasonally with a drone suggests that the southern part of the bay has not been modified by the construction of the new port in the northern part, owing to the division of the bay into two subsystems.
Keywords: Acoustic Doppler Current Profiler, construction around coral reefs, dredging, port construction, sediment transport monitoring
Procedia PDF Downloads: 227
1044 Adaptation and Habituation to New Complete Dentures
Authors: Mohamed Khaled Ahmed Azzam
Abstract:
Complete dentures, non-biological appliances, have long been used to replace missing teeth and surrounding structures. Their main objectives are esthetics, speech, function and improvement of the patient's psychological state. Dentists must realize that, just as dentate patients vary in the complexity of their dental treatment, edentulous patients also vary in the difficulty of their treatment plan. Two main problems make it difficult for the removable prosthodontist to please patients with their new dentures. The first is denture construction: even when fabrication meets the highest standards, new dentures are initially an unpleasant experience for all patients and improve only with time; this adaptation varies from one to several years according to the patient’s attitude, age, gender, socio-economic level and culture. The second problem of edentulous patients is both physical and psychological. On the physical side, a good interview, good communication, and noting how patients present themselves regarding concerns about their appearance, overall attitude and expectations of treatment are very important. On the psychological side, patients may have such difficulty coping with new dentures that they do not use them at all. Hence, mental preparation should commence from day one using more than one method. This approach had a great impact on acceptance, led to habituation to the dentures, and left patients appreciative and pleased. In conclusion, to successfully treat edentulous patients, a great deal of information is required to complete a proper diagnosis, including the patient's mental attitude, past and present medical and dental conditions, and extra- and intra-oral examinations, in addition to the clinical experience and skill of the whole dental team.
Keywords: complete dentures, edentulous patients, management of denture, psychological mind preparation
Procedia PDF Downloads: 252
1043 Kannada Handwritten Character Recognition by Edge Hinge and Edge Distribution Techniques Using Manhattan and Minimum Distance Classifiers
Authors: C. V. Aravinda, H. N. Prakash
Abstract:
In this paper, we present a fusion approach and the state of the art pertaining to South Indian language (SIL) character recognition systems. In the first step, the text is preprocessed and normalized to perform the text identification correctly. The second step involves extracting relevant and informative features. The third step implements the classification decision. The three stages involved are data acquisition and preprocessing, feature extraction, and classification. Here we concentrate on two techniques to obtain features: feature extraction and feature selection. The edge-hinge distribution is a feature that characterizes the changes in direction of a script stroke in handwritten text. It is extracted by means of a window that is slid over an edge-detected binary handwriting image. Whenever the mid pixel of the window is on, the two edge fragments (i.e., connected sequences of pixels) emerging from this mid pixel are considered: their directions are measured and stored as pairs, and a joint probability distribution is obtained from a large sample of such pairs. Despite continuous effort, handwriting identification remains a challenging issue because different approaches use different varieties of features. Therefore, our study focuses on handwriting recognition based on feature selection to simplify the feature extraction task, optimize classification system complexity, reduce running time and improve classification accuracy.
Keywords: word segmentation and recognition, character recognition, optical character recognition, handwritten character recognition, South Indian languages
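The edge-hinge extraction described above can be sketched minimally as follows. This is a simplified illustration, not the paper's implementation: a plain 8-neighbour scan stands in for the sliding window, the image is a tiny hand-written 0/1 array, and directions are quantised into 8 assumed bins.

```python
import math
from collections import Counter

def edge_hinge_distribution(img, n_bins=8):
    """Joint distribution of direction pairs of edge fragments meeting at a pixel.
    `img` is a 2D list of 0/1 values from an edge-detected handwriting image."""
    h, w = len(img), len(img[0])
    pairs = Counter()
    for y in range(h):
        for x in range(w):
            if not img[y][x]:
                continue  # the window centre must lie on an edge pixel
            # Directions towards neighbouring edge pixels, quantised into bins.
            dirs = []
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    if (dy or dx) and 0 <= y + dy < h and 0 <= x + dx < w and img[y + dy][x + dx]:
                        ang = math.atan2(dy, dx) % (2 * math.pi)
                        dirs.append(int(ang / (2 * math.pi) * n_bins) % n_bins)
            # Every unordered pair of emerging fragments contributes one count.
            for i in range(len(dirs)):
                for j in range(i + 1, len(dirs)):
                    pairs[tuple(sorted((dirs[i], dirs[j])))] += 1
    total = sum(pairs.values()) or 1
    return {p: c / total for p, c in pairs.items()}

img = [[0, 1, 0],
       [1, 1, 1],
       [0, 1, 0]]
dist = edge_hinge_distribution(img)
print(sum(dist.values()))  # a joint probability distribution sums to 1
```

In practice, the histogram vectors obtained this way would feed the Manhattan and minimum-distance classifiers named in the title.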
Procedia PDF Downloads: 494
1042 Woodcast Is Ecologically Sound and Tolerated by a Majority of Patients
Authors: R. Hassan, J. Duncombe, E. Darke, A. Dias, K. Anderson, R. G. Middleton
Abstract:
NHS England has set itself the task of delivering a “Net Zero” National Health Service by 2040, and it is incumbent upon all health care practitioners, orthopaedic surgeons included, to work towards this goal. Distal radial fractures are the most common fractures sustained by the adult population, yet studies on the individual patient experience of casting materials are lacking. The aim of this study was to assess patient satisfaction and outcomes with woodcast used in the conservative management of distal radius fractures. For all patients managed with woodcast in our unit, we undertook a structured questionnaire that included the Patient Rated Wrist Evaluation (PRWE) score, the EQ-5D-5L score and a numerical pain score at the time of injury and six weeks after. 30 patients were initially managed with woodcast. 80% of patients tolerated woodcast for the full duration of their treatment; the remaining 20% did not tolerate it and had their casts removed within 48 hours. Of those who continued, 79.1% were satisfied with woodcast's comfort, 66% were very satisfied with its weight, 70% were satisfied with temperature and sweatiness, 62.5% were very satisfied with the smell/odour, and 75% were satisfied with the level of support woodcast provided. During their treatment, 83.3% of patients rated their pain as five or less. None of those who completed their treatment in woodcast required any further intervention or utilised the open appointment because of ongoing wrist problems. In conclusion, when woodcast is tolerated, patient satisfaction and outcome levels are good. However, since 20% of patients in our series were not able to tolerate woodcast, we suggest that a comparison between woodcast and the widely used synthetic plaster of Paris casting is in order.
Keywords: distal radius fractures, ecological cast, sustainability, woodcast
Procedia PDF Downloads: 100
1041 Brain Networks and Mathematical Learning Processes of Children
Authors: Felicitas Pielsticker, Christoph Pielsticker, Ingo Witzke
Abstract:
Neurological findings provide foundational results for many different disciplines. In this article we discuss these with a special focus on mathematics education; the intention is to make neuroscience research useful for the description of cognitive mathematical learning processes. A key issue of mathematics education is that students often behave as if their mathematical knowledge were constructed in isolated compartments tied to the specific context of the original learning situation; supporting students to link these compartments into a coherent mathematical society of mind is a fundamental task, not only for mathematics teachers. This aspect goes hand in hand with the question of whether there is such a thing as abstract, general mathematical knowledge detached from concrete reality. Educational neuroscience may answer why students develop their mathematical knowledge in isolated subjective domains of experience and whether it is generally possible to think in abstract terms. To address these questions, we provide examples from different fields of mathematics education, e.g., students’ development and understanding of the general concept of variables, or the mathematical notion of universal proofs. We discuss these aspects in the light of functional studies which elucidate the role of specific brain regions in mathematical learning processes. In doing so, the paper addresses concept formation processes of students in the mathematics classroom and how to support them adequately in view of the results of (educational) neuroscience.
Keywords: brain regions, concept formation processes in mathematics education, proofs, teaching-learning processes
Procedia PDF Downloads: 149
1040 Technical, Environmental and Financial Assessment for Optimal Sizing of Run-of-River Small Hydropower Project: Case Study in Colombia
Authors: David Calderon Villegas, Thomas Kaltizky
Abstract:
Run-of-river (RoR) hydropower projects represent a viable, clean, and cost-effective alternative to dam-based plants and provide decentralized power production. However, an RoR scheme’s cost-effectiveness depends on the proper selection of site and design flow, which is a challenging task because it requires multivariate analysis. In this respect, this study presents the development of an investment decision support tool for assessing the optimal size of an RoR scheme considering technical, environmental, and cost constraints. The net present value (NPV) from a project perspective is used as the objective function for supporting the investment decision. The tool has been tested by applying it to an actual RoR project recently proposed in Colombia. The results show that the optimum in financial terms does not match the flow that maximizes energy generation from the river's available flow: for the case study, the flow that maximizes energy is 5.1 m³/s, whereas a flow of 2.1 m³/s maximizes the investor's NPV. Finally, a sensitivity analysis is performed to determine the NPV as a function of changes in the debt rate, electricity prices, and CapEx. Even in the worst-case scenario, the optimal size represents a positive business case, with an NPV of USD 2.2 million and an IRR 1.5 times higher than the discount rate.
Keywords: small hydropower, renewable energy, RoR schemes, optimal sizing, objective function
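The gap between the energy-optimal and NPV-optimal design flow can be reproduced with a minimal sketch. The cost and energy curves below are entirely hypothetical stand-ins (the real tool derives them from the flow-duration curve, head, and project CapEx data); they only illustrate why diminishing energy returns plus linearly growing CapEx push the financial optimum below the energy optimum.

```python
import math

# Hypothetical curves for an RoR scheme as a function of design flow (m3/s).
def annual_energy_mwh(design_flow):
    return 40_000 * (1 - math.exp(-design_flow / 2.0))  # diminishing returns

def capex_usd(design_flow):
    return 6_000_000 + 2_500_000 * design_flow          # roughly linear in size

def npv(design_flow, price=60.0, rate=0.10, years=30):
    revenue = annual_energy_mwh(design_flow) * price
    annuity = (1 - (1 + rate) ** -years) / rate   # PV factor for `years` cash flows
    return revenue * annuity - capex_usd(design_flow)

flows = [q / 10 for q in range(5, 80)]            # candidate design flows
q_energy = max(flows, key=annual_energy_mwh)
q_npv = max(flows, key=npv)
print(q_energy, q_npv)   # the financial optimum sits below the energy optimum
```

With these assumed curves, the largest flow always maximizes energy, while NPV peaks where marginal discounted revenue equals marginal CapEx.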
Procedia PDF Downloads: 132
1039 Active Packaging Films Based on Chitosan Incorporated with Thyme Essential Oil and Cross Linkers and Its Effect on the Quality Shelf Life of Food
Authors: Aiman Zehra, Sajad Mohd Wani
Abstract:
Packaging has a vital role as it contains and protects food as it moves through the supply chain to the consumer. Among the plentiful natural macromolecules, including the whole polysaccharide class, chitosan (CH) has been extensively used in food packaging applications owing to its easy film-forming capacity, biodegradability, good oxygen and water vapour barrier ability and good mechanical strength. Compared to synthetic films, however, films produced from chitosan alone present poor barrier and mechanical properties, and a number of modification procedures are required to enhance them. Various additives such as plasticizers (e.g., glycerol and sorbitol), crosslinkers (e.g., CaCl₂, ZnO), fillers (nanoclay), antimicrobial agents (e.g., thyme essential oil) and emulsifying agents have been used to improve the mechanical, thermal, morphological and antimicrobial properties and the stability and elasticity of chitosan-based biodegradable films. Different novel biocomposite films based on chitosan incorporated with thyme essential oil and different additives (ZnO, CaCl₂, NC, and PEG) were successfully prepared and used as packaging material for carrot candy. The chitosan film incorporated with crosslinkers was capable of forming a protective barrier on the surface of the candy to maintain moisture content, water activity, TSS, total sugars, and titratable acidity. The combination ZnO + PEG + NC + CaCl₂ remarkably promotes a synergistic effect on the barrier properties of the film, and its use in CH-TO films was the most effective in preventing moisture gain in the candies. The lowest water activity (a_w = 0.624) was also observed for candies stored in this treatment. The colour values L*, a*, b* of the candies were likewise retained in the film containing all the additives during the 6th month of storage: the values of L*, a*, and b* observed for this treatment were 42.72, 9.89, and 10.84, respectively.
The candies packaged in this film retained TSS and acidity. The packaging film significantly (p ≤ 0.05) conserved sensory qualities and inhibited microbial activity during storage. Carrot candy was found microbiologically safe for human consumption even after six months of storage in all the packaging materials.
Keywords: chitosan, biodegradable films, antimicrobial activity, thyme essential oil, crosslinkers
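Colour retention of the stored candies can be quantified with the CIE76 colour difference, ΔE, between two L*a*b* readings. The sketch below uses the month-6 values reported above for the full-additive treatment; the day-0 reference reading is a hypothetical value inserted only for illustration.

```python
import math

def delta_e(lab1, lab2):
    """CIE76 colour difference between two L*a*b* readings."""
    return math.sqrt(sum((u - v) ** 2 for u, v in zip(lab1, lab2)))

stored = (42.72, 9.89, 10.84)   # values reported for the treatment at month 6
fresh = (45.0, 11.0, 12.0)      # hypothetical day-0 reference reading
print(round(delta_e(stored, fresh), 2))
```

Smaller ΔE between the fresh and stored readings indicates better colour retention by the film.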
Procedia PDF Downloads: 95
1038 General Architecture for Automation of Machine Learning Practices
Authors: U. Borasi, Amit Kr. Jain, Rakesh, Piyush Jain
Abstract:
Data collection, data preparation, model training, model evaluation, and deployment are the processes in a typical machine learning workflow. Training data needs to be gathered and organised; this often entails collecting a sizable dataset and cleaning it to remove or correct inaccurate or missing information. Preparing the data for use in the machine learning model requires pre-processing it after it has been acquired, which often entails actions like scaling or normalising the data, handling outliers, selecting appropriate features, and reducing dimensionality. This pre-processed data is then used to train a model with some machine learning algorithm. After the model has been trained, it is assessed by computing metrics like accuracy, precision, and recall on a test dataset. Every time a new model is built, both data pre-processing and model training, two crucial processes in the machine learning (ML) workflow, must be carried out. Moreover, various machine learning algorithms can be employed for every single approach to data pre-processing, generating a large set of combinations to choose from. For example, for every method of handling missing values (dropping records, replacing with the mean, etc.), for every scaling technique, and for every combination of selected features, a different algorithm can be used. As a result, in order to get the optimum outcomes, these tasks are frequently repeated in different combinations. This paper suggests a simple architecture for organizing this large “combination set of pre-processing steps and algorithms” into an automated workflow which simplifies the task of carrying out all possibilities.
Keywords: machine learning, automation, AutoML, architecture, operator pool, configuration, scheduler
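The "combination set" the architecture enumerates can be sketched as a Cartesian product over an operator pool. The operator names below are hypothetical placeholders, not the paper's configuration vocabulary; the point is only how quickly the candidate space grows.

```python
from itertools import product

# Hypothetical operator pool: each pre-processing choice crossed with each
# algorithm yields one candidate workflow for the scheduler to run.
missing_values = ["drop_records", "mean_impute", "median_impute"]
scaling = ["none", "min_max", "standard"]
algorithms = ["logreg", "random_forest", "svm"]

pipelines = list(product(missing_values, scaling, algorithms))
print(len(pipelines))   # 3 * 3 * 3 = 27 candidate workflows
print(pipelines[0])
```

Adding one more option to any stage multiplies the total, which is why automating the enumeration and scheduling of these runs pays off.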
Procedia PDF Downloads: 57
1037 Supply Chain Network Design for Perishable Products in Developing Countries
Authors: Abhishek Jain, Kavish Kejriwal, V. Balaji Rao, Abhigna Chavda
Abstract:
Increasing environmental and social concerns are forcing companies to take a fresh view of the impact of supply chain operations on environment and society when designing a supply chain. A challenging task in today’s food industry is the distribution of high-quality food items throughout the food supply chain. Improper storage and unwanted transportation are the major hurdles in the food supply chain and can be tackled by making dynamic storage facility location decisions within the distribution network. Since the food supply chain in India is one of the biggest in the world, companies should also consider the environmental impact caused by the supply chain. This project proposes a multi-objective optimization model that integrates sustainability into decision-making on distribution in a food supply chain network (SCN). A multi-objective mixed-integer linear programming (MOMILP) model trading off overall cost against the environmental impact caused by the SCN is formulated for the problem. The goal of the MOMILP is to determine the Pareto solutions for overall cost and environmental impact. It is solved using GAMS with CPLEX as a third-party solver. The outcomes of the project are the Pareto solutions for overall cost and environmental impact, the facilities to be operated, and the amounts to be transferred to each warehouse during the time horizon.
Keywords: multi-objective mixed linear programming, food supply chain network, GAMS, multi-product, multi-period, environment
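What a Pareto set of cost/impact trade-offs looks like can be shown with a toy enumeration. This is not the paper's GAMS/CPLEX MOMILP: the warehouse data are hypothetical and transport flows, products, and periods are omitted, leaving only the two-objective dominance logic.

```python
from itertools import combinations

# Toy instance: (fixed cost, environmental impact) per candidate warehouse.
warehouses = {"W1": (100, 30), "W2": (80, 50), "W3": (120, 20)}

def evaluate(subset):
    cost = sum(warehouses[w][0] for w in subset)
    impact = sum(warehouses[w][1] for w in subset)
    return cost, impact

# At least one warehouse must be open to serve demand.
subsets = [s for r in range(1, len(warehouses) + 1)
           for s in combinations(warehouses, r)]
points = {s: evaluate(s) for s in subsets}

def pareto(points):
    """Keep solutions not dominated in both cost and impact."""
    front = []
    for s, (c, e) in points.items():
        dominated = any(c2 <= c and e2 <= e and (c2, e2) != (c, e)
                        for c2, e2 in points.values())
        if not dominated:
            front.append((s, c, e))
    return front

for s, c, e in pareto(points):
    print(s, c, e)
```

Here the three single-warehouse designs survive as the Pareto front: each is cheapest or cleanest in a way no other solution matches on both objectives.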
Procedia PDF Downloads: 320
1036 NanoFrazor Lithography for Advanced 2D and 3D Nanodevices
Authors: Zhengming Wu
Abstract:
NanoFrazor lithography systems were developed as a first true alternative or extension to standard maskless nanolithography methods like electron beam lithography (EBL). In contrast to EBL, they are based on thermal scanning probe lithography (t-SPL), in which a heatable ultra-sharp probe tip with an apex of a few nm is used for patterning and simultaneously inspecting complex nanostructures. The heat impact of the probe on a thermally responsive resist generates these high-resolution nanostructures. The patterning depth of each individual pixel can be controlled with better than 1 nm precision using an integrated in-situ metrology method. Furthermore, the inherent imaging capability of the NanoFrazor technology allows for markerless overlay, which has been achieved with sub-5 nm accuracy, and supports stitching layout sections together with < 10 nm error. Pattern transfer from such resist features below 10 nm resolution has been demonstrated. The technology has proven its value as an enabler of new kinds of ultra-high-resolution nanodevices as well as for improving the performance of existing device concepts. The application range for this new nanolithography technique is very broad, spanning from ultra-high-resolution 2D and 3D patterning to chemical and physical modification of matter at the nanoscale. Nanometer-precise markerless overlay and non-invasiveness to sensitive materials are among the key strengths of the technology. However, while patterning below 10 nm resolution is achieved, significantly increasing the patterning speed at the expense of resolution is not feasible using the heated tip alone. Towards this end, an integrated laser write head for direct laser sublimation (DLS) of the thermal resist has been introduced for significantly faster patterning of micrometer- to millimeter-scale features.
Remarkably, the areas patterned by the tip and the laser are seamlessly stitched together, and both processes work on the very same resist material, enabling a true mix-and-match process with no developing or any other processing steps in between. The presentation will include examples of (i) high-quality metal contacting of 2D materials, (ii) tuning photonic molecules, (iii) generating nanofluidic devices and (iv) generating spintronic circuits. Some of these applications have been enabled only by the unique capabilities of NanoFrazor lithography, such as the absence of damage from a charged particle beam.
Keywords: nanofabrication, grayscale lithography, 2D materials device, nano-optics, photonics, spintronic circuits
Procedia PDF Downloads: 72
1035 Identification of Workplace Hazards of Underground Coal Mines
Authors: Madiha Ijaz, Muhammad Akram, Sima Mir
Abstract:
Underground mining of coal is carried out manually in Pakistan. Exposure to ergonomic hazards (musculoskeletal disorders) is very common among the coal cutters of these mines: cutting coal in narrow spaces poses a great threat to both the upper and lower limbs of these workers. To observe the prevalence of such hazards, a thorough study was conducted on 600 workers from 30 mines (20 workers per mine) located in two districts of the province of Punjab, Pakistan. The Rapid Upper Limb Assessment (RULA) sheet and Rapid Entire Body Assessment (REBA) sheet were used for the study, along with the standard Nordic Musculoskeletal Disorders questionnaire. SPSS 25 software was used for data analysis of upper and lower limb disorders, and regression analysis models were run for upper and lower back pain. According to the results, work stage (drilling and blasting, coal cutting, timbering and supporting, etc.), work experience and the number of repetitions performed per minute were significant (p-values 0.00, 0.004 and 0.009, respectively) for discomfort in the upper and lower limbs. Age had a p-value of 0.00 for upper limb and 0.012 for lower limb disorders. The task of coal cutting was strongly associated with pain in the upper back (odds ratio 13.21, 95% confidence interval (CI) 14.0-21.64) and lower back (odds ratio 3.7, 95% CI 1.3-4.2). As scored on the RULA and REBA sheets, every work stage was ranked at 7, the highest level of risk. Workers were young (mean age 28.7 years) with a mean BMI of 28.1 kg/m².
Keywords: workplace hazards, ergonomic disorders, limb disorders, MSDs
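The odds ratios with confidence intervals reported above come from a standard 2x2 exposure/outcome analysis, which can be sketched with the Woolf (log) method. The cell counts below are hypothetical, inserted only to show the computation; they are not the paper's raw data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% CI (Woolf method) for a 2x2 table:
    a = exposed with pain, b = exposed without,
    c = unexposed with pain, d = unexposed without."""
    odds_ratio = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
    lo = math.exp(math.log(odds_ratio) - z * se)
    hi = math.exp(math.log(odds_ratio) + z * se)
    return odds_ratio, lo, hi

# Hypothetical counts: coal cutters vs. other workers, lower back pain yes/no.
print(odds_ratio_ci(40, 60, 15, 85))
```

A CI that excludes 1 indicates a statistically significant association between the work stage and the reported pain.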
Procedia PDF Downloads: 83
1034 Barriers to Teachers' Use of Technology in Nigeria and Its Implications in the Academic Performance of Students of Higher Learning: A Case Study of Adeniran Ogunsanya College of Education, Lagos
Authors: Iyabo Aremu
Abstract:
The role of the teacher in stirring a qualitative and distinctive knowledge-driven and value-laden environment with modern teaching practices cannot be overemphasized. In spite of the myriad advantages the use of Information and Communication Technology (ICT) promises, many teachers are still at the rear of this archetypal transition. These teachers, notable forces needed to elicit positive academic performance from students of higher learning, are ill-equipped for the task. In view of this, the research work sought to assess how teachers have been able to effectively apply ICT tools to improve students’ academic performance in higher institutions and to evaluate the challenges teachers face in using these tools. The research adopted a descriptive survey design and involved a sample of 25 lecturers from five schools in the study area: Adeniran Ogunsanya College of Education (AOCOED). The Barriers to Teachers’ Use of ICT Questionnaire (BTUICTQ) was used to gather data from these respondents, and the data were tested with chi-square at the 0.05 level of significance. The results revealed that the perception and attitude of teachers towards the use of ICT are not favourable, that teachers suffer from gaps in ICT knowledge and skills, and that lack of training and inadequate support are major challenges teachers contend with. The study recommended that teachers be given adequate training and support and that schools ensure teachers' unrestricted access to ICT gadgets.
Keywords: ICT, teachers, AOCOED, academic performance
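The chi-square test at the 0.05 level used above can be reproduced by hand for a contingency table. The table below is hypothetical (e.g., ICT-trained vs. untrained lecturers against ICT use), inserted only to show the computation against the df = 1 critical value of 3.841.

```python
def chi_square(table):
    """Pearson chi-square statistic for an r x c contingency table."""
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    n = sum(rows)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = rows[i] * cols[j] / n   # expected count under independence
            stat += (obs - exp) ** 2 / exp
    return stat

# Hypothetical responses: trained vs. untrained lecturers x uses ICT / does not.
table = [[10, 2],
         [4, 9]]
stat = chi_square(table)
print(stat > 3.841)  # critical value for df = 1 at the 0.05 level
```

Exceeding the critical value leads to rejecting independence, i.e., the barrier variable is associated with ICT use.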
Procedia PDF Downloads: 160
1033 Comparative Study of Non-Identical Firearms with Priority to Repair Subject to Inspection
Authors: A. S. Grewal, R. S. Sangwan, Dharambir, Vikas Dhanda
Abstract:
The purpose of this paper is to develop and analyze two reliability models for a system of non-identical firearms: one standard firearm (called the original unit) and one country-made firearm (called the duplicate/substandard unit). A single server comes immediately to carry out inspection and repair whenever needed. On the failure of the standard firearm, the server inspects the operative country-made firearm to see whether it is capable of performing the desired function well. If the country-made firearm is not capable of doing so, operation of the system is stopped and the server starts repairing the standard firearm immediately. However, no inspection is done at the failure of the country-made firearm, as the standard firearm alone is capable of performing the given task well. In model I, priority to repair the standard firearm is given in case the system fails completely while the country-made firearm is already under repair, whereas in model II there is no such priority. The failure and repair times of each unit are assumed to be independent and uncorrelated random variables. The distributions of the failure times of the units are taken as negative exponential, while those of the repair and inspection times are general. Using a semi-Markov process and the regenerative point technique, some econo-reliability measures are obtained, and graphs are plotted to compare the MTSF (mean time to system failure), availability and profit of the models for a particular case.
Keywords: non-identical firearms, inspection, priority to repair, semi-Markov process, regenerative point
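The semi-Markov/regenerative-point derivation is beyond a short example, but the MTSF of a two-unit system of this kind can be estimated with a minimal Monte Carlo sketch under simplifying assumptions: both units run in parallel, failure rates are hypothetical exponentials, the "general" repair time is taken as uniform, and inspection and repair priority are omitted (before the first system failure at most one unit is ever down, so priority never arises in this reduced model).

```python
import random

random.seed(42)

def simulate_once(l_std=0.01, l_dup=0.02, repair=(5, 15)):
    """One history of the simplified two-firearm system: runs until both units
    are down at once (system failure) and returns that failure time."""
    fail_std = random.expovariate(l_std)   # next failure of the standard unit
    fail_dup = random.expovariate(l_dup)   # next failure of the country-made unit
    while True:
        if fail_std <= fail_dup:
            done = fail_std + random.uniform(*repair)  # repair the standard unit
            if fail_dup <= done:
                return fail_dup   # duplicate fails during repair: system down
            fail_std = done + random.expovariate(l_std)
        else:
            done = fail_dup + random.uniform(*repair)  # repair the duplicate
            if fail_std <= done:
                return fail_std
            fail_dup = done + random.expovariate(l_dup)

mtsf = sum(simulate_once() for _ in range(20_000)) / 20_000
print(round(mtsf, 1))
```

The analytic models in the paper obtain this quantity (and availability and profit) in closed form via regenerative points; the simulation is only a sanity-check style illustration.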
Procedia PDF Downloads: 425
1032 Studies on Performance of an Airfoil and Its Simulation
Authors: Rajendra Roul
Abstract:
The main objective of the project is to bring attention to the performance of an aerofoil when exposed to the fluid medium inside a wind tunnel. The project involves civil as well as mechanical engineering, making it a multidisciplinary project. An aerofoil of the desired size is taken into consideration to carry out the project effectively. An aerofoil is the shape of the wing or blade of a propeller, rotor or turbine. Many experiments have been carried out in wind tunnels with an aerofoil as the reference object to inform the future design of turbine blades, cars and aircraft. Lift and drag have become the major identification factors for any design industry, which shows that wind tunnel testing, along with software analysis (ANSYS), has become a mandatory task for researchers forecasting an aerodynamic design. This project is an initiative towards the mitigation of drag, better lift and analysis of the wake surface profile by investigating the surface pressure distribution. Readings were taken on the aerofoil model in a wind tunnel testing machine (WTTM) at air velocities of 20 m/s, 25 m/s and 30 m/s and angles of attack of 0°, 5°, 10°, 15° and 20°. Air velocity and pressures are measured in several ways in the wind tunnel testing machine using measuring instruments like an anemometer and a multi-tube manometer. Moreover, to make the analysis more accurate, ANSYS Fluent contributes substantially through CFD simulation results. Analysis of an aerofoil has a wide spectrum of applications beyond aerodynamics, including wind loads in the design of buildings and bridges for structural engineers.
Keywords: wind tunnel, aerofoil, ANSYS, multi-tube manometer
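Converting wind-tunnel readings into the lift and drag figures discussed above follows the standard non-dimensionalisation by dynamic pressure and planform area. The balance readings and model dimensions below are hypothetical, chosen only to match one of the stated test velocities (25 m/s).

```python
def aero_coefficients(lift_n, drag_n, velocity, chord, span, rho=1.225):
    """Lift and drag coefficients from wind-tunnel force readings.
    Planform area S = chord * span; rho is sea-level air density (kg/m^3)."""
    q = 0.5 * rho * velocity ** 2   # dynamic pressure, Pa
    s = chord * span
    return lift_n / (q * s), drag_n / (q * s)

# Hypothetical balance readings at 25 m/s for a 0.1 m x 0.3 m aerofoil model.
cl, cd = aero_coefficients(lift_n=6.9, drag_n=0.46, velocity=25, chord=0.1, span=0.3)
print(round(cl, 2), round(cd, 2), round(cl / cd, 1))
```

Plotting Cl and Cd against the angle of attack at each test velocity is the usual way to compare the WTTM readings with the ANSYS Fluent results.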
Procedia PDF Downloads: 414
1031 Bacteriophages for Sustainable Wastewater Treatment: Application in Black Water Decontamination with an Emphasis to DRDO Biotoilet
Authors: Sonika Sharma, Mohan G. Vairale, Sibnarayan Datta, Soumya Chatterjee, Dharmendra Dubey, Rajesh Prasad, Raghvendra Budhauliya, Bidisha Das, Vijay Veer
Abstract:
Bacteriophages are viruses that parasitize specific bacteria and multiply in metabolising host bacteria. Because bacteriophages hunt a single bacterial species or a subset of species, they are potential antibacterial agents. Utilizing the ability of phages to control bacterial populations has several applications, from medicine to agriculture, aquaculture and the food industry. Harnessing phage-based techniques in wastewater treatment to improve the quality of effluent and sludge released into the environment is thus a potential area for R&D application. The phage-mediated bactericidal effect in any wastewater treatment process has many controlling factors that determine treatment performance. In laboratory conditions, the titer of bacteriophages (coliphages) isolated from the effluent water of a specially designed anaerobic digester of human night soil (DRDO Biotoilet) was successfully increased with a modified protocol of the classical double-layer agar technique. Enrichment was carried out, and the efficacy of the phage-enriched medium was evaluated under different conditions (specific media, temperature, storage conditions). A growth optimization study was carried out on different media: soybean casein digest medium (tryptone soya medium), Luria-Bertani medium, phage deca broth medium and MNA medium (modified nutrient medium). Further, the temperature-phage yield relationship was observed at three different temperatures, 27 ˚C, 37 ˚C and 44 ˚C, under laboratory conditions; results showed higher coliphage activity at 27 ˚C and 37 ˚C. The addition of divalent ions (10 mM MgCl₂, 5 mM CaCl₂) and 5% glycerol resulted in a significant increase in phage titer. Besides this, the effect of adding antibiotics such as ampicillin and kanamycin at different concentrations on plaque formation was analysed; ampicillin at a concentration of 1 mg/ml stimulated phage infection and resulted in a greater number of plaques.
Experiments to test the viability of the phage showed that it can remain active for 6 months at 4 ˚C in fresh tryptone soya broth supplemented with a fresh culture of coliforms (early log phase). The application of bacteriophages (especially coliphages) for the treatment of effluent water contaminated with human faecal matter is unique. This environment-friendly treatment system not only reduces the pathogenic coliforms but also decreases the competition between nuisance bacteria and functionally important microbial populations. Therefore, a phage-based cocktail to treat the faecal pathogenic bacteria present in black water has many implications for wastewater treatment processes, including the ‘DRDO Biotoilet’, an ecofriendly, appropriate and affordable human faecal matter treatment technology for different climates and situations.
Keywords: wastewater, microbes, virus, biotoilet, phage viability
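Phage titer in the double-layer agar technique above is computed from the plaque count, the plated volume, and the dilution of the sample. The plate counts below are hypothetical, inserted only to show the arithmetic.

```python
def titer_pfu_per_ml(plaque_count, plated_volume_ml, dilution_factor):
    """Phage titer from a countable double-layer agar plate.
    dilution_factor is the total dilution of the plated sample, e.g. 1e-6."""
    return plaque_count / (plated_volume_ml * dilution_factor)

# Hypothetical plate: 87 plaques from 0.1 ml of a 10^-6 dilution.
print(f"{titer_pfu_per_ml(87, 0.1, 1e-6):.2e} PFU/ml")
```

Comparing such titers before and after enrichment (or with and without MgCl₂/CaCl₂ and glycerol) quantifies the increases the abstract reports.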
Procedia PDF Downloads 436
1030 Modeling and Simulating Drop Interactions in Spray Structure of High Torque Low Speed Diesel Engine
Authors: Rizwan Latif, Syed Adnan Qasim, Muzaffar Ali
Abstract:
Fuel direct injection represents one of the key aspects in the development of diesel engines. The idea of controlling the auto-ignition and the consequent combustion of a liquid spray injected into a reacting atmosphere on a time scale of a few milliseconds has been a challenging task for the engine community and has pushed forward massive research in this field. The quality of the air-fuel mixture defines the combustion efficiency, and therefore the engine efficiency. Droplet interaction in both the dense and the thin portions of the spray deserves the same attention as other parameters of the spray structure. Usually, these interactions are modeled along with the breakup process and analyzed alike. In this paper, droplet interaction is modeled and simulated for a high torque, low speed scenario. Droplet interactions may further be subdivided into droplet collision and coalescence, spray wall impingement, droplet drag, etc. Droplet collisions may occur in almost all spray applications, but especially under diesel-like conditions such as the high-pressure sprays utilized in combustion engines. These collisions have a strong influence on the mean droplet size and its spatial distribution and can, therefore, affect sub-processes of spray combustion such as mass, momentum and energy transfer between gas and droplets. Similarly, for high-pressure injection systems, spray wall impingement is an inherent sub-process of mixture formation; however, its influence on combustion is not explicit.
Keywords: droplet collision, coalescence, low speed, diesel fuel
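Droplet collision outcomes (bouncing, coalescence, stretching or reflexive separation) are commonly classified by the collision Weber number. The sketch below computes it for assumed diesel-like values; the numbers are illustrative and not taken from the paper:

```python
def weber_number(density, rel_velocity, diameter, surface_tension):
    """Collision Weber number We = rho * u_rel^2 * d / sigma.

    Higher We generally shifts collision outcomes from bouncing and
    coalescence towards separation and shattering.
    """
    return density * rel_velocity**2 * diameter / surface_tension

# Assumed diesel-like values (for illustration only):
# rho ~ 830 kg/m^3, u_rel = 10 m/s, d = 50 um, sigma ~ 0.025 N/m
we = weber_number(830.0, 10.0, 50e-6, 0.025)
# ≈ 166
```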
Procedia PDF Downloads 236
1029 Changing Trends and Attitudes towards Online Assessment
Authors: Renáta Nagy, Alexandra Csongor, Jon Marquette, Vilmos Warta
Abstract:
The presentation aims at eliciting insight into the results of ongoing research regarding evolving trends and attitudes towards online assessment of English for Medical Purposes. The focus pinpoints online assessment as one of the most prominent forms available during the global pandemic. The study was first initiated in 2019, with the main target of revealing the intriguing question of students’ and assessors’ attitudes towards online assessment. The research questions the attitudes towards the latest trends and possible online task types, and examines their advantages and disadvantages through an in-depth experimental process currently being implemented. Materials and methods include surveys, needs and wants analysis, and thorough investigations regarding candidates’ and assessors’ attitudes towards online tests in the field of Medicine. The examined test tasks include various online tests drafted in both English and Hungarian by student volunteers at the Medical School of the University of Pécs, Hungary. Over 400 respondents from more than 28 countries participated in the survey, which gives us an international and intercultural insight into how students with different cultural and educational backgrounds deal with the evolving online world. The results show the pandemic’s impact, which brought the slumbering online world of assessment roaring alive; it is now fully operational and bears phenomenal relevance in today’s global education. Undeniably, the results can be used as a perspective in a vast array of contexts. The survey hypothesized that the 21st-century generation expects everything to be readily available online; however, the question of whether they are ready for this challenge lurks in the background.
Keywords: assessment, changes, English, ESP, online assessment, online, trends
Procedia PDF Downloads 202
1028 Teaching Writing in the Virtual Classroom: Challenges and the Way Forward
Authors: Upeksha Jayasuriya
Abstract:
The sudden transition from onsite to online teaching/learning due to the COVID-19 pandemic called for a need to incorporate feasible as well as effective methods of online teaching in most developing countries like Sri Lanka. The English as a Second Language (ESL) classroom faces specific challenges in this adaptation, and teaching writing can be identified as the most challenging task compared to teaching the other three skills. This study was therefore carried out to explore the challenges of teaching writing online and to provide effective means of overcoming them while taking into consideration the attitudes of students and teachers with regard to learning/teaching English writing via online platforms. A survey questionnaire was distributed (electronically) among 60 students from the University of Colombo, the University of Kelaniya, and The Open University in order to find out the challenges faced by students, while in-depth interviews were conducted with 12 lecturers from the mentioned universities. The findings reveal that the inability to observe students’ writing and to receive real-time feedback discourage students from engaging in writing activities when taught online. It was also discovered that both students and teachers increasingly prefer Google Slides over other platforms such as Padlet, Linoit, and Jam Board as it boosts learner autonomy and student-teacher interaction, which in turn allows real-time formative feedback, observation of student work, and assessment. Accordingly, it can be recommended that teaching writing online can be better facilitated by using interactive platforms such as Google Slides, for it promotes active learning and student engagement in the ESL class.
Keywords: ESL, teaching writing, online teaching, active learning, student engagement
Procedia PDF Downloads 89
1027 Direct Oxidation Synthesis for a Dual-Layer Silver/Silver Orthophosphate with Controllable Tetrahedral Structure as an Active Photoanode for Solar-Driven Photoelectrochemical Water Splitting
Authors: Wen Cai Ng, Saman Ilankoon, Meng Nan Chong
Abstract:
The vast increase in global energy demand, coupled with growing concerns over environmental issues, has triggered the search for cleaner alternative energy sources. In view of this, photoelectrochemical (PEC) water splitting offers a sustainable hydrogen (H2) production route that requires only solar energy, water, and a PEC system operating in an ambient environment. However, the current advancement of PEC water splitting technologies is still far from the commercialization benchmark of at least 10% solar-to-H2 (STH) efficiency. This is largely due to the shortcomings of the photoelectrodes used in PEC systems, such as the rapid recombination of photogenerated charge carriers and limited photo-responsiveness in the visible-light spectrum. Silver orthophosphate (Ag3PO4) possesses many intrinsic properties desirable for fabrication into photoanodes used in PEC systems, such as a narrow bandgap of 2.4 eV and a low valence band (VB) position. Hence, in this study, a highly efficient Ag3PO4-based photoanode was synthesized and characterized. The surface of an Ag foil substrate was directly oxidized to fabricate a top layer composed of {111}-bound Ag3PO4 tetrahedrons with a porous structure, forming the dual-layer Ag/Ag3PO4 photoanode. Furthermore, the key synthesis parameters were systematically investigated by varying the concentration ratio of capping agent to precursor (R), the volume ratio of hydrogen peroxide (H2O2) to water, and the reaction period. Results showed that the optimized dual-layer Ag/Ag3PO4 photoanode achieved a photocurrent density as high as 4.19 mA/cm2 at 1 V vs. Ag/AgCl for an R-value of 4, an H2O2-to-water volume ratio of 3:5 and a 20 h reaction period. The current work provides a solid foundation for further nanoarchitecture modification strategies on Ag3PO4-based photoanodes for more efficient PEC water splitting applications.
Keywords: solar-to-hydrogen fuel, photoelectrochemical water splitting, photoelectrode, silver orthophosphate
Procedia PDF Downloads 121
1026 MIMIC: A Multi Input Micro-Influencers Classifier
Authors: Simone Leonardi, Luca Ardito
Abstract:
Micro-influencers are effective elements in the marketing strategies of companies and institutions because of their capability to create a hyper-engaged audience around a specific topic of interest. In recent years, many scientific approaches and commercial tools have handled the task of detecting this type of social media user. These strategies adopt solutions ranging from rule-based machine learning models to deep neural networks and graph analysis on text, images, and account information. This work compares the existing solutions and proposes an ensemble method to generalize them across different input data and social media platforms. The deployed solution combines deep learning models on unstructured data with statistical machine learning models on structured data. We retrieve both social media account information and multimedia posts from Twitter and Instagram. These data are mapped into feature vectors for an eXtreme Gradient Boosting (XGBoost) classifier. Sixty different topics have been analyzed to build a rule-based gold standard dataset and to compare the performance of our approach against baseline classifiers. We prove the effectiveness of our work by comparing the accuracy, precision, recall, and F1 score of our model across different configurations and architectures. We obtained an accuracy of 0.91 with our best performing model.
Keywords: deep learning, gradient boosting, image processing, micro-influencers, NLP, social media
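The fusion of heterogeneous inputs into a single vector for the boosting classifier can be sketched as below. The account statistics, embedding sizes, and follow-ratio feature are assumptions for illustration, not the paper's exact feature set:

```python
import numpy as np

def build_feature_vector(account_stats, text_embedding, image_embedding):
    """Fuse structured account features with deep-learning embeddings
    into one vector for a gradient-boosting classifier.

    account_stats: dict of raw counts (hypothetical field names).
    text_embedding / image_embedding: 1-D arrays from upstream deep models.
    """
    structured = np.array([
        account_stats["followers"],
        account_stats["following"],
        account_stats["posts"],
        account_stats["followers"] / max(account_stats["following"], 1),  # follow ratio
    ], dtype=float)
    # concatenation is the simplest fusion; the result feeds the classifier
    return np.concatenate([structured, text_embedding, image_embedding])

stats = {"followers": 8000, "following": 400, "posts": 250}
vec = build_feature_vector(stats, np.zeros(8), np.zeros(8))
# vec would then be passed to e.g. an XGBoost classifier's fit/predict
```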
Procedia PDF Downloads 183
1025 Web Proxy Detection via Bipartite Graphs and One-Mode Projections
Authors: Zhipeng Chen, Peng Zhang, Qingyun Liu, Li Guo
Abstract:
With the Internet becoming the dominant channel for business and life, many IPs are increasingly masked using web proxies for illegal purposes such as propagating malware, hosting phishing pages to steal sensitive data, or redirecting victims to other malicious targets. Moreover, as Internet traffic continues to grow in size and complexity, detecting proxy services has become an increasingly challenging task due to their dynamic updates and high anonymity. In this paper, we present an approach based on behavioral graph analysis to study the behavior similarity of web proxy users. Specifically, we use bipartite graphs to model host communications from network traffic and build one-mode projections of the bipartite graphs to discover the social-behavior similarity of web proxy users. Based on the similarity matrices of end-users derived from the one-mode projection graphs, we apply a simple yet effective spectral clustering algorithm to discover the inherent behavior clusters of web proxy users. A web proxy's URL may vary from time to time, but its users' inherent interests do not. Based on this intuition, using our private tools implemented with WebDriver, we examine whether the top URLs visited by web proxy users are themselves web proxies. Our experimental results on real datasets show that the behavior clusters not only reduce the number of URLs to analyze but also provide an effective way to detect web proxies, especially unknown ones.
Keywords: bipartite graph, one-mode projection, clustering, web proxy detection
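A one-mode projection of a user-URL bipartite graph can be sketched in a few lines. The toy users and URLs below are hypothetical; in the full approach, the resulting similarity weights would form the matrix fed into spectral clustering:

```python
from itertools import combinations

def one_mode_projection(visits):
    """Project a user-URL bipartite graph onto its user side.

    visits: dict mapping user -> set of visited URLs.
    Returns dict mapping (user_a, user_b) -> number of shared URLs,
    a simple behavior-similarity weight between the two users.
    """
    edges = {}
    for a, b in combinations(sorted(visits), 2):
        shared = len(visits[a] & visits[b])
        if shared:  # keep only pairs with at least one shared URL
            edges[(a, b)] = shared
    return edges

# Hypothetical traffic: u1 and u2 share two proxy URLs, u3 does not
proxy_users = {
    "u1": {"proxy.example/a", "proxy.example/b"},
    "u2": {"proxy.example/a", "proxy.example/b", "news.example"},
    "u3": {"news.example"},
}
sim = one_mode_projection(proxy_users)
# → {("u1", "u2"): 2, ("u2", "u3"): 1}
```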
Procedia PDF Downloads 245
1024 Beliefs about the God of the Other in Intergroup Conflict: Experimental Results from Israel and Palestine
Authors: Crystal Shackleford, Michael Pasek, Allon Vishkin, Jeremy Ginges
Abstract:
In the Middle East, conflict is often viewed as religiously motivated. In this context, an important question is how we think the religion of the other drives their behavior. If people see conflicts as religious, they may expect the beliefs of the other to motivate intergroup bias. Beliefs about the motivations of the other impact how we engage with them, and conflict may result if actors believe the other's religion promotes parochialism. To examine how actors on the ground in Israel-Palestine think about the God of the other as it relates to the other's behavior towards them, we ran two studies in winter 2019: one with an online sample of Jewish Israelis and one as fieldwork with Palestinians in the West Bank. We asked participants to predict the behavior of an outgroup member participating in an economic game task, dividing money between themselves and another person who is either an ingroup or outgroup member. Our experimental manipulation asked participants to predict the behavior of the other when the other is thinking of their God. Both Israelis and Palestinians believed outgroup members would show in-group favoritism and would give more to their in-group when thinking of their God. However, participants also thought outgroup members thinking of God would give more to the participants' own group: Palestinians predicted that Israelis would give more to fellow Israelis when thinking of God, but also more to Palestinians. Our results suggest that religious belief is seen to promote universal moral reasoning, even in a context with over 70 years of intense conflict. More broadly, this challenges the narrative that religion necessarily motivates intractable conflict.
Keywords: conflict, psychology, religion, meta-cognition, morality
Procedia PDF Downloads 138
1023 A Framework for Chinese Domain-Specific Distant Supervised Named Entity Recognition
Abstract:
Knowledge graphs have become a new form of knowledge representation. However, there is no consensus regarding a plausible definition of entities and relationships in domain-specific knowledge graphs. Further, owing to several limitations and deficiencies, existing domain-specific entity and relationship recognition approaches are far from perfect. Specifically, named entity recognition in the Chinese domain is a critical task for natural language processing applications, yet a bottleneck for Chinese named entity recognition in new domains is the lack of annotated data. To address this challenge, a domain-specific distant supervised named entity recognition framework is proposed. The framework is divided into two stages: first, a distant supervised corpus is generated based on an entity linking model built on a graph attention neural network; second, the generated corpus is used to train the distant supervised named entity recognition model to obtain named entities. The linking model is verified on the CCKS2019 entity linking corpus, where its F1 value is 2% higher than that of the benchmark method. A re-pre-trained BERT language model is added to the benchmark method, and the results show that it is more suitable for distant supervised named entity recognition tasks. Finally, the framework is applied in the computer domain, and the results show that it can obtain domain named entities.
Keywords: distant named entity recognition, entity linking, knowledge graph, graph attention neural network
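The idea of distant supervision is to generate labels automatically from a knowledge base instead of manual annotation. A common baseline heuristic is longest-match lexicon lookup producing BIO tags; the paper instead generates its corpus with a graph-attention entity-linking model, so the sketch below is only a minimal stand-in with a hypothetical lexicon:

```python
def distant_labels(text, entity_lexicon):
    """Token-level BIO tags by longest-match lookup against a KB lexicon.

    A simple distant-supervision heuristic: every span that matches a
    knowledge-base entity string is labeled B/I, everything else O.
    """
    tags = ["O"] * len(text)
    entities = sorted(entity_lexicon, key=len, reverse=True)  # longest match first
    i = 0
    while i < len(text):
        for ent in entities:
            if text.startswith(ent, i):
                tags[i] = "B"
                for j in range(i + 1, i + len(ent)):
                    tags[j] = "I"
                i += len(ent)
                break
        else:
            i += 1  # no entity starts here
    return tags

# Hypothetical example (character-level, as is usual for Chinese text)
tags = distant_labels("BERT improves NER", {"BERT", "NER"})
# "BERT" → B,I,I,I and "NER" → B,I,I; all other characters → O
```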
Procedia PDF Downloads 93
1022 Monocular Depth Estimation Benchmarking with Thermal Dataset
Authors: Ali Akyar, Osman Serdar Gedik
Abstract:
Depth estimation is a challenging computer vision task that involves estimating the distance between objects in a scene and the camera; it predicts how far each pixel in the 2D image is from the capturing point. Several important Monocular Depth Estimation (MDE) studies are based on Vision Transformers (ViT), and we benchmark three major ones. The first aims to build a simple and powerful foundation model that deals with any image under any condition. The second proposes a method that mixes multiple datasets during training together with a robust training objective. The third combines generalization performance with state-of-the-art results on specific datasets. Although there are studies with thermal images too, we wanted to benchmark these three non-thermal, state-of-the-art studies on a hybrid image dataset captured with Multi-Spectral Dynamic Imaging (MSX) technology. MSX technology produces detailed thermal images by bringing together the thermal and visual spectrums. Thanks to this technology, our dataset images are not blurred and poorly detailed like normal thermal images; on the other hand, they are not captured under the ideal lighting conditions of RGB images. We compared the three methods under test on our thermal dataset, which has not been done before. Additionally, we propose an image enhancement deep learning model for thermal data that helps extract the features required for monocular depth estimation. The experimental results demonstrate that, after using our proposed model, the performance of the three methods under test increased significantly for thermal image depth prediction.
Keywords: monocular depth estimation, thermal dataset, benchmarking, vision transformers
Procedia PDF Downloads 32
1021 Approach for Updating a Digital Factory Model by Photogrammetry
Authors: R. Hellmuth, F. Wehner
Abstract:
Factory planning has the task of designing products, plants, processes, organization, areas, and the construction of a factory. The requirements for factory planning and the building of a factory have changed in recent years. Regular restructuring is becoming more important in order to maintain the competitiveness of a factory. Restrictions in new areas, shorter life cycles of product and production technology as well as a VUCA world (Volatility, Uncertainty, Complexity & Ambiguity) lead to more frequent restructuring measures within a factory. A digital factory model is the planning basis for rebuilding measures and becomes an indispensable tool. Short-term rescheduling can no longer be handled by on-site inspections and manual measurements. The tight time schedules require up-to-date planning models. Due to the high adaptation rate of factories described above, a methodology for rescheduling factories on the basis of a modern digital factory twin is conceived and designed for practical application in factory restructuring projects. The focus is on rebuild processes. The aim is to keep the planning basis (digital factory model) for conversions within a factory up to date. This requires the application of a methodology that reduces the deficits of existing approaches. The aim is to show how a digital factory model can be kept up to date during ongoing factory operation. A method based on photogrammetry technology is presented. The focus is on developing a simple and cost-effective solution to track the many changes that occur in a factory building during operation. The method is preceded by a hardware and software comparison to identify the most economical and fastest variant.
Keywords: digital factory model, photogrammetry, factory planning, restructuring
Procedia PDF Downloads 117
1020 Deep Vision: A Robust Dominant Colour Extraction Framework for T-Shirts Based on Semantic Segmentation
Authors: Kishore Kumar R., Kaustav Sengupta, Shalini Sood Sehgal, Poornima Santhanam
Abstract:
Fashion is a human expression that is constantly changing, and one of the prime factors that consistently influences it is the change in colour preferences. The role of colour in our everyday lives is significant; it subconsciously says a lot about one's mindset and mood. Analyzing colours extracted from outfit images is therefore a critical way to examine individual and consumer behaviour. Several research works have been carried out on extracting colours from images, but to the best of our knowledge, no studies extract colours for a specific apparel item and identify colour patterns geographically. This paper proposes a framework for accurately extracting colours from T-shirt images and predicting dominant colours geographically. The proposed method consists of two stages: first, a U-Net deep learning model is adopted to segment the T-shirts from the images; second, the colours are extracted only from the T-shirt segments. The proposed method employs the iMaterialist (Fashion) 2019 dataset for the semantic segmentation task. The framework also includes a mechanism for gathering data and analyzing India's general colour preferences. From this research, it was observed that black and grey are the dominant colours in different regions of India. The proposed method can be adapted to study fashion's evolving colour preferences.
Keywords: colour analysis in t-shirts, convolutional neural network, encoder-decoder, k-means clustering, semantic segmentation, U-Net model
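The second stage, extracting the dominant colour from the segmented pixels with k-means (as the keywords indicate), can be sketched as follows. This is a minimal Lloyd's iteration on a toy pixel array with deterministic initialization, not the paper's implementation:

```python
import numpy as np

def dominant_colour(pixels, k=3, iters=10):
    """Dominant colour of masked T-shirt pixels via a minimal k-means.

    pixels: (N, 3) float array of RGB values inside the U-Net segment.
    Returns the centroid of the largest cluster.
    """
    centroids = pixels[:k].astype(float).copy()  # deterministic init: first k pixels
    labels = np.zeros(len(pixels), dtype=int)
    for _ in range(iters):
        # assign each pixel to its nearest centroid
        d = np.linalg.norm(pixels[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # move each centroid to the mean of its assigned pixels
        for c in range(k):
            if np.any(labels == c):
                centroids[c] = pixels[labels == c].mean(axis=0)
    counts = np.bincount(labels, minlength=k)
    return centroids[counts.argmax()]

# Toy segment: three near-black pixels and one white pixel
px = np.array([[0, 0, 0], [5, 5, 5], [10, 10, 10], [250, 250, 250]], dtype=float)
col = dominant_colour(px, k=2)
# → approximately [5, 5, 5] (the near-black cluster dominates)
```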
Procedia PDF Downloads 111
1019 Automatic Multi-Label Image Annotation System Guided by Firefly Algorithm and Bayesian Method
Authors: Saad M. Darwish, Mohamed A. El-Iskandarani, Guitar M. Shawkat
Abstract:
Nowadays, the amount of available multimedia data is continuously on the rise, and finding a required image is a challenging task for an ordinary user. Content based image retrieval (CBIR) computes relevance based on the visual similarity of low-level image features such as color and texture. However, there is a gap between low-level visual features and the semantic meanings required by applications. The typical method of bridging this semantic gap is automatic image annotation (AIA), which extracts semantic features using machine learning techniques. In this paper, a multi-label image annotation system guided by the Firefly algorithm and a Bayesian method is proposed. First, images are segmented using the maximum intra-cluster variance criterion and the Firefly algorithm, a swarm-based approach with high convergence speed and low computational cost that searches for the optimal multiple thresholds. Feature extraction techniques based on color features and region properties are then applied to obtain representative features. After that, the images are annotated using a translation model based on the Net Bayes system, which is efficient for multi-label learning with high precision and low complexity. Experiments are performed using the Corel database. The results show that the proposed system outperforms traditional ones for automatic image annotation and retrieval.
Keywords: feature extraction, feature selection, image annotation, classification
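The maximum-variance segmentation criterion that the Firefly algorithm optimizes is essentially Otsu's between-class variance. A brute-force single-threshold version is shown below for illustration only, since the paper searches multiple thresholds with the swarm instead of exhaustively:

```python
def max_variance_threshold(gray_values):
    """Exhaustive search for the threshold maximizing between-class
    variance (Otsu's criterion) on a list of gray-level values.

    Between-class variance: w0 * w1 * (m0 - m1)^2, where w are class
    weights and m are class means for pixels below/above the threshold.
    """
    best_t, best_var = None, -1.0
    n = len(gray_values)
    for t in range(min(gray_values), max(gray_values)):
        lo = [v for v in gray_values if v <= t]
        hi = [v for v in gray_values if v > t]
        if not lo or not hi:
            continue
        w0, w1 = len(lo) / n, len(hi) / n
        m0, m1 = sum(lo) / len(lo), sum(hi) / len(hi)
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_t, best_var = t, var
    return best_t

# Toy bimodal gray levels: a dark mode around 11 and a bright mode around 205
t = max_variance_threshold([10, 12, 11, 200, 210, 205])
# → 12 (the first threshold cleanly separating the two modes)
```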
Procedia PDF Downloads 586
1018 Genomic Sequence Representation Learning: An Analysis of K-Mer Vector Embedding Dimensionality
Authors: James Jr. Mashiyane, Risuna Nkolele, Stephanie J. Müller, Gciniwe S. Dlamini, Rebone L. Meraba, Darlington S. Mapiye
Abstract:
When performing language tasks in natural language processing (NLP), the dimensionality of word embeddings is chosen either ad hoc or by optimizing the Pairwise Inner Product (PIP) loss. The PIP loss is a metric that measures the dissimilarity between word embeddings; it is obtained through matrix perturbation theory by exploiting the unitary invariance of word embeddings. In genomics, especially in genome sequence processing, there is no notion of a “word” as in natural language; rather, there are sequence substrings of length k called k-mers. K-mer sizes matter, and they vary depending on the goal of the task at hand. The dimensionality of word embeddings in NLP has been studied using matrix perturbation theory and the PIP loss. In this paper, the sufficiency and reliability of applying word-embedding algorithms to various genomic sequence datasets are investigated to understand the relationship between the k-mer size and the embedding dimension. This is done by studying the scaling capability of three embedding algorithms, namely Latent Semantic Analysis (LSA), Word2Vec, and Global Vectors (GloVe), with respect to the k-mer size. Utilising the PIP loss as a metric to train embeddings on different datasets, we also show that Word2Vec outperforms LSA and GloVe in computing accurate embeddings as both the k-mer size and the vocabulary increase. Finally, the shortcomings of natural language processing embedding algorithms in performing genomic tasks are discussed.
Keywords: word embeddings, k-mer embedding, dimensionality reduction
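The k-mer "words" underlying these embeddings are simply overlapping substrings, and the vocabulary they induce grows exponentially in k, which is why the useful embedding dimension shifts with k. A minimal sketch:

```python
def kmers(seq, k):
    """All overlapping k-mers of a genomic sequence (the 'words')."""
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

def vocab_size(k, alphabet=4):
    """Possible k-mer vocabulary grows as alphabet**k (4**k for DNA)."""
    return alphabet ** k

print(kmers("ACGTAC", 3))  # → ['ACG', 'CGT', 'GTA', 'TAC']
print(vocab_size(3))       # → 64
```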
Procedia PDF Downloads 137
1017 Ultra-High Molecular Weight Polyethylene (UHMWPE) for Radiation Dosimetry Applications
Authors: Malik Sajjad Mehmood, Aisha Ali, Hamna Khan, Tariq Yasin, Masroor Ikram
Abstract:
Ultra-high molecular weight polyethylene (UHMWPE) is a polymer of the polyethylene (PE) family with the monomer –CH2– and an average molecular weight of approximately 3-6 million g/mol. Due to its chemical, mechanical, physical and biocompatible properties, it has been extensively used in electrical insulation, medicine, orthopedics, microelectronics, engineering, chemistry and the food industry. To alter or modify the properties of UHMWPE for a particular application of interest, various procedures are in practice, e.g., treating the material with high energy irradiation such as gamma rays, e-beams, and ion bombardment. Radiation treatment of UHMWPE induces free radicals within its matrix, and these free radicals are the precursors of chain scission, chain accumulation, formation of double bonds, molecular emission, crosslinking, etc. All of these physical and chemical processes are mainly responsible for the modification of polymer properties for particular applications of interest, e.g., fabricating LEDs, optical sensors, antireflective coatings, polymeric optical fibers, and, most importantly, for radiation dosimetry applications. Therefore, to check the feasibility of using UHMWPE for radiation dosimetry applications, compressed sheets of UHMWPE were irradiated at room temperature (~25°C) for total dose values of 30 kGy and 100 kGy, respectively, while one sheet was kept unirradiated as a reference. Transmittance data (from 400 nm to 800 nm) of the e-beam irradiated UHMWPE and its hybrids were measured using a Mueller matrix spectro-polarimeter. As a result, significant changes occurred in the absorption behavior of the irradiated samples. To analyze these radiation-induced changes in the polymer matrix, the Urbach edge method and a modified Tauc equation were used. The results reveal that the optical activation energy decreases with irradiation.
The values of the activation energies are 2.85 meV, 2.48 meV, and 2.40 meV for the control, 30 kGy, and 100 kGy samples, respectively. Direct and indirect energy band gaps were also found to decrease with irradiation due to the variation of C=C unsaturation in clusters. We believe that the reported results open new horizons for radiation dosimetry applications.
Keywords: electron beam, radiation dosimetry, Tauc’s equation, UHMWPE, Urbach method
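For reference, the Urbach edge and Tauc analyses mentioned above are commonly written in the following standard forms (these are the textbook relations, not the authors' modified equation):

```latex
% Urbach rule: exponential absorption tail below the band edge,
% with E_U the Urbach energy extracted from the slope of ln(alpha) vs E
\alpha(E) = \alpha_0 \exp\!\left(\frac{E - E_0}{E_U}\right)

% Tauc relation: gamma = 1/2 for direct and gamma = 2 for indirect
% allowed transitions; E_g is the optical band gap
(\alpha h\nu)^{1/\gamma} = B\,(h\nu - E_g)
```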
Procedia PDF Downloads 407