Search results for: continuous speed profile data
27551 Study on Effective Continuous Assessments Methods to Improve Undergraduates English Language Skills
Authors: K. M. R. Siriwardhana
Abstract:
Sri Lanka is a developing country in South Asia that uses English as its second language. Today, most university students in Sri Lanka eagerly pursue knowledge, giving special consideration to English as their second language, understanding that English is indispensable for climbing the career ladder in both local and international contexts. However, a considerable failure rate in English can still be seen among Sri Lankan undergraduates. Further, most Sri Lankan universities now use English as their medium of instruction, making English a credited subject intended to brighten the future of Sri Lankan students. Accordingly, many universities employ an array of assessments to evaluate undergraduates' competence in the English language. The main objective of this study was to ascertain effective assessment methods that improve the second-language skills of Sri Lankan university students and also create greater interest in learning English. One hundred (100) undergraduates were selected as the research sample, and the primary data were collected using a semi-structured questionnaire along with classroom observations and semi-structured interviews. Data were analyzed mainly descriptively, employing graphical illustrations. The research findings revealed that practical assessments such as oral tests, competitive drama, and presentations are more effective in improving language skills and are preferred by the majority of students over written assignments and papers. Further, most students scored better in practical assignments than in written assignments. Hence, the study concludes that the most beneficial way of improving the English language skills of Sri Lankan undergraduates is practical assessment, as it gives them the opportunity to apply the language with confidence and competence in actual situations. Further, the study recommends that language teachers improve their own skills and creativity in designing and employing such assessments, as this will develop both second-language teaching and learning skills. Ultimately, university graduates will be able to secure positions internationally, being well versed in English, the lingua franca of the world.
Keywords: assessments, second language, Sri Lanka, undergraduates
Procedia PDF Downloads 303
27550 Data Quality as a Pillar of Data-Driven Organizations: Exploring the Benefits of Data Mesh
Authors: Marc Bachelet, Abhijit Kumar Chatterjee, José Manuel Avila
Abstract:
Data quality is a key component of any data-driven organization. Without data quality, organizations cannot effectively make data-driven decisions, which often leads to poor business performance. Therefore, it is important for an organization to ensure that the data it uses is of high quality. This is where the concept of data mesh comes in. Data mesh is a decentralized organizational and architectural approach to data management that can help organizations improve the quality of their data. The concept of data mesh was first introduced in 2020. Its purpose is to decentralize data ownership, making it easier for domain experts to manage the data. This can help organizations improve data quality by reducing reliance on centralized data teams and allowing domain experts to take charge of their data. This paper discusses how a set of elements, including data mesh, can serve as tools for increasing data quality. One key benefit of data mesh is improved metadata management. In a traditional data architecture, metadata management is typically centralized, which can lead to data silos and poor data quality. With data mesh, metadata is managed in a decentralized manner, ensuring accurate and up-to-date metadata and thereby improving data quality. Another benefit of data mesh is the clarification of roles and responsibilities. In a traditional data architecture, data teams are responsible for managing all aspects of data, which can lead to confusion and ambiguity about responsibilities. With data mesh, domain experts are responsible for managing their own data, which provides clarity in roles and responsibilities and improves data quality. Additionally, data mesh can contribute to a new form of organization that is more agile and adaptable. By decentralizing data ownership, organizations can respond more quickly to changes in their business environment, which in turn can improve overall performance by providing better business insights through better reports and visualization tools. Monitoring and analytics are also important aspects of data quality. With data mesh, monitoring and analytics are decentralized, allowing domain experts to monitor and analyze their own data. This helps identify and address data quality problems quickly, leading to improved data quality. Data culture is another major aspect of data quality. With data mesh, domain experts are encouraged to take ownership of their data, which can help create a data-driven culture within the organization, leading to improved data quality and better business outcomes. Finally, the paper explores the contribution of AI in the coming years. AI can help enhance data quality by automating many data-related tasks, such as data cleaning and data validation. By integrating AI into data mesh, organizations can further enhance the quality of their data. The concepts mentioned above are illustrated by feedback from the experience of AEKIDEN, an international data-driven consultancy that has successfully implemented a data mesh approach. By sharing its experience, AEKIDEN can help other organizations understand the benefits and challenges of implementing data mesh and improving data quality.
Keywords: data culture, data-driven organization, data mesh, data quality for business success
Procedia PDF Downloads 135
27549 Wearable Jacket for Game-Based Post-Stroke Arm Rehabilitation
Authors: A. Raj Kumar, A. Okunseinde, P. Raghavan, V. Kapila
Abstract:
Stroke is the leading cause of adult disability worldwide. With recent advances in immediate post-stroke care, there is an increasing number of young stroke survivors under the age of 65 years. While most stroke survivors will regain the ability to walk, they often experience long-term arm and hand motor impairments. Long-term upper-limb rehabilitation is needed to restore movement and function and to prevent deterioration from complications such as learned non-use and learned bad-use. We have developed a novel virtual coach, a wearable instrumented rehabilitation jacket, to motivate individuals to participate in long-term skill re-learning, which can be personalized to their impairment profile. The jacket can estimate the movements of an individual's arms using embedded off-the-shelf sensors (e.g., 9-DOF IMUs for inertial measurements and flex sensors for measuring the angular orientation of the fingers) and a Bluetooth Low Energy (BLE) powered microcontroller (e.g., RFduino) to non-intrusively extract data. Each 9-DOF IMU sensor contains a 3-axis accelerometer, a 3-axis gyroscope, and a 3-axis magnetometer used to compute quaternions, which are transmitted to a computer that computes the Euler angles and estimates the angular orientation of the arms. The data are used in a gaming environment to provide visual and/or haptic feedback for goal-based, augmented-reality training to facilitate re-learning in a cost-effective, evidence-based manner. The full paper will elaborate on the technical aspects of communication, the interactive gaming environment, and the physical aspects of the electronics necessary to achieve our stated goal. Moreover, the paper will suggest methods to utilize the proposed system as a cheaper, portable, and versatile alternative to existing instrumentation for post-stroke personalized arm rehabilitation.
Keywords: feedback, gaming, Euler angles, rehabilitation, augmented reality
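The quaternion-to-Euler conversion mentioned in this abstract is a standard computation; a minimal sketch follows. It uses the ZYX (roll-pitch-yaw) convention as an assumption, since the abstract does not specify which Euler sequence the system uses, and the function name is illustrative rather than taken from the authors' code.

```python
import math

def quaternion_to_euler(w, x, y, z):
    """Convert a unit quaternion (w, x, y, z) to ZYX Euler angles in radians.

    Returns (roll, pitch, yaw). The convention is an assumption for
    illustration; the paper does not state the sequence it uses.
    """
    # Roll: rotation about the x-axis
    roll = math.atan2(2.0 * (w * x + y * z), 1.0 - 2.0 * (x * x + y * y))
    # Pitch: rotation about the y-axis (clamped so sensor noise cannot produce NaN)
    s = max(-1.0, min(1.0, 2.0 * (w * y - z * x)))
    pitch = math.asin(s)
    # Yaw: rotation about the z-axis
    yaw = math.atan2(2.0 * (w * z + x * y), 1.0 - 2.0 * (y * y + z * z))
    return roll, pitch, yaw

# Example: the identity quaternion gives zero angles
print(quaternion_to_euler(1.0, 0.0, 0.0, 0.0))  # (0.0, 0.0, 0.0)
```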
Procedia PDF Downloads 277
27548 Optical Repeater Assisted Visible Light Device-to-Device Communications
Authors: Samrat Vikramaditya Tiwari, Atul Sewaiwar, Yeon-Ho Chung
Abstract:
Device-to-device (D2D) communication is considered a promising technique for providing wireless peer-to-peer communication services. Due to the increasing demand for mobile services, the available spectrum for radio frequency (RF) based communications is becoming scarce. Recently, visible light communication (VLC) has evolved as a high-speed wireless data transmission technology for indoor environments with abundant available bandwidth. In this paper, a novel VLC-based D2D communication scheme that provides wireless peer-to-peer communication is proposed. The potential of low-operating-power devices for efficient D2D communication over increasing separation distances between devices is analyzed. Optical repeaters (OR) are also proposed to enhance performance in environments where direct D2D communication yields degraded performance. Simulation results show that VLC plays an important role in providing efficient D2D communication up to a distance of 1 m between devices. It is also found that the OR significantly improves the coverage distance, up to 3.5 m.
Keywords: visible light communication, light emitting diode, device-to-device, optical repeater
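The distance dependence reported above is typically evaluated with the Lambertian line-of-sight channel model commonly used for LED-based VLC links; a minimal sketch follows. All parameter values (LED power, photodiode area, semi-angle, field of view) are assumptions for illustration, not the authors' simulation settings, and optical filter and concentrator gains are omitted.

```python
import math

def vlc_los_gain(d, phi, psi, area, half_power_angle, fov):
    """Lambertian line-of-sight channel gain for an LED VLC link.

    d: transmitter-receiver distance (m); phi: irradiance angle (rad);
    psi: incidence angle (rad); area: photodetector area (m^2);
    half_power_angle: LED semi-angle at half power (rad); fov: receiver FOV (rad).
    """
    if psi > fov:
        return 0.0  # receiver outside its field of view sees no LOS signal
    m = -math.log(2.0) / math.log(math.cos(half_power_angle))  # Lambertian order
    return (m + 1.0) * area / (2.0 * math.pi * d * d) * math.cos(phi) ** m * math.cos(psi)

# Illustrative link budget: 1 W LED, 1 cm^2 photodiode, perfectly aligned link
for d in (1.0, 2.0, 3.5):
    pr = 1.0 * vlc_los_gain(d, 0.0, 0.0, 1e-4, math.radians(60), math.radians(70))
    print(f"d = {d} m -> received power = {pr:.2e} W")
```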
Procedia PDF Downloads 478
27547 State of Freelancing in IT and Future Trends
Authors: Mihai Gheorghe
Abstract:
Freelancing in IT has seen increased popularity during the last years, mainly because of the fast Internet adoption in countries with emerging economies, correlated with the continuous search for reduced development costs as well as the rise of online platforms which address planning, coordination, and various development tasks. This paper conducts an overview of the most relevant freelance marketplaces available and studies the market structure, the distribution of the workforce, and trends in IT freelancing.
Keywords: freelancing in IT, freelance marketplaces, freelance market structure, globalization, online staffing, trends in freelancing
Procedia PDF Downloads 207
27546 Alternative Housing Systems: Influence on Blood Profile of Egg-Type Chickens in Humid Tropics
Authors: Olufemi M. Alabi, Foluke A. Aderemi, Adebayo A. Adewumi, Banwo O. Alabi
Abstract:
The general well-being of animals is of paramount interest in some developed countries and of global importance, hence the shift to alternative housing systems for egg-type chickens as a replacement for the conventional battery cage system. However, there is a paucity of information on the effect of this shift on the physiological status of the hens as judged by their blood profile. Therefore, an investigation was carried out on two strains of hen kept in three different housing systems in the humid tropics to evaluate changes in their blood parameters. One hundred and eight 17-week-old super black (SBL) hens and 108 17-week-old super brown (SBR) hens were randomly allotted to three different intensive systems: Partitioned Conventional Cage (PCC), Extended Conventional Cage (ECC), and Deep Litter System (DLS), in a randomized complete block design with 36 hens per housing system, each with three replicates. The experiment lasted 37 weeks, during which blood samples were collected at the 18th week of age and bi-weekly thereafter for analyses. Parameters measured were packed cell volume (PCV), hemoglobin concentration (Hb), red blood cell counts (RBC), white blood cell counts (WBC), and serum metabolites such as total protein (TP), albumin (Alb), globulin (Glb), glucose, cholesterol, urea, bilirubin, and serum cortisol, while blood indices such as mean corpuscular hemoglobin (MCH), mean cell volume (MCV), and mean corpuscular hemoglobin concentration (MCHC) were calculated. The hematological values of the hens were not significantly (p>0.05) affected by housing system or strain, and neither were the serum metabolites, except for serum cortisol, which was significantly (p<0.05) affected by the housing system only. Hens housed in the PCC had higher values (20.05 ng/ml for SBL and 20.55 ng/ml for SBR), followed by hens in the ECC (18.15 ng/ml for SBL and 18.38 ng/ml for SBR), while hens in the DLS had the lowest values (16.50 ng/ml for SBL and 16.00 ng/ml for SBR), thereby confirming the indication of stress in conventionally caged birds. Alternative housing systems can thus be adopted for egg-type chickens in the humid tropics from a welfare point of view, with the results of this work confirming stress among caged hens.
Keywords: blood, housing, humid-tropics, layers
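The derived red-cell indices named in this abstract follow directly from the three measured quantities; a minimal sketch of the standard formulas is shown below. The example numbers are illustrative assumptions, not data from the study.

```python
def red_cell_indices(pcv_percent, hb_g_dl, rbc_millions_per_ul):
    """Standard derived red-cell indices from PCV, Hb, and RBC count."""
    mcv = pcv_percent * 10.0 / rbc_millions_per_ul   # mean cell volume, fL
    mch = hb_g_dl * 10.0 / rbc_millions_per_ul       # mean corpuscular hemoglobin, pg
    mchc = hb_g_dl * 100.0 / pcv_percent             # MCHC, g/dL
    return mcv, mch, mchc

# Illustrative values only (not measurements from the study)
print(red_cell_indices(pcv_percent=30.0, hb_g_dl=10.0, rbc_millions_per_ul=2.5))
```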
Procedia PDF Downloads 468
27545 Bench Tests of Two-Stroke Opposed Piston Aircraft Diesel Engine under Propeller Characteristics Conditions
Authors: A. Majczak, G. Baranski, K. Pietrykowski
Abstract:
Due to the growing popularity of light aircraft, it has become necessary to develop aircraft engines for this type of construction. One engine concept designed to increase efficiency and reduce weight is the opposed-piston engine. In such an engine, the combustion chamber is formed by two pistons moving in one cylinder. Engines of this type therefore run on a two-stroke cycle and offer many advantages, such as high power and torque, high efficiency, and a favorable power-to-weight ratio. Tests of one of the available aircraft engines with an opposed-piston system, fueled with diesel oil, were carried out on an engine dynamometer equipped with an eddy current brake and the necessary measuring and testing equipment. To determine the basic parameters of the engine, the tests were carried out under partial load conditions for the following torque values: 40, 60, 80, and 100 Nm. The rotational speed was varied from 1600 to 2500 rpm. Measurements were also taken at designated points of the propeller characteristics. During the tests, the engine torque, engine power, fuel consumption, intake manifold pressure, and oil pressure were recorded. On the basis of the measurements carried out for the particular loads, the power curve and the hourly and specific fuel consumption curves were determined. Characteristics of charge pressure as a function of rotational speed, as well as power, torque, and hourly and specific fuel consumption curves for the propeller characteristics, were also prepared. The obtained characteristics make it possible to select the optimal points of engine operation.
Keywords: aircraft, diesel, engine testing, opposed piston
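From the recorded torque, speed, and fuel consumption, brake power and specific fuel consumption follow from standard relations; a minimal sketch is given below. The operating-point numbers are illustrative assumptions, not measurements from these bench tests.

```python
import math

def brake_power_kw(torque_nm, speed_rpm):
    """Brake power P = 2*pi*n*T, with n converted from rpm to rev/s."""
    return 2.0 * math.pi * (speed_rpm / 60.0) * torque_nm / 1000.0

def bsfc_g_per_kwh(fuel_flow_kg_per_h, torque_nm, speed_rpm):
    """Brake-specific fuel consumption: fuel mass flow divided by brake power."""
    return fuel_flow_kg_per_h * 1000.0 / brake_power_kw(torque_nm, speed_rpm)

# Assumed operating point: 100 Nm at 2500 rpm with 6.5 kg/h hourly fuel consumption
print(f"P    = {brake_power_kw(100, 2500):.1f} kW")      # ~26.2 kW
print(f"BSFC = {bsfc_g_per_kwh(6.5, 100, 2500):.0f} g/kWh")  # ~248 g/kWh
```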
Procedia PDF Downloads 154
27544 Innovative Schools as Birthplaces for Promoting Educational Innovations: A Case Study of Two Hungarian Schools
Authors: Khin Khin Thant Sin
Abstract:
This case study investigates successful, ongoing bottom-up innovations in school improvement initiatives in Hungary. Two innovative schools were selected for this study due to their outstanding achievements over the past ten years in Hungary. For one school, data from the personal experience of a school principal who initiated the bottom-up innovation are included. For the second school, three interviews were carried out with two schoolteachers and one secondary school student. In addition, desk research, including the principal's published articles, the schoolteachers' master's theses, the school websites, and other published articles, is analyzed to explore the schools' innovative processes. This study investigates how bottom-up innovation led to major achievements in student learning, teacher professional development, networking and collaboration with other schools, and the establishment of successful partnerships with universities. The highlight of this study is how innovative schools can be major sources for promoting educational innovations as well as for improving teacher education, especially initial teacher education and continuous professional development.
Keywords: school innovation, teacher education, Hungary, educational innovation, school improvement
Procedia PDF Downloads 109
27543 Deep Learning Approach for Colorectal Cancer’s Automatic Tumor Grading on Whole Slide Images
Authors: Shenlun Chen, Leonard Wee
Abstract:
Tumor grading is an essential reference for colorectal cancer (CRC) staging and survival prognostication. The widely used World Health Organization (WHO) grading system defines the histological grade of CRC adenocarcinoma based on the density of glandular formation on whole slide images (WSI). Tumors are classified as well-, moderately-, poorly-, or un-differentiated depending on the percentage of the tumor that is gland-forming: >95%, 50-95%, 5-50%, and <5%, respectively. However, manually grading WSIs is a time-consuming process and can cause observer error due to subjective judgment and unnoticed regions. Furthermore, pathologists' grading is usually coarse, while a finer and continuous differentiation grade may help stratify CRC patients better. In this study, a deep learning based automatic differentiation grading algorithm was developed and evaluated by survival analysis. Firstly, a gland segmentation model was developed for segmenting gland structures. Gland regions of WSIs were delineated and used for differentiation annotating. Tumor regions were annotated by experienced pathologists as high-, medium-, or low-differentiation or normal tissue, corresponding to tumors with clear, unclear, or no gland structure and to non-tumor, respectively. Then a differentiation prediction model was developed on these human annotations. Finally, all enrolled WSIs were processed by the gland segmentation model and the differentiation prediction model. The differentiation grade can be calculated from the deep learning models' predictions of tumor regions and tumor differentiation status according to the WHO definitions. If a patient had multiple WSIs, the highest differentiation grade was chosen. Additionally, the differentiation grade was normalized onto a scale between 0 and 1. The Cancer Genome Atlas COAD (TCGA-COAD) project was enrolled in this study. For the gland segmentation model, the receiver operating characteristic (ROC) metric reached 0.981 and accuracy reached 0.932 in the validation set. For the differentiation prediction model, ROC reached 0.983, 0.963, 0.963, and 0.981, and accuracy reached 0.880, 0.923, 0.668, and 0.881 for the low-, medium-, and high-differentiation and normal tissue groups in the validation set. Four hundred and one patients were selected after removing WSIs without gland regions and patients without follow-up data. The concordance index reached 0.609. An optimized cut-off point of 51% was found by the "maxstat" method, almost the same as the WHO system's cut-off point of 50%. Both the WHO system's cut-off point and the optimized cut-off point performed impressively in Kaplan-Meier curves, and both log-rank test p-values were below 0.005. In this study, the gland structure of WSIs and the differentiation status of tumor regions were proven to be predictable through deep learning methods. A finer and continuous differentiation grade can also be automatically calculated through the above models. The differentiation grade was proven to stratify CRC patients well in survival analysis, and its optimized cut-off point was almost the same as that of the WHO tumor grading system. A tool for automatically calculating the differentiation grade may show potential in the fields of therapy decision-making and personalized treatment.
Keywords: colorectal cancer, differentiation, survival analysis, tumor grading
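The WHO thresholds quoted in this abstract translate directly into a grading rule; a minimal sketch follows, including a simple continuous score normalized to [0, 1] as the abstract describes. The exact normalization used by the authors is not specified, so the linear formula here is an assumption.

```python
def who_grade(gland_forming_pct):
    """Map the percentage of gland-forming tumor to the WHO differentiation grade."""
    if gland_forming_pct > 95:
        return "well-differentiated"
    if gland_forming_pct >= 50:
        return "moderately-differentiated"
    if gland_forming_pct >= 5:
        return "poorly-differentiated"
    return "undifferentiated"

def continuous_grade(gland_forming_pct):
    """Assumed continuous grade in [0, 1]; higher means less differentiated."""
    return 1.0 - gland_forming_pct / 100.0

for pct in (98, 70, 20, 2):
    print(pct, who_grade(pct), round(continuous_grade(pct), 2))
```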
Procedia PDF Downloads 134
27542 An Efficient Machine Learning Model to Detect Metastatic Cancer in Pathology Scans Using Principal Component Analysis Algorithm, Genetic Algorithm, and Classification Algorithms
Authors: Bliss Singhal
Abstract:
Machine learning (ML) is a branch of artificial intelligence (AI) in which computers analyze data and find patterns in the data. This study focuses on the detection of metastatic cancer using ML. Metastatic cancer is the stage at which cancer has spread to other parts of the body, and it is the cause of approximately 90% of cancer-related deaths. Normally, pathologists spend hours each day manually classifying whether tumors are benign or malignant. This tedious task contributes to metastases being mislabeled over 60% of the time and emphasizes the importance of being aware of human error and other inefficiencies. ML is a good candidate to improve the correct identification of metastatic cancer, potentially saving thousands of lives, and it can also improve the speed and efficiency of the process, thereby requiring fewer resources and less time. So far, the deep learning methodology of AI has been used in research to detect cancer. This study takes a novel approach to determining the potential of combining preprocessing algorithms with classification algorithms in detecting metastatic cancer. The study used two preprocessing algorithms, principal component analysis (PCA) and the genetic algorithm, to reduce the dimensionality of the dataset, and then used three classification algorithms, logistic regression, decision tree classifier, and k-nearest neighbors, to detect metastatic cancer in the pathology scans. The highest accuracy of 71.14% was produced by the ML pipeline comprising PCA, the genetic algorithm, and the k-nearest neighbors algorithm, suggesting that preprocessing and classification algorithms have great potential for detecting metastatic cancer.
Keywords: breast cancer, principal component analysis, genetic algorithm, k-nearest neighbors, decision tree classifier, logistic regression
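The pipeline described here (PCA, then a genetic algorithm, then KNN) can be sketched compactly; the version below is a minimal illustration, not the authors' implementation. It substitutes scikit-learn's bundled breast-cancer feature set for the pathology-scan data, uses a deliberately tiny genetic algorithm to select a subset of PCA components, and, for brevity, lets the test split double as the GA validation set.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Stand-in dataset; the study used features extracted from pathology scans
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Stage 1: PCA for dimensionality reduction
pca = PCA(n_components=20).fit(X_train)
Z_train, Z_test = pca.transform(X_train), pca.transform(X_test)

def fitness(mask):
    """Accuracy of a 5-NN classifier restricted to the selected PCA components."""
    if not mask.any():
        return 0.0
    knn = KNeighborsClassifier(n_neighbors=5).fit(Z_train[:, mask], y_train)
    return knn.score(Z_test[:, mask], y_test)

# Stage 2: a tiny genetic algorithm over component-selection bitmasks
pop = rng.random((12, Z_train.shape[1])) < 0.5
for _ in range(15):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-6:]]               # keep the fittest half
    children = []
    for _ in range(len(pop) - len(parents)):
        a, b = parents[rng.integers(6)], parents[rng.integers(6)]
        cut = rng.integers(1, Z_train.shape[1])
        child = np.concatenate([a[:cut], b[cut:]])       # one-point crossover
        child ^= rng.random(child.size) < 0.05           # bit-flip mutation
        children.append(child)
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(m) for m in pop])]
print(f"best accuracy: {fitness(best):.3f} with {int(best.sum())} components")
```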
Procedia PDF Downloads 82
27541 Analysis of Splicing Methods for High Speed Automated Fibre Placement Applications
Authors: Phillip Kearney, Constantina Lekakou, Stephen Belcher, Alessandro Sordon
Abstract:
The focus in the automotive industry is to reduce human operator and machine interaction, so manufacturing becomes more automated and safer. The aim is to lower part cost and construction time, as well as defects in the parts, which sometimes occur due to the physical limitations of human operators. A move to automate the layup of reinforcement material in composites manufacturing has resulted in the use of tapes that are placed in position by a robotic deposition head, a process also described as Automated Fibre Placement (AFP). The AFP process is limited by the finite amount of material that can be loaded into the machine at any one time. Joining two batches of tape material together involves a splice to secure the end of the finishing tape to the starting edge of the new tape. The splicing method of choice for the majority of prepreg applications is a hand-stitch method, which, as the name suggests, requires human input. This investigation explores three methods for automated splicing, namely adhesive, binding, and stitching. The adhesive technique uses an additional adhesive placed on the tape ends to be joined. Binding uses the binding agent already impregnated in the tape, activated through the application of heat. The stitching method is used as a baseline against which the new splicing methods are compared. As the methods will be used within a High Speed Automated Fibre Placement (HSAFP) process, the splices have to meet certain specifications: (a) the splice must be able to endure a load of 50 N in tension applied at a rate of 1 mm/s; (b) the splice must be created in less than 6 seconds, dictated by the capacity of the tape accumulator within the system. The samples for experimentation were manufactured with controlled overlaps, alignment, and splicing parameters, and were then tested in tension using a tensile testing machine. Initial analysis explored the use of the impregnated binding agent present on the tape, as in the binding splicing technique, and analysed the effect of temperature and overlap on the strength of the splice. It was found that the optimum splicing temperature was at the higher end of the activation range of the binding agent, 100 °C. The optimum overlap was found to be 25 mm; there was no improvement in bond strength from 25 mm to 30 mm overlap. The final analysis compared the different splicing methods to the stitched-bond baseline. It was found that the addition of an adhesive was the best splicing method, achieving a maximum load of over 500 N, compared to the 26 N load achieved by a stitching splice and 94 N by the binding method.
Keywords: analysis, automated fibre placement, high speed, splicing
Procedia PDF Downloads 155
27540 Big Data Analysis with RHadoop
Authors: Ji Eun Shin, Byung Ho Jung, Dong Hoon Lim
Abstract:
It is almost impossible to store or analyze big data, which increases exponentially, with traditional technologies. Hadoop is a new technology that makes that possible. The R programming language is by far the most popular statistical tool for big data analysis based on distributed processing with Hadoop technology. With RHadoop, which integrates the R and Hadoop environments, we implemented parallel multiple regression analysis on actual data of different sizes. Experimental results showed that our RHadoop system became much faster as the number of data nodes increased. We also compared the performance of our RHadoop system with the lm function and the biglm package available for big-memory data. The results showed that our RHadoop implementation was faster than the other packages, owing to parallel processing that increases the number of map tasks as the size of the data grows.
Keywords: big data, Hadoop, parallel regression analysis, R, RHadoop
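The reason multiple regression parallelizes well in a map-reduce setting is that the normal-equation terms X'X and X'y decompose into per-chunk sums. The sketch below illustrates that decomposition in Python with multiprocessing as a stand-in; the paper's actual implementation is in R on Hadoop and is not reproduced here.

```python
import numpy as np
from multiprocessing import Pool

def partial_stats(chunk):
    """Map step: sufficient statistics (X'X, X'y) for one data chunk."""
    X, y = chunk
    return X.T @ X, X.T @ y

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = np.column_stack([np.ones(100_000), rng.normal(size=(100_000, 3))])
    y = X @ np.array([1.0, 2.0, -1.0, 0.5]) + rng.normal(size=100_000)

    chunks = list(zip(np.array_split(X, 8), np.array_split(y, 8)))
    with Pool(4) as pool:
        parts = pool.map(partial_stats, chunks)

    # Reduce step: sum the partial statistics, then solve the normal equations
    XtX = sum(p[0] for p in parts)
    Xty = sum(p[1] for p in parts)
    beta = np.linalg.solve(XtX, Xty)
    print(beta)  # close to the true coefficients [1.0, 2.0, -1.0, 0.5]
```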
Procedia PDF Downloads 437
27539 A Mutually Exclusive Task Generation Method Based on Data Augmentation
Authors: Haojie Wang, Xun Li, Rui Yin
Abstract:
To address memorization overfitting in the MAML meta-learning algorithm, a method of generating mutually exclusive tasks based on data augmentation is proposed. This method generates a mutex task by mapping one feature of the data to multiple labels, so that the generated mutex task is inconsistent with the data distribution of the initial dataset. Because generating mutex tasks for all data would produce a large amount of invalid data and, in the worst case, lead to exponential growth of computation, this paper also proposes a key data extraction method that extracts only part of the data to generate the mutex tasks. The experiments show that the method of generating mutually exclusive tasks can effectively alleviate memorization overfitting in the MAML meta-learning algorithm.
Keywords: data augmentation, mutex task generation, meta-learning, text classification
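One way to read "mapping one feature to multiple labels" is to assign the same inputs different label permutations in different tasks, so that no single input-to-label mapping can be memorized across tasks. The sketch below illustrates that reading; it is an assumed interpretation for illustration, not the authors' exact construction.

```python
import numpy as np

def make_mutex_tasks(X, y, n_tasks, n_classes, seed=0):
    """Generate tasks in which the same inputs carry permuted labels.

    Because each task applies a different label permutation, an input is
    associated with multiple labels across tasks, making the tasks mutually
    exclusive and non-memorizable as a single fixed mapping.
    """
    rng = np.random.default_rng(seed)
    tasks = []
    for _ in range(n_tasks):
        perm = rng.permutation(n_classes)
        tasks.append((X, perm[y]))  # same features, relabeled targets
    return tasks

rng = np.random.default_rng(1)
X = rng.random((32, 8))
y = rng.integers(0, 4, size=32)
for i, (Xt, yt) in enumerate(make_mutex_tasks(X, y, n_tasks=3, n_classes=4)):
    print(f"task {i}: first five labels -> {yt[:5]}")
```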
Procedia PDF Downloads 93
27538 Application of Ground-Penetrating Radar in Environmental Hazards
Authors: Kambiz Teimour Najad
Abstract:
The basic methodology of GPR involves the use of a transmitting antenna to send electromagnetic waves into the subsurface, which then bounce back to the surface and are detected by a receiving antenna. The transmitter and receiver antennas are typically placed on the ground surface and moved across the area of interest to create a profile of the subsurface. The GPR system consists of a control unit that powers the antennas and records the data, as well as a display unit that shows the results of the survey. The control unit sends a pulse of electromagnetic energy into the ground, which propagates through the soil or rock until it encounters a change in material or structure. When the electromagnetic wave encounters a buried object or structure, some of the energy is reflected back to the surface and detected by the receiving antenna. The GPR data are then processed using specialized software that analyzes the amplitude and travel time of the reflected waves. By interpreting the data, GPR can provide information on the depth, location, and nature of subsurface features and structures. GPR has several advantages over other geophysical survey methods, including its ability to provide high-resolution images of the subsurface and its non-invasive nature, which minimizes disruption to the site. However, the effectiveness of GPR depends on several factors, including the type of soil or rock, the depth of the features being investigated, and the frequency of the electromagnetic waves used. In environmental hazard assessments, GPR can be used to detect buried structures, such as underground storage tanks, pipelines, or utilities, which may pose a risk of contamination to the surrounding soil or groundwater. GPR can also be used to assess soil stability by identifying areas of subsurface voids or sinkholes, which can lead to collapse of the surface. Additionally, GPR can be used to map the extent and movement of groundwater contamination, which is critical in designing effective remediation strategies. In summary, the methodology of GPR in environmental hazard assessments involves the use of electromagnetic waves to create high-resolution images of the subsurface, which are then analyzed to provide information on the depth, location, and nature of subsurface features and structures. This information is critical in identifying and mitigating environmental hazards, and the non-invasive nature of GPR makes it a valuable tool in this field.
Keywords: GPR, hazard, landslide, rock fall, contamination
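The travel-time analysis mentioned above converts the two-way travel time of a reflection into depth via the wave velocity in the medium, which depends on its relative permittivity. A minimal sketch follows; the material permittivities and travel time are assumed example values.

```python
C = 3.0e8  # speed of light in vacuum, m/s

def reflector_depth(two_way_time_ns, relative_permittivity):
    """Depth of a reflector from two-way travel time: d = v * t / 2."""
    v = C / relative_permittivity ** 0.5        # wave velocity in the medium
    return v * (two_way_time_ns * 1e-9) / 2.0   # metres

# Illustrative media: dry sand (~4) versus wet soil (~25)
for name, er in (("dry sand", 4.0), ("wet soil", 25.0)):
    print(f"{name}: 20 ns reflection -> {reflector_depth(20.0, er):.2f} m deep")
```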
Procedia PDF Downloads 82
27537 Efficient Positioning of Data Aggregation Point for Wireless Sensor Network
Authors: Sifat Rahman Ahona, Rifat Tasnim, Naima Hassan
Abstract:
Data aggregation is a helpful technique for reducing the data communication overhead in wireless sensor networks. One of the important tasks of data aggregation is the positioning of the aggregator points. A lot of work has been done on data aggregation, but the efficient positioning of the aggregator points has received little attention. In this paper, the authors focus on the positioning, or placement, of the aggregation points in a wireless sensor network. The authors propose an algorithm to select the aggregator positions for a scenario in which aggregator nodes are more powerful than sensor nodes.
Keywords: aggregation point, data communication, data aggregation, wireless sensor network
Procedia PDF Downloads 157
27536 Quality is the Matter of All
Authors: Mohamed Hamza, Alex Ohoussou
Abstract:
At JAWDA, our primary focus is on ensuring the satisfaction of our clients worldwide. We are committed to delivering new features on our SaaS platform as quickly as possible while maintaining high quality standards. In this paper, we highlight two key aspects of testing that represent an evolution of current methods and a potential trend for the future, and which have enabled us to uphold this commitment effectively: "One Sandbox per Pull Request" (dynamic test environments instead of static ones) and "QA for All".
Keywords: QA for all, dynamic sandboxes, QAOPS, CICD, continuous testing, all testers, QA matters for all, 1 sandbox per PR, utilization rate, coverage rate
Procedia PDF Downloads 31
27535 Wind Resource Estimation and Economic Analysis for Rakiraki, Fiji
Authors: Kaushal Kishore
Abstract:
Immense amounts of imported fuel are used in Fiji for electricity generation, transportation, and miscellaneous household work. To alleviate this dependency on fossil fuels, paramount importance has been given to promoting the utilization of renewable energy sources for power generation and reducing environmental degradation. Among the many renewable energy sources, wind is considered one of the most promising sources comprehensively available in Fiji. In this study, a wind resource assessment was carried out for three locations in Rakiraki, Fiji: Rokavukavu, Navolau, and Tuvavatu. The average wind speeds at 55 m above ground level (a.g.l.) at the Rokavukavu, Navolau, and Tuvavatu sites are 5.91 m/s, 8.94 m/s, and 8.13 m/s, with turbulence intensities of 14.9%, 17.1%, and 11.7%, respectively. The moment fitting method was used to estimate the Weibull parameters and the power density at each site. A high-resolution wind resource map for the three locations was developed using the Wind Atlas Analysis and Application Program (WAsP). The results obtained from WAsP exhibited good wind potential at the Navolau and Tuvavatu sites. A wind farm comprising six Vergnet 275 kW wind turbines is proposed at each of the Navolau and Tuvavatu sites. The annual energy production (AEP) for each wind farm is estimated, and an economic analysis is performed. The economic analysis for the proposed wind farms at the Navolau and Tuvavatu sites showed payback periods of 5 and 6 years, respectively.
Keywords: annual energy production, Rakiraki Fiji, turbulence intensity, Weibull parameter, wind speed, Wind Atlas Analysis and Application Program
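Fitting a Weibull distribution by the method of moments is often done with a common empirical approximation for the shape factor; a minimal sketch follows. The approximation k = (sigma/mu)^(-1.086) and the assumed standard deviation are illustrative, not the paper's exact procedure or data (the abstract reports turbulence intensity, not the long-term standard deviation).

```python
import math

def weibull_moment_fit(mean_speed, std_speed):
    """Method-of-moments Weibull fit using the common empirical approximation
    k = (sigma/mu)^(-1.086) for the shape factor."""
    k = (std_speed / mean_speed) ** -1.086
    c = mean_speed / math.gamma(1.0 + 1.0 / k)  # scale factor, m/s
    return k, c

def mean_power_density(k, c, rho=1.225):
    """Mean wind power density (W/m^2): 0.5 * rho * E[v^3] for Weibull winds."""
    return 0.5 * rho * c ** 3 * math.gamma(1.0 + 3.0 / k)

# Mean speed from the Navolau site; the standard deviation is an assumed value
k, c = weibull_moment_fit(mean_speed=8.94, std_speed=4.0)
print(f"k = {k:.2f}, c = {c:.2f} m/s, P = {mean_power_density(k, c):.0f} W/m^2")
```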
Procedia PDF Downloads 188
27534 Phytochemical Screening, Antioxidant Activity, Lipid Profile Effect of Citrus reticulata Fruit Peel, Zingiber officinale Rhizome, and Sesamum indicum Seed Extracts
Authors: Samar Saadeldin Abdelmotalab Omer, Ikram Mohammed Eltayeb Elsiddig, Amna Beshir Medani Ahmed, Saad Mohammed Hussein Ayoub
Abstract:
Many herbal medicinal products are considered potential hypocholesterolemic agents with encouraging safety profiles; however, only a limited amount of clinical research exists to support their efficacy. The present study was designed to compare the antihypercholesterolemic and antioxidant activities of the crude ethanolic extracts of Citrus reticulata peel, Zingiber officinale rhizome, and Sesamum indicum seeds. Forty-five rats were used throughout the experiment, divided into nine groups of five rats each, as follows: a normal control group (normal rats fed a standard normal diet) and rats fed a hypercholesterolemic diet consisting of 1% cholesterol and 10% saturated animal fat, which were further divided into eight groups: a hypercholesterolemic control group (rats fed only the hypercholesterolemic diet); groups 3, 4, 5, 6, 7, and 8, given the Citrus reticulata, Zingiber officinale, and Sesamum indicum ethanolic extracts orally at doses of 250 mg/kg and 500 mg/kg, respectively; and group 9, given atorvastatin (0.18 mg/kg) orally as a reference antihypercholesterolemic drug. Four weeks after treatment, blood samples were obtained from the retro-orbital venous plexus of all groups after overnight fasting; the lipid profile (serum total cholesterol (TC), high-density-lipoprotein cholesterol (HDL-C), low-density-lipoprotein cholesterol (LDL-C), and triglyceride levels) was measured, and the risk ratio (TC/HDL-C) was assessed. The antioxidant activity of the three plant extracts was determined using the DPPH free-radical assay. The results of the in vivo antihypercholesterolemic and in vitro antioxidant assays revealed that the three extracts possess comparable antioxidant and antihypercholesterolemic activities.
Keywords: anti hypercholesterolemic effects, antioxidant activity, HDL, LDL, TC, TGs, citrus reticulata, sesamum indicum, zingiber officinale
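The two quantities at the heart of these assays are simple ratios: DPPH radical-scavenging activity is computed from control and sample absorbances, and the atherogenic risk ratio is TC over HDL-C. A sketch with assumed example values follows; the numbers are not measurements from the study.

```python
def dpph_inhibition(abs_control, abs_sample):
    """DPPH radical-scavenging activity (%) from absorbance readings."""
    return (abs_control - abs_sample) / abs_control * 100.0

def risk_ratio(tc_mg_dl, hdl_mg_dl):
    """Atherogenic risk ratio TC/HDL-C."""
    return tc_mg_dl / hdl_mg_dl

# Illustrative values only
print(f"inhibition = {dpph_inhibition(0.92, 0.31):.1f} %")  # ~66.3 %
print(f"risk ratio = {risk_ratio(180.0, 45.0):.1f}")        # 4.0
```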
Procedia PDF Downloads 465
27533 Variant Selection and Pre-transformation Phase Reconstruction for Deformation-Induced Transformation in AISI 304 Austenitic Stainless Steel
Authors: Manendra Singh Parihar, Sandip Ghosh Chowdhury
Abstract:
Austenitic stainless steels are widely used and offer a good combination of properties. When this steel is plastically deformed, a phase transformation of the metastable face-centred cubic (FCC) austenite to the stable body-centred cubic (α') or hexagonal close-packed (ε) martensite may occur, leading to enhancement of mechanical properties such as strength. The work was based on variant selection and the corresponding texture analysis for the strain-induced martensitic transformation of the parent FCC austenite phase to the product HCP and BCC martensite phases separately, each obeying its respective orientation relationship. The automated reconstruction of the parent-phase orientation from the EBSD data of the product-phase orientation was carried out using MATLAB and the TSL-OIM software. The method of triplets was used, which involves forming a triplet of neighboring product grains having a common variant and linking them using a misorientation-based criterion. This led to the proper reconstruction of the pre-transformation phase orientation data and thus of its microstructure and texture. The computational speed of the current method is better than that of previously used reconstruction methods. The reconstruction of austenite from ε and α' martensite was carried out for multiple samples, and their IPF images, pole figures, inverse pole figures, and ODFs were compared. Similar results were observed for all samples. The comparison gives an idea of the correct sequence of the transformation, i.e., γ → ε → α' or γ → α', during deformation of AISI 304 austenitic stainless steel.
Keywords: variant selection, reconstruction, EBSD, austenitic stainless steel, martensitic transformation
Procedia PDF Downloads 489
27532 Fuzzy Optimization Multi-Objective Clustering Ensemble Model for Multi-Source Data Analysis
Authors: C. B. Le, V. N. Pham
Abstract:
In modern data analysis, multi-source data appears more and more often in real applications. Multi-source data clustering has emerged as an important issue in the data mining and machine learning community. Different data sources provide different information about the data; therefore, linking multi-source data is essential to improve clustering performance. However, in practice, multi-source data is often heterogeneous, uncertain, and large, which is considered a major challenge of multi-source analysis. Ensemble learning is a versatile machine learning model in which learning techniques can work in parallel on big data. Clustering ensembles have been shown to outperform any standard clustering algorithm in terms of accuracy and robustness. However, most traditional clustering ensemble approaches are based on a single-objective function and single-source data. This paper proposes a new clustering ensemble method for multi-source data analysis: the fuzzy optimized multi-objective clustering ensemble (FOMOCE) method. Firstly, a clustering ensemble mathematical model based on the structure of the multi-objective clustering function, multi-source data, and dark knowledge is introduced. Then, rules for extracting dark knowledge from the input data, clustering algorithms, and base clusterings are designed and applied. Finally, a clustering ensemble algorithm is proposed for multi-source data analysis. The experiments were performed on standard sample data sets. The experimental results demonstrate the superior performance of the FOMOCE method compared to existing clustering ensemble methods and multi-source clustering methods.
Keywords: clustering ensemble, multi-source, multi-objective, fuzzy clustering
Procedia PDF Downloads 189
27531 Partnering with Stakeholders to Secure Digitization of Water
Authors: Sindhu Govardhan, Kenneth G. Crowther
Abstract:
Modernisation of the water sector is leading to increased connectivity and integration of emerging technologies with traditional ones, creating new security risks. The convergence of Information Technology (IT) with Operational Technology (OT) results in solutions that are spread across larger geographic areas, increasingly consist of interconnected Industrial Internet of Things (IIoT) devices and software, rely on the integration of legacy with modern technologies, and use complex supply-chain components, leading to complex architectures and communication paths. The result is that multiple parties collectively own and operate these emergent technologies, threat actors find new paths to exploit, and traditional cybersecurity controls are inadequate. Our approach is to explicitly identify and draw data flows that cross trust boundaries between owners and operators of various aspects of these emerging and interconnected technologies. On these data flows, we layer potential attack vectors to create a frame of reference for evaluating possible risks against connected technologies. Finally, we identify where existing controls, mitigations, and other remediations exist across industry partners (e.g., suppliers, product vendors, integrators, water utilities, and regulators). From these, we are able to understand potential gaps in security, the roles in the supply chain that are most likely to effectively remediate those security gaps, and test cases to evaluate and strengthen security across these partners. This informs a "shared responsibility" solution that recognises that security is multi-layered and requires collaboration to be successful. This shared-responsibility security framework improves visibility, understanding, and control across the entire supply chain, particularly for those water utilities that are accountable for safe and continuous operations.
Keywords: cyber security, shared responsibility, IIOT, threat modelling
Procedia PDF Downloads 77
27530 Implicit U-Net Enhanced Fourier Neural Operator for Long-Term Dynamics Prediction in Turbulence
Authors: Zhijie Li, Wenhui Peng, Zelong Yuan, Jianchun Wang
Abstract:
Turbulence is a complex phenomenon that plays a crucial role in various fields, such as engineering, atmospheric science, and fluid dynamics. Predicting and understanding its behavior over long time scales have been challenging tasks. Traditional methods, such as large-eddy simulation (LES), have provided valuable insights but are computationally expensive. In the past few years, machine learning methods have experienced rapid development, leading to significant improvements in computational speed. However, ensuring stable and accurate long-term predictions remains a challenging task for these methods. In this study, we introduce the implicit U-Net enhanced Fourier neural operator (IU-FNO) as a solution for stable and efficient long-term prediction of the nonlinear dynamics of three-dimensional (3D) turbulence. The IU-FNO model combines implicit recurrent Fourier layers to deepen the network and incorporates the U-Net architecture to accurately capture small-scale flow structures. We evaluate the performance of the IU-FNO model through extensive large-eddy simulations of three types of 3D turbulence: forced homogeneous isotropic turbulence (HIT), a temporally evolving turbulent mixing layer, and decaying homogeneous isotropic turbulence. The results demonstrate that the IU-FNO model outperforms other FNO-based models, including the vanilla FNO, the implicit FNO (IFNO), and the U-Net enhanced FNO (U-FNO), as well as the dynamic Smagorinsky model (DSM), in predicting various turbulence statistics. Specifically, the IU-FNO model exhibits improved accuracy in predicting the velocity spectrum, the probability density functions (PDFs) of vorticity and velocity increments, and the instantaneous spatial structures of the flow field. Furthermore, the IU-FNO model addresses the stability issues encountered in long-term predictions, which were limitations of previous FNO models. In addition to its superior performance, the IU-FNO model offers faster computational speed than traditional large-eddy simulations using the DSM model. It also demonstrates generalization capabilities to higher Taylor-Reynolds numbers and unseen flow regimes, such as decaying turbulence. Overall, the IU-FNO model presents a promising approach for long-term dynamics prediction in 3D turbulence, providing improved accuracy, stability, and computational efficiency compared to existing methods.
Keywords: data-driven, Fourier neural operator, large eddy simulation, fluid dynamics
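At the core of any FNO variant is a spectral convolution: transform to Fourier space, scale a truncated set of low-frequency modes by learned complex weights, and transform back; the implicit (IFNO-style) idea is to iterate one such layer with shared weights instead of stacking distinct layers. The NumPy sketch below illustrates the mechanics in 1D with random weights; it is a conceptual illustration, not the authors' 3D implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
N, MODES = 256, 16
# Complex multipliers for the retained low-frequency modes (learned in a real FNO)
weights = rng.normal(size=MODES) + 1j * rng.normal(size=MODES)

def spectral_layer(u):
    """One Fourier layer: filter the lowest MODES frequencies, add a skip path."""
    u_ft = np.fft.rfft(u)
    out_ft = np.zeros_like(u_ft)
    out_ft[:MODES] = u_ft[:MODES] * weights          # spectral convolution
    return np.tanh(np.fft.irfft(out_ft, n=N) + u)    # nonlinearity + residual skip

# Implicit (shared-weight) recurrence: iterate the SAME layer to deepen the network
u = np.sin(2 * np.pi * np.arange(N) / N)
for _ in range(8):   # 8 shared-weight iterations instead of 8 distinct layers
    u = spectral_layer(u)
print(u.shape, float(u.mean()))
```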
Procedia PDF Downloads 74
27529 Numerical Study on the Flow around a Steadily Rotating Spring: Understanding the Propulsion of a Bacterial Flagellum
Authors: Won Yeol Choi, Sangmo Kang
Abstract:
The propulsion of a bacterial flagellum in a viscous fluid has attracted much interest in the field of biological hydrodynamics but is not yet fully understood and thus remains a challenging problem. In this study, therefore, we have numerically investigated the flow around a steadily rotating micro-sized spring to further understand bacterial flagellum propulsion. Note that a bacterium gains thrust (propulsive force) by rotating the flagellum, connected to the body through a bio-motor, to move forward. For the investigation, we convert the spring model from the micro scale to the macro scale using a similitude (scale) law and perform simulations on the converted macro-scale model using a commercial software package, CFX v13 (ANSYS). To scrutinize the propulsion characteristics of the flagellum, we perform parameter studies by changing flow parameters, such as the pitch, helical radius, and rotational speed of the spring and the Reynolds number (or fluid viscosity), that are expected to affect the thrust force experienced by the rotating spring. Results show that the propulsion characteristics depend strongly on the parameters mentioned above. The forward thrust is observed to increase linearly with either the rotational speed or the fluid viscosity. In addition, the thrust is directly proportional to the square of the helical radius, while with increasing pitch the thrust first increases and then decreases past a peak value. Finally, we also present the flow and pressure fields, visualized to support these observations.
Keywords: fluid viscosity, hydrodynamics, similitude, propulsive force
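The micro-to-macro conversion hinges on keeping the rotational Reynolds number constant between the two models; under that assumption, the required rotation rate of the macro model follows directly. The sketch below illustrates this with assumed fluid properties and scale factors, since the abstract does not list the actual values used.

```python
def macro_rotation_rate(omega_micro, r_micro, r_macro, nu_micro, nu_macro):
    """Rotation rate preserving the rotational Reynolds number Re = omega * R^2 / nu."""
    return omega_micro * (r_micro / r_macro) ** 2 * (nu_macro / nu_micro)

# Assumed example: 100 Hz flagellum of 1 micron helical radius in water,
# scaled to a 1 cm radius spring spinning in a glycerin-like fluid
omega = macro_rotation_rate(omega_micro=100.0, r_micro=1e-6, r_macro=1e-2,
                            nu_micro=1e-6, nu_macro=1e-3)
print(f"macro model rotation rate = {omega:.2e} rev/s")  # very slow, as expected
```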
Procedia PDF Downloads 350
27528 Dynamic Thermomechanical Behavior of Adhesively Bonded Composite Joints
Authors: Sonia Sassi, Mostapha Tarfaoui, Hamza Benyahia
Abstract:
Composite materials are increasingly used as a substitute for metallic materials in many technological applications, such as aeronautics, aerospace, marine, and civil engineering. For composite materials, the thermomechanical response evolves with the strain rate. The energy balance equation for anisotropic, elastic materials includes heat source terms that govern the conversion of some of the kinetic work into heat; the remainder contributes to the stored energy driving the damage process in the composite material. In this paper, we investigate the bulk thermomechanical behavior of adhesively bonded composite assemblies to quantitatively assess the temperature rise that accompanies adiabatic deformations. In particular, adhesively bonded joints in glass/vinylester composite material are subjected to in-plane dynamic loads over a range of strain rates. The dynamic thermomechanical behavior of this material is investigated using compression split Hopkinson pressure bars (SHPB) coupled with a high-speed infrared camera and a high-speed camera to measure, in real time, the dynamic behavior, the damage kinetics, and the temperature variation in the material. The interest of the high-speed IR camera lies in viewing, in real time, the evolution of heat dissipation in the material when damage occurs. However, this technique does not produce thermal values in correlation with the stress-strain curves of the composite material, because of its long response time in comparison with the dynamic test duration. For this reason, the authors revisit the application of small thermocouples placed on the surface of the material to ensure real thermal measurements under dynamic loading. Experiments with dynamically loaded material show that the thermocouples record temperature values with a short typical rise time resulting from the conversion of kinetic work into heat during the compression test. These results show that small thermocouples can provide an important complement to non-contact techniques such as the high-speed infrared camera. A significant temperature rise was observed in the in-plane compression tests, especially at high strain rates. During the tests, it was noticed that a sudden temperature rise occurs when macroscopic damage occurs. This rise in temperature is linked to the rate of damage: the more severe the damage, the higher the localized temperature detected. This shows the strong relationship between the occurrence of damage and the induced heat dissipation. For the in-plane tests, the damage takes place more abruptly as the strain rate is increased. The difference observed in the thermomechanical response in in-plane compression is explained only by the difference in the damage process active during the compression tests. In this study, we highlighted the dependence of the thermomechanical response on the strain rate of bonded specimens. The heat dissipation of this material therefore cannot be ignored and should be taken into account when defining damage models for impact loading.
Keywords: adhesively-bonded composite joints, damage, dynamic compression tests, energy balance, heat dissipation, SHPB, thermomechanical behavior
Procedia PDF Downloads 212
27527 Platform Development for Vero Cell Culture on Microcarriers Using Dissociation-Reassociation Method
Authors: Thanunthon Bowornsakulwong, Charukorn Charukarn, Franck Courtes, Panit Kitsubun, Lalintip Horcharoen
Abstract:
Vero is a continuous cell line that is widely used for the production of viral vaccines. However, due to its adherent character, scale-up strategies for large-scale production remain complicated and thus limited. Consequently, suspension-like Vero cell culture processes based on microcarriers have been introduced and employed, which also provide increased surface area per unit volume. However, harvesting Vero cells from microcarriers is a huge challenge due to difficulties in detaching cells, lower recovery yield, time consumption, and carry-over of the dissociation agent. To overcome these problems, we developed a dissociation-reassociation platform technology for detaching and re-attaching cells during subculturing from microcarriers to microcarriers, which can be conveniently applied to seed-train strategies in large-scale bioreactors. Herein, Hillex-2 microcarriers were used to culture Vero cells in serum-containing media, using spinner flasks as a scale-down model. The overall confluency of cells on the microcarriers was observed using an inverted microscope, and sample cells were detached daily in order to obtain the kinetics data. Metabolite consumption and by-product formation were determined with a Nova Biomedical BioProfile Flex.
Keywords: dissociation-reassociation, microcarrier, scale up, Vero cell
Procedia PDF Downloads 133
27526 Electoral Politics and Voting Behaviour in 2011 Assembly Election in West Bengal, India: A Case Study in Electoral Geography
Authors: Md Motibur Rahman
Abstract:
The present paper attempts to study electoral politics and voting behaviour in the 2011 assembly election of West Bengal state in India. Electoral geography is considered the study of the geographical aspects of the organization, conduct, and results of elections; it deals with spatial voting patterns and behaviour, or the study of the spatial distribution of the political phenomena of voting. Voting behaviour is a form of political psychology that plays a great role in the political decision-making process. The voting behaviour of the electorate is largely influenced by their perceptions at the time of the election. The main focus of the study is to analyze the electoral politics of the party organizations and the political profile of the electorate. The principal objectives of the present work are i) to study the spatial patterns of voting behaviour in the 2011 assembly election in West Bengal, and ii) to analyze the results and findings of the 2011 assembly election. The whole study is based on secondary sources of data. The electoral data were taken from the Election Commission of India, New Delhi, and the Centre for the Study of Developing Societies (CSDS), New Delhi. In the battle of the 2011 assembly election in West Bengal, the two major parties were the Left Front and the Trinamool Congress. This election witnessed the remarkable success of the Trinamool Congress and the decline of the Left Front, which had ruled for 34 years. The Trinamool Congress won a majority of seats, 227 out of 294, while the Left Front won only 62 of 294 seats. The significance of the present study is that it helps in understanding voting patterns, voting behaviour, and voting trends, and it also supports further study of electoral geography in West Bengal. The study would be highly significant and helpful to planners, politicians, and administrators involved in the formulation of development plans and programmes for the people of the state.
Keywords: assembly election, electoral geography, electoral politics, voting behaviour
Procedia PDF Downloads 231
27525 Characterization of Nano Coefficient of Friction through LFM of Superhydrophobic/Oleophobic Coatings Applied on 316L SS
Authors: Hamza Shams, Sajid Saleem, Bilal A. Siddiqui
Abstract:
This paper investigates the nano-level coefficient of friction of commercially available superhydrophobic/oleophobic coatings when applied over 316L SS. 316L stainless steel, or marine stainless steel, was selected for its widespread use in structural, marine, and biomedical applications. The coatings were investigated in harsh sand-storm and seawater environments. The particle size of the sand was carefully selected, and the sand speed carefully modulated, to simulate actual sand-storm conditions. Sample preparation was carried out using the methodology prescribed by the coating manufacturer. The coating's adhesion and thickness were verified before and after the experiment with the use of Scanning Electron Microscopy (SEM). The value of the nano-level coefficient of friction was determined using Lateral Force Microscopy (LFM). The analysis was used to formulate a friction coefficient value that is, in turn, indicative of the amount of wear the coating can bear before the base substrate is exposed to the harsh environment. The analysis aims to validate the coefficient of friction value as marketed by the coating manufacturers and, more importantly, to test the coating in real-life applications to justify its use. It is expected that the coating would resist exposure to the harsh environment for a considerable amount of time and prevent the sample from corroding in the process.
Keywords: 316L SS, scanning electron microscopy, lateral force microscopy, marine stainless steel, oleophobic coating, superhydrophobic coating
Procedia PDF Downloads 486
27524 Assessing the Social Impacts of Regional Services: The Case of a Portuguese Municipality
Authors: A. Camões, M. Ferreira Dias, M. Amorim
Abstract:
In recent years, the social economy has increasingly been seen as a viable means to address social problems. Social enterprises, as well as public projects and initiatives targeted to meet social purposes, offer organizational models that assume heterogeneity, flexibility, and adaptability to 'the real world and real problems'. Despite the growing popularity of social initiatives, decision makers still face a paucity of available models and tools to adequately assess their sustainability and their impacts, notably the nature of their contribution to economic growth. This study was carried out at the local level by analyzing the social impact initiatives and projects promoted by the Municipality of Albergaria-a-Velha (Câmara Municipal de Albergaria-a-Velha, CMA), a municipality of 25,000 inhabitants in the central region of Portugal. This work focuses on the challenges related to the qualifications and employability of citizens, which stand out as key concerns in the Portuguese economy and are particularly pronounced in small-scale cities and inland territories. The study offers a characterization of the Municipality and its socio-economic structure and challenges, followed by an exploratory analysis of data collected from multiple sources, namely the CMA's documental sources and privileged informants. The purpose is to conduct a detailed analysis of the CMA's social projects, aimed at characterizing their potential impact on the qualifications and employability of the citizens of the Municipality. The study encompasses a discussion of the socio-economic profile of the municipality, notably its asymmetries, an analysis of the social projects and initiatives, and an analysis of data derived from inquiries of the actors involved in the implementation of the social projects and of their beneficiaries. Finally, the results obtained with the Better Life Index are included. This study makes it possible to ascertain whether what is implicit in the literature corresponds to what is experienced in reality.
Keywords: measurement, municipalities, social economy, social impact
Procedia PDF Downloads 134
27523 The Contribution of Boards to Company Performance via Strategic Management
Authors: Peter Crow
Abstract:
Boards and directors have been the subjects of much scholarly research and public interest over several decades, more so since the succession of high-profile company failures in the early 2000s. An array of research outputs, including information, correlations, descriptions, models, hypotheses, and theories, has been reported. While some of this research has shed light on aspects of the board-performance relationship and on board tasks and behaviours, the nature and characteristics of the supposed board-performance relationship remain undetermined. That satisfactory explanations of how boards influence company performance have yet to emerge is a significant blind spot. Yet the board is ultimately responsible for company performance, in accordance with the wishes of shareholders. The aim of this paper is to explore corporate governance and board practice through the lens of strategic management and to take tentative steps towards a new conception of corporate governance. The findings of a recent longitudinal multiple-case study designed to explore the board's involvement in strategic management are reported. Qualitative and quantitative data were collected from two large quasi-public companies in New Zealand, including first-hand observations of boards in session, semi-structured interviews with chief executives and chairmen, and inspection of company and board documentation. A synthetic timeline framework was used to collate the financial, board-structure, board-activity, and decision-making data in order to provide a holistic perspective. Decision sequences were identified, and realist techniques of abduction and retroduction were iteratively applied to analyse the multi-year data set. Using several models previously proposed in the literature as a guide, conjectures were formed, tested, and refined, the culmination of which was a provisional model of how boards can influence performance via strategic management. The model builds on both existing theoretical perspectives and theoretical models proposed in the corporate governance and strategic management literature. This paper seeks to add to the understanding of how boards can make meaningful contributions to value creation via strategic management and to comment on the qualities of directors, social interactions in boardrooms, and other circumstances within which influence might be possible, given the highly contingent relationship between board activity and business performance outcomes.
Keywords: board practice, case study, corporate governance, strategic management
Procedia PDF Downloads 226
27522 Contribution at Dimensioning of the Energy Dissipation Basin
Authors: M. Aouimeur
Abstract:
The environmental risks of a dam, and particularly the safety of the valley downstream of it, constitute a very complex problem. Integrated management and risk-sharing are becoming more and more indispensable. The definition of the 'vulnerability' concept can assist in assessing the efficiency of protective measures and in characterizing each valley with respect to flood risk. Security can be enhanced through integrated land management, and the social sciences may be associated with the operational systems of civil protection, in particular warning networks. The passage of extreme floods at the site of a dam can cause the rupture of the structure and important damage downstream. The river bed may be damaged by erosion if it is not well protected, and scouring and flooding problems may be encountered in the area downstream of the dam. Therefore, the protection of the dam is crucial: it must have an energy dissipator in a specific place. The dissipation basin plays a very important role in the security of the dam and the protection of the environment against floods downstream. It dissipates the potential energy created by the dam as the extreme flood passes over the weir; it regulates, in a natural manner and with more security, the discharge or the elevation of the water level at the crest of the weir; and it reduces the flow velocity downstream of the dam to a value identical to that in the river bed. The problem in dimensioning a classic dissipation basin lies in determining the parameters necessary for sizing this structure. This communication presents a simple, fast, and complete graphical method, together with a methodology that determines the main features of the hydraulic jump, the parameters necessary for sizing the classic dissipation basin. The graphical method takes into account the constraints imposed by the reality of the terrain or by practice, such as those related to the topography of the site, the preservation of environmental equilibrium, and technical and economic considerations. The methodology imposes, as a hypothesis (free design), the head loss ΔH dissipated by the hydraulic jump, in order to determine all the other parameters of the classic dissipation basin. The imposed head loss ΔH can be set equal to a selected value or to a certain percentage of the upstream total head created by the dam. With the dimensionless parameter ΔH⁺ = ΔH/k (k: critical depth), the elaborated graphical representation allows the other parameters to be found; multiplying these parameters by k gives the main characteristics of the hydraulic jump, the parameters necessary for dimensioning the classic dissipation basin. This solution is often preferred for sizing the dissipation basins of small concrete dams. Verification of the results and their comparison with practical data confirm the validity and reliability of the elaborated graphical method.
Keywords: dimensioning, energy dissipation basin, hydraulic jump, protection of the environment
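The quantities this method manipulates follow from classical hydraulic-jump relations for a rectangular channel: the conjugate depth from the Bélanger equation, the head loss ΔH = (y₂ − y₁)³ / (4 y₁ y₂), and the critical depth k = (q²/g)^(1/3) used to normalize ΔH⁺ = ΔH/k. The sketch below illustrates these relations with assumed inflow values; it is not a reproduction of the paper's graphical charts.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def jump_characteristics(y1, q):
    """Classical hydraulic-jump relations for a rectangular channel.

    y1: supercritical upstream depth (m); q: discharge per unit width (m^2/s).
    """
    fr1 = q / (y1 * math.sqrt(G * y1))                      # upstream Froude number
    y2 = y1 / 2.0 * (math.sqrt(1.0 + 8.0 * fr1**2) - 1.0)   # Belanger conjugate depth
    dh = (y2 - y1) ** 3 / (4.0 * y1 * y2)                   # head loss across the jump
    k = (q**2 / G) ** (1.0 / 3.0)                           # critical depth
    return fr1, y2, dh, dh / k                              # last value is dH+ = dH/k

# Assumed inflow: 0.5 m deep, 5 m^2/s per unit width
fr1, y2, dh, dh_plus = jump_characteristics(0.5, 5.0)
print(f"Fr1 = {fr1:.2f}, y2 = {y2:.2f} m, dH = {dh:.2f} m, dH+ = {dh_plus:.2f}")
```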
Procedia PDF Downloads 583