Search results for: spiral dynamic algorithm
81 Housing Recovery in Heavily Damaged Communities in New Jersey after Hurricane Sandy
Authors: Chenyi Ma
Abstract:
Background: The second costliest hurricane in U.S. history, Sandy landed in southern New Jersey on October 29, 2012, and struck the entire state with high winds and torrential rains. The disaster killed more than 100 people, left more than 8.5 million households without power, and damaged or destroyed more than 200,000 homes across the state. Immediately after the disaster, public policy support was provided in nine coastal counties that contained 98% of the major and severely damaged housing units in NJ overall. The programs include the Individuals and Households Assistance Program, the Small Business Loan Program, the National Flood Insurance Program, and the Federal Emergency Management Agency (FEMA) Public Assistance Grant Program. In the most severely affected counties, additional funding was provided through the Community Development Block Grant Reconstruction, Rehabilitation, Elevation, and Mitigation Program and the Homeowner Resettlement Program. How these policies individually and as a whole impacted housing recovery across communities with different socioeconomic and demographic profiles has not yet been studied, particularly in relation to damage levels. The concept of community social vulnerability has been widely used to explain many aspects of natural disasters. Nevertheless, how communities are vulnerable has been less fully examined. Community resilience has been conceptualized as a protective factor against negative impacts from disasters; however, how community resilience buffers the effects of vulnerability is not yet known. Because housing recovery is a dynamic social and economic process that varies according to context, this study examined the path from community vulnerability and resilience to housing recovery, looking at both community characteristics and policy interventions. Sample/Methods: This retrospective longitudinal case study compared a literature-identified set of pre-disaster community characteristics, the effects of multiple public policy programs, and a set of time-variant community resilience indicators to changes in housing stock (operationally defined as the percentage of building permits relative to total occupied housing units/households) between 2010 and 2014, two years before and after Hurricane Sandy. The sample consisted of 51 municipalities in the nine counties in which between 4% and 58% of housing units suffered either major or severe damage. Structural equation modeling (SEM) was used to determine the path from vulnerability to housing recovery, via multiple public programs, separately and as a whole, and via the community resilience indicators. The spatial analytical tool ArcGIS 10.2 was used to show the spatial relations between housing recovery patterns and community vulnerability and resilience. Findings: Holding damage levels constant, communities with higher proportions of Hispanic households had significantly lower levels of housing recovery, while communities with higher proportions of households with an adult over age 65 had significantly higher levels of housing recovery. The contrast was partly due to the different levels of total public support the two types of communities received. Further, while the public policy programs individually mediated the negative associations between African American and female-headed households and housing recovery, communities with larger proportions of African American, female-headed and Hispanic households were "vulnerable" to lower levels of housing recovery because they lacked sufficient public program support. Even so, higher employment rates and incomes buffered vulnerability to lower housing recovery. Because housing is the "wobbly pillar" of the welfare state, the housing needs of these particular groups should be more fully addressed by disaster policy. Keywords: community social vulnerability, community resilience, hurricane, public policy
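The abstract describes a path from vulnerability indicators through public program support to housing recovery. The following is a minimal sketch of that mediation logic using ordinary least squares in Python statsmodels rather than the authors' full SEM; the column names (hispanic_share, program_support, permit_rate, damage_level) and the input file are hypothetical placeholders, not the study's actual variables or estimates.

```python
# Minimal mediation sketch of the path described above (not the authors' SEM).
# All column names and the input file are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("municipalities.csv")  # hypothetical: one row per municipality

# Path a: vulnerability indicator -> mediator (total public program support)
path_a = smf.ols("program_support ~ hispanic_share + damage_level", data=df).fit()

# Paths b and c': mediator and vulnerability indicator -> housing recovery
path_bc = smf.ols("permit_rate ~ program_support + hispanic_share + damage_level",
                  data=df).fit()

# Indirect (mediated) effect = a * b; direct effect = c'
indirect = path_a.params["hispanic_share"] * path_bc.params["program_support"]
direct = path_bc.params["hispanic_share"]
print(f"indirect effect: {indirect:.3f}, direct effect: {direct:.3f}")
```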
Procedia PDF Downloads 373
80 The Potential of Rhizospheric Bacteria for Mycotoxigenic Fungi Suppression
Authors: Vanja Vlajkov, Ivana Pajčin, Mila Grahovac, Marta Loc, Dragana Budakov, Jovana Grahovac
Abstract:
The rhizosphere soil refers to the plant roots' dynamic environment characterized by its inhabitants' high biological activity. Rhizospheric bacteria are recognized as effective biocontrol agents and are considered cardinal in alternative strategies for ecological plant disease management. The need to suppress fungal pathogens is an urgent task, not only because of the direct economic losses caused by infection but also due to their ability to produce mycotoxins with harmful effects on human health. Aspergillus and Fusarium species are well-known producers of toxigenic metabolites with a high capacity to colonize crops and enter the food chain. Bacteria belonging to the Bacillus genus have been recognized as plant-beneficial species in agricultural practice and identified as plant growth-promoting rhizobacteria (PGPR). Despite their incontestable potential, the full commercialization of microbial biopesticides is in the preliminary phase. Thus, there is a constant need to estimate the suitability of novel strains to serve as the central point of a viable bioprocess leading to market-ready product development. In the present study, 76 potential producing strains were isolated from rhizosphere soil sampled from different localities in the Autonomous Province of Vojvodina, Republic of Serbia. The selective isolation process started by resuspending 1 g of soil samples in 9 ml of saline and incubating at 28 °C for 15 minutes at 150 rpm. After homogenization, thermal treatment at 100 °C for 7 minutes was performed. Dilution series (10⁻¹–10⁻³) were prepared, and 500 µl of each was inoculated on nutrient agar plates and incubated at 28 °C for 48 h. Pure cultures of morphologically different strains indicating affiliation with the Bacillus genus were obtained by the spread-plate technique. The cultivation of the isolated strains was carried out in Erlenmeyer flasks for 96 h, at 28 °C and 170 rpm. The antagonistic activity screening included two phytopathogenic fungi as test microorganisms: Aspergillus sp. and Fusarium sp. Mycelial growth inhibition was estimated based on antimicrobial activity testing of the cultivation broth by the diffusion method. For Aspergillus sp., the highest antifungal activity was recorded for the isolates Kro-4a and Mah-1a. In contrast, for Fusarium sp., the following 15 isolates exhibited the highest antagonistic effect: Par-1, Par-2, Par-3, Par-4, Kup-4, Paš-1b, Pap-3, Kro-2, Kro-3a, Kro-3b, Kra-1a, Kra-1b, Šar-1, Šar-2b and Šar-4. One-way ANOVA was performed to determine the statistical significance of the antagonists' effect on inhibition zone diameter. Duncan's multiple range test was conducted to define homogeneous groups of antagonists with the same level of statistical significance regarding their effect on the antimicrobial activity of the tested cultivation broth against the tested pathogens. The study results have pointed out the significant in vitro potential of the isolated strains to be used as biocontrol agents for the suppression of the tested mycotoxigenic fungi. Further research should include the identification and detailed characterization of the most promising isolates and the mode of action of the selected strains as biocontrol agents. The following research should also involve bioprocess optimization steps to fully reach the selected strains' potential as microbial biopesticides and to design cost-effective biotechnological production. Keywords: Bacillus, biocontrol, bioprocess, mycotoxigenic fungi
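The statistical step described above (one-way ANOVA on inhibition zone diameters followed by a multiple-comparison test) can be sketched as below. The diameter values are made up for illustration, and since Duncan's multiple range test is not shipped with scipy/statsmodels, Tukey's HSD is shown as a commonly available post-hoc stand-in, not the test the authors used.

```python
# Illustrative one-way ANOVA on inhibition zone diameters (values are made up).
# Tukey's HSD is used here only as a stand-in for Duncan's multiple range test.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

zones = {  # hypothetical inhibition zone diameters (mm) per isolate
    "Par-1": [24.1, 25.3, 23.8],
    "Kro-4a": [30.2, 29.5, 31.0],
    "Mah-1a": [28.7, 29.9, 28.1],
}

f_stat, p_value = stats.f_oneway(*zones.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

values = np.concatenate(list(zones.values()))
groups = np.repeat(list(zones.keys()), [len(v) for v in zones.values()])
print(pairwise_tukeyhsd(values, groups))
```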
Procedia PDF Downloads 198
79 Made on Land, Ends Up in the Water: "iCLARE" Intelligent Remediation System for Removal of Harmful Contaminants in Water Using Modified Reticulated Vitreous Carbon Foam
Authors: Sabina Żołędowska, Tadeusz Ossowski, Robert Bogdanowicz, Jacek Ryl, Paweł Rostkowski, Michał Kruczkowski, Michał Sobaszek, Zofia Cebula, Grzegorz Skowierzak, Paweł Jakóbczyk, Lilit Hovhannisyan, Paweł Ślepski, Iwona Kaczmarczyk, Mattia Pierpaoli, Bartłomiej Dec, Dawid Nidzworski
Abstract:
The circular economy of water presents a pressing environmental challenge for our society. Water contains various harmful substances, such as drugs, antibiotics, hormones, and dioxins, which can pose silent threats. Water pollution has severe consequences for aquatic ecosystems: it disrupts the balance of ecosystems by harming aquatic plants, animals, and microorganisms. Water pollution also poses significant risks to human health. Exposure to toxic chemicals through contaminated water can have long-term health effects, such as cancer, developmental disorders, and hormonal imbalances. However, effective remediation systems can be implemented to remove these contaminants using electrocatalytic processes, which offer an environmentally friendly alternative to other treatment methods; one of them is the innovative iCLARE system. The project's primary focus revolves around a few main topics: reactor design and construction, selection of a specific type of reticulated vitreous carbon (RVC) foam, analytical studies of harmful contaminant parameters, and AI implementation. This high-performance electrochemical reactor will be built around a novel type of electrode material. The proposed approach utilizes reticulated vitreous carbon (RVC) foams with deposited modified metal oxides (MMO) and diamond thin films. This setup is characterized by a highly developed surface area and satisfactory mechanical and electrochemical properties, designed for high electrocatalytic process efficiency. The consortium validated the electrode modification methods that form the basis of the iCLARE product and established the procedures for the detection of chemicals: deposition of the metal oxides WO3 and V2O5, and deposition of boron-doped diamond/nanowall structures by a CVD process. The chosen electrodes (porous Ferroterm electrodes) were stress-tested for various conditions that might occur inside the iCLARE machine: corrosion, the long-term structure of the electrode surface during electrochemical processes, and energy efficiency, using cyclic polarization and electrochemical impedance spectroscopy (before and after electrolysis) and dynamic electrochemical impedance spectroscopy (DEIS). This tool allows real-time monitoring of the changes at the electrode/electrolyte interface. On the other hand, the toxicity of iCLARE chemicals and the products of electrolysis is evaluated before and after the treatment using MARA examination (IBMM) and HPLC-MS-MS (NILU), giving information about the harmfulness of the electrode material and the efficiency of the iCLARE system in the disposal of pollutants. Implementation of the data into a system that uses artificial intelligence, and assessment of the possibility of practical application, are in progress (SensDx). Keywords: wastewater treatment, RVC, electrocatalysis, paracetamol
Procedia PDF Downloads 89
78 Multibody Constrained Dynamics of a Y-Method Installation System for Large-Scale Subsea Equipment
Authors: Naeem Ullah, Menglan Duan, Mac Darlington Uche Onuoha
Abstract:
The lowering of subsea equipment into deep waters is a challenging job due to the harsh offshore environment. Many researchers have introduced various installation systems to deploy the payload safely into the deep oceans. In general practice, dual floating vessels are not employed owing to the prevalent safety risks and hazards caused by the ever-increasing dynamical effects sourced by mutual interaction between the bodies. However, on optimal grounds such as economy, the Y-method, in which two conventional tugboats support the equipment by two independent strands connected to a tri-plate above the equipment, has been employed to study the multibody dynamics of dual barge lifting operations. In this study, the two tugboats and the suspended payload (Y-method) are deployed for the lowering of subsea equipment into deep waters as a multibody dynamic system. Two wire ropes are used for the lifting and installation operation by this Y-method installation system. Six degrees of freedom (dof) for each body are considered to establish a coupled 18-dof multibody model by the embedding technique or velocity transformation technique. The fundamental and prompt advantage of this technique is that the constraint forces can be eliminated directly, and no extra computational effort is required for their elimination. The inertial frame of reference is taken at the surface of the water as the time-independent frame of reference, and floating frames of reference are introduced in each body as the time-dependent frames of reference in order to formulate the velocity transformation matrix. The local transformation of the generalized coordinates to the inertial frame of reference is executed by applying the Euler angle approach. Spherical joints are articulated among the multibody system as the kinematic joints. The hydrodynamic force, the two strand forces, the hydrostatic force, and the mooring forces are taken into consideration as the external forces. The radiation part of the hydrodynamic force is obtained by employing the Cummins equation. The wave-exciting part of the hydrodynamic force is obtained by using force response amplitude operators (RAOs) computed with the commercial solver 'OpenFOAM'. The strand force is obtained by considering the wire rope as an elastic spring. The nonlinear hydrostatic force is obtained by the pressure integration technique at each time step of the wave movement. The mooring forces are evaluated by using Faltinsen's analytical approach. The fourth-order Runge-Kutta method is employed to solve the coupled equations of motion obtained for the 18-dof multibody model. The results are correlated with a simulated OrcaFlex model. Moreover, the results from the OrcaFlex model are compared with the MOSES model from previous studies. The MBDS of the single barge lifting operation from the former studies is compared with the MBDS of the established dual barge lifting operation. The dynamics of the dual barge lifting operation are found to be larger in magnitude compared to the single barge lifting operation. It is noticed that the traction at the top connection point of the cable decreases with the increase in length, and it becomes almost constant after passing through the splash zone. Keywords: dual barge lifting operation, Y-method, multibody dynamics, installation of subsea equipment, shipbuilding
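The time integration step mentioned above (fourth-order Runge-Kutta applied to the coupled equations of motion) is sketched below in generic form. The dynamics function is a toy single-dof heave model with a placeholder wave force, an assumption made purely for illustration; it is not the paper's 18-dof hydrodynamic model.

```python
# Generic fourth-order Runge-Kutta integrator of the kind used to advance the
# coupled multibody equations of motion; the dynamics function is a toy
# mass-spring-damper stand-in, not the 18-dof hydrodynamic model of the paper.
import numpy as np

def rk4_step(f, t, x, dt):
    k1 = f(t, x)
    k2 = f(t + dt / 2, x + dt / 2 * k1)
    k3 = f(t + dt / 2, x + dt / 2 * k2)
    k4 = f(t + dt, x + dt * k3)
    return x + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def dynamics(t, x, m=1.0e5, c=2.0e4, k=5.0e5):
    """Toy 1-dof heave model: state = [position, velocity], assumed wave force."""
    pos, vel = x
    force = 1.0e4 * np.sin(0.5 * t)          # placeholder wave excitation
    acc = (force - c * vel - k * pos) / m
    return np.array([vel, acc])

x, dt = np.array([0.0, 0.0]), 0.01
for step in range(int(60 / dt)):             # 60 s of simulated motion
    x = rk4_step(dynamics, step * dt, x, dt)
print("final heave position [m]:", x[0])
```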
Procedia PDF Downloads 203
77 Adolescent Health Risk Behaviors and the Mediating Effects of Family Dynamics and Socio-Demographic Factors
Authors: Rufina C. Abul, Dylan Kyle D. Apostol, Darius Rex G. Binuya, Alyanah Mae F. Cauilan, Darren A. Diaz, Angelica Jones A. Gallang, Charisse G. Kiwang, Alyanna Nicole G. Mactal, Nadine Beatrize V. Nerona, Janella Nicole R. Posadas, Charisse Purie C. Toledo
Abstract:
Background: Dramatic physical development, socioemotional adjustment, and cognitive changes mark adolescent development. Adolescent brains are susceptible to emotional reactivity, making adolescents likely to engage in risk-taking and impulsive behaviors. The family is crucial in laying the foundations of good health. Aims: This study determined the degree of family cohesion, the quality of father-child and mother-child relationships, and the degree of academic pressure across cultures, age groups, and sexual orientations. Further, it sought the prevalence of adolescent health concerns, including suicide risks, risk-taking behaviors, social media engagement, and self-care deviations. Finally, the correlations between health risk behaviors and the elements of family dynamics were unraveled. Methods: A descriptive-correlational design served as the blueprint for this study. Data were collected from 1095 adolescents aged 12-21 in two high schools and two universities in Baguio City using self-report questionnaires. Data were analyzed using the Microsoft Excel ToolPak and IBM SPSS Statistics to identify significant differences and relationships among variables through descriptive statistics (frequency, percentage, means and figures) and inferential statistics (ANOVA and logistic regression). Results and Discussion: Adolescents generally have strong family cohesion (FC), high-quality father-child relationships (F-CR), very high-quality mother-child relationships (M-CR), and experience high academic pressure (AP). Cultural affiliation does not influence the four elements of family dynamics; the higher the age, the stronger the family cohesion; males score significantly higher on family cohesion and mother-child relationship, and significantly lower in perceived academic pressure, compared to their female and LGBT counterparts. Suicide risk is prevalent among 29-63% of the population; safety issues have the lowest prevalence for having an abusive relationship (8.22%) and the highest for encountering major family changes (53.52%). Substance use was highest for vaping (22.74%), sexual engagement occurs in 14.61% of the population, while 63% are engaged in social media for more than 5 hours/day. Self-care deviation is highest for weight concerns (63.39%), lack of visits to health care professionals (64.65%) and lack of exercise (49.94%). All four elements of family dynamics (FC, F-CR, M-CR and AP) are significantly associated with safety concerns, suicide risks and social media engagement, while M-CR significantly influences cigarette smoking, alcohol drinking, rugby use and engagement in sex. Conclusion and Recommendations: Strong family cohesion and quality parent-child interactions improve emotional and behavioral outcomes. Sexual orientation has a significant impact on academic pressure and social media use, demanding targeted interventions. The link between family dynamics and health-risk behaviors emphasizes the importance of promoting positive family relationships and encouraging safer behaviors, which are critical for increasing adolescents' well-being. Keywords: adolescent health, family cohesion, health risk behaviors, suicide risk
Procedia PDF Downloads 15
76 Sensing Study through Resonance Energy and Electron Transfer between Förster Resonance Energy Transfer Pair of Fluorescent Copolymers and Nitro-Compounds
Authors: Vishal Kumar, Soumitra Satapathi
Abstract:
Förster Resonance Energy Transfer (FRET) is a powerful technique used to probe close-range molecular interactions. Physically, the FRET phenomenon manifests as a dipole–dipole interaction between closely juxtaposed fluorescent molecules (10–100 Å). Our effort is to employ this FRET technique to make a prototype device for highly sensitive detection of environmental pollutants. Among the most common environmental pollutants, nitroaromatic compounds (NACs) are of particular interest because of their durability and toxicity. Consequently, sensitive and selective detection of small amounts of nitroaromatic explosives, in particular 2,4,6-trinitrophenol (TNP), 2,4-dinitrotoluene (DNT) and 2,4,6-trinitrotoluene (TNT), has been a critical challenge due to the increasing threat of explosive-based terrorism and the need for environmental monitoring of drinking and waste water. In addition, the excessive utilization of TNP in several other areas, such as burn ointments, pesticides, glass and the leather industry, has resulted in environmental accumulation, eventually contaminating soil and aquatic systems. To date, a large number of elegant methods, including fluorimetry, gas chromatography, mass, ion-mobility and Raman spectrometry, have been successfully applied for explosive detection. Among these efforts, fluorescence-quenching methods based on the mechanism of FRET show good assembly flexibility, high selectivity and sensitivity. Here, we report a FRET-based sensor system for the highly selective detection of NACs, such as TNP, DNT and TNT. The sensor system is composed of a copolymer Poly[(N,N-dimethylacrylamide)-co-(Boc-Trp-EMA)] (RP) bearing a tryptophan derivative in the side chain as donor and a dansyl-tagged copolymer P(MMA-co-Dansyl-Ala-HEMA) (DCP) as acceptor. Initially, the inherent fluorescence of the RP copolymer is quenched by non-radiative energy transfer to DCP, which only happens once the two molecules are within the Förster critical distance (R0). The excellent spectral overlap (Jλ = 6.08 × 10¹⁴ nm⁴ M⁻¹ cm⁻¹) between the donor's (RP) emission profile and the acceptor's (DCP) absorption profile makes them an exciting and efficient FRET pair, further confirmed by the high rate of energy transfer from RP to DCP, i.e. 0.87 ns⁻¹, and by lifetime measurement with time-correlated single photon counting (TCSPC) validating the 64% FRET efficiency. This FRET pair exhibited a specific fluorescence response to NACs such as DNT, TNT and TNP with LODs of 5.4, 2.3 and 0.4 µM, respectively. The detection of NACs occurs with high sensitivity by photoluminescence quenching of the FRET signal induced by photo-induced electron transfer (PET) from the electron-rich FRET pair to the electron-deficient NAC molecules. The estimated Stern-Volmer constant (KSV) values for DNT, TNT and TNP are 6.9 × 10³, 7.0 × 10³ and 1.6 × 10⁴ M⁻¹, respectively. The mechanistic details of the molecular interactions, established by time-resolved fluorescence, steady-state fluorescence and absorption spectroscopy, confirmed that the sensing process is of mixed type, i.e. both dynamic and static quenching, as the lifetime of the FRET system (0.73 ns) is reduced to 0.55, 0.57 and 0.61 ns for DNT, TNT and TNP, respectively. In summary, the simplicity and sensitivity of this novel FRET sensor open up the possibility of designing optical sensors for various NACs on a single platform and developing a multimodal sensor for environmental monitoring and future field-based studies. Keywords: FRET, nitroaromatic, Stern-Volmer constant, tryptophan and dansyl tagged copolymer
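The quantities reported above (FRET efficiency from donor lifetimes and Stern-Volmer constants from quenching data) follow standard textbook relations, reproduced here for reference; these are the generic forms, not equations quoted from the paper itself.

```latex
% Standard relations behind the reported quantities (textbook forms).
% FRET efficiency from donor lifetimes (\tau_D: donor alone, \tau_{DA}: donor with acceptor),
% also expressed through the Forster distance R_0 and donor-acceptor separation r:
E = 1 - \frac{\tau_{DA}}{\tau_{D}} = \frac{R_0^{6}}{R_0^{6} + r^{6}}

% Stern-Volmer quenching: fluorescence ratio versus quencher concentration [Q]:
\frac{F_0}{F} = 1 + K_{SV}\,[Q]
```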
Procedia PDF Downloads 136
75 Transcriptional Differences in B Cell Subpopulations over the Course of Preclinical Autoimmunity Development
Authors: Aleksandra Bylinska, Samantha Slight-Webb, Kevin Thomas, Miles Smith, Susan Macwana, Nicolas Dominguez, Eliza Chakravarty, Joan T. Merrill, Judith A. James, Joel M. Guthridge
Abstract:
Background: Systemic Lupus Erythematosus (SLE) is an interferon-related autoimmune disease characterized by B cell dysfunction. One of its main hallmarks is a loss of tolerance to self-antigens leading to increased levels of autoantibodies against nuclear components (ANAs). However, up to 20% of healthy ANA+ individuals will not develop clinical illness. SLE is more prevalent among women and minority populations (African American, Asian American and Hispanic). Moreover, African Americans have a stronger interferon (IFN) signature and develop more severe symptoms. The exact mechanisms involved in ethnicity-dependent B cell dysregulation and the progression of autoimmune disease from ANA+ healthy individuals to clinical disease remain unclear. Methods: Peripheral blood mononuclear cells (PBMCs) from African American (AA) and European American (EA) ANA- (n=12), ANA+ (n=12) and SLE (n=12) individuals were assessed by multimodal scRNA-Seq/CITE-Seq methods to examine differential gene signatures in specific B cell subsets. Library preparation was done with a 10X Genomics Chromium according to established protocols and sequenced on an Illumina NextSeq. The data were further analyzed for distinct cluster identification and differential gene signatures with the Seurat package in R, and pathway analysis was performed using Ingenuity Pathway Analysis (IPA). Results: Comparing all subjects, 14 distinct B cell clusters were identified using a community detection algorithm and visualized with Uniform Manifold Approximation and Projection (UMAP). The proportion of each of those clusters varied by disease status and ethnicity. Transitional B cells trended higher in ANA+ healthy individuals, especially in AA. A ribonucleoprotein-high population (HNRNPH1 elevated, heterogeneous nuclear ribonucleoprotein, RNP-Hi) of proliferating naïve B cells was more prevalent in SLE patients, specifically in EA. An interferon-induced-protein-high population (IFIT-Hi) of naïve B cells is increased in EA ANA- individuals. The proportions of memory B cell and plasma cell clusters tend to be expanded in SLE patients. As anticipated, we observed a higher signature of cytokine-related pathways, especially interferon, in SLE individuals. Pathway analysis among AA individuals revealed an NRF2-mediated oxidative stress response signature in the transitional B cell cluster, not seen in EA individuals. TNFR1/2 and Sirtuin signaling pathway genes were higher in AA IFIT-Hi naïve B cells, whereas they were not detected in EA individuals. Interferon signaling was observed in B cells in both ethnicities. Oxidative phosphorylation was found in age-related B cells (ABCs) for both ethnicities, whereas death receptor signaling was found only in EA patients in these cells. Interferon-related transcription factors were elevated in ABCs and IFIT-Hi naïve B cells in SLE subjects of both ethnicities. Conclusions: ANA+ healthy individuals have altered gene expression pathways in B cells that might drive apoptosis and subsequent clinical autoimmune pathogenesis. Increases in certain regulatory pathways may delay progression to SLE. Further, AA individuals have more elevated activation pathways that may make them more susceptible to SLE.
Procedia PDF Downloads 176
74 Kinematic Gait Analysis Is a Non-Invasive, More Objective and Earlier Measurement of Impairment in the Mdx Mouse Model of Duchenne Muscular Dystrophy
Authors: P. J. Sweeney, T. Ahtoniemi, J. Puoliväli, T. Laitinen, K. Lehtimäki, A. Nurmi, D. Wells
Abstract:
Duchenne muscular dystrophy (DMD) is caused by an X-linked mutation in the dystrophin gene; lack of dystrophin causes a progressive muscle necrosis which leads to a progressive decrease in mobility in those suffering from the disease. The MDX mouse, a mutant mouse model which displays a frank dystrophinopathy, is currently widely employed in preclinical efficacy models for treatments and therapies aimed at DMD. In general, the end-points examined within this model have been based on invasive histopathology of muscles and serum biochemical measures like measurement of serum creatine kinase (sCK). It is established that a "critical period" between 4 and 6 weeks exists in the MDX mouse when there is extensive muscle damage that is largely subclinical but evident with sCK measurements and histopathological staining. However, a full characterization of the MDX model remains largely incomplete, especially with respect to the ability to aggravate the muscle damage beyond the critical period. The purpose of this study was to attempt to aggravate the muscle damage in the MDX mouse and to create a wider, more readily translatable and discernible therapeutic window for the testing of potential therapies for DMD. The study consisted of subjecting 15 male mutant MDX mice and 15 male wild-type mice to an intense chronic exercise regime that consisted of bi-weekly (two times per week) treadmill sessions over a 12-month period. Each session was 30 minutes in duration, and the treadmill speed was gradually built up to 14 m/min for the entire session. Baseline plasma creatine kinase (pCK), treadmill training performance and locomotor activity were measured after the "critical period" at around 10 weeks of age and again at 14 weeks of age, 6 months, 9 months and 12 months of age. In addition, kinematic gait analysis was employed using a novel analysis algorithm in order to compare changes in gait and fine motor skills in diseased exercised MDX mice compared to exercised wild-type mice and non-exercised MDX mice. In addition, a morphological and metabolic profile (including lipid profile) of the muscles most severely affected, the gastrocnemius muscle and the tibialis anterior muscle, was also measured at the same time intervals. Results indicate that by aggravating or exacerbating the underlying muscle damage in the MDX mouse by exercise, a more pronounced and severe phenotype comes to light, and this can be picked up earlier by kinematic gait analysis. A reduction in mobility as measured by open field is not apparent at younger ages nor during the critical period, but changes in gait are apparent in the mutant MDX mice. These gait changes coincide with pronounced morphological and metabolic changes, detected by non-invasive anatomical MRI and proton spectroscopy (1H-MRS), that we have reported elsewhere. Evidence of a progressive asymmetric pathology was found in imaging parameters as well as in the kinematic gait analysis. Taken together, the data show that a chronic exercise regime exacerbates the muscle damage beyond the critical period, and that the ability to measure this through non-invasive means is an important factor to consider when performing preclinical efficacy studies in the MDX mouse. Keywords: gait, muscular dystrophy, kinematic analysis, neuromuscular disease
Procedia PDF Downloads 276
73 Optimized Electron Diffraction Detection and Data Acquisition in Diffraction Tomography: A Complete Solution by Gatan
Authors: Saleh Gorji, Sahil Gulati, Ana Pakzad
Abstract:
Continuous electron diffraction tomography, also known as microcrystal electron diffraction (MicroED) or three-dimensional electron diffraction (3DED), is a powerful technique which, in combination with cryo-electron microscopy (cryo-EM), can provide atomic-scale 3D information about the crystal structure and composition of different classes of crystalline materials such as proteins, peptides, and small molecules. Unlike the well-established X-ray crystallography method, 3DED does not require large single crystals and can collect accurate electron diffraction data from crystals as small as 50 – 100 nm. This is a critical advantage, as growing larger crystals, as required by X-ray crystallography methods, is often very difficult, time-consuming, and expensive. In most cases, specimens studied via the 3DED method are electron-beam sensitive, which means there is a limitation on the maximum electron dose one can use to collect the data required for a high-resolution structure determination. Therefore, collecting data using a conventional scintillator-based fiber-coupled camera brings additional challenges. This is because of the inherent noise introduced during the electron-to-photon conversion in the scintillator and the transfer of light via the fibers to the sensor, which results in a poor signal-to-noise ratio and requires relatively higher, and commonly specimen-damaging, electron dose rates, especially for protein crystals. As in other cryo-EM techniques, damage to the specimen can be mitigated if a direct detection camera is used, which provides a high signal-to-noise ratio at low electron doses. In this work, we have used two classes of such detectors from Gatan, namely the K3® camera (a monolithic active pixel sensor) and Stela™ (which utilizes DECTRIS hybrid-pixel technology), to address this problem. The K3 is an electron counting detector optimized for low-dose applications (like structural biology cryo-EM), and Stela is also a counting electron detector but optimized for diffraction applications with high speed and high dynamic range. Lastly, data collection workflows, including crystal screening, microscope optics setup (for imaging and diffraction), stage height adjustment at each crystal position, and tomogram acquisition, can be one of the other challenges of the 3DED technique. Traditionally this has all been done manually or in a partly automated fashion using open-source software and scripting, requiring long hours on the microscope (extra cost) and extensive user interaction with the system. We have recently introduced Latitude® D in DigitalMicrograph® software, which is compatible with all pre- and post-energy-filter Gatan cameras and enables 3DED data acquisition in an automated and optimized fashion. Higher-quality 3DED data enable structure determination with higher confidence, while automated workflows allow these to be completed considerably faster than before. Using multiple examples, this work will demonstrate how direct detection electron counting cameras enhance 3DED results (from 3 Å to better than 1 Å) for protein and small molecule structure determination. We will also show how the Latitude D software facilitates collecting such data in an integrated and fully automated user interface. Keywords: continuous electron diffraction tomography, direct detection, diffraction, Latitude D, DigitalMicrograph, proteins, small molecules
Procedia PDF Downloads 107
72 Learning Curve Effect on Materials Procurement Schedule of Multiple Sister Ships
Authors: Vijaya Dixit, Aasheesh Dixit
Abstract:
The shipbuilding industry operates in an Engineer Procure Construct (EPC) context. The product mix of a shipyard comprises various types of ships such as bulk carriers, tankers, barges, coast guard vessels, submarines, etc. Each order is unique based on the type of ship and customized requirements, which are engineered into the product right from the design stage. Thus, to execute every new project, a shipyard needs to upgrade its production expertise. As a result, over the long run, holistic learning occurs across different types of projects, which contributes to the knowledge base of the shipyard. Simultaneously, in the short term, during execution of a project comprising multiple sister ships, repetition of similar tasks leads to learning at the activity level. This research aims to capture the above learnings of a shipyard and incorporate the learning curve effect in project scheduling and materials procurement to improve project performance. Extant literature provides support for the existence of such learnings in an organization. In shipbuilding, there are sequences of similar activities which are expected to exhibit learning curve behavior, for example, the nearly identical structural sub-blocks which are successively fabricated, erected, and outfitted with piping and electrical systems. A learning curve representation can model not only a decrease in the mean completion time of an activity, but also a decrease in the uncertainty of activity duration. Sister ships have similar material requirements, and the same supplier base supplies materials for all the sister ships within a project. On the one hand, this provides an opportunity to reduce transportation cost by batching the order quantities of multiple ships. On the other hand, it increases the inventory holding cost at the shipyard and the risk of obsolescence. Further, due to the learning curve effect, the production schedule of each subsequent ship gets compressed. Thus, the material requirement schedule of every next ship differs from that of its previous ship. As more and more ships get constructed, compressed production schedules increase the possibility of batching the orders of sister ships. This work aims at integrating materials management with project scheduling of long-duration projects for the manufacturing of multiple sister ships. It incorporates the learning curve effect on progressively compressing material requirement schedules and addresses the above trade-off between transportation cost and inventory holding and shortage costs while satisfying the budget constraints of the various stages of the project. The activity durations and lead times of items are not crisp and are available in the form of probabilistic distributions. A Stochastic Mixed Integer Programming (SMIP) model is formulated and solved using an evolutionary algorithm. Its output provides ordering dates of items and the degree of order batching for all types of items. Sensitivity analysis determines the threshold number of sister ships required in a project to leverage the advantage of the learning curve effect in materials management decisions. This analysis will help materials managers gain insights about the scenarios: when and to what degree it is beneficial to treat a multiple-ship project as an integrated one by batching the order quantities, and when and to what degree to practice distinctive procurement for individual ships. Keywords: learning curve, materials management, shipbuilding, sister ships
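The schedule compression described above is usually modelled with the classic log-linear (Wright) learning curve, in which every doubling of the repetition count scales activity time by a fixed learning rate. The sketch below illustrates that relation; the 90% learning rate and 1000-hour first-unit duration are assumptions for illustration, not figures from the study.

```python
# Wright's log-linear learning curve: time for the n-th sister ship's activity.
# The 90% learning rate and 1000-hour first-unit duration are illustrative
# assumptions, not values from the study.
import math

def unit_time(first_unit_time: float, n: int, learning_rate: float = 0.90) -> float:
    """Duration of the n-th repetition; every doubling of n scales time by learning_rate."""
    b = math.log(learning_rate) / math.log(2.0)
    return first_unit_time * n ** b

for ship in range(1, 6):
    print(f"sister ship {ship}: {unit_time(1000.0, ship):.0f} h")
```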
Procedia PDF Downloads 502
71 Human Identification and Detection of Suspicious Incidents Based on Outfit Colors: Image Processing Approach in CCTV Videos
Authors: Thilini M. Yatanwala
Abstract:
CCTV (Closed-Circuit Television) surveillance systems have been used in public places for decades, and a large variety of data is being produced every moment. However, most CCTV data is stored in isolation without being integrated. As a result, identification of the behavior of suspicious people along with their location has become strenuous. This research was conducted to acquire more accurate and reliable timely information from CCTV video records. The implemented system can identify human objects in public places based on outfit colors. Inter-process communication technologies were used to implement the CCTV camera network to track people on the premises. The research was conducted in three stages: in the first stage, human objects were filtered from other movable objects present in public places; in the second stage, people were uniquely identified based on their outfit colors; and in the third stage, an individual was continuously tracked across the CCTV network. A face detection algorithm was implemented using a cascade classifier based on the training model to detect human objects. A Haar-feature-based two-dimensional convolution operator was introduced to identify features of the human face, such as the region of the eyes, the region of the nose and the bridge of the nose, based on the darkness and lightness of the facial area. In the second stage, the outfit colors of human objects were analyzed by dividing the area into the upper left, upper right, lower left and lower right of the body. The mean color, mode color and standard deviation of each area were extracted as crucial factors to uniquely identify a human object using a histogram-based approach. Color-based measurements were written into XML files, and separate directories were maintained to store the XML files related to each camera according to time stamp. As the third stage of the approach, inter-process communication techniques were used to implement an acknowledgement-based CCTV camera network to continuously track individuals across a network of cameras. Real-time analysis of the XML files generated by each camera can determine the path of an individual and monitor the full activity sequence. Higher efficiency was achieved by sending and receiving acknowledgements only among adjacent cameras. Suspicious incidents, such as a person staying in a sensitive area for a longer period or a person disappearing from the camera coverage, can be detected with this approach. The system was tested on 150 people with an accuracy level of 82%. However, this approach was unable to produce the expected results in the presence of groups of people wearing similar types of outfits. The approach can be applied to any existing camera network without changing the physical arrangement of the CCTV cameras. The study of human identification and suspicious incident detection using outfit color analysis can achieve a higher level of accuracy, and the project will be continued by integrating motion and gait feature analysis techniques to derive more information from CCTV videos. Keywords: CCTV surveillance, human detection and identification, image processing, inter-process communication, security, suspicious detection
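The two building blocks described above, Haar-cascade face detection and per-region outfit color statistics, can be sketched with OpenCV as follows. The body-region geometry (torso assumed to span roughly three face-heights below the detected face) is a rough assumption for illustration, not the calibration used in the paper, and the input frame name is a placeholder.

```python
# Sketch of Haar-cascade face detection plus per-region outfit colour statistics.
# The torso geometry below is an illustrative assumption, not the paper's calibration.
import cv2
import numpy as np

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

frame = cv2.imread("frame.jpg")                       # one CCTV frame (placeholder)
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    # Assume the torso spans roughly 3 face-heights below the face box.
    torso = frame[y + h: y + 4 * h, x: x + w]
    if torso.size == 0:
        continue
    half = torso.shape[1] // 2
    for name, region in (("upper_left", torso[:, :half]),
                         ("upper_right", torso[:, half:])):
        pixels = region.reshape(-1, 3)
        mean_color = pixels.mean(axis=0)              # BGR mean of the region
        std_color = pixels.std(axis=0)                # BGR standard deviation
        print(name, mean_color.round(1), std_color.round(1))
```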
Procedia PDF Downloads 184
70 Remote Sensing of Urban Land Cover Change: Trends, Driving Forces, and Indicators
Authors: Wei Ji
Abstract:
This study was conducted in the Kansas City metropolitan area of the United States, which has experienced significant urban sprawl in recent decades. The remote sensing of land cover changes in this area spanned four decades, from 1972 through 2010. The project was implemented in two stages: the first stage focused on detection of long-term trends of urban land cover change, while the second examined how to detect the coupled effects of human impact and climate change on urban landscapes. For the first-stage study, six Landsat images were used with a time interval of about five years for the period from 1972 through 2001. Four major land cover types, built-up land, forestland, non-forest vegetation land, and surface water, were mapped using supervised image classification techniques. The study found that over the three decades the built-up lands in the study area more than doubled, mainly at the expense of non-forest vegetation lands. Surprisingly and interestingly, the area also saw a significant gain in surface water coverage. This observation raised questions: How have human activities and precipitation variation jointly impacted surface water cover during recent decades? How can we detect such coupled impacts through remote sensing analysis? These questions led to the second stage of the study, in which we designed and developed approaches to detecting fine-scale surface waters and analyzing the coupled effects of human impact and precipitation variation on them. To effectively detect urban landscape changes that might be jointly shaped by precipitation variation, our study proposed "urban wetscapes" (loosely defined urban wetlands) as a new indicator for remote sensing detection. The study examined whether urban wetscape dynamics was a sensitive indicator of the coupled effects of the two driving forces. To better detect this indicator, a rule-based classification algorithm was developed to identify fine-scale, hidden wetlands that could not be appropriately detected based on their spectral differentiability by a traditional image classification. Three SPOT images, for the years 1992, 2008, and 2010, were classified with this technique to generate the four types of land cover described above. The spatial analyses of remotely sensed wetscape changes were implemented at the scales of metropolitan area, watershed, and sub-watershed, as well as based on the size of surface water bodies, in order to accurately reveal urban wetscape change trends in relation to the driving forces. The study identified that urban wetscape dynamics varied in trend and magnitude from the metropolitan, to watershed, to sub-watershed scales in response to human impacts at different scales. The study also found that increased precipitation in the region in the past decades swelled larger wetlands in particular, while smaller wetlands generally decreased, mainly due to human development activities. These results confirm that wetscape dynamics can effectively reveal the coupled effects of human impact and climate change on urban landscapes. As such, remote sensing of this indicator provides new insights into the relationships between urban land cover changes and driving forces. Keywords: urban land cover, human impact, climate change, rule-based classification, across-scale analysis
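The rule-based classification used to pick out fine-scale "hidden" wetlands is not spelled out in the abstract; the sketch below only illustrates the general idea of classifying land cover from spectral-index thresholds on band arrays. The NDWI/NDVI thresholds and class codes are assumptions made for illustration, not the rules actually used in the study.

```python
# Minimal sketch of rule-based land-cover classification from spectral bands.
# The NDWI/NDVI thresholds are illustrative assumptions, not the study's rules.
import numpy as np

def classify(green: np.ndarray, red: np.ndarray, nir: np.ndarray) -> np.ndarray:
    eps = 1e-6
    ndwi = (green - nir) / (green + nir + eps)   # water/wetness index
    ndvi = (nir - red) / (nir + red + eps)       # vegetation index

    classes = np.full(green.shape, 0, dtype=np.uint8)   # 0 = built-up / other
    classes[(ndvi > 0.2) & (ndvi <= 0.5)] = 2           # 2 = non-forest vegetation
    classes[ndvi > 0.5] = 1                             # 1 = forest
    classes[ndwi > 0.1] = 3                             # 3 = surface water / wetscape
    return classes

# Toy 2x2 scene (reflectance values are made up)
green = np.array([[0.10, 0.08], [0.20, 0.05]])
red   = np.array([[0.08, 0.06], [0.25, 0.04]])
nir   = np.array([[0.40, 0.45], [0.22, 0.02]])
print(classify(green, red, nir))
```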
Procedia PDF Downloads 309
69 Information Pollution: Exploratory Analysis of Sub-Saharan African Media's Capabilities to Combat Misinformation and Disinformation
Authors: Muhammed Jamiu Mustapha, Jamiu Folarin, Stephen Obiri Agyei, Rasheed Ademola Adebiyi, Mutiu Iyanda Lasisi
Abstract:
The role of information in societal development and growth cannot be over-emphasized. Information flow has long been adopted as a strategy for building an egalitarian society, yet the same flow has become a tool for throwing society into chaos and anarchy. It has been adopted as a weapon of war and a veritable instrument of psychological warfare with a variety of uses. That is why some scholars posit that information could be deployed as a weapon to wreak "mass destruction" or promote "mass development". When used as a tool for destruction, its effect on society is like that of an atomic bomb, which, when released, pollutes the air and suffocates the people. Technological advancement has further exposed the latent power of information, and many societies seem to be overwhelmed by its negative effects. While information remains one of the bedrocks of democracy, the information ecosystem across the world is currently facing a more difficult battle than ever before due to information pluralism and technological advancement. The more the agents involved try to combat its menace, the more difficult and complex it is proving to curb. In a region like Africa, with fragile democracies enfolded in the complexities of multiple religions and cultures, inter-tribal tensions, and ongoing issues that are yet to be resolved, it is important to pay critical attention to the case of information disorder and find appropriate ways to curb or mitigate its effects. The media, being the middleman in the distribution of information, need to build capacities and capabilities to separate the whiff of misinformation and disinformation from the grains of truthful data. From quasi-statistical observation, the efforts aimed at fighting information pollution have not considered the built resilience of media organisations against this disorder. Apparently, the efforts, resources and technologies adopted for the conception, production and spread of information pollution are much more sophisticated than the approaches to suppress or even reduce its effects on society. Thus, this study seeks to interrogate the phenomenon of information pollution and the capabilities of select media organisations in Sub-Saharan Africa. In doing this, the following questions are probed: what are the media actions to curb the menace of information pollution? Which of these actions are working, and how effective are they? And which of the actions are not working, and why are they not working? Adopting quantitative and qualitative approaches and anchored on Dynamic Capability Theory, the study aims at digging up insights to further understand the complexities of information pollution, media capabilities and strategic resources for managing misinformation and disinformation in the region. The quantitative approach involves surveys and the use of questionnaires to obtain data from journalists on their understanding of misinformation/disinformation and their capabilities to gate-keep. Case analysis of select media and content analysis of their strategic resources for managing misinformation and disinformation are adopted in the study, while the qualitative approach involves in-depth interviews for a more robust analysis. The study is critical in the fight against information pollution for a number of reasons. One, it is a novel attempt to document the level of media capabilities to fight the phenomenon of information disorder. Two, the study will enable the region to have a clear understanding of the capabilities of existing media organizations to combat misinformation and disinformation in the countries that make up the region. Recommendations emanating from the study could be used to initiate, intensify or review existing approaches to combat the menace of information pollution in the region. Keywords: disinformation, information pollution, misinformation, media capabilities, sub-Saharan Africa
Procedia PDF Downloads 162
68 Urban Heat Islands Analysis of Matera, Italy Based on the Change of Land Cover Using Satellite Landsat Images from 2000 to 2017
Authors: Giuseppina Anna Giorgio, Angela Lorusso, Maria Ragosta, Vito Telesca
Abstract:
Climate change is a major public health threat due to the effects of extreme weather events on human health and on quality of life in general. In this context, mean temperatures are increasing, and in particular extreme temperatures, with heat waves becoming more frequent, more intense, and longer lasting. In many cities, extreme heat waves have drastically increased, giving rise to the so-called Urban Heat Island (UHI) phenomenon. In an urban centre, maximum temperatures may be up to 10 °C warmer, due to different local atmospheric conditions. UHI occurs in metropolitan areas as a function of the population size and density of a city. It consists of a significant difference in temperature compared to the rural/suburban areas. Increasing industrialization and urbanization have intensified this phenomenon, and it has recently also been detected in small cities. Weather conditions and land use are among the key parameters in the formation of UHI. In particular, the surface urban heat island is directly related to temperatures, to land surface types and to surface modifications. The present study concerns a UHI analysis of the city of Matera (Italy) based on the analysis of temperature, change in land use and land cover, using Corine Land Cover maps and satellite Landsat images. Matera, located in Southern Italy, has a typical Mediterranean climate with mild winters and hot and humid summers. Moreover, Matera has been awarded the international title of 2019 European Capital of Culture. Matera represents a significant example of vernacular architecture. The structure of the city is articulated by a vertical succession of dug layers, sometimes excavated or partly excavated and partly built, according to the original shape and height of the calcarenitic slope. In this study, two meteorological stations were selected: MTA (MaTera Alsia, in the industrial zone) and MTCP (MaTera Civil Protection, a suburban area located in a green zone). In order to evaluate the increase in temperatures (in terms of UHI occurrences) over time, and to evaluate the effect of land use on weather conditions, the climate variability of temperatures for both stations was explored. Results show that the UHI phenomenon is growing in the city of Matera, with an increase of maximum temperature values at the local scale. Subsequently, a spatial analysis was conducted using Landsat satellite images. Four dates in the summer period were selected (27/08/2000, 27/07/2006, 11/07/2012, 02/08/2017); in particular, Landsat 7 ETM+ for the years 2000, 2006 and 2012, and Landsat 8 OLI/TIRS for 2017. In order to estimate the LST, the Mono-Window Algorithm was applied. The increasing trend of LST values at the spatial scale was thus verified, in accordance with the results obtained at the local scale. Finally, the analysis of land use maps over the years against the LST and/or the maximum temperatures measured shows that the development of the industrialized area produces a corresponding increase in temperatures and consequently a growth of the UHI. Keywords: climate variability, land surface temperature, LANDSAT images, urban heat island
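The LST workflow mentioned above starts from at-sensor brightness temperature derived from the thermal band; the sketch below shows that first step for Landsat 8 band 10, using the usual metadata constants. The full mono-window algorithm additionally needs surface emissivity, atmospheric transmittance and effective mean atmospheric temperature, which are omitted here, and the DN values are made up.

```python
# First step of the LST workflow: Landsat 8 band-10 digital numbers to at-sensor
# brightness temperature. ML, AL, K1, K2 are the usual band-10 metadata constants;
# the full mono-window algorithm also needs emissivity and atmospheric terms,
# omitted in this sketch.
import numpy as np

ML, AL = 3.342e-4, 0.1              # radiance rescaling factors (from MTL metadata)
K1, K2 = 774.8853, 1321.0789        # thermal conversion constants (band 10)

def brightness_temperature(dn: np.ndarray) -> np.ndarray:
    radiance = ML * dn + AL                      # TOA spectral radiance
    t_kelvin = K2 / np.log(K1 / radiance + 1.0)  # brightness temperature [K]
    return t_kelvin - 273.15                     # degrees Celsius

dn = np.array([[30000, 31500], [29000, 33000]], dtype=float)  # toy DN values
print(brightness_temperature(dn).round(1))
```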
Procedia PDF Downloads 126
67 Neologisms and Word-Formation Processes in Board Game Rulebook Corpus: Preliminary Results
Authors: Athanasios Karasimos, Vasiliki Makri
Abstract:
This research focuses on the design and development of the first text corpus based on board game rulebooks (BGRC), with direct application to the morphological analysis of neologisms and tendencies in word-formation processes. Corpus linguistics is a dynamic field that examines language through the lens of vast collections of texts. These corpora consist of diverse written and spoken materials, ranging from literature and newspapers to transcripts of everyday conversations. By morphologically analyzing these extensive datasets, morphologists can gain valuable insights into how language functions and evolves, as such datasets can reflect the byproducts of inflection, derivation, blending, clipping, compounding, and neology. This entails scrutinizing how words are created, modified, and combined to convey meaning in a corpus of challenging, creative, and straightforward texts that include rules, examples, tutorials, and tips. Board games teach players how to strategize, consider alternatives, and think flexibly, which are critical elements in language learning. Their rulebooks reflect not only their weight (complexity) but also the language properties of each genre and subgenre of these games. Board games are a captivating realm where strategy, competition, and creativity converge. Beyond the excitement of gameplay, board games also spark the art of word creation. Word games, like Scrabble, Codenames, Bananagrams, Wordcraft, Alice in the Wordland, and Once Upon a Time, challenge players to construct words from a pool of letters, thus encouraging linguistic ingenuity and vocabulary expansion. These games foster a love for language, motivating players to unearth obscure words and devise clever combinations. On the other hand, the designers and creators produce rulebooks in which they include their joy of discovering the hidden potential of language, igniting the imagination and playing with the beauty of words, making these games a delightful fusion of linguistic exploration and leisurely amusement. In this research, more than 150 rulebooks in English from all types of modern board games, either language-independent or language-dependent, are used to create the BGRC. A representative sample of each genre (family, party, worker placement, deck-building, dice and chance games, strategy, eurogames, thematic, role-playing, among others) was selected based on the score from BoardGameGeek, the size of the texts and the level of complexity (weight) of the game. A morphological model with morphological networks, multi-word expressions, and word-creation mechanics based on the complexity of the textual structure, difficulty, and board game category will be presented. In enabling the identification of patterns, trends, and variations in word formation and other morphological processes, this research aspires to make use of this creative yet strict text genre so as to (a) give invaluable insight into the morphological creativity and innovation that (re)shape the lexicon of the English language and (b) test morphological theories. Overall, it is shown that corpus linguistics empowers us to explore the intricate tapestry of language, and of morphology in particular, revealing its richness, flexibility, and adaptability in the ever-evolving landscape of human expression. Keywords: board game rulebooks, corpus design, morphological innovations, neologisms, word-formation processes
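A typical first pass for spotting neologism candidates in a corpus like the BGRC is to tokenize the rulebooks and keep word types absent from a reference lexicon. The sketch below illustrates only that generic step, not the authors' morphological model; the file and directory names are placeholders, and no lemmatization or proper-noun filtering is attempted.

```python
# Minimal first-pass extraction of neologism candidates from rulebook text:
# tokenize, then keep word types absent from a reference lexicon. File names
# are placeholders; real use would add lemmatisation and proper-noun filtering.
import re
from collections import Counter
from pathlib import Path

def tokens(text: str):
    return re.findall(r"[a-z]+(?:-[a-z]+)*", text.lower())

lexicon = set(tokens(Path("reference_wordlist.txt").read_text(encoding="utf-8")))

counts = Counter()
for rulebook in Path("bgrc_rulebooks").glob("*.txt"):
    counts.update(tokens(rulebook.read_text(encoding="utf-8")))

# Keep out-of-lexicon types seen at least 3 times, sorted by frequency
candidates = {w: c for w, c in counts.items() if w not in lexicon and c >= 3}
for word, freq in sorted(candidates.items(), key=lambda kv: -kv[1])[:20]:
    print(f"{word}\t{freq}")
```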
Procedia PDF Downloads 102
66 Black-Box-Optimization Approach for High Precision Multi-Axes Forward-Feed Design
Authors: Sebastian Kehne, Alexander Epple, Werner Herfs
Abstract:
A new method for the optimal selection of components for multi-axes forward-feed drive systems is proposed, in which the choice of motors, gear boxes and ball screw drives is optimized. Essential here is the synchronization of the electrical and mechanical frequency behavior of all axes, because even advanced controls (like H∞-controls) can only control a small part of the mechanical modes – namely only those of observable and controllable states whose value can be derived from the positions of external linear length measurement systems and/or rotary encoders on the motor or gear box shafts. Further problems are the unknown processing forces, like cutting forces in machine tools during normal operation, which make the estimation and control via an observer even more difficult. To start with, the open-source Modelica Feed Drive Library, which was developed at the Laboratory for Machine Tools and Production Engineering (WZL), is extended from a one-axis design to a multi-axes design. It is capable of simulating the mechanical, electrical and thermal behavior of permanent magnet synchronous machines with inverters, different gear boxes and ball screw drives in a mechanical system. To keep the calculation time down, analytical equations are used for the field- and torque-producing equivalent circuit, heat dissipation and mechanical torque at the shaft. As a first step, a small machine tool with a working area of 635 x 315 x 420 mm is taken apart, and the mechanical transfer behavior is measured with an impulse hammer and acceleration sensors. From the frequency transfer functions, a mechanical finite element model is built up, which is reduced by substructure coupling to a mass-damper system that models the most important modes of the axes. The model is implemented with the Modelica Feed Drive Library and validated by further relative measurements between machine table and spindle holder with a piezo actuator and acceleration sensors. In a next step, the choice of possible components in motor catalogues is limited by derived analytical formulas which are based on well-known metrics for the effective power and torque of the components. The simulation in Modelica is run with different permanent magnet synchronous motors, gear boxes and ball screw drives from different suppliers. To speed up the optimization, different black-box optimization methods (surrogate-based, gradient-based and evolutionary) are tested on the case. The objective chosen is to minimize the integral of the deviations when a step is given on the position controls of the different axes. Small values are a good measure of highly dynamic axes. In each iteration (evaluation of one set of components), the control variables are adjusted automatically to keep the overshoot below 1%. It is found that the order of the components in the optimization problem has a deep impact on the speed of the black-box optimization. An approach for efficient black-box optimization for multi-axes design is presented in the last part. The authors would like to thank the German Research Foundation DFG for financial support of the project "Optimierung des mechatronischen Entwurfs von mehrachsigen Antriebssystemen (HE 5386/14-1 | 6954/4-1)" (English: Optimization of the Mechatronic Design of Multi-Axes Drive Systems). Keywords: ball screw drive design, discrete optimization, forward feed drives, gear box design, linear drives, machine tools, motor design, multi-axes design
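The objective described above, the integral of the position deviation after a step command with the overshoot held below 1%, can be sketched as follows. The second-order plant stands in for the component-dependent axis dynamics purely for illustration (it is not the Modelica drive model), and the parameter bounds and penalty weight are assumptions; an evolutionary optimizer from SciPy plays the role of the black-box method.

```python
# Sketch of the black-box loop: the objective integrates the position deviation
# after a step and penalises overshoot above 1%, evaluated on a toy second-order
# axis model (not the Modelica drive model) and minimised with an evolutionary
# optimizer. Bounds and penalty weight are illustrative assumptions.
import numpy as np
from scipy import signal
from scipy.optimize import differential_evolution

def objective(params):
    wn, zeta = params                      # stand-ins for component-dependent dynamics
    system = signal.TransferFunction([wn ** 2], [1.0, 2.0 * zeta * wn, wn ** 2])
    t, y = signal.step(system, T=np.linspace(0.0, 1.0, 2000))
    iae = np.trapz(np.abs(1.0 - y), t)     # integral of the absolute deviation
    overshoot = max(y.max() - 1.0, 0.0)
    penalty = 1e3 * max(overshoot - 0.01, 0.0)   # enforce < 1% overshoot
    return iae + penalty

bounds = [(20.0, 400.0), (0.2, 1.5)]       # assumed ranges for the toy model
result = differential_evolution(objective, bounds, seed=1, tol=1e-6)
print("best natural frequency / damping:", result.x, "objective:", result.fun)
```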
Procedia PDF Downloads 287
65 Artificial Intelligence Based Method in Identifying Tumour Infiltrating Lymphocytes of Triple Negative Breast Cancer
Authors: Nurkhairul Bariyah Baharun, Afzan Adam, Reena Rahayu Md Zin
Abstract:
Tumor microenvironment (TME) in breast cancer is mainly composed of cancer cells, immune cells, and stromal cells. The interaction between cancer cells and their microenvironment plays an important role in tumor development, progression, and treatment response. The TME in breast cancer includes tumor-infiltrating lymphocytes (TILs) that are implicated in killing tumor cells. TILs can be found in tumor stroma (sTILs) and within the tumor (iTILs). TILs in triple negative breast cancer (TNBC) have been demonstrated to have prognostic and potentially predictive value. The International Immuno-Oncology Biomarker Working Group (TIL-WG) has developed a guideline focusing on the assessment of sTILs using hematoxylin and eosin (H&E)-stained slides. According to the guideline, pathologists use an “eyeballing” method on the H&E-stained slide for sTILs assessment. This method has low precision, poor interobserver reproducibility, and is time-consuming for a comprehensive evaluation, and it counts only sTILs. The TIL-WG has therefore recommended that any algorithm for the computational assessment of TILs utilize the guidelines provided, in order to overcome the limitations of manual assessment and thus provide highly accurate and reliable TIL detection and classification for reproducible and quantitative measurement. This study was carried out to develop a TNBC digital whole slide image (WSI) dataset from H&E-stained slides and IHC (CD4+ and CD8+) stained slides. TNBC cases were retrieved from the database of the Department of Pathology, Hospital Canselor Tuanku Muhriz (HCTM). TNBC cases diagnosed between 2010 and 2021 with no history of other cancers and with tissue blocks available were included in the study (n=58). Tissue blocks were sectioned at approximately 4 µm for H&E and IHC staining. The H&E staining was performed according to a well-established protocol. Indirect IHC staining was also performed on the tissue sections using the protocol from the Diagnostic BioSystems PolyVue™ Plus Kit, USA. The slides were stained with a rabbit monoclonal CD8 antibody (SP16) and a rabbit monoclonal CD4 antibody (EP204). The selected and quality-checked slides were then scanned using a high-resolution whole slide scanner (Pannoramic DESK II DW slide scanner) to digitize the tissue images at 20x magnification. Manual TIL (sTILs and iTILs) assessment was then carried out on the digital WSIs by two appointed pathologists following the guideline developed by the TIL-WG in 2014, and the results were displayed as the percentage of sTILs and iTILs per mm² of stromal and tumour area on the tissue. Following this, we aim to develop an automated digital image scoring framework that incorporates key elements of the manual guidelines (including both sTILs and iTILs), using manually annotated data, for robust and objective quantification of TILs in TNBC. From the study, we have developed a digital dataset of TNBC H&E and IHC (CD4+ and CD8+) stained slides. We hope that an automated scoring method can provide quantitative and interpretable TIL scoring, which correlates with the manual pathologist-derived sTILs and iTILs scoring and thus has potential prognostic implications. Keywords: automated quantification, digital pathology, triple negative breast cancer, tumour infiltrating lymphocytes
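As one possible starting point for the automated scoring framework this abstract aims at, the sketch below fine-tunes a pretrained CNN to label WSI tiles as lymphocyte-rich or not; the network choice, tile folder layout, and class names are illustrative assumptions and not the authors' pipeline.

```python
# Hypothetical sketch: fine-tune a pretrained CNN on annotated WSI tiles
# (TIL-rich vs. other); tile extraction and folder names are assumptions.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

tf = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
# Folder layout assumed: tiles/train/{til,non_til}/*.png
train_ds = datasets.ImageFolder("tiles/train", transform=tf)
loader = torch.utils.data.DataLoader(train_ds, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)   # 2 classes: TIL-rich / other
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):                          # short fine-tuning run
    for x, y in loader:
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
```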
Procedia PDF Downloads 118
64 Effects of Transcutaneous Electrical Pelvic Floor Muscle Stimulation on Peri-Vulva Area on Stress Urinary Incontinence: A Preliminary Study
Authors: Kim Ji-Hyun, Jeon Hye-Seon, Kwon Oh-Yun, Park Eun-Young, Hwang Ui-Jae, Gwak Gyeong-Tae, Yoon Hyeo-Bin
Abstract:
Stress urinary incontinence (SUI), a common women's health problem, is an involuntary leakage of urine during sneezing, coughing, or physical exertion, caused by insufficient strength of the pelvic floor and sphincter muscles. SUI leads to a decrease in quality of life and limits sexual activity. SUI is related to an increased bladder neck angle, bladder neck movement, funneling index, and urethral width, and to a decreased urethral length. Various pelvic floor muscle electrical stimulation (ES) interventions have been applied to improve the symptoms of people with SUI. ES activates afferent fibers of the pudendal nerve and smoothly induces contractions of the pelvic floor muscles, such as the striated periurethral muscles and striated pelvic floor muscles. ES via intravaginal electrodes is the most frequently used type of pelvic floor muscle ES for female SUI. However, an inserted electrode is uncomfortable and increases the risk of infection. The purpose of this preliminary study was to determine whether 8 weeks of transcutaneous pelvic floor ES would be effective in improving the symptoms and satisfaction of females with SUI. Easy-K, an ES device specially designed for people with SUI, was used in this study. The oval-shaped stimulator can be placed on a toilet seat, and its surface has embedded electrodes fitted to contact the entire vulva area while users sit on the stimulator. Five women with SUI were included in this experiment. Prior to participation, subjects were instructed about procedures and precautions in using the ES. They used the stimulator at home once a day for 20 minutes per session. Outcome data were collected three times: at baseline, and at 4 and 8 weeks after the start of the intervention. Intravaginal sonography was used to measure the bladder neck angle, bladder neck movement, funneling index, thickness of the anterior and posterior rhabdosphincter, urethral length, and urethral width. Levator ani muscle (LAM) contraction strength was assessed by manual palpation according to the Oxford scoring system. In addition, the incontinence quality of life (IQOL) and female sexual function index (FSFI) questionnaires were used to obtain additional subjective information. The Friedman test, a nonparametric statistical test, was used to determine the effectiveness of the ES. The Wilcoxon test was used for the post-hoc analysis, and the significance level was set at .05. The bladder neck angle, funneling index, and urethral width were significantly decreased after 8 weeks of intervention (p<.05). The LAM contraction score, urethral length, and anterior and posterior rhabdosphincter thickness were significantly increased by the intervention (p<.05). However, no significant change was found in bladder neck movement. Although the total IQOL score did not improve, the score of the ‘avoidance’ subscale of the IQOL improved significantly (p<.05). Statistically significant differences were found in the FSFI total score and the ‘desire’ subscale (p<.05). In conclusion, 8 weeks of transcutaneous ES on the peri-vulva area improved the dynamic mechanical structures of the pelvic floor musculature as well as incontinence-related quality of life and sexual function. Keywords: electrical stimulation, pelvic floor muscle, sonography, stress urinary incontinence, women's health
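For readers who want to reproduce the statistical comparison across the three time points, a minimal Python/SciPy sketch is shown below; the measurement values are invented placeholders, not the study data.

```python
# Minimal sketch of the Friedman test with Wilcoxon post-hoc comparisons,
# using made-up bladder neck angle values for 5 subjects at 3 time points.
import numpy as np
from scipy import stats

baseline = np.array([28.0, 31.5, 26.2, 33.0, 29.4])
week4    = np.array([25.1, 29.0, 24.8, 30.2, 27.5])
week8    = np.array([22.3, 26.4, 22.1, 27.9, 25.0])

stat, p = stats.friedmanchisquare(baseline, week4, week8)
print(f"Friedman chi-square = {stat:.2f}, p = {p:.3f}")

if p < 0.05:  # post-hoc pairwise Wilcoxon signed-rank tests
    for name, later in [("4 weeks", week4), ("8 weeks", week8)]:
        w, pw = stats.wilcoxon(baseline, later)
        print(f"baseline vs {name}: W = {w:.1f}, p = {pw:.3f}")
```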
Procedia PDF Downloads 151
63 Bridging Minds and Nature: Revolutionizing Elementary Environmental Education Through Artificial Intelligence
Authors: Hoora Beheshti Haradasht, Abooali Golzary
Abstract:
Environmental education plays a pivotal role in shaping the future stewards of our planet. Leveraging the power of artificial intelligence (AI) in this endeavor presents an innovative approach to captivate and educate elementary school children about environmental sustainability. This paper explores the application of AI technologies in designing interactive and personalized learning experiences that foster curiosity, critical thinking, and a deep connection to nature. By harnessing AI-driven tools, virtual simulations, and personalized content delivery, educators can create engaging platforms that empower children to comprehend complex environmental concepts while nurturing a lifelong commitment to protecting the Earth. With the pressing challenges of climate change and biodiversity loss, cultivating an environmentally conscious generation is imperative. Integrating AI in environmental education revolutionizes traditional teaching methods by tailoring content, adapting to individual learning styles, and immersing students in interactive scenarios. This paper delves into the potential of AI technologies to enhance engagement, comprehension, and pro-environmental behaviors among elementary school children. Modern AI technologies, including natural language processing, machine learning, and virtual reality, offer unique tools to craft immersive learning experiences. Adaptive platforms can analyze individual learning patterns and preferences, enabling real-time adjustments in content delivery. Virtual simulations, powered by AI, transport students into dynamic ecosystems, fostering experiential learning that goes beyond textbooks. AI-driven educational platforms provide tailored content, ensuring that environmental lessons resonate with each child's interests and cognitive level. By recognizing patterns in students' interactions, AI algorithms curate customized learning pathways, enhancing comprehension and knowledge retention. Utilizing AI, educators can develop virtual field trips and interactive nature explorations. Children can navigate virtual ecosystems, analyze real-time data, and make informed decisions, cultivating an understanding of the delicate balance between human actions and the environment. While AI offers promising educational opportunities, ethical concerns must be addressed. Safeguarding children's data privacy, ensuring content accuracy, and avoiding biases in AI algorithms are paramount to building a trustworthy learning environment. By merging AI with environmental education, educators can empower children not only with knowledge but also with the tools to become advocates for sustainable practices. As children engage in AI-enhanced learning, they develop a sense of agency and responsibility to address environmental challenges. The application of artificial intelligence in elementary environmental education presents a groundbreaking avenue to cultivate environmentally conscious citizens. By embracing AI-driven tools, educators can create transformative learning experiences that empower children to grasp intricate ecological concepts, forge an intimate connection with nature, and develop a strong commitment to safeguarding our planet for generations to come.Keywords: artificial intelligence, environmental education, elementary children, personalized learning, sustainability
Procedia PDF Downloads 84
62 Small Scale Mobile Robot Auto-Parking Using Deep Learning, Image Processing, and Kinematics-Based Target Prediction
Authors: Mingxin Li, Liya Ni
Abstract:
Autonomous parking is a valuable feature applicable to many robotics applications such as tour guide robots, UV sanitizing robots, food delivery robots, and warehouse robots. With auto-parking, the robot is able to park at the charging zone and charge itself without human intervention. Compared to self-driving vehicles, auto-parking is more challenging for a small-scale mobile robot equipped only with a front camera, because the camera view is limited by the robot’s height and the narrow field of view (FOV) of the inexpensive camera. In this research, auto-parking of a small-scale mobile robot with only a front camera was achieved in a four-step process. Firstly, transfer learning was performed on AlexNet, a popular pre-trained convolutional neural network (CNN). It was trained with 150 pictures of empty parking slots and 150 pictures of occupied parking slots taken from the view angle of a small-scale robot. The dataset of images was divided into 70% for training and the remaining 30% for validation. An average success rate of 95% was achieved. Secondly, the image of the detected empty parking space was processed with edge detection, followed by the computation of parametric representations of the boundary lines using the Hough Transform algorithm. Thirdly, the positions of the entrance point and center of the available parking space were predicted based on the robot kinematic model as the robot drove closer to the parking space, because the boundary lines disappeared partially or completely from its camera view due to the height and FOV limitations. The robot used its wheel speeds to compute the position of the parking space with respect to its changing local frame as it moved along, based on its kinematic model. Lastly, the predicted entrance point of the parking space was used as the reference for the motion control of the robot until it was replaced by the actual center once that became visible to the robot again. The linear and angular velocities of the robot chassis center were computed based on the error between the current chassis center and the reference point. Then the left and right wheel speeds were obtained using inverse kinematics and sent to the motor driver. The above-mentioned four subtasks were all successfully accomplished, with the transfer learning, image processing, and target prediction performed in MATLAB, while the motion control and image capture were carried out on a self-built small-scale differential drive mobile robot. The small-scale robot employs a Raspberry Pi board, a Pi camera, an L298N dual H-bridge motor driver, a USB power module, a power bank, four wheels, and a chassis. Future research includes three areas: the integration of all four subsystems into one hardware/software platform with an upgrade to an Nvidia Jetson Nano board that provides superior performance for deep learning and image processing; more testing and validation of the identification of the available parking space and its boundary lines; and improvement of performance after the hardware/software integration is completed. Keywords: autonomous parking, convolutional neural network, image processing, kinematics-based prediction, transfer learning
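The wheel-speed computation mentioned above follows the standard differential-drive relation; the study's implementation was in MATLAB, so the Python sketch below only illustrates that relation with assumed wheel radius, track width, and controller gains.

```python
# Minimal differential-drive inverse kinematics sketch: chassis velocity
# commands -> wheel angular speeds. Geometry values are assumptions.
import math

WHEEL_RADIUS = 0.03   # m (hypothetical)
TRACK_WIDTH  = 0.14   # m, distance between the two wheels (hypothetical)

def wheel_speeds(v, omega):
    """v: forward speed of chassis center [m/s], omega: yaw rate [rad/s]."""
    v_right = v + omega * TRACK_WIDTH / 2.0
    v_left  = v - omega * TRACK_WIDTH / 2.0
    return v_left / WHEEL_RADIUS, v_right / WHEEL_RADIUS  # rad/s per wheel

def chassis_command(err_dist, err_heading, k_v=0.8, k_w=2.0):
    """Simple proportional tracking of the parking-space reference point."""
    return k_v * err_dist, k_w * err_heading

v, w = chassis_command(err_dist=0.5, err_heading=math.radians(10))
print("wheel speeds (rad/s):", wheel_speeds(v, w))
```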
Procedia PDF Downloads 132
61 An Elasto-Viscoplastic Constitutive Model for Unsaturated Soils: Numerical Implementation and Validation
Authors: Maria Lazari, Lorenzo Sanavia
Abstract:
Mechanics of unsaturated soils has been an active field of research in the last decades. Efficient constitutive models that take into account the partial saturation of soil are necessary to solve a number of engineering problems e.g. instability of slopes and cuts due to heavy rainfalls. A large number of constitutive models can now be found in the literature that considers fundamental issues associated with the unsaturated soil behaviour, like the volume change and shear strength behaviour with suction or saturation changes. Partially saturated soils may either expand or collapse upon wetting depending on the stress level, and it is also possible that a soil might experience a reversal in the volumetric behaviour during wetting. Shear strength of soils also changes dramatically with changes in the degree of saturation, and a related engineering problem is slope failures caused by rainfall. There are several states of the art reviews over the last years for studying the topic, usually providing a thorough discussion of the stress state, the advantages, and disadvantages of specific constitutive models as well as the latest developments in the area of unsaturated soil modelling. However, only a few studies focused on the coupling between partial saturation states and time effects on the behaviour of geomaterials. Rate dependency is experimentally observed in the mechanical response of granular materials, and a viscoplastic constitutive model is capable of reproducing creep and relaxation processes. Therefore, in this work an elasto-viscoplastic constitutive model for unsaturated soils is proposed and validated on the basis of experimental data. The model constitutes an extension of an existing elastoplastic strain-hardening constitutive model capable of capturing the behaviour of variably saturated soils, based on energy conjugated stress variables in the framework of superposed continua. The purpose was to develop a model able to deal with possible mechanical instabilities within a consistent energy framework. The model shares the same conceptual structure of the elastoplastic laws proposed to deal with bonded geomaterials subject to weathering or diagenesis and is capable of modelling several kinds of instabilities induced by the loss of hydraulic bonding contributions. The novelty of the proposed formulation is enhanced with the incorporation of density dependent stiffness and hardening coefficients in order to allow the modeling of the pycnotropy behaviour of granular materials with a single set of material constants. The model has been implemented in the commercial FE platform PLAXIS, widely used in Europe for advanced geotechnical design. The algorithmic strategies adopted for the stress-point algorithm had to be revised to take into account the different approach adopted by PLAXIS developers in the solution of the discrete non-linear equilibrium equations. An extensive comparison between models with a series of experimental data reported by different authors is presented to validate the model and illustrate the capability of the newly developed model. After the validation, the effectiveness of the viscoplastic model is displayed by numerical simulations of a partially saturated slope failure of the laboratory scale and the effect of viscosity and degree of saturation on slope’s stability is discussed.Keywords: PLAXIS software, slope, unsaturated soils, Viscoplasticity
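Rate dependency of the kind the proposed model targets is often introduced through a Perzyna-type overstress law; the one-dimensional sketch below uses a generic textbook formulation with invented parameters and is not the constitutive model implemented in PLAXIS.

```python
# Generic 1D Perzyna viscoplasticity sketch (illustrative parameters only):
# d(eps_vp)/dt = (1/eta) * <(sigma - sigma_y)/sigma_y>^N
import numpy as np

E, sigma_y = 50e6, 100e3          # elastic modulus [Pa], yield stress [Pa]
eta, N     = 1.0e4, 1.0           # viscosity [s], rate exponent
dt, steps  = 1.0e-2, 5000
strain_rate = 1.0e-4              # applied total strain rate [1/s]

eps_vp, sigma = 0.0, 0.0
for i in range(steps):
    eps_total = (i + 1) * dt * strain_rate
    sigma = E * (eps_total - eps_vp)            # elastic trial stress
    over = max(sigma - sigma_y, 0.0) / sigma_y  # Macaulay bracket, normalized
    eps_vp += dt * (over ** N) / eta            # viscoplastic flow
print(f"stress = {sigma/1e3:.1f} kPa, viscoplastic strain = {eps_vp:.2e}")
```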
Procedia PDF Downloads 225
60 Structured Cross System Planning and Control in Modular Production Systems by Using Agent-Based Control Loops
Authors: Simon Komesker, Achim Wagner, Martin Ruskowski
Abstract:
In times of volatile markets with fluctuating demand and the uncertainty of global supply chains, flexible production systems are the key to an efficient implementation of a desired production program. In this publication, the authors present a holistic information concept taking into account various influencing factors for operating towards the global optimum. Therefore, a strategy for the implementation of multi-level planning for a flexible, reconfigurable production system with an alternative production concept in the automotive industry is developed. The main contribution of this work is a system structure mixing central and decentral planning and control evaluated in a simulation framework. The information system structure in current production systems in the automotive industry is rigidly hierarchically organized in monolithic systems. The production program is created rule-based with the premise of achieving uniform cycle time. This program then provides the information basis for execution in subsystems at the station and process execution level. In today's era of mixed-(car-)model factories, complex conditions and conflicts arise in achieving logistics, quality, and production goals. There is no provision for feedback loops of results from the process execution level (resources) and process supporting (quality and logistics) systems and reconsideration in the planning systems. To enable a robust production flow, the complexity of production system control is artificially reduced by the line structure and results, for example in material-intensive processes (buffers and safety stocks - two container principle also for different variants). The limited degrees of freedom of line production have produced the principle of progress figure control, which results in one-time sequencing, sequential order release, and relatively inflexible capacity control. As a result, modularly structured production systems such as modular production according to known approaches with more degrees of freedom are currently difficult to represent in terms of information technology. The remedy is an information concept that supports cross-system and cross-level information processing for centralized and decentralized decision-making. Through an architecture of hierarchically organized but decoupled subsystems, the paradigm of hybrid control is used, and a holonic manufacturing system is offered, which enables flexible information provisioning and processing support. In this way, the influences from quality, logistics, and production processes can be linked holistically with the advantages of mixed centralized and decentralized planning and control. Modular production systems also require modularly networked information systems with semi-autonomous optimization for a robust production flow. Dynamic prioritization of different key figures between subsystems should lead the production system to an overall optimum. The tasks and goals of quality, logistics, process, resource, and product areas in a cyber-physical production system are designed as an interconnected multi-agent-system. The result is an alternative system structure that executes centralized process planning and decentralized processing. An agent-based manufacturing control is used to enable different flexibility and reconfigurability states and manufacturing strategies in order to find optimal partial solutions of subsystems, that lead to a near global optimum for hybrid planning. 
This allows robust, near-to-plan execution with integrated quality control and intralogistics. Keywords: holonic manufacturing system, modular production system, planning and control, system structure
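To make the mix of central release and decentralized decision-making concrete, the toy Python sketch below lets station agents bid for centrally released orders based on their local workload; the bidding rule and figures are illustrative assumptions, not the authors' simulation framework.

```python
# Toy sketch of decentralized dispatching under a centrally released order list:
# station agents bid with their current workload; the lowest bid wins the job.
from dataclasses import dataclass, field

@dataclass
class StationAgent:
    name: str
    queue: list = field(default_factory=list)

    def bid(self, job):
        # Local decision rule (assumed): bid = remaining work in own queue.
        return sum(j["work"] for j in self.queue)

    def assign(self, job):
        self.queue.append(job)

stations = [StationAgent("station_A"), StationAgent("station_B"), StationAgent("station_C")]
released_orders = [{"id": i, "work": w} for i, w in enumerate([3, 5, 2, 4, 1])]

for job in released_orders:                           # central planning releases orders...
    winner = min(stations, key=lambda s: s.bid(job))  # ...agents decide decentrally
    winner.assign(job)

for s in stations:
    print(s.name, [j["id"] for j in s.queue])
```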
Procedia PDF Downloads 169
59 Mapping of Urban Micro-Climate in Lyon (France) by Integrating Complementary Predictors at Different Scales into Multiple Linear Regression Models
Authors: Lucille Alonso, Florent Renard
Abstract:
The characterizations of urban heat island (UHI) and their interactions with climate change and urban climates are the main research and public health issue, due to the increasing urbanization of the population. These solutions require a better knowledge of the UHI and micro-climate in urban areas, by combining measurements and modelling. This study is part of this topic by evaluating microclimatic conditions in dense urban areas in the Lyon Metropolitan Area (France) using a combination of data traditionally used such as topography, but also from LiDAR (Light Detection And Ranging) data, Landsat 8 satellite observation and Sentinel and ground measurements by bike. These bicycle-dependent weather data collections are used to build the database of the variable to be modelled, the air temperature, over Lyon’s hyper-center. This study aims to model the air temperature, measured during 6 mobile campaigns in Lyon in clear weather, using multiple linear regressions based on 33 explanatory variables. They are of various categories such as meteorological parameters from remote sensing, topographic variables, vegetation indices, the presence of water, humidity, bare soil, buildings, radiation, urban morphology or proximity and density to various land uses (water surfaces, vegetation, bare soil, etc.). The acquisition sources are multiple and come from the Landsat 8 and Sentinel satellites, LiDAR points, and cartographic products downloaded from an open data platform in Greater Lyon. Regarding the presence of low, medium, and high vegetation, the presence of buildings and ground, several buffers close to these factors were tested (5, 10, 20, 25, 50, 100, 200 and 500m). The buffers with the best linear correlations with air temperature for ground are 5m around the measurement points, for low and medium vegetation, and for building 50m and for high vegetation is 100m. The explanatory model of the dependent variable is obtained by multiple linear regression of the remaining explanatory variables (Pearson correlation matrix with a |r| < 0.7 and VIF with < 5) by integrating a stepwise sorting algorithm. Moreover, holdout cross-validation is performed, due to its ability to detect over-fitting of multiple regression, although multiple regression provides internal validation and randomization (80% training, 20% testing). Multiple linear regression explained, on average, 72% of the variance for the study days, with an average RMSE of only 0.20°C. The impact on the model of surface temperature in the estimation of air temperature is the most important variable. Other variables are recurrent such as distance to subway stations, distance to water areas, NDVI, digital elevation model, sky view factor, average vegetation density, or building density. Changing urban morphology influences the city's thermal patterns. The thermal atmosphere in dense urban areas can only be analysed on a microscale to be able to consider the local impact of trees, streets, and buildings. There is currently no network of fixed weather stations sufficiently deployed in central Lyon and most major urban areas. Therefore, it is necessary to use mobile measurements, followed by modelling to characterize the city's multiple thermal environments.Keywords: air temperature, LIDAR, multiple linear regression, surface temperature, urban heat island
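The modelling step described above, multiple linear regression with an 80/20 holdout split evaluated by R² and RMSE, can be sketched as follows; the few predictors shown stand in for the study's 33 explanatory variables and the data are synthetic.

```python
# Minimal sketch of the air-temperature regression with an 80/20 holdout split;
# the predictors stand in for the study's explanatory variables (synthetic data).
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "surface_temperature": rng.normal(30, 4, n),
    "ndvi": rng.uniform(0, 0.8, n),
    "sky_view_factor": rng.uniform(0.2, 1.0, n),
    "building_density": rng.uniform(0, 1.0, n),
})
# Synthetic target loosely mimicking the expected signs of the predictors.
df["air_temperature"] = (15 + 0.4 * df["surface_temperature"] - 3 * df["ndvi"]
                         + 2 * df["building_density"] + rng.normal(0, 0.3, n))

X, y = df.drop(columns="air_temperature"), df["air_temperature"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = LinearRegression().fit(X_tr, y_tr)
pred = model.predict(X_te)
print(f"R2 = {r2_score(y_te, pred):.2f}, RMSE = {np.sqrt(mean_squared_error(y_te, pred)):.2f} °C")
```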
Procedia PDF Downloads 138
58 A Computer-Aided System for Tooth Shade Matching
Authors: Zuhal Kurt, Meral Kurt, Bilge T. Bal, Kemal Ozkan
Abstract:
Shade matching and reproduction is the most important element of success in prosthetic dentistry. Until recently, the shade matching procedure was implemented through the dentist's visual perception with the help of shade guides. Since many factors influence visual perception, tooth shade matching using visual devices (shade guides) is highly subjective and inconsistent. The subjective nature of this process has led to the development of instrumental devices. Nowadays, colorimeters, spectrophotometers, spectroradiometers, and digital image analysing systems are used for instrumental shade selection. Instrumental devices have the advantage that readings are quantifiable and can be obtained more rapidly, simply, objectively, and precisely. However, these devices have noticeable drawbacks. For example, the translucent structure and irregular surfaces of teeth lead to measurement defects with these devices. In addition, results acquired by devices with different measurement principles may be inconsistent with one another. It is therefore necessary to search for new methods for the dental shade matching process. The digital camera, a computer-aided device, has developed rapidly. Currently, advances in image processing and computing have resulted in the extensive use of digital cameras for color imaging. This procedure is much cheaper than the use of traditional contact-type color measurement devices. Digital cameras can take the place of contact-type instruments for shade selection and overcome their disadvantages. Images taken of teeth show the morphology and color texture of the teeth. In recent decades, a method was proposed to compare the color of shade tabs captured by a digital camera using color features. This work showed that visual and computer-aided shade matching systems should be used in combination. Recent feature extraction techniques are based on shape description and do not use color information. However, color is mostly experienced as an essential property in depicting and extracting features from objects in the world around us. When local feature descriptors are extended by concatenating a color descriptor with the shape descriptor, the resulting descriptor is effective for visual object recognition and classification tasks. Because the color descriptor is used in combination with a shape descriptor, it does not need to contain any spatial information, which leads us to use local histograms. This local color histogram method remains reliable under photometric changes, geometrical changes, and variations in image quality. Therefore, color-based local feature extraction methods are used to extract features, and the Scale Invariant Feature Transform (SIFT) descriptor is used for shape description in the proposed method. After the combination of these descriptors, the state-of-the-art descriptor known as Color-SIFT is used in this study. Finally, the image feature vectors obtained from a quantization algorithm are fed to classifiers such as k-Nearest Neighbor (KNN), Naive Bayes, or Support Vector Machines (SVM) to determine the label(s) of the visual object category or matching. In this study, SVMs are used as classifiers for color determination and shade matching. Finally, the experimental results of this method will be compared with those of other recent studies.
It is concluded from the study that the proposed method is a remarkable development in computer-aided tooth shade determination systems. Keywords: classifiers, color determination, computer-aided system, tooth shade matching, feature extraction
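A minimal sketch of the Color-SIFT-plus-SVM pipeline described above is given below, assuming OpenCV and scikit-learn and that tooth images with shade-tab labels are available at the indicated paths; the vocabulary size and file names are illustrative, not the study's settings.

```python
# Hypothetical sketch: SIFT shape descriptors concatenated with local color
# histograms, quantized into a bag-of-words vector, then classified with an SVM.
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def color_sift_descriptors(path, hist_bins=8):
    img = cv2.imread(path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    kps, desc = cv2.SIFT_create().detectAndCompute(gray, None)
    feats = []
    for kp, d in zip(kps, desc):
        x, y, r = int(kp.pt[0]), int(kp.pt[1]), int(kp.size)
        patch = img[max(0, y - r):y + r, max(0, x - r):x + r]
        hist = cv2.calcHist([patch], [0, 1, 2], None, [hist_bins] * 3,
                            [0, 256] * 3).flatten()
        hist /= (hist.sum() + 1e-9)               # local color histogram
        feats.append(np.concatenate([d, hist]))   # "Color-SIFT"-style descriptor
    return np.array(feats, dtype=np.float32)

def bow_vector(descriptors, vocab):
    words = vocab.predict(descriptors)
    return np.bincount(words, minlength=vocab.n_clusters).astype(float)

# Assumed training data: tooth image paths with shade-tab labels (e.g. "A2").
train_paths, train_labels = ["tooth_a2.png", "tooth_b1.png"], ["A2", "B1"]
all_desc = np.vstack([color_sift_descriptors(p) for p in train_paths])
vocab = KMeans(n_clusters=50, n_init=10, random_state=0).fit(all_desc)

X = np.array([bow_vector(color_sift_descriptors(p), vocab) for p in train_paths])
clf = SVC(kernel="rbf").fit(X, train_labels)
print(clf.predict([bow_vector(color_sift_descriptors("tooth_query.png"), vocab)]))
```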
Procedia PDF Downloads 448
57 A Systemic Review and Comparison of Non-Isolated Bi-Directional Converters
Authors: Rahil Bahrami, Kaveh Ashenayi
Abstract:
This paper presents a systematic classification and comparative analysis of non-isolated bi-directional DC-DC converters. The increasing demand for efficient energy conversion in diverse applications has spurred the development of various converter topologies. In this study, we categorize bi-directional converters into three distinct classes: Inverting, Non-Inverting, and Interleaved. Each category is characterized by its unique operational characteristics and benefits. Furthermore, a practical comparison is conducted by evaluating the results of simulation of each bi-directional converter. BDCs can be classified into isolated and non-isolated topologies. Non-isolated converters share a common ground between input and output, making them suitable for applications with minimal voltage change. They are easy to integrate, lightweight, and cost-effective but have limitations like limited voltage gain, switching losses, and no protection against high voltages. Isolated converters use transformers to separate input and output, offering safety benefits, high voltage gain, and noise reduction. They are larger and more costly but are essential for automotive designs where safety is crucial. The paper focuses on non-isolated systems.The paper discusses the classification of non-isolated bidirectional converters based on several criteria. Common factors used for classification include topology, voltage conversion, control strategy, power capacity, voltage range, and application. These factors serve as a foundation for categorizing converters, although the specific scheme might vary depending on contextual, application, or system-specific requirements. The paper presents a three-category classification for non-isolated bi-directional DC-DC converters: inverting, non-inverting, and interleaved. In the inverting category, converters produce an output voltage with reversed polarity compared to the input voltage, achieved through specific circuit configurations and control strategies. This is valuable in applications such as motor control and grid-tied solar systems. The non-inverting category consists of converters maintaining the same voltage polarity, useful in scenarios like battery equalization. Lastly, the interleaved category employs parallel converter stages to enhance power delivery and reduce current ripple. This classification framework enhances comprehension and analysis of non-isolated bi-directional DC-DC converters. The findings contribute to a deeper understanding of the trade-offs and merits associated with different converter types. As a result, this work aids researchers, practitioners, and engineers in selecting appropriate bi-directional converter solutions for specific energy conversion requirements. The proposed classification framework and experimental assessment collectively enhance the comprehension of non-isolated bi-directional DC-DC converters, fostering advancements in efficient power management and utilization.The simulation process involves the utilization of PSIM to model and simulate non-isolated bi-directional converter from both inverted and non-inverted category. The aim is to conduct a comprehensive comparative analysis of these converters, considering key performance indicators such as rise time, efficiency, ripple factor, and maximum error. This systematic evaluation provides valuable insights into the dynamic response, energy efficiency, output stability, and overall precision of the converters. 
The results of this comparison facilitate informed decision-making and potential optimizations, ensuring that the chosen converter configuration aligns effectively with the designated operational criteria and performance goals. Keywords: bi-directional, DC-DC converter, non-isolated, energy conversion
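As a small worked example of the difference between the inverting and non-inverting classes compared above, the sketch below evaluates the ideal steady-state voltage gain and inductor current ripple of a buck-boost-type bi-directional stage in continuous conduction mode; the component values are arbitrary illustrations, not those of the simulated converters.

```python
# Illustrative ideal (lossless, CCM) relations for bi-directional DC-DC stages:
# inverting buck-boost gain = -D/(1-D); non-inverting cascaded buck-boost = D/(1-D).
V_IN = 48.0        # input voltage [V] (assumed)
D    = 0.4         # duty cycle
L    = 100e-6      # inductance [H] (assumed)
F_SW = 100e3       # switching frequency [Hz] (assumed)

v_out_inverting     = -D / (1.0 - D) * V_IN
v_out_non_inverting =  D / (1.0 - D) * V_IN
ripple_inductor_current = V_IN * D / (L * F_SW)   # peak-to-peak ripple [A]

print(f"inverting output:     {v_out_inverting:+.1f} V")
print(f"non-inverting output: {v_out_non_inverting:+.1f} V")
print(f"inductor current ripple: {ripple_inductor_current:.2f} A pk-pk")
```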
Procedia PDF Downloads 101
56 Assessing Organizational Resilience Capacity to Flooding: Index Development and Application to Greek Small & Medium-Sized Enterprises
Authors: Antonis Skouloudis, Konstantinos Evangelinos, Walter Leal-Filho, Panagiotis Vouros, Ioannis Nikolaou
Abstract:
Organizational resilience capacity to extreme weather events (EWEs) has sparked a growth in scholarly attention over the past decade as an essential aspect in business continuity management, with supporting evidence for this claim to suggest that it retains a key role in successful responses to adverse situations, crises and shocks. Small and medium-sized enterprises (SMEs) are more vulnerable to face floods compared to their larger counterparts, so they are disproportionately affected by such extreme weather events. The limited resources at their disposal, the lack of time and skills all conduce to inadequate preparedness to challenges posed by floods. SMEs tend to plan in the short-term, reacting to circumstances as they arise and focussing on their very survival. Likewise, they share less formalised structures and codified policies while they are most usually owner-managed, resulting in a command-and-control management culture. Such characteristics result in them having limited opportunities to recover from flooding and quickly turnaround their operation from a loss making to a profit making one. Scholars frame the capacity of business entities to be resilient upon an EWE disturbance (such as flash floods) as the rate of recovery and restoration of organizational performance to pre-disturbance conditions, the amount of disturbance (i.e. threshold level) a business can absorb before losing structural and/or functional components that will alter or cease operation, as well as the extent to which the organization maintains its function (i.e. impact resistance) before performance levels are driven to zero. Nevertheless, while it seems to be accepted as an essential trait of firms effectively transcending uncertain conditions, research deconstructing the enabling conditions and/or inhibitory factors of SMEs resilience capacity to natural hazards is still sparse, fragmentary and mostly fuelled by anecdotal evidence or normative assumptions. Focusing on the individual level of analysis, i.e. the individual enterprise and its endeavours to succeed, the emergent picture from this relatively new research strand delineates the specification of variables, conceptual relationships or dynamic boundaries of resilience capacity components in an attempt to provide prescriptions for policy-making as well as business management. This study will present the development of a flood resilience capacity index (FRCI) and its application to Greek SMEs. The proposed composite indicator pertains to cognitive, behavioral/managerial and contextual factors that influence an enterprise’s ability to shape effective responses to meet flood challenges. Through the proposed indicator-based approach, an analytical framework is set forth that will help standardize such assessments with the overarching aim of reducing the vulnerability of SMEs to flooding. This will be achieved by identifying major internal and external attributes explaining resilience capacity which is particularly important given the limited resources these enterprises have and that they tend to be primary sources of vulnerabilities in supply chain networks, generating Single Points of Failure (SPOF).Keywords: Floods, Small & Medium-Sized enterprises, organizational resilience capacity, index development
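A composite indicator such as the proposed FRCI is typically built by normalizing the individual indicators and aggregating them with weights; the sketch below shows this generic min-max/weighted-sum construction with invented indicators, weights, and firms, not the authors' actual index specification.

```python
# Generic composite-index sketch: min-max normalization + weighted aggregation.
# Indicator names and weights are invented placeholders, not the FRCI itself.
import pandas as pd

firms = pd.DataFrame({
    "flood_plan_in_place":  [1, 0, 1, 0],
    "insurance_coverage":   [0.8, 0.2, 0.5, 0.0],
    "cash_reserve_months":  [6, 1, 3, 0],
    "owner_risk_awareness": [4, 2, 5, 1],   # e.g. 1-5 Likert scale
}, index=["SME_1", "SME_2", "SME_3", "SME_4"])

weights = {"flood_plan_in_place": 0.3, "insurance_coverage": 0.3,
           "cash_reserve_months": 0.2, "owner_risk_awareness": 0.2}

normalized = (firms - firms.min()) / (firms.max() - firms.min())   # min-max to [0, 1]
frci = sum(normalized[col] * w for col, w in weights.items())      # weighted sum
print(frci.sort_values(ascending=False))
```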
Procedia PDF Downloads 192
55 Developing a Machine Learning-based Cost Prediction Model for Construction Projects using Particle Swarm Optimization
Authors: Soheila Sadeghi
Abstract:
Accurate cost prediction is essential for effective project management and decision-making in the construction industry. This study aims to develop a cost prediction model for construction projects using Machine Learning techniques and Particle Swarm Optimization (PSO). The research utilizes a comprehensive dataset containing project cost estimates, actual costs, resource details, and project performance metrics from a road reconstruction project. The methodology involves data preprocessing, feature selection, and the development of an Artificial Neural Network (ANN) model optimized using PSO. The study investigates the impact of various input features, including cost estimates, resource allocation, and project progress, on the accuracy of cost predictions. The performance of the optimized ANN model is evaluated using metrics such as Mean Squared Error (MSE), Root Mean Squared Error (RMSE), Mean Absolute Error (MAE), and R-squared. The results demonstrate the effectiveness of the proposed approach in predicting project costs, outperforming traditional benchmark models. The feature selection process identifies the most influential variables contributing to cost variations, providing valuable insights for project managers. However, this study has several limitations. Firstly, the model's performance may be influenced by the quality and quantity of the dataset used. A larger and more diverse dataset covering different types of construction projects would enhance the model's generalizability. Secondly, the study focuses on a specific optimization technique (PSO) and a single Machine Learning algorithm (ANN). Exploring other optimization methods and comparing the performance of various ML algorithms could provide a more comprehensive understanding of the cost prediction problem. Future research should focus on several key areas. Firstly, expanding the dataset to include a wider range of construction projects, such as residential buildings, commercial complexes, and infrastructure projects, would improve the model's applicability. Secondly, investigating the integration of additional data sources, such as economic indicators, weather data, and supplier information, could enhance the predictive power of the model. Thirdly, exploring the potential of ensemble learning techniques, which combine multiple ML algorithms, may further improve cost prediction accuracy. Additionally, developing user-friendly interfaces and tools to facilitate the adoption of the proposed cost prediction model in real-world construction projects would be a valuable contribution to the industry. The findings of this study have significant implications for construction project management, enabling proactive cost estimation, resource allocation, budget planning, and risk assessment, ultimately leading to improved project performance and cost control. This research contributes to the advancement of cost prediction techniques in the construction industry and highlights the potential of Machine Learning and PSO in addressing this critical challenge. However, further research is needed to address the limitations and explore the identified future research directions to fully realize the potential of ML-based cost prediction models in the construction domain.Keywords: cost prediction, construction projects, machine learning, artificial neural networks, particle swarm optimization, project management, feature selection, road reconstruction
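To illustrate how PSO can tune an ANN for cost prediction, the sketch below runs a small particle swarm over two MLP hyperparameters (hidden-layer size and L2 penalty) and scores each particle by validation RMSE; the swarm constants, synthetic data, and choice of hyperparameters are assumptions made for illustration, not the study's configuration.

```python
# Hypothetical sketch: PSO over (hidden units, log10 alpha) for an MLP cost model.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.random((200, 6))                                  # placeholder project features
y = X @ rng.random(6) * 1e5 + rng.normal(0, 5e3, 200)     # placeholder project costs
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

def rmse_of(position):
    hidden, log_alpha = int(round(position[0])), position[1]
    model = MLPRegressor(hidden_layer_sizes=(hidden,), alpha=10 ** log_alpha,
                         max_iter=2000, random_state=0).fit(X_tr, y_tr)
    return np.sqrt(mean_squared_error(y_val, model.predict(X_val)))

low, high = np.array([5.0, -5.0]), np.array([50.0, -1.0])
pos = rng.uniform(low, high, (10, 2))
vel = np.zeros_like(pos)
pbest, pbest_cost = pos.copy(), np.array([rmse_of(p) for p in pos])
gbest = pbest[pbest_cost.argmin()].copy()

for _ in range(15):                              # standard PSO update loop
    r1, r2 = rng.random((10, 2)), rng.random((10, 2))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, low, high)
    cost = np.array([rmse_of(p) for p in pos])
    improved = cost < pbest_cost
    pbest[improved], pbest_cost[improved] = pos[improved], cost[improved]
    gbest = pbest[pbest_cost.argmin()].copy()

print("best hidden units:", int(round(gbest[0])), "alpha:", 10 ** gbest[1])
```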
Procedia PDF Downloads 61
54 AI-Enhanced Self-Regulated Learning: Proposing a Comprehensive Model with 'Studium' to Meet a Student-Centric Perspective
Authors: Smita Singh
Abstract:
Objective: The Faculty of Chemistry Education at Humboldt University has developed ‘Studium’, a web application designed to enhance long-term self-regulated learning (SRL) and academic achievement. Leveraging advanced generative AI, ‘Studium’ offers a dynamic and adaptive educational experience tailored to individual learning preferences and languages. The application includes evolving tools for personalized notetaking from preferred sources, customizable presentation capabilities, and AI-assisted guidance from academic documents or textbooks. It also features workflow automation and seamless integration with collaborative platforms like Miro, powered by AI. This study aims to propose a model that combines generative AI with traditional features and customization options, empowering students to create personalized learning environments that effectively address the challenges of SRL. Method: To achieve this, the study included graduate and undergraduate students from diverse subject streams, with 15 participants each from Germany and India, ensuring a diverse educational background. An exploratory design was employed using a speed dating method with enactment, where different scenario sessions were created to allow participants to experience various features of ‘Studium’. The session lasted 50 minutes, providing an in-depth exploration of the platform's capabilities. Participants interacted with Studium’s features via Zoom conferencing and then engaged in semi-structured interviews lasting 10-15 minutes to provide deeper insights into the effectiveness of ‘Studium’. Additionally, online questionnaire surveys were conducted before and after the session to gather feedback and evaluate satisfaction with self-regulated learning (SRL) after using ‘Studium’. The response rate of this survey was 100%. Results: The findings of this study indicate that students widely acknowledged the positive impact of ‘Studium’ on their learning experience, particularly its adaptability and intuitive design. They expressed a desire for more tools like ‘Studium’ to support self-regulated learning in the future. The application significantly fostered students' independence in organizing information and planning study workflows, which in turn enhanced their confidence in mastering complex concepts. Additionally, ‘Studium’ promoted strategic decision-making and helped students overcome various learning challenges, reinforcing their self-regulation, organization, and motivation skills. Conclusion: This proposed model emphasizes the need for effective integration of personalized AI tools into active learning and SRL environments. By addressing key research questions, our framework aims to demonstrate how AI-assisted platforms like ‘Studium’ can facilitate deeper understanding, maintain student motivation, and support the achievement of academic goals. Thus, our ideal model for AI-assisted educational platforms provides a strategic approach to enhancing students' learning experiences and promoting their development as self-regulated learners. Keywords: self-regulated learning (SRL), generative AI, AI-assisted educational platforms
Procedia PDF Downloads 31
53 Enabling Rather Than Managing: Organizational and Cultural Innovation Mechanisms in a Heterarchical Organization
Authors: Sarah M. Schoellhammer, Stephen Gibb
Abstract:
Bureaucracy, in particular, its core element, a formal and stable hierarchy of authority, is proving less and less appropriate under the conditions of today’s knowledge economy. Centralization and formalization were consistently found to hinder innovation, undermining cross-functional collaboration, personal responsibility, and flexibility. With its focus on systematical planning, controlling and monitoring the development of new or improved solutions for customers, even innovation management as a discipline is to a significant extent based on a mechanistic understanding of organizations. The most important drivers of innovation, human creativity, and initiative, however, can be more hindered than supported by central elements of classic innovation management, such as predefined innovation strategies, rigid stage gate processes, and decisions made in management gate meetings. Heterarchy, as an alternative network form of organization, is essentially characterized by its dynamic influence structures, whereby the biggest influence is allocated by the collective to the persons perceived the most competent in a certain issue. Theoretical arguments that the non-hierarchical concept better supports innovation than bureaucracy have been supported by empirical research. These prior studies either focus on the structure and general functioning of non-hierarchical organizations or on their innovativeness, that means innovation as an outcome. Complementing classic innovation management approaches, this work aims to shed light on how innovations are initiated and realized in heterarchies in order to identify alternative solutions practiced under conditions of the post-bureaucratic organization. Through an initial individual case study, which is part of a multiple-case project, the innovation practices of an innovative and highly heterarchical medium-sized company in the German fire engineering industry are investigated. In a pragmatic mixed methods approach media resonance, company documents, and workspace architecture are analyzed, in addition to qualitative interviews with the CEO and employees of the case company, as well as a quantitative survey aiming to characterize the company along five scaled dimensions of a heterarchy spectrum. The analysis reveals some similarities and striking differences to approaches suggested by classic innovation management. The studied heterarchy has no predefined innovation strategy guiding new product and service development. Instead, strategic direction is provided by the CEO, described as visionary and creative. Procedures for innovation are hardly formalized, with new product ideas being evaluated on the basis of gut feeling and flexible, rather general criteria. Employees still being hesitant to take responsibility and make decisions, hierarchical influence is still prominent. Described as open-minded and collaborative, culture and leadership were found largely congruent with definitions of innovation culture. Overall, innovation efforts at the case company tend to be coordinated more through cultural than through formal organizational mechanisms. To better enable innovation in mainstream organizations, responsible practitioners are recommended not to limit changes to reducing the central elements of the bureaucratic organization, formalization, and centralization. 
The freedoms this entails need to be sustained through cultural coordination mechanisms, with personal initiative and responsibility on the part of employees as well as shared innovation-supportive norms and values. These make it possible to integrate diverse competencies, opinions, and activities and, thus, to guide innovation efforts. Keywords: bureaucracy, heterarchy, innovation management, values
Procedia PDF Downloads 188
52 Assessing Flexural Damage Mechanisms Induced by Mesoscopic Buckle Defects in Textile-Reinforced Polymer Matrix Composites Using Acoustic Emission Analysis
Authors: Christopher Okechukwu Ndukwe
Abstract:
This paper investigates and categorizes the flexural damage mechanisms in composite materials caused by mesoscopic out-of-plane buckle defects that occur during the initial stage of the resin transfer molding (RTM) process. The findings of this study have significant practical implications for the manufacturing and use of composite materials, as they provide a deeper understanding of these damage mechanisms and their analysis. During the initial stage of shaping a preform, alterations, and distortions in the reinforcement sample can significantly lead to defects, such as buckling, especially when forming double-curvature geometries. These recurring mesoscopic defects have been investigated using a specialized laboratory bench designed to reproduce buckle defects like those found in complex geometric shapes, such as tetrahedrons. The study examined two sample configurations with buckle defects in the longitudinal and transverse directions alongside a reference sample for comparison. An acoustic emission (AE) system, a well-regarded non-contact method for monitoring structural health, was used to analyze the mechanical behavior of material samples in detail. An unsupervised K-means algorithm was employed to classify the damage mechanisms—such as matrix cracking, interface damage, and fiber breakage linked to the samples' failure. A standard was established based on three AE parameters: absolute energy, amplitude, and the number of AE events. This standard helped identify the origin and sequence of damage propagation. Initially, the results of the AE parameters were superimposed with the flexural loading curves to pinpoint the loading phases during which damage began and the specific points at which the samples ultimately failed. The normalized density of AE events related to different damage mechanisms was evaluated by analyzing the number of AE events within the amplitude domain of the AE signals. The ranges of the identified damage mechanisms in the amplitude plane illustrate the progression and order of load transfer among the elements of the composite material. In the reference sample, the AE event signals corresponding to the three classes of damage mechanisms partially overlap with adjacent signals. In contrast, the two defective sample configurations showed that the overlapping AE event signals for the respective damage mechanisms converged within the intermediate damage mode area at specific points, depending on the sample configuration. The convergence points in the samples with transverse defects were identified relatively earlier than in the other samples. Low and high amplitude ranges characterize the matrix cracking and fiber breakage damage mechanisms. The low amplitude damage occurred over a more extended length, while the high amplitude damage began much earlier. This results in the signals from both damage mechanisms converging at the center of the interface damage zone. This convergence suggests that all individual composite components fail concurrently at specific points in the defective samples, resulting in rapid fragmentation and ultimately contributing to failure. Overall, the results show that mesoscopic out-of-plane buckling in all directions affects the composite's flexural response, with more severe effects observed when the load is applied transversely.Keywords: acoustic emission, composite reinforcement, damage mechanisms, mesoscopic buckle defects
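The unsupervised classification step described above can be sketched as follows: AE hits characterized by amplitude, absolute energy, and counts are grouped into three clusters and then interpreted as matrix cracking, interface damage, and fiber breakage; the synthetic data, feature scaling, and amplitude-based labelling rule are illustrative assumptions.

```python
# Illustrative sketch: K-means clustering of acoustic emission (AE) hits into
# three damage-mechanism classes; the AE feature data here are synthetic.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
# Each row is one AE event: [amplitude (dB), log10 absolute energy, counts]
ae_hits = np.vstack([
    rng.normal([45, 1.0, 10], [3, 0.3, 3], (200, 3)),   # low-amplitude band
    rng.normal([62, 2.0, 30], [4, 0.3, 8], (120, 3)),   # intermediate band
    rng.normal([82, 3.2, 60], [4, 0.3, 10], (40, 3)),   # high-amplitude band
])

X = StandardScaler().fit_transform(ae_hits)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Order clusters by mean amplitude and attach tentative damage-mechanism names.
order = np.argsort([ae_hits[labels == k, 0].mean() for k in range(3)])
names = dict(zip(order, ["matrix cracking", "interface damage", "fiber breakage"]))
for k in range(3):
    amps = ae_hits[labels == k, 0]
    print(f"{names[k]:18s} events={len(amps):3d} amplitude={amps.min():.0f}-{amps.max():.0f} dB")
```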
Procedia PDF Downloads 11