Search results for: vibration isolation performance
168 Investigation of Pu-238 Heat Source Modifications to Increase Power Output through (α,n) Reaction-Induced Fission
Authors: Alex B. Cusick
Abstract:
The objective of this study is to improve upon the current ²³⁸PuO₂ fuel technology for space and defense applications. Modern RTGs (radioisotope thermoelectric generators) utilize the heat generated by the radioactive decay of ²³⁸Pu to provide heat and electricity for long-term and remote missions. Application of RTG technology is limited by the scarcity and expense of producing the isotope, as well as by power output, which is limited to only a few hundred watts. The scarcity and expense make the efficient use of ²³⁸Pu absolutely necessary. By utilizing the decay of ²³⁸Pu not only to produce heat directly but also to indirectly induce fission in ²³⁹Pu (which is already present within currently used fuel), it is possible to achieve large increases in temperature, allowing a more efficient conversion to electricity and a higher power-to-weight ratio. This concept can reduce the quantity of ²³⁸Pu necessary for these missions, potentially saving millions in investment, while yielding higher power output. Current work investigating radioisotope power systems has focused on improving the efficiency of the thermoelectric components and on replacing systems which produce heat by virtue of natural decay with fission reactors. The technical feasibility of utilizing (α,n) reactions to induce fission within current radioisotopic fuels has not been investigated in any appreciable detail, and our study aims to thoroughly investigate the performance of many such designs, develop those with the highest capabilities, and facilitate experimental testing of these designs. In order to determine the specific design parameters that maximize power output and the efficient use of ²³⁸Pu for future RTG units, MCNP6 simulations have been used to characterize the effects of modifying fuel composition, geometry, and porosity, as well as of introducing neutron moderating, reflecting, and shielding materials to the system. Although this project is currently in the preliminary stages, the final deliverables will include sophisticated designs and simulation models that define all characteristics of multiple novel RTG fuels, detailed enough to allow immediate fabrication and testing. Preliminary work has consisted of developing a benchmark model to accurately represent the ²³⁸PuO₂ pellets currently in use by NASA; this model utilizes the alpha transport capabilities of MCNP6 and agrees well with experimental data. In addition, several models have been developed by varying specific parameters to investigate their effect on (α,n) and (n,fission) reaction rates. Current practice in fuel processing is to exchange out the small portion of naturally occurring ¹⁸O and ¹⁷O to limit (α,n) reactions and avoid unnecessary neutron production. However, we have shown that enriching the oxide in ¹⁸O introduces a sufficient (α,n) reaction rate to support significant fission rates. For example, subcritical fission rates above 10⁸ f/cm³-s are easily achievable in cylindrical ²³⁸PuO₂ fuel pellets with an ¹⁸O enrichment of 100%, given an increase in size and a ⁹Be cladding. Many viable designs exist, and our intent is to discuss current results and future endeavors on this project.
Keywords: radioisotope thermoelectric generators (RTG), Pu-238, subcritical reactors, (alpha, n) reactions
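The subcritical fission rates quoted above follow from standard source-multiplication reasoning: an intrinsic (α,n) neutron source S in an assembly with effective multiplication factor k_eff sustains roughly S·k/(ν̄·(1−k)) fissions per second. The sketch below illustrates this relation; the source density, k_eff values, and ν̄ are assumed for demonstration and are not results from the MCNP6 study.

```python
# Illustrative back-of-envelope check of (alpha,n)-driven subcritical
# multiplication. All numbers are assumed for demonstration and are
# NOT results from the MCNP6 study described above.

def fission_rate_density(s_alpha_n, k_eff, nu_bar=2.9):
    """Volumetric fission rate (fissions/cm^3-s) sustained by an
    intrinsic (alpha,n) neutron source in a subcritical assembly.

    s_alpha_n : (alpha,n) source density, neutrons/cm^3-s (assumed)
    k_eff     : effective multiplication factor, must be < 1
    nu_bar    : mean neutrons per fission (~2.9 assumed here)
    """
    if not 0.0 <= k_eff < 1.0:
        raise ValueError("subcritical operation requires 0 <= k_eff < 1")
    # Each source neutron induces ~k/(nu*(1-k)) fissions in total,
    # summing the geometric series over successive generations.
    return s_alpha_n * k_eff / (nu_bar * (1.0 - k_eff))

# Hypothetical values: a strong (alpha,n) source in 18-O-enriched PuO2
# with a Be reflector pushing k_eff towards criticality.
for k in (0.90, 0.95, 0.99):
    print(f"k_eff={k:.2f}: {fission_rate_density(5e7, k):.2e} f/cm3-s")
```

With these assumed inputs, k_eff approaching 0.99 yields rates above 10⁸ f/cm³-s, consistent in magnitude with the figure cited in the abstract.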
Procedia PDF Downloads 174
167 Protected Cultivation of Horticultural Crops: Increases Productivity per Unit of Area and Time
Authors: Deepak Loura
Abstract:
The most contemporary method of producing horticultural crops both qualitatively and quantitatively is protected cultivation, or greenhouse cultivation, which has gained widespread acceptance in recent decades. Protected farming, commonly referred to as controlled environment agriculture (CEA), is extremely productive, land- and water-efficient, as well as environmentally friendly. The technology entails growing horticultural crops in a controlled environment where variables such as temperature, humidity, light, soil, water, fertilizer, etc. are adjusted to achieve optimal output and enable a consistent supply even during the off-season. Over the past ten years, protected cultivation of high-value crops and cut flowers has demonstrated remarkable potential. More and more agricultural and horticultural crop production systems are moving to protected environments as a result of the growing demand for high-quality products by global markets. By covering the crop, it is possible to control the macro- and microenvironments, enhancing plant performance and allowing for longer production times, earlier harvests, and higher yields of better quality. These shielding features alter the environment of the plant while also offering protection from wind, rain, and insects. Protected farming opens up hitherto unexplored opportunities in agriculture as the liberalised economy and improved agricultural technologies advance. Typically, the revenues from fruit, vegetable, and flower crops are 4 to 8 times higher than those from other crops. If any of these high-value crops are cultivated in protected environments like greenhouses, net houses, tunnels, etc., this profit can be multiplied. Post-harvest losses of vegetables and cut flowers are extremely high (20–30%); however, protected growing techniques and year-round cropping can greatly minimize post-harvest losses and enhance yield by 5–10 times. Seasonality and weather have a large impact on the production of vegetables and flowers, and the diversity of products results in significant price and quality fluctuations. Achieving year-round availability of vegetables and flowers with minimal environmental impact, while remaining competitive, is a significant challenge for the application of current technology in crop production. Protected cultivation is likely to define the future of agriculture, since population growth is steadily reducing the size of landholdings. Protected agriculture is a particularly profitable endeavor for small landholdings. Small greenhouses, net houses, nurseries, and low tunnel greenhouses can all be built by farmers to increase their income. The rise in biotic and abiotic stress factors also favours protected agriculture. As a result of the greater productivity levels, these technologies are opening up opportunities not only for producers with larger landholdings but also for those with smaller holdings. Protected cultivation can be thought of as a form of precise, forward-looking agriculture that covers almost all aspects of farming and remains subject to further evaluation of its technical applicability to local circumstances, farmer economics, and market economics.
Keywords: protected cultivation, horticulture, greenhouse, vegetable, controlled environment agriculture
Procedia PDF Downloads 77
166 Early Impact Prediction and Key Factors Study of Artificial Intelligence Patents: A Method Based on LightGBM and Interpretable Machine Learning
Authors: Xingyu Gao, Qiang Wu
Abstract:
Patents play a crucial role in protecting innovation and intellectual property. Early prediction of the impact of artificial intelligence (AI) patents helps researchers and companies allocate resources and make better decisions. Understanding the key factors that influence patent impact can assist researchers in gaining a better understanding of the evolution of AI technology and innovation trends. Therefore, identifying highly impactful patents early and providing support for them holds immeasurable value in accelerating technological progress, reducing research and development costs, and mitigating market positioning risks. Despite the extensive research on AI patents, accurately predicting their early impact remains a challenge. Traditional methods often consider only single factors or simple combinations, failing to comprehensively and accurately reflect the actual impact of patents. This paper used the artificial intelligence patent database from the United States Patent and Trademark Office and the Lens.org patent retrieval platform to obtain specific information on 35,708 AI patents. Using six machine learning models, namely Multiple Linear Regression, Random Forest Regression, XGBoost Regression, LightGBM Regression, Support Vector Machine Regression, and K-Nearest Neighbors Regression, and using early indicators of patents as features, the paper comprehensively predicted the impact of patents from three aspects: technical, social, and economic. These aspects include the technical leadership of patents, the number of citations they receive, and their shared value. SHAP (SHapley Additive exPlanations) values were used to explain the predictions of the best model, quantifying the contribution of each feature to the model's predictions. The experimental results on the AI patent dataset indicate that, for all three target variables, LightGBM regression shows the best predictive performance. Specifically, patent novelty has the greatest impact on predicting the technical impact of patents and has a positive effect. Additionally, the number of owners, the number of backward citations, and the number of independent claims are all crucial and have a positive influence on predicting technical impact. In predicting the social impact of patents, the number of applicants is considered the most critical input variable, but it has a negative effect on social impact. At the same time, the number of independent claims, the number of owners, and the number of backward citations are also important predictive factors, and they have a positive effect on social impact. For predicting the economic impact of patents, the number of independent claims is considered the most important factor and has a positive impact on economic impact. The number of owners, the number of sibling countries or regions, and the size of the extended patent family also have a positive influence on economic impact. The study relies primarily on data from the United States Patent and Trademark Office for artificial intelligence patents. Future research could consider more comprehensive sources of artificial intelligence patent data from a global perspective. While the study takes into account various factors, there may still be other important features not considered. In the future, factors such as patent implementation and market applications may be considered, as they could have an impact on the influence of patents.
Keywords: patent influence, interpretable machine learning, predictive models, SHAP
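A minimal sketch of the modelling pipeline the abstract describes, LightGBM regression on early patent indicators explained with SHAP. The feature names, file name, and target column below are placeholders, not the study's actual dataset layout.

```python
# Sketch: LightGBM regression on early patent indicators + SHAP explanation.
# Feature names, file, and target are assumptions for illustration only.
import numpy as np
import pandas as pd
import lightgbm as lgb
import shap
from sklearn.model_selection import train_test_split

# Hypothetical early indicators (the paper's exact feature set may differ)
features = ["novelty", "n_owners", "n_backward_citations",
            "n_independent_claims", "n_applicants", "family_size"]
df = pd.read_csv("ai_patents.csv")            # assumed file layout
X, y = df[features], df["forward_citations"]  # proxy for one impact aspect

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=42)

model = lgb.LGBMRegressor(n_estimators=500, learning_rate=0.05)
model.fit(X_tr, y_tr, eval_set=[(X_te, y_te)])

# SHAP quantifies each feature's contribution to individual predictions
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_te)

# Rank features by global importance (mean absolute SHAP value)
mean_abs = np.abs(shap_values).mean(axis=0)
for name, val in sorted(zip(features, mean_abs), key=lambda t: -t[1]):
    print(f"{name}: {val:.3f}")
```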
Procedia PDF Downloads 50
165 Spinetoram 10% WG + Sulfoxaflor 30% WG: A Promising Green Chemistry to Manage Pest Complex in Bt Cotton
Authors: Siddharudha B. Patil
Abstract:
Cotton is a premier commercial fibre crop of India that is subject to the ravages of insect pests. Sucking pests, viz. thrips (Thrips tabaci Lind.), leafhopper (Amrasca devastans Dist.), and mirid bug (Poppiocapsidea biseratense Dist.), together with bollworms, continue to inflict damage on Bt cotton right from the seedling stage. Their infestation impacts cotton yield to the extent of 30–40 percent. Chemical control remains one of the adoptable techniques for combating these pests. Presently, growers face many challenges in selecting effective chemicals that fit within an integrated pest management programme. Spinetoram has broad-spectrum, excellent insecticidal activity against both sucking pests and bollworms. Hence, it is expected to make a great contribution to stable production and quality improvement of agricultural products. Spinetoram is a derivative of biologically active substances (spinosyns) produced by the soil actinomycete Saccharopolyspora spinosa; it is a semi-synthetic active ingredient representing the spinosyn chemical class of insecticides and has demonstrated a high level of efficacy with reduced risk to beneficial arthropods. In the present study, efforts were made to test the efficacy of spinetoram against sucking pests and bollworms in comparison with other insecticides in Bt cotton under field conditions. A field experiment was laid out during 2013–14 and 2014–15 at the Agricultural Research Station, Dharwad (Karnataka, India) in a randomized block design comprising eight treatments and three replications. The Bt cotton genotype Bunny BG-II was sown in a plot size of 5.4 m x 5.4 m. Recommended agronomic practices were followed. Spinetoram 12% SC alone and in combination with sulfoxaflor, at varied dosages, was tested against the pest complex. Performance was compared with Spinosad 45% SC and thiamethoxam 25% WG. The results of the consecutive seasons revealed a nonsignificant difference in thrips and leafhopper populations before imposition, which varied significantly 3 days after imposition. Among the treatments, the combination product Spinetoram 10% WG + Sulfoxaflor 30% WG @ 140 g a.i./ha registered the lowest population of thrips (3.91/3 leaves) and leafhoppers (1.08/3 leaves), followed by its lower dosages, viz. 120 g a.i./ha (4.86/3 leaves and 1.14/3 leaves of thrips and leafhoppers, respectively) and 100 g a.i./ha (6.02 and 1.23/3 leaves of thrips and leafhoppers, respectively), these being at par and significantly superior to the rest of the treatments. On the contrary, the populations of thrips, leafhoppers, and mirid bugs in the untreated control were on the higher side. Similarly, the higher dosage of Spinetoram 10% WG + Sulfoxaflor 30% WG (140 g a.i./ha) proved its bioefficacy by registering the lowest mirid bug incidence of 1.70/25 squares, followed by its lower dosages (1.78 and 1.83/25 squares, respectively). Further observations on bollworm incidence revealed that the higher dosage of Spinetoram 10% WG + Sulfoxaflor 30% WG (140 g a.i./ha) registered the lowest percentage of boll damage (7.22%), more good opened bolls (36.89/plant), and a higher seed cotton yield (19.45 q/ha), followed by its lower dosages, Spinetoram 12% SC alone, and Spinosad 45% SC, these being at par and significantly superior to the rest of the treatments. However, significantly higher boll damage (15.13%) and a lower seed cotton yield (14.45 q/ha) were registered in the untreated control. Thus, Spinetoram 10% WG + Sulfoxaflor 30% WG can be a promising option for pest management in Bt cotton.
Keywords: Spinetoram 10% WG + Sulfoxaflor 30% WG, sucking pests, bollworms, Bt cotton, management
Procedia PDF Downloads 254
164 Sustainable Crop Production: Greenhouse Gas Management in Farm Value Chain
Authors: Aswathaman Vijayan, Manish Jha, Ullas Theertha
Abstract:
Climate change and global warming have become an issue for both developed and developing countries and are perhaps the biggest threat to the environment. We at ITC Limited believe that a company's performance must be measured by its Triple Bottom Line contribution to building economic, social, and environmental capital. This Triple Bottom Line strategy focuses on embedding sustainability in business practices, investing in social development, and adopting a low-carbon growth path with a cleaner-environment approach. The Agri Business Division (ILTD) operates in the tobacco crop growing regions of the Andhra Pradesh and Karnataka provinces of India. The agri value chain of the company comprises two distinct phases: the first phase is agricultural operations undertaken by ITC-trained farmers, and the second phase is industrial operations, which include marketing and processing of the agricultural produce. This research work covers the greenhouse gas (GHG) management strategy of ITC in the agricultural operations undertaken by the farmers. The agriculture sector adds considerably to global GHG emissions through the use of carbon-based energies, the use of fertilizers, and other farming operations such as ploughing. In order to minimize the impact of farming operations on the environment, ITC has taken a big leap in implementing systems and processes to reduce the GHG impact of the farm value chain by partnering with the farming community. The company has undertaken a unique three-pronged approach to GHG management in the farm value chain: 1) GHG inventory of the farm value chain: different sources of GHG emission in the farm value chain were identified and quantified for the baseline year, as per the IPCC guidelines for greenhouse gas inventories. The major sources of emission identified are emissions due to nitrogenous fertilizer application during seedling production and in the main field; emissions due to diesel usage by farm machinery; and emissions due to fuel consumption and the burning of crop residues. 2) Identification and implementation of technologies to reduce GHG emission: various methodologies and technologies were identified for each GHG emission source and implemented at the farm level. The identified methodologies are: reducing the consumption of chemical fertilizer at the farm through site-specific nutrient recommendation; usage of a sharp shovel for land preparation to reduce diesel consumption; implementation of energy conservation technologies to reduce fuel requirements; and avoiding the burning of crop residue by incorporating it in the main field. These methodologies were implemented at the farm level, and the GHG emission was quantified to understand the reduction achieved. 3) Social and farm forestry for CO₂ sequestration: in addition, the company encouraged social and farm forestry on waste lands to convert them into green cover. The plantations are carried out with fast-growing trees, viz. Eucalyptus, Casuarina, and Subabul, at the rate of 10,000 ha of land per year. The above approach minimized a considerable amount of GHG emissions in the farm value chain, benefiting farmers, the community, and the environment as a whole. In addition, the CO₂ stock created by the social and farm forestry programme has made the farm value chain environment-friendly.
Keywords: CO₂ sequestration, farm value chain, greenhouse gas, ITC Limited
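A simplified sketch of the kind of farm-level inventory calculation the first prong describes, in the spirit of an IPCC Tier 1 approach. The emission factors below are generic defaults quoted for illustration; the actual study would use factors appropriate to its region and the IPCC guidelines edition applied.

```python
# Indicative farm GHG inventory sketch. Emission factors are generic,
# commonly cited defaults used here only for illustration.

EF_N2O_N = 0.01    # kg N2O-N emitted per kg N applied (IPCC Tier 1 default)
GWP_N2O = 298      # 100-yr global warming potential of N2O (AR4 value)
EF_DIESEL = 2.68   # kg CO2 per litre of diesel burned (typical factor)

def farm_co2e(n_fertiliser_kg, diesel_litres):
    """Return an indicative CO2-equivalent total (kg) for one season."""
    n2o = n_fertiliser_kg * EF_N2O_N * (44.0 / 28.0)  # N2O-N -> N2O mass
    return n2o * GWP_N2O + diesel_litres * EF_DIESEL

# Hypothetical smallholding: 120 kg N applied, 300 L diesel used
print(f"{farm_co2e(120, 300):.0f} kg CO2e")   # ~1366 kg CO2e
```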
Procedia PDF Downloads 300
163 Protocol for Dynamic Load Distributed Low Latency Web-Based Augmented Reality and Virtual Reality
Authors: Rohit T. P., Sahil Athrij, Sasi Gopalan
Abstract:
Currently, the content entertainment industry is dominated by mobile devices. As the trends slowly shift towards Augmented/Virtual Reality applications, the computational demands on these devices are increasing exponentially, and we are already reaching the limits of hardware optimizations. This paper proposes a software solution to this problem. By leveraging the capabilities of cloud computing, we can offload the work from mobile devices to dedicated rendering servers that are far more powerful. But this introduces the problem of latency. This paper introduces a protocol that can achieve a high-performance, low-latency Augmented/Virtual Reality experience. There are two parts to the protocol. 1) In-flight compression: the main cause of latency in the system is the time required to transmit the camera frame from client to server. The round-trip time is directly proportional to the amount of data transmitted. It can therefore be reduced by compressing the frames before sending. Using standard compression algorithms like JPEG results in only a minor size reduction. Since the images to be compressed are consecutive camera frames, there won't be many changes between two consecutive images, so inter-frame compression is preferred. Inter-frame compression can be implemented efficiently using WebGL, but the WebGL implementation limits the precision of floating-point numbers to 16-bit on most devices. This can introduce noise to the image due to rounding errors, which add up eventually. This can be solved using an improved inter-frame compression algorithm. The algorithm detects changes between frames and reuses unchanged pixels from the previous frame. This eliminates the need for floating-point subtraction, thereby cutting down on noise. The change detection is also improved drastically by taking the weighted average difference of pixels instead of the absolute difference. The kernel weights for this comparison can be fine-tuned to match the type of image to be compressed. 2) Dynamic load distribution: conventional cloud computing architectures work by offloading as much work as possible to the servers, but this approach can cause a hit on bandwidth and server costs. The most optimal solution is obtained when the device utilizes 100% of its resources and the rest is done by the server. The protocol balances the load between the server and the client by doing a fraction of the computing on the device, depending on the power of the device and network conditions. The protocol is responsible for dynamically partitioning the tasks. Special flags are used to communicate the workload fraction between the client and the server and are updated at a constant interval of time (or frames). The whole protocol is designed so that it can be client-agnostic. Flags are available to the client for resetting the frame, indicating latency, switching mode, etc. The server can react to client-side changes on the fly and adapt accordingly by switching to different pipelines. The server is designed to spread the load effectively and thereby scale horizontally. This is achieved by isolating client connections into different processes.
Keywords: 2D kernelling, augmented reality, cloud computing, dynamic load distribution, immersive experience, mobile computing, motion tracking, protocols, real-time systems, web-based augmented reality application
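The kernel-weighted change detection described in part 1 can be sketched in a few lines of NumPy. The kernel weights and threshold below are illustrative assumptions, not the protocol's tuned values, and the sketch assumes single-channel (grayscale) frames.

```python
# Sketch of weighted-average inter-frame change detection: pixels whose
# kernel-weighted difference from the previous frame falls below a
# threshold are reused rather than retransmitted. Kernel and threshold
# are illustrative assumptions.
import numpy as np
from scipy.ndimage import convolve

# 3x3 kernel: centre pixel dominates, neighbours damp sensor noise
KERNEL = np.array([[0.05, 0.10, 0.05],
                   [0.10, 0.40, 0.10],
                   [0.05, 0.10, 0.05]])

def changed_mask(prev, curr, threshold=8.0):
    """Boolean mask of pixels that genuinely changed between frames."""
    diff = np.abs(curr.astype(np.float32) - prev.astype(np.float32))
    weighted = convolve(diff, KERNEL, mode="nearest")
    return weighted > threshold

def compress(prev, curr):
    """Send only changed pixels; unchanged ones are reused client-side."""
    mask = changed_mask(prev, curr)
    payload = curr[mask]      # values to transmit
    return mask, payload      # the sparse mask itself compresses well

def reconstruct(prev, mask, payload):
    """Server-side rebuild of the full frame from the previous frame."""
    frame = prev.copy()
    frame[mask] = payload
    return frame
```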
Procedia PDF Downloads 77
162 Catchment Nutrient Balancing Approach to Improve River Water Quality: A Case Study at the River Petteril, Cumbria, United Kingdom
Authors: Nalika S. Rajapaksha, James Airton, Amina Aboobakar, Nick Chappell, Andy Dyer
Abstract:
Nutrient pollution and its impact on water quality is a key concern in England. Many water quality issues originate from multiple sources of pollution spread across the catchment. River water quality in England has improved since the 1990s, and wastewater effluent discharges into rivers now contain less phosphorus than in the past. However, excess phosphorus is still recognised as the prevailing issue for rivers failing Water Framework Directive (WFD) good ecological status. To achieve WFD phosphorus objectives, Wastewater Treatment Works (WwTW) permit limits are becoming increasingly stringent. Nevertheless, in some rural catchments, the apportionment of phosphorus pollution can be greater from agricultural runoff and other sources such as septic tanks. Therefore, the challenge of meeting the requirements of watercourses to deliver WFD objectives often goes beyond water company activities, providing significant opportunities to co-deliver activities in wider catchments to reduce nutrient load at source. The aim of this study was to apply United Utilities' Catchment Systems Thinking (CaST) strategy and pilot an innovative permitting approach, Catchment Nutrient Balancing (CNB), in a rural catchment in Cumbria (the River Petteril) in collaboration with the regulator and others, to achieve WFD objectives and multiple benefits. The study area is mainly agricultural land, predominantly livestock farms. The local ecology is impacted by significant nutrient inputs which require intervention to meet WFD obligations. There is a range of phosphorus inputs into the river, including discharges from wastewater assets but also significant agricultural contributions. Solely focusing on the WwTW discharges would not have resolved the problem; hence, in order to address this issue effectively, a CNB trial was initiated at a small WwTW, targeting the removal of a total of 150 kg of phosphorus load, of which 13 kg were to be reduced through the use of catchment interventions. Various catchment interventions were implemented across selected farms in the upstream part of the catchment, and an innovative Polonite reactive filter medium was also implemented at the WwTW as an alternative to traditional phosphorus treatment methods. During the 3 years of this trial, the impact of the interventions in the catchment and at the treatment works was monitored. In 2020 and 2022, the trial achieved 69% and 63% reductions, respectively, in the phosphorus level in the catchment against the initial reduction target of 9%. Phosphorus treatment at the WwTW had a significant impact on overall load reduction. The wider catchment impact, however, was seven times greater than the initial target once wider catchment interventions were also established. While it is unlikely that all the phosphorus load reduction was delivered exclusively by the interventions implemented through this project, this trial evidenced the enhanced benefits that can be achieved with an integrated approach that engages all sources of pollution within the catchment, rather than focusing on a one-size-fits-all solution. Primarily, the CNB approach and the act of collaboratively engaging others, particularly the agriculture sector, are likely to yield improved farm and land management performance and better compliance, which can lead to improved river quality as well as wider benefits.
Keywords: agriculture, catchment nutrient balancing, phosphorus pollution, water quality, wastewater
Procedia PDF Downloads 68
161 Organisational Mindfulness Case Study: A 6-Week Corporate Mindfulness Programme Significantly Enhances Organisational Well-Being
Authors: Dana Zelicha
Abstract:
A 6-week mindfulness programme was launched to improve the well-being and performance of 20 managers (including the supervisor) of an international corporation in London. A unique assessment methodology was customised to the organisation's needs, measuring four parameters: prioritising skills, listening skills, mindfulness levels, and happiness levels. All parameters showed significant improvements (p < 0.01) post-intervention, with a remarkable increase in listening skills and mindfulness levels. Although corporate mindfulness programmes have proven to be effective, the challenge remains the low engagement levels at home and the implementation of these tools beyond the scope of the intervention. This study offered an innovative approach to reinforcing home engagement, which yielded promising results. The programme launched with a 2-day introductory intervention, followed by a 6-week training course (1 day a week; 2 hours each). Participants learned all the basic principles of mindfulness, such as mindfulness meditations, Mindfulness-Based Stress Reduction (MBSR) techniques, and Mindfulness-Based Cognitive Therapy (MBCT) practices, to incorporate into their professional and personal lives. The programme contained experiential mindfulness meditations and innovative mindfulness tools (OWBA-MT) created by OWBA - The Well Being Agency. Exercises included Mindful Meetings, Unitasking, and Mindful Feedback. All sessions concluded with guided discussions and group reflections. One fundamental element of this programme was the engagement level outside of the workshop. In the office, participants connected with a mindfulness buddy, a team member in the group from whom they could find support throughout the programme. At home, participants completed online daily mindfulness forms that varied according to weekly themes. These customised forms gave participants the opportunity to reflect on whether they made time for daily mindfulness practice, and facilitated a sense of continuity and responsibility. At the end of the programme, the most engaged team member was crowned the 'mindful maven' and received a special gift. The four parameters were measured using online self-reported questionnaires, including the Listening Skills Inventory (LSI), the Mindful Attention Awareness Scale (MAAS), the Time Management Behaviour Scale (TMBS), and a modified version of the Oxford Happiness Questionnaire (OHQ). Pre-intervention questionnaires were collected at the start of the programme, and post-intervention data were collected 4 weeks following completion. Quantitative analysis using paired t-tests of means showed significant improvements, with a 23% increase in listening skills, a 22% improvement in mindfulness levels, a 12% increase in prioritising skills, and an 11% improvement in happiness levels. Participant testimonials exhibited high levels of satisfaction, and the overall results indicate that the mindfulness programme substantially impacted the team. These results suggest that 6-week mindfulness programmes can improve employees' capacities to listen and work well with others, to manage time effectively, and to experience enhanced satisfaction both at work and in life. Limitations worth considering include the afterglow effect and the lack of generalisability, as this study was conducted on a small and fairly homogeneous sample.
Keywords: corporate mindfulness, listening skills, organisational well-being, prioritising skills, mindful leadership
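The pre/post comparison reported above can be sketched as a paired t-test on matched scores; the scores below are fabricated placeholders for illustration, not the study's data.

```python
# Sketch of the paired pre/post analysis described above.
# Scores are invented placeholders, not the study's data.
import numpy as np
from scipy import stats

pre  = np.array([52, 61, 58, 49, 66, 55, 60, 47, 63, 57])  # hypothetical
post = np.array([64, 70, 69, 58, 74, 66, 71, 55, 72, 68])  # hypothetical

t, p = stats.ttest_rel(post, pre)   # paired t-test on the same managers
gain = 100 * (post.mean() - pre.mean()) / pre.mean()
print(f"t = {t:.2f}, p = {p:.4f}, mean improvement = {gain:.0f}%")
```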
Procedia PDF Downloads 271
160 Application of the Carboxylate Platform in the Consolidated Bioconversion of Agricultural Wastes to Biofuel Precursors
Authors: Sesethu G. Njokweni, Marelize Botes, Emile W. H. Van Zyl
Abstract:
An alternative strategy to the production of bioethanol is to examine the degradability of biomass in a natural system such as the rumen of mammals. This anaerobic microbial community has higher cellulolytic activities than microbial communities from other habitats and degrades cellulose to produce volatile fatty acids (VFA), methane, and CO₂. VFAs have the potential to serve as intermediate products for electrochemical conversion to hydrocarbon fuels. In vitro mimicking of this process would be more cost-effective than bioethanol production, as it does not require chemical pre-treatment of biomass, a sterile environment, or added enzymes. The strategies of the carboxylate platform and co-cultures of a bovine ruminal microbiota from cannulated cows were combined in order to investigate and optimize the bioconversion of agricultural biomass (apple and grape pomace, citrus pulp, sugarcane bagasse, and triticale straw) to high-value VFAs as intermediates for biofuel production in a consolidated bioprocess. Optimisation of reactor conditions was investigated using five different ruminal inoculum concentrations (5, 10, 15, 20, and 25%) with the pH fixed at 6.8 and the temperature at 39 °C. The ANKOM 200/220 fiber analyser was used to analyse the in vitro neutral detergent fiber (NDF) disappearance of the feedstuffs. Fresh and cryo-frozen (5% DMSO and 50% glycerol for 3 months) rumen cultures were tested for retention of fermentation capacity and durability in 72 h fermentations in 125 ml serum vials, using a FURO Medical Solutions 6-valve gas manifold to induce anaerobic conditions. Fermentation of apple pomace, triticale straw, and grape pomace showed no significant difference (P > 0.05) between the effects of the 15 and 20% inoculum concentrations on total VFA yield. However, high-performance liquid chromatographic separation within the two inoculum concentrations showed a significant difference (P < 0.05) in acetic acid yield, with the 20% inoculum concentration being the optimum, at 4.67 g/l. NDF disappearance of 85% in 96 h and a total VFA yield of 11.5 g/l in 72 h (A/P ratio = 2.04) indicated that apple pomace was the optimal feedstuff for this process. The NDF disappearance and VFA yields of DMSO-stored (82% NDF disappearance and 10.6 g/l VFA) and glycerol-stored (90% NDF disappearance and 11.6 g/l VFA) rumen also showed similar degradability of apple pomace, with no treatment-effect differences compared to a fresh rumen control (P > 0.05). The lack of treatment effects was a positive sign, indicating that there was no difference between the stored samples and the fresh rumen control. Retention of fermentation capacity within the preserved cultures suggests that their metabolic characteristics were preserved due to the resilience and redundancy of the rumen culture. The extent of degradability and the VFA yield achieved within a short span were similar to those of other carboxylate platforms with longer run times. This study shows that, by virtue of faster rates and a high extent of degradability, small-scale alternatives to bioethanol such as rumen microbiomes and other natural fermenting microbiomes can be employed to enhance the feasibility of large-scale biofuel implementation.
Keywords: agricultural wastes, carboxylate platform, rumen microbiome, volatile fatty acids
Procedia PDF Downloads 131
159 Regulation Effect of Intestinal Microbiota by Fermented Processing Wastewater of Yuba
Authors: Ting Wu, Feiting Hu, Xinyue Zhang, Shuxin Tang, Xiaoyun Xu
Abstract:
As a by-product of yuba, the processing wastewater of yuba (PWY) contains many bioactive components, such as soybean isoflavones, soybean polysaccharides, and soybean oligosaccharides, which makes it a good source of prebiotics with potential for high-value utilization. PWY fermented with Lactobacillus plantarum can be considered a potential prebiotic preparation, which can regulate the balance of the intestinal microbiota. In this study, Lactobacillus plantarum was first used to ferment PWY to improve its content of active components and its antioxidant activity. Then, the health effect of fermented processing wastewater of yuba (FPWY) was measured in vitro. Finally, microencapsulation technology was applied to improve the sustained release of FPWY, reduce the loss of active components during digestion, and improve the activity of FPWY. The main results are as follows: (1) FPWY presented a good antioxidant capacity, with DPPH free radical scavenging ability (0.83 ± 0.01 mmol Trolox/L), ABTS free radical scavenging ability (7.47 ± 0.35 mmol Trolox/L), and iron ion reducing ability (1.11 ± 0.07 mmol Trolox/L). Compared with non-fermented processing wastewater of yuba (NFPWY), there was no significant difference in the content of total soybean isoflavones, but the content of glucoside soybean isoflavones decreased, and aglycone soybean isoflavones increased significantly. After fermentation, PWY can effectively reduce the soluble monosaccharides, disaccharides, and oligosaccharides, such as glucose, fructose, galactose, trehalose, stachyose, maltose, raffinose, and sucrose. (2) FPWY can significantly enhance the growth of beneficial bacteria such as Bifidobacterium, Ruminococcus, and Akkermansia, significantly inhibit the growth of the harmful bacterium E. coli, regulate the structure of the intestinal microbiota, and significantly increase the content of short-chain fatty acids such as acetic acid, propionic acid, butyric acid, and isovaleric acid. The higher amount of lactic acid in the gut can be further broken down into short-chain fatty acids. (3) In order to improve the stability of soybean isoflavones in FPWY during digestion, sodium alginate and chitosan were used as wall materials for embedding, and the FPWY freeze-dried powder was embedded by the acute-coagulation bath method. The results show that when the core-to-wall ratio is 3:1, the concentration of chitosan is 1.5%, the concentration of sodium alginate is 2.0%, and the concentration of calcium is 3%, the embedding rate is 53.20%. In the simulated in vitro digestion stage, the release rate of the microcapsules reached 59.36% at the end of gastric digestion and 82.90% at the end of intestinal digestion; therefore, the core materials of the microcapsules, with their good sustained-release performance, were almost all released. The structural analysis of the FPWY microcapsules shows that they have good mechanical properties: hardness, springiness, cohesiveness, gumminess, chewiness, and resilience were 117.75 ± 0.21 g, 0.76 ± 0.02, 0.54 ± 0.01, 63.28 ± 0.71 g·sec, 48.03 ± 1.37 g·sec, and 0.31 ± 0.01, respectively. Compared with the unembedded FPWY, the infrared spectrum results confirmed that the FPWY freeze-dried powder was embedded in the microcapsules.
Keywords: processing wastewater of yuba, Lactobacillus plantarum, intestinal microbiota, microcapsule
Procedia PDF Downloads 80
158 Deep Learning for SAR Images Restoration
Authors: Hossein Aghababaei, Sergio Vitale, Giampaolo Ferraioli
Abstract:
In the context of Synthetic Aperture Radar (SAR) data, polarization is an important source of information for Earth's surface monitoring. SAR systems are often designed to transmit only one polarization. This constraint leads to either single or dual polarimetric SAR imaging modalities. Single polarimetric systems operate with a fixed single polarization of both the transmitted and received electromagnetic (EM) waves, resulting in a single acquisition channel. Dual polarimetric systems, on the other hand, transmit in one fixed polarization and receive in two orthogonal polarizations, resulting in two acquisition channels. Dual polarimetric systems are obviously more informative than single polarimetric systems and are increasingly being used for a variety of remote sensing applications. In dual polarimetric systems, the choice of polarizations for the transmitter and the receiver is open. The choice of circular transmit polarization and coherent dual linear receive polarizations forms a special dual polarimetric system called hybrid polarimetry, which brings the property of rotational invariance to the geometrical orientations of features in the scene and optimizes the design of the radar in terms of reliability, mass, and power constraints. The complete characterization of target scattering, however, requires fully polarimetric data, which can be acquired with systems that transmit two orthogonal polarizations. This adds further complexity to data acquisition and shortens the coverage area or swath of fully polarimetric images compared to the swath of dual or hybrid polarimetric images. The search for solutions to augment dual polarimetric data to full polarimetric data will therefore take advantage of full characterization and exploitation of the backscattered field over a wider coverage with less system complexity. Several methods for reconstructing fully polarimetric images using hybrid polarimetric data can be found in the literature. Although the improvements achieved by the newly investigated and experimentally tested reconstruction techniques are undeniable, the existing methods are mostly based upon model assumptions (especially the assumption of reflection symmetry), which may limit their reliability and applicability to vegetation and forest scenarios. To overcome the problems of these techniques, this paper proposes a new framework for reconstructing fully polarimetric information from hybrid polarimetric data. The framework uses deep learning solutions to augment hybrid polarimetric data without relying on model assumptions. A convolutional neural network (CNN) with a specific architecture and loss function is defined for this augmentation problem by focusing on different scattering properties of the polarimetric data. In particular, the method controls the CNN training process with respect to several characteristic features of polarimetric images, defined by the combination of different terms in the cost or loss function. The proposed method is experimentally validated with real data sets and compared with a well-known and standard approach from the literature. In the experiments, the reconstruction performance of the proposed framework is superior to that of conventional reconstruction methods.
The pseudo fully polarimetric data reconstructed by the proposed method also agree well with the actual fully polarimetric images acquired by radar systems, confirming the reliability and efficiency of the proposed method.
Keywords: SAR image, polarimetric SAR image, convolutional neural network, deep learning, deep neural network
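A hedged PyTorch sketch of the augmentation idea: a CNN maps dual/hybrid polarimetric channels to the missing fully polarimetric channels, trained with a loss that combines a pixel-fidelity term with a term on a scattering property. The layer sizes, channel counts, loss terms, and weights below are assumptions for illustration, not the paper's exact architecture.

```python
# Sketch of a CNN-based polarimetric augmentation model with a composite
# loss. Architecture and loss weights are illustrative assumptions.
import torch
import torch.nn as nn

class PolReconstructionCNN(nn.Module):
    def __init__(self, in_ch=4, out_ch=6):  # e.g. 2 complex -> 3 complex ch.
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, out_ch, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

def composite_loss(pred, target, alpha=1.0, beta=0.1):
    """Pixel fidelity plus a constraint on total backscattered power
    (span), one plausible 'scattering property' term."""
    mse = nn.functional.mse_loss(pred, target)
    span_pred = (pred ** 2).sum(dim=1)     # power summed over channels
    span_true = (target ** 2).sum(dim=1)
    span_term = nn.functional.l1_loss(span_pred, span_true)
    return alpha * mse + beta * span_term

model = PolReconstructionCNN()
x = torch.randn(8, 4, 64, 64)   # hybrid-pol patches (toy data)
y = torch.randn(8, 6, 64, 64)   # full-pol targets (toy data)
loss = composite_loss(model(x), y)
loss.backward()                  # gradients flow to the CNN weights
```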
Procedia PDF Downloads 72
157 Artificial Intelligence in Management Simulators
Authors: Nuno Biga
Abstract:
Artificial Intelligence (AI) has the potential to transform management in several impactful ways. It allows machines to interpret information to find patterns in big data and learn from context analysis, optimize operations, make predictions sensitive to each specific situation, and support data-driven decision making. The introduction of an 'artificial brain' into an organization also enables learning through the complex information and data provided by those who train it, namely its users. The "Assisted-BIGAMES" version of the Accident & Emergency (A&E) simulator introduces the concept of a context-sensitive "Virtual Assistant" (VA) that provides users with useful suggestions for pursuing operations such as: a) relocating workstations in order to shorten travelled distances and minimize the stress of those involved; b) identifying in real time existing bottleneck(s) in the operations system so that it is possible to act upon them quickly; c) identifying resources that should be polyvalent so that the system can be more efficient; d) identifying the specific processes in which it may be advantageous to establish partnerships with other teams; and e) assessing possible solutions based on the suggested KPIs, allowing action monitoring to guide the (re)definition of future strategies. This paper is built on the BIGAMES© simulator and presents the conceptual AI model developed and demonstrated through a pilot project (BIG-AI). Each Virtual Assisted BIGAME is a management simulator developed by the author that guides operational and strategic decision making, providing users with useful information in the form of management recommendations that make it possible to predict the actual outcome of different alternative strategic management actions. The pilot project incorporates results from 12 editions of the BIGAME A&E that took place between 2017 and 2022 at AESE Business School, based on a compilation of data that allows causal relationships to be established between decisions taken and results obtained. The systemic analysis and interpretation of data is powered in the Assisted-BIGAMES through a computer application called the "BIGAMES Virtual Assistant" (VA) that players can use during the game. Throughout the game, each participant continually asks which decisions to make in order to win the competition. To this end, the role of each team's VA consists of guiding the players to be more effective in their decision making by presenting recommendations based on AI methods. It is important to note that the VA's suggestions for action can be accepted or rejected by the managers of each team, as they gain a better understanding of the issues over time, reflect on good practice, and rely on their own experience, capability, and knowledge to support their own decisions. Preliminary results show that the introduction of the VA leads to faster learning of the decision-making process. The facilitator, designated the "Serious Game Controller" (SGC), is responsible for supporting the players with further analysis. The actions recommended by the SGC may differ from or be similar to those previously provided by the VA, ensuring a higher degree of robustness in decision-making.
Additionally, all the information should be jointly analyzed and assessed by the players, who are expected to add "Emotional Intelligence", an essential component absent from the machine learning process.
Keywords: artificial intelligence, gamification, key performance indicators, machine learning, management simulators, serious games, virtual assistant
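A toy illustration of how a VA layer of this kind could turn simulator KPIs into recommendations of types a) and b) above; the KPI names and thresholds are invented for the example and are not the Assisted-BIGAMES internals.

```python
# Toy rule-based "Virtual Assistant": KPI names and thresholds are
# invented placeholders, not the Assisted-BIGAMES implementation.
from dataclasses import dataclass

@dataclass
class StationKPI:
    name: str
    utilisation: float       # 0..1
    queue_length: int
    travel_distance_m: float

def recommend(kpis: list[StationKPI]) -> list[str]:
    advice = []
    for s in kpis:
        if s.utilisation > 0.9 and s.queue_length > 5:
            advice.append(f"Bottleneck at {s.name}: add a polyvalent resource.")
        if s.travel_distance_m > 50:
            advice.append(f"Relocate {s.name} to shorten travelled distances.")
    return advice or ["No action needed: system is balanced."]

print(recommend([StationKPI("Triage", 0.95, 8, 12.0),
                 StationKPI("X-ray", 0.70, 2, 80.0)]))
```

As in the paper's design, such suggestions would remain advisory: the team is free to accept or reject them.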
Procedia PDF Downloads 106
156 Scalable CI/CD and Scalable Automation: Assisting in Optimizing Productivity and Fostering Delivery Expansion
Authors: Solanki Ravirajsinh, Kudo Kuniaki, Sharma Ankit, Devi Sherine, Kuboshima Misaki, Tachi Shuntaro
Abstract:
In software development life cycles, the absence of scalable CI/CD significantly impacts organizations, leading to increased overall maintenance costs, prolonged release delivery times, heightened manual effort, and difficulties in meeting tight deadlines. Implementing CI/CD with standard serverless technologies using cloud services overcomes all the above-mentioned issues and helps organizations improve efficiency and deliver faster, without the need to manage server maintenance and capacity. By integrating scalable CI/CD with scalable automation testing, productivity, quality, and agility are enhanced while the need for repetitive work and manual effort is reduced. Scalable CI/CD for development can be implemented using cloud services like ECS (Elastic Container Service), AWS Fargate, ECR (to store Docker images with all dependencies), serverless computing (serverless virtual machines), CloudWatch Logs (for monitoring errors and logs), security groups (for inside/outside access to the application), Docker containerization (Docker-based images and container techniques), Jenkins (a CI/CD build management tool), and code management tools (GitHub, Bitbucket, AWS CodeCommit); such pipelines can efficiently handle the demands of diverse development environments and are capable of accommodating dynamic workloads, increasing efficiency for faster delivery with good quality. CI/CD pipelines encourage collaboration among development, operations, and quality assurance teams by providing a centralized platform for automated testing, deployment, and monitoring. Scalable CI/CD streamlines the development process by automatically fetching the latest code from the repository every time the process starts, building the application based on the branches, testing the application using a scalable automation testing framework, and deploying the builds. Developers can focus more on writing code and less on managing infrastructure, as it scales based on need. Serverless CI/CD eliminates the need to manage and maintain traditional CI/CD infrastructure, such as servers and build agents, reducing operational overhead and allowing teams to allocate resources more efficiently. Scalable CI/CD adjusts the application's scale according to usage, thereby alleviating concerns about scalability, maintenance costs, and resource needs. Creating scalable automation testing using cloud services (ECR, ECS Fargate, Docker, EFS, serverless computing) helps organizations run more than 500 test cases in parallel, aiding in the detection of race conditions and performance issues and reducing execution time. Scalable CI/CD offers flexibility, dynamically adjusting to varying workloads and demands, allowing teams to scale resources up or down as needed. It optimizes costs by paying only for the resources as they are used, and it increases reliability. Scalable CI/CD pipelines employ automated testing and validation processes to detect and prevent errors early in the development cycle.
Keywords: achieve parallel execution, cloud services, scalable automation testing, scalable continuous integration and deployment
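A condensed sketch of the deployment step of such a pipeline using boto3 (the AWS SDK for Python). The cluster, service, image, and role names are placeholders; a real pipeline would also build and push the image, run the test suite, and handle rollbacks.

```python
# Sketch of the ECS/Fargate deployment step of a serverless CI/CD pipeline.
# All resource names and ARNs are placeholders.
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

# Register a new task definition revision pointing at the freshly built image
task_def = ecs.register_task_definition(
    family="web-app",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="256",
    memory="512",
    executionRoleArn="arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
    containerDefinitions=[{
        "name": "web-app",
        "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/web-app:latest",
        "portMappings": [{"containerPort": 8080}],
        "essential": True,
    }],
)

# Roll the service onto the new revision; Fargate scales it without servers
ecs.update_service(
    cluster="ci-cd-cluster",
    service="web-app-service",
    taskDefinition=task_def["taskDefinition"]["taskDefinitionArn"],
)
```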
Procedia PDF Downloads 48
155 Assessment and Characterization of Dual-Hardening Adhesion Promoter for Self-Healing Mechanisms in Metal-Plastic Hybrid System
Authors: Anas Hallak, Latifa Seblini, Juergen Wilde
Abstract:
In mechatronics and sensor technology, plastic housings are used to protect sensitive components from harmful environmental influences, such as moisture, media, or reactive substances. Connections, preferably in the form of metallic lead-frame structures, through the housing wall are required for their electrical supply or control. In such systems, an insufficient bond between the plastic component (e.g., polyamide 66) and the metal surface (e.g., copper), due to material incompatibility, is the dominant weakness. As a result, leakage paths can occur along the plastic-metal interface. Since adhesive bonding has been established as one of the most important joining processes, and its use has expanded significantly, driven by the development of improved high-performance adhesives and bonding techniques, this technology has also been applied to metal-plastic hybrid structures. In this study, an epoxy bonding agent from DELO (DUALBOND LT2266) has been used to improve the mechanical and chemical bond between the metal and the polymer. It is an adhesion promoter with two reaction stages. In the first stage, fixation to the lead frame is provided directly after the coating step, which can be achieved by UV exposure for a few seconds. In the second stage, the material is thermally hardened during injection molding. To analyze the two reaction stages of the primer, dynamic DSC experiments were carried out and correlated with Fourier-transform infrared spectroscopy measurements. Furthermore, the number of crosslinking bonds formed in the system in each reaction stage was also estimated by rheological characterization. These investigations were performed with different UV exposure times, 12 and 96 s, and in an industrially preferred temperature range from -20 to 175 °C. The shear viscosity values of the primer were measured as a function of temperature and exposure time. For further interpretation, the storage modulus values were calculated and the so-called Booij-Palmen plot was sketched. The next aspect of this study is the self-healing mechanism in the hybrid system, in which the primer should flow into micro-damage such as interfacial gaps and cracks, inhibit them from growing, and close them. The ability of the primer to flow into and penetrate defined capillaries made in Ultramid was investigated. Holes with a diameter of 0.3 mm were produced in injection-molded A3EG7 plates of 4 mm thickness. A copper substrate coated with DUALBOND was placed on the A3EG7 plate and pressed with a defined force. Metallographic analyses were carried out to verify the degree of filling, which showed an almost 95% filling ratio of the capillaries. Finally, to assess the self-healing mechanism in metal-plastic hybrid systems, characterizations were carried out on a simple geometry with a metal inlay developed by the Institute of Polymer Technology at Friedrich-Alexander-University. The specimens were modified with a tungsten wire which was pulled out after injection molding to create a micro-hole in the specimen at the interface between the primer and the polymer. The capability of the primer to heal those micro-cracks upon heating, pressing, and thermal aging was characterized through metallographic analyses.
Keywords: hybrid structures, self-healing, thermoplastic housing, adhesive
Procedia PDF Downloads 196
154 Shear Strength Characterization of Coal Mine Spoil in Very-High Dumps with Large Scale Direct Shear Testing
Authors: Leonie Bradfield, Stephen Fityus, John Simmons
Abstract:
The shearing behavior of current and planned coal mine spoil dumps up to 400 m in height is studied using large-sample, high-stress direct shear tests performed on a range of spoils common to the coalfields of Eastern Australia. The motivation for the study is to address industry concerns that some constructed spoil dump heights (>350 m) are exceeding the scale (≤120 m) for which reliable design information exists, and because modern geotechnical laboratories are not equipped to test representative spoil specimens at field-scale stresses. For more than two decades, shear strength estimation for spoil dumps has been based either on infrequent, very small-scale tests, in which oversize particles are scalped to comply with the device's specimen size capacity, so that the influence of prototype-sized particles on shear strength is not captured; or on published guidelines that provide linear shear strength envelopes derived from small-scale test data and verified in practice by the slope performance of dumps up to 120 m in height. To date, these published guidelines appear to have been reliable. However, in the field of rockfill dam design there is broad acceptance of a curvilinear shear strength envelope, and if this is applicable to coal mine spoils, then these industry-accepted guidelines may overestimate the strength and stability of dumps at higher stress levels. The pressing need to rationally define the shearing behavior of more representative spoil specimens at field-scale stresses led to the successful design, construction, and operation of a large direct shear machine (LDSM) and its subsequent application to provide reliable design information for current and planned very-high dumps. The LDSM can test at a much larger scale, in terms of combined specimen size (720 mm x 720 mm x 600 mm) and stress (σₙ up to 4.6 MPa), than has ever previously been achieved with a direct shear machine for geotechnical testing of rockfill. The results of an extensive LDSM testing program on a wide range of coal-mine spoils are compared to a published framework that is widely accepted by the Australian coal mining industry as the standard for shear strength characterization of mine spoil. A critical outcome is that the LDSM data highlight several non-compliant spoils, and stress-dependent shearing behavior, for which the correct application of the published framework will not provide reliable shear strength parameters for design. Shear strength envelopes developed from the LDSM data are also compared with dam engineering knowledge, where the failure envelopes of rockfills are curved in a concave-down manner. The LDSM data indicate that shear strength envelopes for coal-mine spoils abundant in rock fragments are not in fact curved, and that the shape of the failure envelope is ultimately determined by the strength of the rock fragments. Curvilinear failure envelopes were found to be appropriate for soil-like spoils containing minor or no rock fragments, or hard-soil aggregates.
Keywords: coal mine, direct shear test, high dump, large scale, mine spoil, shear strength, spoil dump
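The curvilinear envelope referred to above is commonly expressed in rockfill dam engineering as a power law of the normal stress. A sketch of this form, where A and b are material constants fitted to test data (the symbols and bounds are generic, not values from this study):

```latex
% Power-law (concave-down) shear strength envelope for rockfill-like spoil;
% A and b are fitted constants with 0 < b < 1, so strength grows less than
% linearly with normal stress.
\tau_f = A\,\sigma_n^{\,b}, \qquad 0 < b < 1
% Equivalent stress-dependent secant friction angle, which decreases as
% normal stress increases; a single linear Mohr-Coulomb envelope cannot
% capture this decay, which is why it can overestimate strength at the
% high stresses found in very-high dumps:
\phi_{\mathrm{sec}} = \arctan\!\bigl(A\,\sigma_n^{\,b-1}\bigr)
```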
Procedia PDF Downloads 164
153 Blended Learning Instructional Approach to Teach Pharmaceutical Calculations
Authors: Sini George
Abstract:
Active learning pedagogies are valued for their success in increasing 21st-century learners' engagement, developing transferable skills like critical thinking or quantitative reasoning, and creating deeper and more lasting educational gains. 'Blended learning' is an active-learning pedagogical approach in which direct instruction moves from the group learning space to the individual learning space, and the resulting group space is transformed into a dynamic, interactive learning environment where the educator guides students as they apply concepts and engage creatively in the subject matter. This project aimed to develop a blended-learning instructional approach to teaching concepts around pharmaceutical calculations to year 1 pharmacy students. The wrong dose, strength, or frequency of a medication accounts for almost a third of medication errors in the NHS; therefore, progression to year 2 requires a 70% pass in this calculations test, in addition to the standard progression requirements. In the past, many students struggled to achieve this requirement. It was also challenging to teach these concepts to a large class (>130 students) with mixed mathematical abilities, especially within a traditional didactic lecture format. Therefore, short screencasts with a voice-over by the lecturer were provided in advance of a total of four teaching sessions (two hours per session), incorporating the core content of each session and talking through how the lecturer approached the calculations, to model metacognition. Links to the screencasts were posted on the learning management system. Viewership counts were used to confirm that the students were indeed accessing and watching the screencasts on schedule. In the classroom, students had to apply the knowledge learned beforehand to a series of increasingly difficult questions. Students were then asked to create a question in group settings (two students per group) and to discuss the questions created by their peers in their groups, to promote deep conceptual learning. Students were also given time for a question-and-answer period to seek clarification on the concepts covered. Student responses to this instructional approach and their test grades were collected. After collecting and organizing the data, statistical analysis was carried out to calculate binomial statistics for the two data sets, the test grades of students who received blended-learning instruction and the test grades of students who received instruction in a standard in-class lecture format, to compare the effectiveness of each type of instruction. Student responses and their performance data on the assessment indicate that learning the content through the blended-learning instructional approach led to higher levels of student engagement and satisfaction and more substantial learning gains. The blended-learning approach enabled each student to learn how to do the calculations at their own pace, freeing class time for interactive application of this knowledge. Although the approach is time-consuming for an instructor to implement, the findings of this research demonstrate that blended-learning instruction improves student academic outcomes and represents a valuable method of incorporating active-learning methodologies while still maintaining broad content coverage.
Satisfaction with this approach was high, and we are currently developing more pharmacy content for delivery in this format.Keywords: active learning, blended learning, deep conceptual learning, instructional approach, metacognition, pharmaceutical calculations
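The cohort comparison described above reduces to comparing two binomial pass rates. A minimal sketch of such a test follows, with hypothetical counts since the abstract does not report the raw grade data:

```python
from scipy import stats

# Hypothetical counts for illustration only; the study reports binomial
# statistics but not the raw numbers behind them.
passed_blended, n_blended = 118, 130   # students at >= 70% under blended learning
passed_lecture, n_lecture = 98, 130    # students at >= 70% under standard lectures

# Two-sided Fisher exact test on the 2x2 contingency table of pass/fail counts
table = [[passed_blended, n_blended - passed_blended],
         [passed_lecture, n_lecture - passed_lecture]]
odds_ratio, p_value = stats.fisher_exact(table)
print(f"pass rates: {passed_blended / n_blended:.2%} vs "
      f"{passed_lecture / n_lecture:.2%}, p = {p_value:.4f}")
```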
Procedia PDF Downloads 175152 Educational Audit and Curricular Reforms in the Arabian Context
Authors: Irum Naz
Abstract:
In the Arabian higher education context, linguistic proficiency in the English language is considered crucial for the developmental sustainability, economic growth, and stability of communities and societies. Qatar’s educational reforms package, through the 2030 vision, identifies the acquisition of English at K-12 as an essential survival communication tool for globalization, believing that Qatari students need better preparation to take on the responsibilities of leadership and to participate effectively in the country’s surging economy. The idea of introducing Qatari students to modern curricula benchmarked to high-student-performance curricula in developed countries is one of the components of the reformatory design principles of the Education for a New Era reform project, which is mutually consented to and supported by the Office of Shared Services, Communications Office, and Supreme Education Council. In appreciation of the government’s vision, the English Language Centre (ELC) at the Community College of Qatar ran an internal educational audit and conducted evaluative research to understand and appraise the value, impact, and practicality of the existing ELC language development program. This study sought to identify the changes that could improve the quality of Foundation Program courses and the manners in which second language learners could be assisted to transition smoothly between ELC levels. Following the interpretivist paradigm and a mixed research method, the data was gathered through a bicyclic research model and a triangular design. The analyses of the data suggested that there was a need for improvement in the ELC program as a whole, particularly in terms of curriculum, student learning outcomes, and the general learning environment in the department. Key findings suggest that the target program would benefit from significant revisions, which would include narrowing the focus of the courses, providing sets of specific learning objectives, and preventing repetition between levels. Another promising finding concerned the assessment tools and process. The data suggested that a set of standardized assessments that more closely suited the programs of study should be devised. It was also recommended that students undergo a more comprehensive placement process to ensure that they begin the program at an appropriate level and get the maximum benefit from their learning experience. Although this ties into the idea of a curriculum revamp, it was expected that students could leave the ELC having had exposure to courses in English for specific purposes. The idea of a more reliable exit assessment for students was raised frequently so that the ELC could regulate itself and ensure optimum learning outcomes. Another important recommendation was the provision of a Student Learning Center that would help students to receive personalized tuition, differentiated instruction, and a self-driven and self-evaluated learning experience. In addition, an extra study level was recommended to be added to the program to accommodate the different levels of English language proficiency represented among ELC students.
The evidence collected in the course of conducting the study suggests that significant change is needed in the structure of the ELC program, specifically regarding the curriculum, the program learning outcomes, and the learning environment in general.Keywords: educational audit, ESL, optimum learning outcomes, Qatar’s educational reforms, self-driven and self-evaluated learning experience, Student Learning Center
Procedia PDF Downloads 187151 3D Printing of Polycaprolactone Scaffold with Multiscale Porosity Via Incorporation of Sacrificial Sucrose Particles
Authors: Mikaela Kutrolli, Noah S. Pereira, Vanessa Scanlon, Mohamadmahdi Samandari, Ali Tamayol
Abstract:
Bone tissue engineering has drawn significant attention and various biomaterials have been tested. Polymers such as polycaprolactone (PCL) offer excellent biocompatibility, reasonable mechanical properties, and biodegradability. However, PCL scaffolds suffer a critical drawback: a lack of micro/mesoporosity, affecting cell attachment, tissue integration, and mineralization. It also results in a slow degradation rate. While 3D-printing has addressed the issue of macroporosity through CAD-guided fabrication, PCL scaffolds still exhibit poor smaller-scale porosity. To overcome this, we generated composites of PCL, hydroxyapatite (HA), and powdered sucrose (PS). The latter serves as a sacrificial material that generates pores upon sucrose dissolution. Additionally, we have incorporated dexamethasone (DEX) to boost the osteogenic properties of PCL. The resulting scaffolds maintain controlled macroporosity from the lattice print structure but also develop micro/mesoporosity within PCL fibers when exposed to aqueous environments. The study involved mixing PS into solvent-dissolved PCL in different weight ratios of PS to PCL (70:30, 50:50, and 30:70 wt%). The resulting composite was used for 3D printing of scaffolds at room temperature. Printability was optimized by adjusting pressure, speed, and layer height through filament collapse and fusion tests. Enzymatic degradation, porogen leaching, and DEX release profiles were characterized. Physical properties were assessed using wettability, SEM, and micro-CT to quantify the porosity (percentage, pore size, and interconnectivity). Raman spectroscopy was used to verify the absence of sugar after leaching. Mechanical characteristics were evaluated via compression testing before and after porogen leaching. Bone marrow stromal cell (BMSC) behavior in the printed scaffolds was studied by assessing viability, metabolic activity, osteo-differentiation, and mineralization. The scaffolds with a 70% sugar concentration exhibited superior printability and reached the highest porosity of 80%, but performed poorly during mechanical testing. A 50% PS concentration demonstrated a 70% porosity, with an average pore size of 25 µm, favoring cell attachment. No trace of sucrose was found in the Raman spectra after leaching the sugar for 8 hours. Water contact angle results show improved hydrophilicity as the sugar concentration increased, making the scaffolds more conducive to cell adhesion. BMSCs showed positive viability and proliferation results, with an increasing trend of mineralization and osteo-differentiation as the sucrose concentration increased. The addition of HA and DEX also promoted mineralization and osteo-differentiation in the cultures. The integration of PS as a porogen at a concentration of 50 wt% within PCL scaffolds presents a promising approach to address the poor cell attachment and tissue integration issues of PCL in bone tissue engineering. The method allows for the fabrication of scaffolds with tunable porosity and mechanical properties, suitable for various applications. The addition of HA and DEX further enhanced the scaffolds. Future studies will apply the scaffolds in an in-vivo model to thoroughly investigate their performance.Keywords: bone, PCL, 3D printing, tissue engineering
Procedia PDF Downloads 60150 Developing Early Intervention Tools: Predicting Academic Dishonesty in University Students Using Psychological Traits and Machine Learning
Authors: Pinzhe Zhao
Abstract:
This study focuses on predicting university students' cheating tendencies using psychological traits and machine learning techniques. Academic dishonesty is a significant issue that compromises the integrity and fairness of educational institutions. While much research has been dedicated to detecting cheating behaviors after they have occurred, there is limited work on predicting such tendencies before they manifest. The aim of this research is to develop a model that can identify students who are at higher risk of engaging in academic misconduct, allowing for earlier interventions to prevent such behavior. Psychological factors are known to influence students' likelihood of cheating. Research shows that traits such as test anxiety, moral reasoning, self-efficacy, and achievement motivation are strongly linked to academic dishonesty. High levels of anxiety may lead students to cheat as a way to cope with pressure. Those with lower self-efficacy are less confident in their academic abilities, which can push them toward dishonest behaviors to secure better outcomes. Students with weaker moral judgment may also justify cheating more easily, believing it to be less wrong under certain conditions. Achievement motivation also plays a role, as students driven primarily by external rewards, such as grades, are more likely to cheat compared to those motivated by intrinsic learning goals. In this study, data on students’ psychological traits are collected through validated assessments, including scales for anxiety, moral reasoning, self-efficacy, and motivation. Additional data on academic performance, attendance, and engagement in class are also gathered to create a more comprehensive profile. Using machine learning algorithms such as Random Forest, Support Vector Machines (SVM), and Long Short-Term Memory (LSTM) networks, the research builds models that can predict students’ cheating tendencies. These models are trained and evaluated using metrics like accuracy, precision, recall, and F1 scores to ensure they provide reliable predictions. The findings demonstrate that combining psychological traits with machine learning provides a powerful method for identifying students at risk of cheating. This approach allows for early detection and intervention, enabling educational institutions to take proactive steps in promoting academic integrity. The predictive model can be used to inform targeted interventions, such as counseling for students with high test anxiety or workshops aimed at strengthening moral reasoning. By addressing the underlying factors that contribute to cheating behavior, educational institutions can reduce the occurrence of academic dishonesty and foster a culture of integrity. In conclusion, this research contributes to the growing body of literature on predictive analytics in education. It offers an approach that integrates psychological assessments with machine learning to predict cheating tendencies. This method has the potential to significantly improve how academic institutions address academic dishonesty, shifting the focus from punishment after the fact to prevention before it occurs. By identifying high-risk students and providing them with the necessary support, educators can help maintain the fairness and integrity of the academic environment.Keywords: academic dishonesty, cheating prediction, intervention strategies, machine learning, psychological traits, academic integrity
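As an illustration of the modeling pipeline described above, the sketch below trains a Random Forest on synthetic stand-ins for the psychological scales and reports the four metrics named in the abstract. The features, generative rule, and at-risk threshold are assumptions for demonstration and are not the study's data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

rng = np.random.default_rng(0)

# Synthetic stand-ins for the validated scales (test anxiety, moral reasoning,
# self-efficacy, achievement motivation) plus attendance/engagement features.
n = 1000
X = rng.normal(size=(n, 6))
# Hypothetical generative rule: higher anxiety and lower self-efficacy raise risk.
risk = 1.2 * X[:, 0] - 0.9 * X[:, 2] + rng.normal(scale=0.8, size=n)
y = (risk > np.quantile(risk, 0.8)).astype(int)   # top 20% flagged as at-risk

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
y_hat = clf.predict(X_te)

print(f"accuracy={accuracy_score(y_te, y_hat):.2f} "
      f"precision={precision_score(y_te, y_hat):.2f} "
      f"recall={recall_score(y_te, y_hat):.2f} "
      f"F1={f1_score(y_te, y_hat):.2f}")
```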
Procedia PDF Downloads 24149 Kinematic Gait Analysis Is a Non-Invasive, More Objective and Earlier Measurement of Impairment in the Mdx Mouse Model of Duchenne Muscular Dystrophy
Authors: P. J. Sweeney, T. Ahtoniemi, J. Puoliväli, T. Laitinen, K. Lehtimäki, A. Nurmi, D. Wells
Abstract:
Duchenne muscular dystrophy (DMD) is caused by an X-linked mutation in the dystrophin gene; lack of dystrophin causes progressive muscle necrosis, which leads to a progressive decrease in mobility in those suffering from the disease. The MDX mouse, a mutant mouse model which displays a frank dystrophinopathy, is currently widely employed in preclinical efficacy models for treatments and therapies aimed at DMD. In general, the end-points examined within this model have been based on invasive histopathology of muscles and serum biochemical measures like measurement of serum creatine kinase (sCK). It is established that a “critical period” between 4 and 6 weeks exists in the MDX mouse when there is extensive muscle damage that is largely subclinical but evident with sCK measurements and histopathological staining. However, a full characterization of the MDX model remains largely incomplete, especially with respect to the ability to aggravate the muscle damage beyond the critical period. The purpose of this study was to attempt to aggravate the muscle damage in the MDX mouse and to create a wider, more readily translatable and discernible, therapeutic window for the testing of potential therapies for DMD. The study consisted of subjecting 15 male mutant MDX mice and 15 male wild-type mice to an intense chronic exercise regime that consisted of bi-weekly (two times per week) treadmill sessions over a 12-month period. Each session was 30 minutes in duration, and the treadmill speed was gradually built up to 14m/min for the entire session. Baseline plasma creatine kinase (pCK), treadmill training performance and locomotor activity were measured after the “critical period” at around 10 weeks of age and again at 14 weeks of age, 6 months, 9 months and 12 months of age. In addition, kinematic gait analysis was employed using a novel analysis algorithm in order to compare changes in gait and fine motor skills in diseased exercised MDX mice compared to exercised wild-type mice and non-exercised MDX mice. In addition, a morphological and metabolic profile (including lipid profile), from the muscles most severely affected, the gastrocnemius muscle and the tibialis anterior muscle, was also measured at the same time intervals. Results indicate that aggravating the underlying muscle damage in the MDX mouse through exercise brings a more pronounced and severe phenotype to light, which can be picked up earlier by kinematic gait analysis. A reduction in mobility as measured by open field is not apparent at younger ages nor during the critical period, but changes in gait are apparent in the mutant MDX mice. These gait changes coincide with pronounced morphological and metabolic changes, detected by non-invasive anatomical MRI and proton spectroscopy (1H-MRS), which we have reported elsewhere. Evidence of a progressive asymmetric pathology in imaging parameters as well as in the kinematic gait analysis was found. Taken together, the data show that a chronic exercise regime exacerbates the muscle damage beyond the critical period, and that the ability to measure this damage through non-invasive means is an important factor to consider when performing preclinical efficacy studies in the MDX mouse.Keywords: gait, muscular dystrophy, kinematic analysis, neuromuscular disease
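The abstract reports progressive asymmetric pathology detected through gait. One standard way to quantify left/right asymmetry of a gait parameter is a normalized asymmetry index; the sketch below uses hypothetical stride lengths, since the authors' novel analysis algorithm is not described:

```python
import numpy as np

def asymmetry_index(left, right):
    """Normalized asymmetry index in percent: 0 means perfect symmetry.
    A standard form; the study's proprietary gait algorithm is not public."""
    left, right = np.asarray(left, float), np.asarray(right, float)
    return 100.0 * (right - left) / (0.5 * (right + left))

# Hypothetical hind-limb stride lengths (cm) for one treadmill session
left_strides = [5.1, 5.0, 4.8, 5.2, 4.9]
right_strides = [5.6, 5.7, 5.5, 5.8, 5.6]

ai = asymmetry_index(left_strides, right_strides)
print(f"mean asymmetry index: {ai.mean():+.1f}%  (per-stride: {np.round(ai, 1)})")
```

Tracking such an index across the 10-week to 12-month timepoints is one plausible way a progressive asymmetry would surface earlier in gait data than in open-field mobility counts.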
Procedia PDF Downloads 277148 Poly (3,4-Ethylenedioxythiophene) Prepared by Vapor Phase Polymerization for Stimuli-Responsive Ion-Exchange Drug Delivery
Authors: M. Naveed Yasin, Robert Brooke, Andrew Chan, Geoffrey I. N. Waterhouse, Drew Evans, Darren Svirskis, Ilva D. Rupenthal
Abstract:
Poly(3,4-ethylenedioxythiophene) (PEDOT) is a robust conducting polymer (CP) exhibiting high conductivity and environmental stability. It can be synthesized by either chemical, electrochemical or vapour phase polymerization (VPP). Dexamethasone sodium phosphate (dexP) is an anionic drug molecule which has previously been loaded onto PEDOT as a dopant via electrochemical polymerisation; however, this technique requires conductive surfaces from which polymerization is initiated. On the other hand, VPP produces highly organized biocompatible CP structures, while polymerization can be achieved onto a range of surfaces with a relatively straightforward scale-up process. Following VPP of PEDOT, dexP can be loaded and subsequently released via ion-exchange. This study aimed at preparing and characterising both non-porous and porous VPP PEDOT structures, including examining drug loading and release via ion-exchange. Porous PEDOT structures were prepared by first depositing a sacrificial polystyrene (PS) colloidal template on a substrate, heat curing this deposition and then spin coating it with the oxidant solution (iron tosylate) at 1500 rpm for 20 sec. VPP of both porous and non-porous PEDOT was achieved by exposure to monomer vapours in a vacuum oven at 40 mbar and 40 °C for 3 hrs. Non-porous structures were prepared similarly on the same substrate but without any sacrificial template. Surface morphology, compositions and behaviour were then characterized by atomic force microscopy (AFM), scanning electron microscopy (SEM), X-ray photoelectron spectroscopy (XPS) and cyclic voltammetry (CV), respectively. Drug loading was achieved by 50 CV cycles in a 0.1 M dexP aqueous solution. For drug release, each sample was exposed to 20 mL of phosphate-buffered saline (PBS) placed in a water bath operating at 37 °C and 100 rpm. The film was stimulated (continuous pulse of ± 1 V at 0.5 Hz for 17 mins) while immersed in PBS. Samples were collected at 1, 2, 6, 23, 24, 26 and 27 hrs and were analysed for dexP by high performance liquid chromatography (HPLC Agilent 1200 series). AFM and SEM revealed the honeycomb nature of the prepared porous structures. XPS data showed the elemental composition of the dexP-loaded film surface, which related well with that of PEDOT and also showed that one dexP molecule was present per approximately three EDOT monomer units. The reproducible electroactive nature was shown by several cycles of reduction and oxidation via CV. Drug release revealed success in drug loading via ion-exchange, with stimulated porous and non-porous structures exhibiting a proof-of-concept burst release upon application of an electrical stimulus. A similar drug release pattern was observed for porous and non-porous structures without any significant statistical difference, possibly due to the thin nature of these structures. To our knowledge, this is the first report to explore the potential of VPP-prepared PEDOT for stimuli-responsive drug delivery via ion-exchange. The produced porous structures were ordered and highly porous as indicated by AFM and SEM. These porous structures exhibited good electroactivity as shown by CV. Future work will investigate porous structures as nano-reservoirs to increase drug loading while sealing these structures to minimize spontaneous drug leakage.Keywords: PEDOT for ion-exchange drug delivery, stimuli-responsive drug delivery, template based porous PEDOT structures, vapour phase polymerization of PEDOT
Procedia PDF Downloads 232147 Simulation-based Decision Making on Intra-hospital Patient Referral in a Collaborative Medical Alliance
Authors: Yuguang Gao, Mingtao Deng
Abstract:
The integration of independently operating hospitals into a unified healthcare service system has become a strategic imperative in the pursuit of hospitals’ high-quality development. Central to the concept of group governance over such transformation, exemplified by a collaborative medical alliance, is the delineation of shared value, vision, and goals. Given the inherent disparity in capabilities among hospitals within the alliance, particularly in the treatment of different diseases characterized by Disease Related Groups (DRG) in terms of effectiveness, efficiency and resource utilization, this study aims to address the centralized decision-making of intra-hospital patient referral within the medical alliance to enhance the overall production and quality of service provided. We first introduce the notion of production utility, where a higher production utility for a hospital implies better performance in treating patients diagnosed with that specific DRG group of diseases. Then, a Discrete-Event Simulation (DES) framework is established for patient referral among hospitals, where patient flow modeling incorporates a queueing system with fixed capacities for each hospital. The simulation study begins with a two-member alliance. The pivotal strategy examined is a "whether-to-refer" decision triggered when the bed usage rate surpasses a predefined threshold for either hospital. Then, the decision encompasses referring patients to the other hospital based on DRG groups’ production utility differentials as well as bed availability. The objective is to maximize the total production utility of the alliance while minimizing patients’ average length of stay and turnover rate. Thus the parameter under scrutiny is the bed usage rate threshold, influencing the efficacy of the referral strategy. Extending the study to a three-member alliance, which could readily be generalized to multi-member alliances, we maintain the core setup while introducing an additional “which-to-refer" decision that involves referring patients with specific DRG groups to the member hospital according to their respective production utility rankings. The overarching goal remains consistent, for which the bed usage rate threshold is once again a focal point for analysis. For the two-member alliance scenario, our simulation results indicate that the optimal bed usage rate threshold hinges on the discrepancy in the number of beds between member hospitals, the distribution of DRG groups among incoming patients, and variations in production utilities across hospitals. Transitioning to the three-member alliance, we observe similar dependencies on these parameters. Additionally, it becomes evident that an imbalanced distribution of DRG diagnoses and further disparity in production utilities among member hospitals may lead to an increase in the turnover rate. In general, it was found that the intra-hospital referral mechanism enhances the overall production utility of the medical alliance compared to individual hospitals without partnership. Patients’ average length of stay is also reduced, showcasing the positive impact of the collaborative approach. However, the turnover rate exhibits variability based on parameter setups, particularly when patients are redirected within the alliance. 
In conclusion, the re-structuring of diagnostic disease groups within the medical alliance proves instrumental in improving overall healthcare service outcomes, providing a compelling rationale for the government's promotion of patient referrals within collaborative medical alliances.Keywords: collaborative medical alliance, disease related group, patient referral, simulation
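A minimal discrete-event sketch of the two-member "whether-to-refer" rule described above, written with the SimPy library, is shown below. The capacities, arrival and length-of-stay distributions, and production utilities are invented for illustration, and the utility-differential test between DRG groups is simplified away:

```python
import random
import simpy

THRESHOLD = 0.85   # bed usage rate that triggers the "whether-to-refer" check
UTILITY = {"A": 1.0, "B": 0.8}   # hypothetical production utilities, one DRG group

def patient(env, hospitals, stats):
    home, other = "A", "B"
    # Refer when the home hospital is above the threshold and the partner
    # has a free bed (utility differentials omitted for brevity).
    beds = hospitals[home]
    target = home
    if (beds.count / beds.capacity >= THRESHOLD
            and hospitals[other].count < hospitals[other].capacity):
        target = other
    with hospitals[target].request() as bed:
        yield bed
        yield env.timeout(random.expovariate(1 / 5.0))  # mean stay: 5 days
        stats["utility"] += UTILITY[target]
        stats["referred"] += (target != home)

def arrivals(env, hospitals, stats):
    while True:
        yield env.timeout(random.expovariate(1 / 0.1))  # ~10 arrivals/day
        env.process(patient(env, hospitals, stats))

random.seed(1)
env = simpy.Environment()
hospitals = {"A": simpy.Resource(env, capacity=40),
             "B": simpy.Resource(env, capacity=25)}
stats = {"utility": 0.0, "referred": 0}
env.process(arrivals(env, hospitals, stats))
env.run(until=365)
print(f"total production utility: {stats['utility']:.0f}, "
      f"referrals: {stats['referred']}")
```

Sweeping THRESHOLD in such a model is the kind of experiment the abstract describes: each threshold value yields a total utility, average length of stay, and turnover rate that can then be compared across alliance configurations.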
Procedia PDF Downloads 60146 Enhancing Early Detection of Coronary Heart Disease Through Cloud-Based AI and Novel Simulation Techniques
Authors: Md. Abu Sufian, Robiqul Islam, Imam Hossain Shajid, Mahesh Hanumanthu, Jarasree Varadarajan, Md. Sipon Miah, Mingbo Niu
Abstract:
Coronary Heart Disease (CHD) remains a principal cause of global morbidity and mortality, characterized by atherosclerosis, the build-up of fatty deposits inside the arteries. The study introduces an innovative methodology that leverages cloud-based platforms like AWS Live Streaming and Artificial Intelligence (AI) to detect CHD symptoms early and prevent their progression via web applications. By employing novel simulation processes and AI algorithms, this research aims to significantly mitigate the health and societal impacts of CHD. Methodology: This study introduces a novel simulation process alongside a multi-phased model development strategy. Initially, health-related data, including heart rate variability, blood pressure, lipid profiles, and ECG readings, were collected through user interactions with web-based applications as well as API integration. The novel simulation process involved creating synthetic datasets that mimic early-stage CHD symptoms, allowing for the refinement and training of AI algorithms under controlled conditions without compromising patient privacy. AWS Live Streaming was utilized to capture real-time health data, which was then processed and analysed using advanced AI techniques. The novel aspect of our methodology lies in the simulation of CHD symptom progression, which provides a dynamic training environment for our AI models, enhancing their predictive accuracy and robustness. Model Development: A machine learning model was developed and trained on both real and simulated datasets, incorporating a variety of algorithms, including neural networks and ensemble learning models, to identify early signs of CHD. The model's continuous learning mechanism allows it to evolve, adapting to new data inputs and improving its predictive performance over time. Results and Findings: The deployment of our model yielded promising results. In the validation phase, it achieved an accuracy of 92% in predicting early CHD symptoms, surpassing existing models. The precision and recall metrics stood at 89% and 91%, respectively, indicating a high level of reliability in identifying at-risk individuals. These results underscore the effectiveness of combining live data streaming with AI in the early detection of CHD. Societal Implications: The implementation of cloud-based AI for CHD symptom detection represents a significant step forward in preventive healthcare. By facilitating early intervention, this approach has the potential to reduce the incidence of CHD-related complications, decrease healthcare costs, and improve patient outcomes. Moreover, the accessibility and scalability of cloud-based solutions democratize advanced health monitoring, making it available to a broader population. This study illustrates the transformative potential of integrating technology and healthcare, setting a new standard for the early detection and management of chronic diseases.Keywords: coronary heart disease, cloud-based AI, machine learning, novel simulation techniques, early detection, preventive healthcare
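The "novel simulation process" is described as generating synthetic datasets that mimic early-stage CHD symptom progression. A minimal sketch of one way such data could be synthesized is given below; the chosen vitals, drift rates, and distributions are hypothetical stand-ins, not a clinical model:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_patient(days=30, early_chd=False):
    """Simulate daily vitals; the drift applied to the early-CHD group is a
    hypothetical stand-in for symptom progression."""
    t = np.arange(days)
    hrv = rng.normal(50, 5, days)          # heart rate variability (ms, SDNN-like)
    systolic = rng.normal(120, 8, days)    # systolic blood pressure (mmHg)
    if early_chd:
        hrv -= 0.4 * t                     # gradual HRV decline
        systolic += 0.5 * t                # gradual BP rise
    return np.column_stack([hrv, systolic])

# Build a labeled synthetic cohort suitable for model training
labels = rng.integers(0, 2, size=500)
X = np.stack([simulate_patient(early_chd=bool(lbl)) for lbl in labels])
y = labels
print(X.shape, y.mean())   # (500, 30, 2) time-series patients, ~50% positive
```

Time-series of this shape are what a sequence model (such as the LSTM-style networks common in this setting) would consume; real streamed vitals would replace the synthetic generator at deployment.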
Procedia PDF Downloads 67145 Mineralized Nanoparticles as a Contrast Agent for Ultrasound and Magnetic Resonance Imaging
Authors: Jae Won Lee, Kyung Hyun Min, Hong Jae Lee, Sang Cheon Lee
Abstract:
To date, imaging techniques have attracted much attention in medicine because the detection of diseases at an early stage provides greater opportunities for successful treatment. Consequently, over the past few decades, diverse imaging modalities including magnetic resonance (MR), positron emission tomography, computed tomography, and ultrasound (US) have been developed and applied widely in the field of clinical diagnosis. However, each of the above-mentioned imaging modalities possesses unique strengths and intrinsic weaknesses, which limit their abilities to provide accurate information. Therefore, multimodal imaging systems may be a solution that can provide improved diagnostic performance. Among the current medical imaging modalities, US is a widely available real-time imaging modality. It has many advantages including safety, low cost and easy access for patients. However, its low spatial resolution precludes accurate discrimination of diseased regions such as cancer sites. In contrast, MR has no tissue-penetrating limit and can provide images possessing exquisite soft tissue contrast and high spatial resolution. However, it cannot offer real-time images and needs a comparatively long imaging time. The characteristics of these imaging modalities may be considered complementary, and the modalities have been frequently combined for the clinical diagnostic process. Biominerals such as calcium carbonate (CaCO3) and calcium phosphate (CaP) exhibit pH-dependent dissolution behavior. They demonstrate pH-controlled drug release due to the dissolution of minerals in acidic pH conditions. In particular, the application of this mineralization technique to a US contrast agent has been reported recently. The CaCO3 mineral reacts with acids and decomposes to generate carbon dioxide (CO2) gas in an acidic environment. These gas-generating mineralized nanoparticles generated CO2 bubbles in the acidic environment of the tumor, thereby allowing for strong echogenic US imaging of tumor tissues. On the basis of this previous work, it was hypothesized that the loading of MR contrast agents into the CaCO3 mineralized nanoparticles may be a novel strategy in designing a contrast agent for dual imaging. Herein, CaCO3 mineralized nanoparticles that were capable of generating CO2 bubbles to trigger the release of entrapped MR contrast agents in response to tumoral acidic pH were developed for the purposes of US and MR dual-modality imaging of tumors. Gd2O3 nanoparticles were selected as an MR contrast agent. A key strategy employed in this study was to prepare Gd2O3 nanoparticle-loaded mineralized nanoparticles (Gd2O3-MNPs) using block copolymer-templated CaCO3 mineralization in the presence of calcium cations (Ca2+), carbonate anions (CO32-) and positively charged Gd2O3 nanoparticles. The CaCO3 core was considered suitable because it may effectively shield Gd2O3 nanoparticles from water molecules in the blood (pH 7.4) before decomposing to generate CO2 gas, triggering the release of Gd2O3 nanoparticles in tumor tissues (pH 6.4~7.4). The kinetics of CaCO3 dissolution and CO2 generation from the Gd2O3-MNPs were examined as a function of pH; additionally, the pH-dependent in vitro magnetic relaxation and the echogenic properties were estimated to demonstrate the potential of the particles for tumor-specific US and MR imaging.Keywords: calcium carbonate, mineralization, ultrasound imaging, magnetic resonance imaging
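The pH-triggered behavior described above can be caricatured with a first-order dissolution model whose rate constant grows as the pH drops below blood pH. All parameters below are illustrative and are not fitted to the Gd2O3-MNP measurements:

```python
import numpy as np

def dissolved_fraction(t_hours, pH, k_ref=0.02, n=0.5):
    """First-order dissolution with an acidity-dependent rate constant:
        k(pH) = k_ref * ([H+] / [H+]_ref)**n,  with [H+]_ref at blood pH 7.4.
    k_ref and n are illustrative assumptions, not measured values."""
    k = k_ref * (10 ** (7.4 - pH)) ** n
    return 1.0 - np.exp(-k * t_hours)

t = np.linspace(0, 24, 7)
for ph in (7.4, 6.8, 6.4):
    frac = dissolved_fraction(t, ph)
    print(f"pH {ph}: dissolved fraction at t={t.astype(int)} h -> {np.round(frac, 2)}")
```

Even this toy model reproduces the qualitative design goal: near-negligible dissolution (and hence contrast-agent retention) at blood pH, with progressively faster CO2 generation and Gd2O3 release as the pH falls toward the tumoral range.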
Procedia PDF Downloads 240144 Upflow Anaerobic Sludge Blanket Reactor Followed by Dissolved Air Flotation Treating Municipal Sewage
Authors: Priscila Ribeiro dos Santos, Luiz Antonio Daniel
Abstract:
Inadequate access to clean water and sanitation has become one of the most widespread problems affecting people throughout the developing world, leading to an unceasing need for low-cost and sustainable wastewater treatment systems. The UASB technology has been widely employed as a suitable and economical option for the treatment of sewage in developing countries, which involves low initial investment, low energy requirements, low operation and maintenance costs, high loading capacity, short hydraulic retention times, long solids retention times and low sludge production. The dissolved air flotation process, in turn, is a good option for the post-treatment of anaerobic effluents, being capable of producing high-quality effluents in terms of total suspended solids, chemical oxygen demand, phosphorus, and even pathogens. This work presents an evaluation and monitoring, over a period of 6 months, of one compact full-scale system with this configuration, UASB reactors followed by dissolved air flotation units (DAF), operating in Brazil. It proved to be a successful treatment system, a finding of relevance since dissolved air flotation treating UASB reactor effluents is not widely encompassed in the literature. The study covered the removal and behavior of several variables, such as turbidity, total suspended solids (TSS), chemical oxygen demand (COD), Escherichia coli, total coliforms and Clostridium perfringens. The physicochemical variables were analyzed according to the protocols established by the Standard Methods for Examination of Water and Wastewater. For microbiological variables, such as Escherichia coli and total coliforms, the “pour plate” technique was used, with Chromocult Coliform Agar (Merck Cat. No. 1.10426) serving as the culture medium, while the microorganism Clostridium perfringens was analyzed through the membrane filtration technique, with Agar m-CP (Oxoid Ltd, England) serving as the culture medium. Approximately 74% of total COD was removed in the UASB reactor, and the complementary removal achieved during the flotation process resulted in 88% of COD removal from the raw sewage; thus, the initial COD concentration of 729 mg.L-1 decreased to 87 mg.L-1. In terms of particulate COD, the overall removal efficiency for the whole system was about 94%, decreasing from 375 mg.L-1 in raw sewage to 29 mg.L-1 in the final effluent. The UASB reactor removed on average 77% of the TSS from raw sewage, while the dissolved air flotation process did not work as expected, removing only 30% of TSS from the anaerobic effluent. The final effluent presented an average TSS concentration of 38 mg.L-1. The turbidity was significantly reduced, leading to an overall removal efficiency of 80% and a final turbidity of 28 NTU. The treated effluent still presented a high concentration of fecal pollution indicators (E. coli, total coliforms, and Clostridium perfringens), showing that the system did not perform well in removing pathogens. Clostridium perfringens was the organism that underwent the highest removal by the treatment system. The results can be considered satisfactory for the physicochemical variables, taking into account the simplicity of the system; however, a post-treatment step is necessary to improve the microbiological quality of the final effluent.Keywords: dissolved air flotation, municipal sewage, UASB reactor, treatment
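The removal efficiencies quoted above follow from the standard definition E = (C_in − C_out) / C_in × 100. A quick check against the reported concentrations (the raw turbidity is back-calculated below, since it is not reported in the abstract):

```python
def removal_efficiency(c_in, c_out):
    """Percent removal: E = (C_in - C_out) / C_in * 100."""
    return 100.0 * (c_in - c_out) / c_in

# Concentrations reported in the abstract (mg/L)
print(f"overall COD:     {removal_efficiency(729, 87):.1f}%")   # abstract: 88%
print(f"particulate COD: {removal_efficiency(375, 29):.1f}%")   # abstract: ~94% (rounding differs)

# Raw turbidity is not reported; 80% removal ending at 28 NTU would imply:
print(f"implied raw turbidity: {28 / (1 - 0.80):.0f} NTU")
```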
Procedia PDF Downloads 333143 Deep Learning Based Polarimetric SAR Images Restoration
Authors: Hossein Aghababaei, Sergio Vitale, Giampaolo Ferraioli
Abstract:
In the context of Synthetic Aperture Radar (SAR) data, polarization is an important source of information for Earth's surface monitoring. SAR systems are often designed to transmit only one polarization. This constraint leads to either single or dual polarimetric SAR imaging modalities. Single polarimetric systems operate with a fixed single polarization of both transmitted and received electromagnetic (EM) waves, resulting in a single acquisition channel. Dual polarimetric systems, on the other hand, transmit in one fixed polarization and receive in two orthogonal polarizations, resulting in two acquisition channels. Dual polarimetric systems are obviously more informative than single polarimetric systems and are increasingly being used for a variety of remote sensing applications. In dual polarimetric systems, the choice of polarizations for the transmitter and the receiver is open. The choice of circular transmit polarization and coherent dual linear receive polarizations forms a special dual polarimetric system called hybrid polarimetry, which brings the properties of rotational invariance to geometrical orientations of features in the scene and optimizes the design of the radar in terms of reliability, mass, and power constraints. The complete characterization of target scattering, however, requires fully polarimetric data, which can be acquired with systems that transmit two orthogonal polarizations. This adds further complexity to data acquisition and shortens the coverage area or swath of fully polarimetric images compared to the swath of dual or hybrid polarimetric images. Solutions that augment dual polarimetric data to full polarimetric data would therefore allow full characterization and exploitation of the backscattered field over a wider coverage with less system complexity. Several methods for reconstructing fully polarimetric images using hybrid polarimetric data can be found in the literature. Although the improvements achieved by the newly investigated and experimentally tested reconstruction techniques are undeniable, the existing methods are, however, mostly based upon model assumptions (especially the assumption of reflection symmetry), which may limit their reliability and applicability to vegetation and forest scenarios. To overcome the problems of these techniques, this paper proposes a new framework for reconstructing fully polarimetric information from hybrid polarimetric data. The framework uses Deep Learning solutions to augment hybrid polarimetric data without relying on model assumptions. A convolutional neural network (CNN) with a specific architecture and loss function is defined for this augmentation problem by focusing on different scattering properties of the polarimetric data. In particular, the method controls the CNN training process with respect to several characteristic features of polarimetric images defined by the combination of different terms in the cost or loss function. The proposed method is experimentally validated with real data sets and compared with a well-known and standard approach from the literature. From the experiments, the reconstruction performance of the proposed framework is superior to conventional reconstruction methods.
The pseudo fully polarimetric data reconstructed by the proposed method also agree well with the actual fully polarimetric images acquired by radar systems, confirming the reliability and efficiency of the proposed method.Keywords: SAR image, deep learning, convolutional neural network, deep neural network, SAR polarimetry
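A sketch of what such an augmentation network and composite loss could look like in PyTorch is given below. The channel counts, depth, and loss weighting are assumptions chosen for illustration; the paper's exact architecture and loss terms are not reproduced here:

```python
import torch
import torch.nn as nn

class PolAugmentCNN(nn.Module):
    """Sketch of a dual-to-full polarimetric augmentation network.
    Channel counts and depth are assumptions, not the paper's architecture."""
    def __init__(self, in_ch=4, out_ch=9):
        # in: dual-pol channels (2 complex -> 4 real); out: full-pol covariance
        # terms (3 real diagonals + 3 complex off-diagonals -> 9 real bands)
        super().__init__()
        layers, ch = [], in_ch
        for width in (64, 64, 64):
            layers += [nn.Conv2d(ch, width, 3, padding=1), nn.ReLU()]
            ch = width
        layers += [nn.Conv2d(ch, out_ch, 3, padding=1)]
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

def composite_loss(pred, target, alpha=1.0, beta=0.1):
    """Pixel fidelity plus a term on total backscattered power (span);
    the combination of terms mirrors the idea of controlling training with
    several polarimetric features, with illustrative weights."""
    span_pred = pred[:, :3].sum(dim=1)     # sum of diagonal (intensity) bands
    span_true = target[:, :3].sum(dim=1)
    return (alpha * nn.functional.l1_loss(pred, target)
            + beta * nn.functional.mse_loss(span_pred, span_true))

model = PolAugmentCNN()
x = torch.randn(2, 4, 64, 64)   # a batch of dual-pol patches
y = torch.randn(2, 9, 64, 64)   # corresponding full-pol reference patches
loss = composite_loss(model(x), y)
loss.backward()
print(loss.item())
```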
Procedia PDF Downloads 94142 Learning Curve Effect on Materials Procurement Schedule of Multiple Sister Ships
Authors: Vijaya Dixit, Aasheesh Dixit
Abstract:
Shipbuilding industry operates in an Engineer Procure Construct (EPC) context. The product mix of a shipyard comprises various types of ships like bulk carriers, tankers, barges, coast guard vessels, submarines, etc. Each order is unique based on the type of ship and customized requirements, which are engineered into the product right from the design stage. Thus, to execute every new project, a shipyard needs to upgrade its production expertise. As a result, over the long run, holistic learning occurs across different types of projects, which contributes to the knowledge base of the shipyard. Simultaneously, in the short term, during execution of a project comprising multiple sister ships, repetition of similar tasks leads to learning at the activity level. This research aims to capture both of these learnings of a shipyard and incorporate the learning curve effect in project scheduling and materials procurement to improve project performance. Extant literature provides support for the existence of such learnings in an organization. In shipbuilding, there are sequences of similar activities which are expected to exhibit learning curve behavior, for example, the nearly identical structural sub-blocks which are successively fabricated, erected, and outfitted with piping and electrical systems. Learning curve representation can model not only a decrease in the mean completion time of an activity, but also a decrease in the uncertainty of activity duration. Sister ships have similar material requirements. The same supplier base supplies materials for all the sister ships within a project. On one hand, this provides an opportunity to reduce transportation cost by batching the order quantities of multiple ships. On the other hand, it increases the inventory holding cost at the shipyard and the risk of obsolescence. Further, due to the learning curve effect, the production schedule of each subsequent ship gets compressed. Thus, the material requirement schedule of every next ship differs from that of its previous ship. As more and more ships get constructed, compressed production schedules increase the possibility of batching the orders of sister ships. This work aims at integrating materials management with project scheduling of long-duration projects for the manufacturing of multiple sister ships. It incorporates the learning curve effect on progressively compressing material requirement schedules and addresses the above trade-off of transportation cost and inventory holding and shortage costs while satisfying budget constraints of various stages of the project. The activity durations and lead times of items are not crisp and are available in the form of probability distributions. A Stochastic Mixed Integer Programming (SMIP) model is formulated, which is solved using an evolutionary algorithm. Its output provides ordering dates of items and the degree of order batching for all types of items. Sensitivity analysis determines the threshold number of sister ships required in a project to leverage the advantage of the learning curve effect in materials management decisions. This analysis will help materials managers to gain insights about the scenarios: when and to what degree is it beneficial to treat a multiple-ship project as an integrated one by batching the order quantities, and when and to what degree to practice distinctive procurement for individual ships.Keywords: learning curve, materials management, shipbuilding, sister ships
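The activity-level learning described above is conventionally modeled with the log-linear learning curve T_n = T_1 · n^b, where b = log2(r) for a learning rate r. The sketch below shows how successive sister-ship activity durations, and hence the material requirement dates derived from them, compress; the 90% learning rate and the first-unit duration are illustrative, not the study's estimates:

```python
import math

def unit_duration(t_first, ship_index, learning_rate=0.9):
    """Log-linear learning curve: T_n = T_1 * n**b with b = log2(learning_rate).
    A 90% rate means each doubling of the ship count cuts the unit
    duration to 90% of its previous value (illustrative assumption)."""
    b = math.log(learning_rate, 2)
    return t_first * ship_index ** b

t_first = 120.0   # hypothetical duration (days) of one activity on ship 1
for n in range(1, 7):
    start = sum(unit_duration(t_first, k) for k in range(1, n))  # serial schedule
    print(f"ship {n}: activity duration {unit_duration(t_first, n):6.1f} days, "
          f"materials needed {start:7.1f} days after project start")
```

As the printout shows, material requirement dates for later sister ships creep closer together, which is precisely what makes order batching increasingly attractive as the number of ships in the project grows.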
Procedia PDF Downloads 502141 A Bioinspired Anti-Fouling Coating for Implantable Medical Devices
Authors: Natalie Riley, Anita Quigley, Robert M. I. Kapsa, George W. Greene
Abstract:
As the fields of medicine and bionics grow rapidly in technological advancement, their future success depends on the ability to effectively interface between the artificial and the biological worlds. The biggest obstacle when it comes to implantable, electronic medical devices is maintaining a ‘clean’, low-noise electrical connection that allows for efficient sharing of electrical information between the artificial and biological systems. Implant fouling occurs with the adhesion and accumulation of proteins and various cell types as a result of the body's immune response to the foreign object, essentially forming an electrical insulation barrier that often leads to implant failure over time. Lubricin (LUB) functions as a major boundary lubricant in articular joints; it is a unique glycoprotein with impressive anti-adhesive properties that self-assembles on virtually any substrate to form a highly ordered, ‘telechelic’ polymer brush. LUB does not passivate electroactive surfaces, which, along with its innate biocompatibility, makes it ideal as a coating for implantable bionic electrodes. It is the aim of the study to investigate LUB’s anti-fouling properties and its potential as a safe, bioinspired material for coating applications to enhance the performance and longevity of implantable medical devices as well as to reduce the frequency of implant replacement surgeries. Native, bovine-derived LUB (N-LUB) and recombinant LUB (R-LUB) were applied to gold-coated mylar surfaces. Fibroblast, chondrocyte and neural cell types were cultured and grown on the coatings under both passive and electrically stimulated conditions to test the stability and anti-adhesive property of the LUB coating in the presence of an electric field. Lactate dehydrogenase (LDH) assays were conducted as a directly proportional cell population count on each surface, along with immunofluorescent microscopy to visualize cells. One-way analysis of variance (ANOVA) with post-hoc Tukey’s test was used to test for statistical significance. Under both passive and electrically stimulated conditions, LUB significantly reduced cell attachment compared to bare gold. Comparing the two coating types, R-LUB reduced cell attachment significantly compared to its native counterpart. Immunofluorescent micrographs visually confirmed LUB’s anti-adhesive property, with R-LUB consistently demonstrating significantly fewer attached cells for both fibroblasts and chondrocytes. Preliminary results investigating neural cells have so far demonstrated that R-LUB has little effect on reducing neural cell attachment; the study is ongoing. Recombinant LUB coatings demonstrated impressive anti-adhesive properties, reducing cell attachment in fibroblasts and chondrocytes. These findings and the availability of recombinant LUB bring into question the results of previous experiments conducted using native-derived LUB, whose potential may not have been adequately represented nor realized due to unknown factors and impurities that warrant further study. R-LUB is stable and maintains its anti-fouling property under electrical stimulation, making it suitable for electroactive surfaces.Keywords: anti-fouling, bioinspired, cell attachment, lubricin
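A minimal sketch of the statistical pipeline named above, a one-way ANOVA followed by post-hoc Tukey's test on LDH-derived cell counts, is shown below; the group means and sample sizes are synthetic placeholders, not the study's measurements:

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(7)
# Synthetic LDH-derived cell counts (arbitrary units), n=6 wells per surface
gold  = rng.normal(100, 10, 6)   # bare gold
n_lub = rng.normal(60, 10, 6)    # native bovine LUB coating
r_lub = rng.normal(35, 10, 6)    # recombinant LUB coating

f_stat, p_val = stats.f_oneway(gold, n_lub, r_lub)
print(f"one-way ANOVA: F={f_stat:.1f}, p={p_val:.2g}")

# Post-hoc pairwise comparisons with Tukey's HSD
counts = np.concatenate([gold, n_lub, r_lub])
groups = ["gold"] * 6 + ["N-LUB"] * 6 + ["R-LUB"] * 6
print(pairwise_tukeyhsd(counts, groups))
```

The Tukey table is what supports claims of the form "R-LUB reduced attachment significantly compared to its native counterpart": each pairwise row reports whether the mean difference between two surfaces is significant after correcting for multiple comparisons.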
Procedia PDF Downloads 125140 Librarian Liaisons: Facilitating Multi-Disciplinary Research for Academic Advancement
Authors: Tracey Woods
Abstract:
In the ever-evolving landscape of academia, the traditional role of the librarian has undergone a remarkable transformation. Once considered custodians of books and gatekeepers of information, librarians have the potential to take on the vital role of facilitators of cross- and inter-disciplinary projects. This shift is driven by the growing recognition of the value of interdisciplinary collaboration in addressing complex research questions in pursuit of novel solutions to real-world problems. This paper shall explore the potential of the academic librarian’s role in facilitating innovative, multi-disciplinary projects, both recognising and validating the vital role that the librarian plays in a somewhat underplayed profession. Academic libraries support teaching, the strengthening of knowledge discourse, and, potentially, the development of innovative practices. As the role of the library gradually morphs from a quiet repository of books to a community-based information hub, a potential opportunity arises. The academic librarian’s role is to build knowledge across a wide span of topics, from the advancement of AI to subject-specific information, and, whilst librarians are generally not offered the research opportunities and funding that the traditional academic disciplines enjoy, they are often invited to help build research in support of the academic. This suggests that one of the primary skills of any 21st-century librarian must be the ability to collaborate on and facilitate multi-disciplinary projects. In universities seeking to develop research diversity and academic performance, there is an increasing awareness of the need for collaboration between faculties to enable novel directions and advancements. This idea has been documented and discussed by several researchers; however, there is not a great deal of literature available from recent studies. Having a team based in the library that is adept at creating effective collaborative partnerships is valuable for any academic institution. This paper outlines the development of such a project, initiated within and around an identified library-specific need: the replication of fragile special collections for object-based learning. The research was developed as a multi-disciplinary project involving the faculties of engineering (digital twins lab), architecture, design, and education. Centred around methods for developing a fragile archive into a series of tactile objects, the project furthers knowledge and understanding of the role of the library as a facilitator of projects, chairing and supporting, while also contributing to the research process and generating ideas through the bank of knowledge found amongst the staff and their liaising capabilities. This paper shall present the method of project development, from the initiation of ideas to the development of prototypes and the dissemination of the objects to teaching departments for analysis. The exact replication of artefacts is also balanced against the adaptations and evolutionary speculations introduced by the design team when the method is adapted for the teaching studio. The dynamic response required from the library to generate and facilitate these multi-disciplinary projects highlights the information expertise and liaison skills that the librarian possesses.
As academia embraces this evolution, the potential for groundbreaking discoveries and innovative solutions across disciplines becomes increasingly attainable.Keywords: liaison librarian, multi-disciplinary collaborations, library innovations, librarian stakeholders
Procedia PDF Downloads 76139 Techno-Economic Assessment of Distributed Heat Pumps Integration within a Swedish Neighborhood: A Cosimulation Approach
Authors: Monica Arnaudo, Monika Topel, Bjorn Laumert
Abstract:
Within the Swedish context, the current trend of relatively low electricity prices promotes the electrification of the energy infrastructure. The residential heating sector takes part in this transition by proposing a switch from a centralized district heating system towards a distributed heat pumps-based setting. When it comes to urban environments, two issues arise. The first, seen from an electricity-sector perspective, is related to the fact that existing networks are limited with regard to their installed capacities. Additional electric loads, such as heat pumps, can cause severe overloads on crucial network elements. The second, seen from a heating-sector perspective, has to do with the fact that the indoor comfort conditions can become difficult to maintain when the operation of the heat pumps is limited by a risk of overloading the distribution grid. Furthermore, the uncertainty of future electricity market prices introduces an additional variable. This study aims at assessing the extent to which distributed heat pumps can penetrate an existing heat energy network while respecting the technical limitations of the electricity grid and the thermal comfort levels in the buildings. In order to account for the multi-disciplinary nature of this research question, a cosimulation modeling approach was adopted. In this way, each energy technology is modeled in its customized simulation environment. As part of the cosimulation methodology, a steady-state power flow analysis in pandapower was used for modeling the electrical distribution grid, a thermal balance model of a reference building was implemented in EnergyPlus to account for space heating, and a fluid-cycle model of a heat pump was implemented in JModelica to account for the actual heating technology. With the models set in place, different scenarios based on forecasted electricity market prices were developed for both present and future conditions of Hammarby Sjöstad, a neighborhood located in the south-east of Stockholm (Sweden). For each scenario, the technical and comfort conditions were assessed. Additionally, the average cost of heat generation was estimated in terms of the levelized cost of heat. This indicator enables a techno-economic comparison among the different scenarios. In order to evaluate the levelized cost of heat, a yearly performance simulation of the energy infrastructure was implemented. The scenarios related to the current electricity prices show that distributed heat pumps can replace the district heating system by covering up to 30% of the heating demand. By lowering the minimum accepted indoor temperature of the apartments by 2°C, this level of penetration can increase up to 40%. In the future scenarios, if electricity prices increase, as is most likely expected within the next decade, the penetration of distributed heat pumps may be limited to 15%. In terms of the levelized cost of heat, residential heat pump technology becomes competitive only within a scenario of decreasing electricity prices. In this case, a district heating system is characterized by an average cost of heat generation 7% higher compared to a distributed heat pumps option.Keywords: cosimulation, distributed heat pumps, district heating, electrical distribution grid, integrated energy systems
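A toy version of the grid-side check described above can be set up in pandapower, the tool named in the abstract. The feeder below is invented (it is not the Hammarby Sjöstad network data); it simply illustrates how an added heat-pump load can be swept while line loading and bus voltage are checked against the grid's technical limits:

```python
import pandapower as pp

# Toy low-voltage feeder; all parameters are illustrative assumptions.
net = pp.create_empty_network()
mv = pp.create_bus(net, vn_kv=10.0)
lv = pp.create_bus(net, vn_kv=0.4)
b1 = pp.create_bus(net, vn_kv=0.4)
pp.create_ext_grid(net, mv)
pp.create_transformer(net, mv, lv, std_type="0.4 MVA 10/0.4 kV")
pp.create_line(net, lv, b1, length_km=0.3, std_type="NAYY 4x150 SE")

# Baseline household load plus an increasing block of heat pumps
pp.create_load(net, b1, p_mw=0.10, q_mvar=0.02)   # aggregated households
hp = pp.create_load(net, b1, p_mw=0.0)            # heat pump block

for n_hp in (0, 20, 40, 60):
    net.load.at[hp, "p_mw"] = n_hp * 0.003        # ~3 kW electric per pump (assumed)
    pp.runpp(net)                                 # steady-state power flow
    loading = net.res_line.loading_percent.max()
    volt = net.res_bus.vm_pu.min()
    flag = "OVERLOAD" if loading > 100 else "ok"
    print(f"{n_hp:2d} heat pumps: max line loading {loading:5.1f}%  "
          f"min voltage {volt:.3f} pu  [{flag}]")
```

In the full cosimulation, the heat-pump electrical demand at each step would come from the JModelica fluid-cycle model driven by the EnergyPlus heating demand, rather than from a fixed per-pump figure as assumed here.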
Procedia PDF Downloads 151