Search results for: establishment of information system
2757 Dynamic Analysis and Clutch Adaptive Prefill in Dual Clutch Transmission
Authors: Bin Zhou, Tongli Lu, Jianwu Zhang, Hongtao Hao
Abstract:
Dual clutch transmissions (DCT) offer high gearshift comfort. Hydraulic multi-disk clutches are the key components of a DCT, and their engagement determines the shifting comfort. The prefill of the clutches establishes an initial engagement at which the clutch plates just contact each other without transmitting substantial torque from the engine; this initial engagement point is called the touch point. Open-loop control is typically implemented for the clutch prefill, but uncertainties such as oil temperature and clutch wear significantly affect the prefill and may result in an inappropriate touch point. Underfill causes engine flaring during the gearshift, while overfill leads to clutch tie-up; both deteriorate the shifting comfort of the DCT. Therefore, it is important to give the clutch prefill an adaptive capability with respect to these uncertainties. In this paper, a dynamic model of the hydraulic actuator system is presented, including the variable force solenoid and clutch piston, and validated by a test. Subsequently, the open-loop clutch prefill is simulated based on the proposed model. Two control parameters of the prefill, the fast fill time and the stable fill pressure, are analyzed with regard to their impact on the prefill. The former has a great effect on the pressure transients, while the latter directly influences the touch point. Finally, an adaptive method is proposed for the clutch prefill during gear shifting, in which the clutch fill control parameters are adjusted adaptively and continually. The adaptive strategy changes the stable fill pressure according to the clutch slip observed during the current gearshift, improving the next prefill process: the stable fill pressure is increased as a function of the clutch slip in the case of underfill and decreased by a constant value in the case of overfill. The entire strategy is designed in Simulink/Stateflow and implemented in the transmission control unit with optimization. Road vehicle test results show that the strategy realizes its adaptive capability and improves the shifting comfort.
Keywords: clutch prefill, clutch slip, dual clutch transmission, touch point, variable force solenoid
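The shift-to-shift adaptation rule described above can be summarized in a few lines. The following is a minimal sketch of that update, assuming illustrative gains, thresholds, and an external underfill/overfill detection; none of these values or names come from the paper.

```python
def update_stable_fill_pressure(p_stable, clutch_slip, underfill, overfill,
                                k_slip=0.001, delta_down=0.05):
    """Adapt the stable fill pressure from one gearshift to the next:
    raise it as a function of the measured clutch slip when the last shift
    showed underfill, lower it by a constant step when it showed overfill.
    The gain k_slip and step delta_down are illustrative placeholders, and
    the detection of underfill (engine flare) or overfill (clutch tie-up)
    is assumed to be done by the supervisory shift logic."""
    if underfill:
        return p_stable + k_slip * clutch_slip
    if overfill:
        return p_stable - delta_down
    return p_stable

# Example: the previous upshift showed underfill with 120 rpm of clutch slip
print(update_stable_fill_pressure(p_stable=1.2, clutch_slip=120.0,
                                  underfill=True, overfill=False))
```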
Procedia PDF Downloads 308
2756 Gadolinium-Based Polymer Nanostructures as Magnetic Resonance Imaging Contrast Agents
Authors: Franca De Sarno, Alfonso Maria Ponsiglione, Enza Torino
Abstract:
Recent advances in diagnostic imaging technology have significantly contributed to a better understanding of specific changes associated with disease progression. Among the different imaging modalities, Magnetic Resonance Imaging (MRI) is a noninvasive medical diagnostic technique which, despite low sensitivity and long acquisition times, can discriminate between healthy and diseased tissues by providing 3D data. In order to improve the enhancement of MRI signals, some imaging exams require intravenous administration of contrast agents (CAs). Recently, emerging research has reported a progressive deposition of these drugs, in particular gadolinium-based contrast agents (GBCAs), in the body many years after multiple MRI scans. These findings confirm the need for a biocompatible system able to boost a clinically relevant Gd-chelate. To this aim, several approaches based on engineered nanostructures have been proposed to overcome the common limitations of conventional CAs, such as insufficient signal-to-noise ratios due to relaxivity and poor safety profiles. In particular, nanocarriers labeled or loaded with CAs and capable of carrying high payloads of CAs have been developed. Currently, there is no comprehensive understanding of the thermodynamic contributions that enable a biopolymer matrix to boost the efficacy of conventional CAs. Thus, considering the importance of MRI in diagnosing diseases, a successful example of the next generation of these drugs is reported here, in which a commercial gadolinium chelate is incorporated into a biopolymer nanostructure formed by cross-linked hyaluronic acid (HA), with improved relaxation properties. In addition, the basic principles ruling biopolymer-CA interactions are highlighted from the perspective of their influence on the relaxometric properties of the CA, by adopting a multidisciplinary experimental approach. On the basis of these findings, the key point consists in increasing the rigidification of readily available Gd-CAs within the biopolymer matrix by controlling the water dynamics, the physicochemical interactions, and the polymer conformations. In the end, the acquired knowledge about polymer-CA systems has been applied to the development of Gd-based HA nanoparticles with enhanced relaxometric properties.
Keywords: biopolymers, MRI, nanoparticles, contrast agent
Procedia PDF Downloads 151
2755 Posterior Acetabular Fractures: Optimizing the Treatment by Enhancing Practical Skills
Authors: Olivera Lupescu, Taina Elena Avramescu, Mihail Nagea, Alexandru Dimitriu
Abstract:
Acetabular fractures represent a real challenge due to their impact on the long-term function of the hip joint and the risk of intra- and peri-operative complications, especially as they affect young, active people. Treating these fractures therefore requires certain skills which must be practiced, both in pre-operative planning and in the execution of surgery. The authors retrospectively analyse 38 cases of acetabular fractures operated using the posterior approach in our hospital between 01.01.2013 and 01.01.2015, for which complete medical records ensure a follow-up of 24 months, in order to establish the main causes of potential errors and to underline the methods for preventing them. This target is included in the Erasmus+ project ‘Collaborative learning for enhancing practical skills for patient-focused interventions in gait rehabilitation after orthopedic surgery COR-skills’. This paper analyses the pitfalls revealed by these cases, as well as the measures necessary to enhance the practical skills of the surgeons who perform acetabular surgery. Pre-operative planning matched the intra- and post-operative outcome in 88% of the analyzed points, rising from 72% at the beginning to 94% in the last case, meaning that experience is very important in treating this injury. The main problems detected for the posterior approach were: nerve complications in 3 cases, 1 of them a complete paralysis of the sciatic nerve, which recovered 6 months after surgery; in another 2 cases, an intra-articular position of the screws was demonstrated by post-operative CT scans, so secondary screw removal was necessary. We analysed this incident as well, due to the lack of information about the relationship between the screws and the joint with this approach. Septic complications appeared in 3 cases, 2 superficial and 1 deep (requiring implant removal). The most important problems were the reduction of the fractures and the positioning of the screws so as not to interfere with the articular space. In posterior acetabular fractures, complex pre-operative planning is important in order to achieve maximum treatment efficacy with minimum risk; optimal training of the surgeons, insisting on the main points of potential mistakes, ensures the success of the procedure, as well as a favorable outcome for the patient.
Keywords: acetabular fractures, articular congruency, surgical skills, vocational training
Procedia PDF Downloads 209
2754 A Review of Digital Twins to Reduce Emission in the Construction Industry
Authors: Zichao Zhang, Yifan Zhao, Samuel Court
Abstract:
The carbon emission problem of the traditional construction industry has long been a pressing issue. With the growing emphasis on environmental protection and the advancement of science and technology, the organic integration of digital technology and emission reduction has gradually become a mainstream solution. Among various sophisticated digital technologies, digital twins, which involve creating virtual replicas of physical systems or objects, have gained enormous attention in recent years as tools to improve productivity, optimize management and reduce carbon emissions. However, the relatively high implementation costs associated with digital twins, including finances, time, and manpower, have limited their widespread adoption. As a result, most current applications are concentrated within a few industries. In addition, the creation of digital twins relies on a large amount of data and requires designers to possess exceptional skills in information collection, organization, and analysis. Unfortunately, these capabilities are often lacking in the traditional construction industry. Furthermore, as a relatively new concept, digital twins have different expressions and usage methods across different industries. This lack of standardized practices poses a challenge in creating a high-quality digital twin framework for construction. This paper first reviews the current academic studies and industrial practices focused on reducing greenhouse gas emissions in the construction industry using digital twins. Additionally, it identifies the challenges that may be encountered during the design and implementation of a digital twin framework specific to this industry and proposes potential directions for future research. This study shows that digital twins possess substantial potential and significance in enhancing the working environment within the traditional construction industry, particularly in their ability to support decision-making processes. It also shows that digital twins can improve the work efficiency and energy utilization of related machinery while helping this industry save energy and reduce emissions. This work will help scholars in this field to better understand the relationship between digital twins and energy conservation and emission reduction, and it also serves as a conceptual reference for practitioners implementing related technologies.
Keywords: digital twins, emission reduction, construction industry, energy saving, life cycle, sustainability
Procedia PDF Downloads 109
2753 Investigation of Heat Conduction through Particulate Filled Polymer Composite
Authors: Alok Agrawal, Alok Satapathy
Abstract:
In this paper, an attempt is made to determine the effective thermal conductivity (keff) of particulate-filled polymer composites using the finite element method (FEM), a powerful computational technique. A commercially available finite element package, ANSYS, is used for this numerical analysis. Three-dimensional spheres-in-cube lattice array models are constructed to simulate the microstructures of micro-sized particulate-filled polymer composites with filler content ranging from 2.35 to 26.8 vol %. Based on the temperature profiles across the composite body, the keff of each composition is estimated theoretically by FEM. Composites with similar filler contents are then fabricated using the compression molding technique by reinforcing micro-sized aluminium oxide (Al2O3) in polypropylene (PP) resin. Thermal conductivities of these composite samples are measured according to the ASTM standard E-1530 using the Unitherm™ Model 2022 tester, which operates on the double guarded heat flow principle. The experimentally measured conductivity values are compared with the numerical values and also with those obtained from existing empirical models. This comparison reveals that the FEM-simulated values are in reasonably good agreement with the experimental data. Values obtained from the theoretical model proposed by the authors are found to be in even closer agreement with the measured values within the percolation limit. Further, this study shows that there is a gradual enhancement in the conductivity of the PP resin with increasing filler percentage, and thereby its heat conduction capability is improved. It is noticed that with the addition of 26.8 vol % of filler, the keff of the composite increases to around 6.3 times that of neat PP. This study validates the proposed model for the PP-Al2O3 composite system and proves that finite element analysis can be an excellent methodology for such investigations. With such improved heat conduction ability, these composites can find potential applications in micro-electronics, printed circuit boards, encapsulations, etc.
Keywords: analytical modelling, effective thermal conductivity, finite element method, polymer matrix composite
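Extracting keff from a simulated temperature profile reduces to a one-dimensional application of Fourier's law across the model. The sketch below shows that post-processing step with purely illustrative numbers; the abstract does not spell out the exact boundary conditions used in the ANSYS models, so treat this only as the general idea.

```python
def effective_conductivity(heat_flux_w_m2, thickness_m, t_hot_k, t_cold_k):
    """Fourier's-law estimate of the effective thermal conductivity from the
    steady-state temperature drop across the composite body:
    keff = q'' * L / (T_hot - T_cold)."""
    return heat_flux_w_m2 * thickness_m / (t_hot_k - t_cold_k)

# Illustrative values only (not taken from the paper): a 10 mm thick model,
# a 1000 W/m^2 imposed heat flux and a 7.5 K temperature drop.
print(effective_conductivity(1000.0, 0.010, 310.0, 302.5))  # ~1.33 W/mK
```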
Procedia PDF Downloads 324
2752 Improving Knowledge Management Practices in the South African Healthcare System
Authors: Kgabo H. Badimo, Sheryl Buckley
Abstract:
In this knowledge era, knowledge is increasingly recognised by public sector organisations as a strategic resource, in view of public sector reform initiatives. People and knowledge play a vital role in attaining improved organisational performance and high service quality. Many government departments in the public sector have started to realise the importance of knowledge management in streamlining their operations and processes. This study focused on knowledge management in public healthcare service organisations, where the concept of service provider competitiveness pales into insignificance given the huge challenges emanating from healthcare and public sector reforms. Many government departments are faced with the challenges of improving organisational performance and service delivery, improving accountability, making informed decisions, capturing the knowledge of an aging workforce, and enhancing partnerships with stakeholders. The purpose of this paper is to examine the knowledge management practices of the Gauteng Department of Health in South Africa, in order to understand how knowledge management practices influence improvement in organisational performance and healthcare service delivery. This issue is explored through a review of the literature on dominant views on knowledge management and healthcare service delivery, as well as the results of interviews with, and questionnaire responses from, the general staff of the Gauteng Department of Health. Web-based questionnaires, face-to-face interviews and organisational documents were used to collect data. The data were analysed using both quantitative and qualitative methods. The central question investigated was: to what extent can the conditions required for successful knowledge management be observed, in order to improve organisational performance and healthcare service delivery in the Gauteng Department of Health? The findings showed that the elements of knowledge management capabilities investigated in this study, namely knowledge creation, knowledge sharing and knowledge application, have a positive, significant relationship with all measures of organisational performance and healthcare service delivery. These findings thus indicate that, by employing knowledge management principles, the Gauteng Department of Health could improve its ability to achieve its operational goals and objectives, and solve organisational and healthcare challenges, thereby improving organisational performance.
Keywords: knowledge management, healthcare service delivery, public healthcare, public sector
Procedia PDF Downloads 274
2751 Free Fibular Flaps in Management of Sternal Dehiscence
Authors: H. N. Alyaseen, S. E. Alalawi, T. Cordoba, É. Delisle, C. Cordoba, A. Odobescu
Abstract:
Sternal dehiscence is defined as the persistent separation of the sternal bones, often complicated by mediastinitis. The etiologies that lead to sternal dehiscence vary, with cardiovascular and thoracic surgeries being the most common. Early diagnosis in susceptible patients is crucial to the management of such cases, as they are associated with high mortality rates. A recent meta-analysis of more than four hundred thousand patients concluded that deep sternal wound infections were the leading cause of mortality and morbidity in patients undergoing cardiac procedures. Long-term complications associated with sternal dehiscence include increased hospitalizations, cardiac infarctions, and renal and respiratory failure. Numerous osteosynthesis methods have been described in the literature. Surgical materials offer enough rigidity to support the sternum and can be flexible enough to allow physiological breathing movements of the chest; however, these materials fall short when managing patients with extensive bone loss, osteopenia, or generally poor bone quality. For such cases, flaps offer a better closure system. Early utilization of flaps yields better survival rates compared to delayed closure or to patients treated with sternal rewiring and closed drainage. The utilization of pectoralis major, rectus abdominis, and latissimus muscle flaps has been described in the literature as a good alternative. Flap selection depends on a variety of factors, mainly the size of the sternal defect, infection, and the availability of local tissues. Free fibular flaps are commonly harvested flaps utilized in reconstruction around the body. In cases of sternal reconstruction with free fibular flaps, the literature has exclusively discussed the flap applied vertically to the chest wall. We present a different technique, applying the free fibular triple-barrel flap oriented in a transverse manner, parallel to the ribs. In our experience, this method could give enhanced results and improved prognosis, as it contributes to the normal circumferential shape of the chest wall.
Keywords: sternal dehiscence, management, free fibular flaps, novel surgical techniques
Procedia PDF Downloads 100
2750 Status of Physical, Chemical and Biological Attributes of Isheri, Ogun River, in Relation to the Surrounding Anthropogenic Activities of Kara Abattoir, South West Nigeria
Authors: N. B. Ikenweiwe, A. A. Alimi, N. A. Bamidele, A. O. Ewumi, J. Dairo, I. A. Akinnubi, S. O. Otubusin
Abstract:
A study on the physical, chemical and biological parameters of the lower course of the Ogun River at Isheri-Olofin was carried out between January and December 2014 in order to determine the effects of the anthropogenic activities of the Kara abattoir and domestic waste deposition on the quality of the water. Water samples were taken twice each month at three selected stations, A, B and C (chosen based on characteristic features or activity levels), along the water course. Samples were analysed for chemical and biological parameters using standard methods in the laboratory on the same day, while physical parameters were determined in situ with a water parameter kit. Generally, the results for transparency, dissolved oxygen, nitrates, TDS and alkalinity fall below the permissible limits of the WHO and FEPA standards for drinking and fish production. The results for phosphates, lead and cadmium were also low but still within the permissible limit. The low plankton community (phytoplankton, zooplankton), which ranged from 3, 5 to 40, 23, was a result of the low levels of DO, transparency and phosphate. The presence of coliform bacteria of public health importance such as Escherichia coli, Proteus vulgaris, Aeromonas sp., Shigella sp. and Enterobacter aerogenes, as well as the gram-negative bacterium Proteus morganii, is mainly an indicator of faecal pollution. Fish and other resources obtained from this water run the risk of being contaminated with these organisms, and man is at the receiving end. The results of the physical, chemical and some biological parameters of Isheri, Ogun River, according to this study, showed that the aquatic life and fisheries resources there are dwelling under stress as a result of the deposition of bones, horns, faecal components, slurry of suspended solids, fat and blood into the water. Government should therefore establish a good monitoring system against illegal waste deposition and create education programmes that will enlighten the community on the social, ecological and economic values of the river.
Keywords: water parameters, Isheri Ogun river, anthropogenic activities, Kara abattoir
Procedia PDF Downloads 546
2749 New Advanced Medical Software Technology Challenges and Evolution of the Regulatory Framework in Expert Software, Artificial Intelligence, and Machine Learning
Authors: Umamaheswari Shanmugam, Silvia Ronchi, Radu Vornicu
Abstract:
Software, artificial intelligence, and machine learning can improve healthcare through innovative and advanced technologies that are able to use the large amount and variety of data generated during healthcare services every day. As we read in the news, over 500 machine learning or other artificial intelligence medical devices have now received FDA clearance or approval, the first ones even preceding the year 2000. One of the big advantages of these new technologies is the ability to gain experience and knowledge from real-world use and to continuously improve their performance. Healthcare systems and institutions can benefit greatly, because the use of advanced technologies improves at the same time the efficiency and the efficacy of healthcare. Software defined as a medical device is stand-alone software intended to be used for patients for one or more of these specific medical intended uses: diagnosis, prevention, monitoring, prediction, prognosis, treatment or alleviation of a disease or other health conditions; replacing or modifying any part of a physiological or pathological process; or managing the information received from in vitro specimens derived from the human body; and which does not achieve its principal intended action by pharmacological, immunological or metabolic means. Software qualified as a medical device must comply with the general safety and performance requirements applicable to medical devices. These requirements are necessary to ensure high performance and quality and also to protect patients’ safety. The evolution and continuous improvement of software used in healthcare must take into consideration the increase in regulatory requirements, which are becoming more complex in each market. The gap between these advanced technologies and the new regulations is the biggest challenge for medical device manufacturers. Regulatory requirements can be considered a market barrier, as they can delay or obstruct device approval, but they are necessary to ensure performance, quality, and safety; at the same time, they can be a business opportunity if the manufacturer is able to define the appropriate regulatory strategy in advance. The abstract will provide an overview of the current regulatory framework, the evolution of the international requirements, and the standards applicable to medical device software in potential markets all over the world.
Keywords: artificial intelligence, machine learning, SaMD, regulatory, clinical evaluation, classification, international requirements, MDR, 510k, PMA, IMDRF, cyber security, health care systems
Procedia PDF Downloads 96
2748 RA-Apriori: An Efficient and Faster MapReduce-Based Algorithm for Frequent Itemset Mining on Apache Flink
Authors: Sanjay Rathee, Arti Kashyap
Abstract:
Extraction of useful information from large datasets is one of the most important research problems, and association rule mining is one of the best methods for this purpose. Finding possible associations between items in large transaction-based datasets (finding frequent patterns) is the most important part of association rule mining. Many algorithms exist to find frequent patterns, but the Apriori algorithm remains a preferred choice due to its ease of implementation and natural tendency to be parallelized. Many single-machine Apriori variants exist, but the massive amount of data available these days is beyond the capacity of a single machine. Therefore, to meet the demands of this ever-growing volume of data, there is a need for a multi-machine Apriori algorithm. For these types of distributed applications, MapReduce is a popular fault-tolerant framework. Hadoop is one of the best open-source software frameworks using the MapReduce approach for distributed storage and distributed processing of huge datasets on clusters built from commodity hardware. However, the heavy disk I/O at each iteration of a highly iterative algorithm like Apriori makes Hadoop inefficient. A number of MapReduce-based platforms have been developed for parallel computing in recent years. Among them, two platforms, Spark and Flink, have attracted a lot of attention because of their built-in support for distributed computations. Earlier, we proposed a Reduced-Apriori algorithm on the Spark platform which outperforms parallel Apriori, first because of the use of Spark and secondly because of the improvement we proposed to standard Apriori. This work is therefore a natural sequel and targets implementing, testing and benchmarking Apriori, Reduced-Apriori and our new algorithm ReducedAll-Apriori on Apache Flink, comparing them with the Spark implementation. Flink, a streaming dataflow engine, overcomes the disk I/O bottlenecks of MapReduce, providing an ideal platform for distributed Apriori. Flink's pipelining-based structure allows the next iteration to start as soon as partial results of an earlier iteration are available, so there is no need to wait for all reducer results before starting the next iteration. We conduct in-depth experiments to gain insight into the effectiveness, efficiency and scalability of the Apriori and RA-Apriori algorithms on Flink.
Keywords: Apriori, Apache Flink, MapReduce, Spark, Hadoop, R-Apriori, frequent itemset mining
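For readers unfamiliar with the level-wise search that all of these variants parallelize, the sketch below is a minimal single-machine Apriori in Python: generate candidate k-itemsets from the frequent (k-1)-itemsets, prune with the downward-closure property, and keep those meeting the minimum support. It is only a reference for the classical algorithm, not the authors' Reduced-Apriori, ReducedAll-Apriori, or the Flink/Spark implementations.

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Level-wise Apriori: join frequent (k-1)-itemsets into candidate
    k-itemsets, prune candidates with an infrequent subset, keep the rest
    whose support meets the threshold."""
    transactions = [frozenset(t) for t in transactions]
    n = len(transactions)

    def support(itemset):
        return sum(1 for t in transactions if itemset <= t) / n

    items = {item for t in transactions for item in t}
    frequent = [{frozenset([i]) for i in items if support(frozenset([i])) >= min_support}]

    k = 2
    while frequent[-1]:
        prev = frequent[-1]
        # join step: unions of (k-1)-itemsets that produce a k-itemset
        candidates = {a | b for a in prev for b in prev if len(a | b) == k}
        # prune step: every (k-1)-subset of a candidate must itself be frequent
        candidates = {c for c in candidates
                      if all(frozenset(s) in prev for s in combinations(c, k - 1))}
        frequent.append({c for c in candidates if support(c) >= min_support})
        k += 1

    return [s for level in frequent for s in level]

# usage example with toy transactions
txns = [{"a", "b", "c"}, {"a", "c"}, {"a", "d"}, {"b", "c", "e"}]
print(apriori(txns, min_support=0.5))
```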
Procedia PDF Downloads 301
2747 Second Time’s a Charm: The Intervention of the European Patent Office on the Strategic Use of Divisional Applications
Authors: Alissa Lefebre
Abstract:
It might seem intuitive to hope for a fast decision on the patent grant. After all, a granted patent provides you with a monopoly position, which allows you to prevent others from using your technology. However, this does not take into account the strategic advantages one can obtain from keeping a patent application pending. First, there is the financial advantage of postponing certain fees, although many applicants would probably agree that this is not the main benefit. As the scope of the patent protection is only decided upon at grant, the pendency period introduces uncertainty amongst rivals. This uncertainty entails not knowing whether the patent will actually be granted and what the scope of protection will be. Consequently, rivals can only rely upon limited and uncertain information when deciding what technology is worth pursuing. One way to keep patent applications pending is the use of divisional applications. These applications can be filed out of a parent application as long as that parent application is still pending. This allows the applicant to pursue (part of) the content of the parent application in another application, as the divisional application cannot exceed the scope of the parent application. In a fast-moving and complex market such as tele- and digital communications, it might allow applicants to obtain an actual monopoly position, as competitors are discouraged from pursuing a certain technology. Nevertheless, this practice also has downsides. First of all, it has an impact on the workload of the examiners at the patent office. As the number of patent filings has been increasing over the last decades, strategies that increase this number even further are not desirable from the patent examiners' point of view. Secondly, a pending patent does not provide the protection of a granted patent, thus creating uncertainty not only for the rivals but also for the applicant. Consequently, the European Patent Office (EPO) has come up with a ‘raising the bar’ initiative in which it decided to tackle the strategic use of divisional applications. Over the past years, two rules have been implemented. The first rule, in 2010, introduced a time limit under which divisional applications could only be filed within 24 months after the first communication from the patent office. However, after carrying out a user feedback survey, the EPO abolished the rule again in 2014 and replaced it with a fee mechanism. The fee mechanism is still in place today, which might be an indication of a better result compared to the first rule change. This study tests the impact of these rules on the strategic use of divisional applications in the tele- and digital communication industry and provides empirical evidence on their success. Using three different survival models, we find overall evidence that divisional applications prolong the pendency time and that only the second rule is able to tackle the strategic patenting and thus decrease the pendency time.
Keywords: divisional applications, regulatory changes, strategic patenting, EPO
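The abstract does not say which three survival models were used, so the sketch below only illustrates the general approach with one common choice, a Cox proportional hazards regression on pendency time. The column names (pendency_months, granted, is_divisional, post_2010_rule, post_2014_fee) and the input file are hypothetical; it relies on the lifelines library.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical dataset: one row per application, with pendency in months,
# an event indicator (granted), and dummies for divisional status and the
# two EPO rule regimes.  None of these names come from the paper.
df = pd.read_csv("epo_applications.csv")

cph = CoxPHFitter()
cph.fit(
    df[["pendency_months", "granted", "is_divisional", "post_2010_rule", "post_2014_fee"]],
    duration_col="pendency_months",
    event_col="granted",
)
cph.print_summary()  # a hazard ratio < 1 for is_divisional implies slower grant, i.e. longer pendency
```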
Procedia PDF Downloads 137
2746 Adequate Dietary Intake to Improve Outcome of Urine Urea Nitrogen with Balance Nitrogen and Total Lymphocyte Count
Authors: Mardiana Madjid, Nurpudji Astuti Taslim, Suryani As'ad, Haerani Rasyid, Agussalim Bukhari
Abstract:
A high level of Urine Urea Nitrogen (UUN) indicates that hypercatabolism occurs in hospitalized patients. A high Total Lymphocyte Count (TLC) indicates a good immune status, adequate wound healing, and limited complications. Adequate dietary intake helps to decrease hypercatabolism in hospitalized patients. Nitrogen Balance (NB) is simply the difference between nitrogen (N) intake and output; if intake exceeds output, a positive, anabolic NB occurs. This study aims to evaluate the effect of dietary intake on nitrogen balance and total lymphocyte count. Method: A total of 43 patients admitted to Wahidin Sudirohusodo Hospital between 2018 and 2019 and treated for 10 days were included. The inclusion criteria were patients who were treated for 10 days, received food from the hospital orally, did not experience gastrointestinal disorders such as vomiting and diarrhea, had no impaired kidney or liver function, and expressed approval to participate in this study. During hospitalization, food intake, UUN, serum albumin, nitrogen balance, and TLC were assessed twice, on day 1 and day 10. There was no clinical nutrition physician intervention to correct food intake. UUN was determined from 24-hour urine collected on the second day after admission and on the tenth day. The statistical analysis used SPSS 24 with an observational cohort design. Result: Forty-three participants completed the follow-up (27 men and 18 women). Twenty-two patients were younger than 45 years, 16 were 45 to 60 years, and 4 were over 60 years. On day 1, SGA scores A, B and C were 8, 32 and 3, and on day 10 they were 8, 31 and 4, respectively. According to 24-h dietary recalls, the energy intake during observation rose from 522.5 ± 400.4 to 1011.9 ± 545.1 kcal/day (P < 0.05), protein intake from 20.07 ± 17.2 to 40.3 ± 27.3 g/day (P < 0.05), carbohydrates from 92.5 ± 71.6 to 184.8 ± 87.4 g/day, and fat from 5.5 ± 3.86 to 13.9 ± 13.9 g/day. The UUN during the observation went from 6.6 ± 7.3 to 5.5 ± 3.9 g/day, TLC decreased from 1622.9 ± 897.2 to 1319.9 ± 636.3/mm³ (target value 1800/mm³), serum albumin went from 3.07 ± 0.76 to 2.9 ± 0.57 g/dl, and NB from -7.5 ± 7.2 to -3.1 ± 4.86. Conclusion: High levels of UUN need to be corrected with adequate dietary intake to improve NB and TLC status in hospitalized patients.
Keywords: adequate dietary intake, balance nitrogen, total lymphocyte count, urine urea nitrogen
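The nitrogen balance referred to above is a simple arithmetic difference. A minimal sketch of the usual bedside calculation is given below; the conversion factor of 6.25 g protein per g nitrogen and the 4 g allowance for insensible losses are standard clinical assumptions, not values stated in the abstract.

```python
def nitrogen_balance(protein_g_per_day, uun_g_per_day, insensible_losses_g=4.0):
    """Nitrogen balance = nitrogen intake - nitrogen output.

    Nitrogen intake is estimated as dietary protein / 6.25 (protein is ~16% N).
    Nitrogen output is approximated as UUN plus a constant for insensible
    (stool, skin) losses; the 4 g default is a common clinical assumption,
    not a value taken from the abstract."""
    n_intake = protein_g_per_day / 6.25
    n_output = uun_g_per_day + insensible_losses_g
    return n_intake - n_output

# Illustrative values close to the day-1 means reported in the abstract
print(nitrogen_balance(protein_g_per_day=20.07, uun_g_per_day=6.6))
```

With the day-1 mean protein intake (20.07 g) and UUN (6.6 g), this gives roughly -7.4 g/day, which is in line with the reported nitrogen balance of about -7.5.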
Procedia PDF Downloads 129
2745 Learning from Flood: A Case Study of a Frequently Flooded Village in Hubei, China
Authors: Da Kuang
Abstract:
Resilience is a hotly debated topic in many research fields (e.g., engineering, ecology, society, psychology). In flood management studies, we are experiencing a paradigm shift from flood resistance to flood resilience. Flood resilience refers to tolerating flooding through adaptation or transformation. It is increasingly argued that a city, as a social-ecological system, holds the ability to learn from experience and adapt to floods rather than simply resist them. This research aims to investigate what kinds of adaptation knowledge a frequently flooded village has learned from past experience, and the advantages and limitations of that knowledge in coping with floods. The study area, Xinnongcun village, located to the west of Wuhan city, is a linear village that has continuously suffered from both flash floods and drainage floods over the past 30 years. We made a field trip to the site in June 2017 and conducted semi-structured interviews with local residents. Our research summarizes two types of adaptation knowledge that people have learned from past floods. Firstly, at the village scale, a collective urban form has developed that helps people live through both the flood and the dry season. All houses and front yards were elevated about 2 m above the road, and all the front yards in the village are linked, with no barriers between them. During flooding, people walk to their neighbours through the house yards and take boats to places outside the village along the lower road. Secondly, at the individual scale, local people have learned tacit knowledge of preparedness and emergency response to floods. Regarding the advantages and limitations, this adaptation knowledge can effectively help people to live with floods and reduce the chance of injuries; however, it cannot reduce local farmers’ losses on their agricultural land. After a flood, it is impossible for local people to recover to the pre-disaster state, as a flood emerging during June and July will result in no harvest. Therefore, we argue that learning from past flood experience can increase people's adaptive capacity. However, once the adaptive capacity cannot reduce people's losses, a transformation to a better regime is required.
Keywords: adaptation, flood resilience, tacit knowledge, transformation
Procedia PDF Downloads 335
2744 Impact of the Hayne Royal Commission on the Operating Model of Australian Financial Advice Firms
Authors: Mohammad Abu-Taleb
Abstract:
The final report of the Royal Commission into Australian financial services misconduct, released in February 2019, has had a significant impact on the financial advice industry. The recommendations released in the Commissioner’s final report include changes to ongoing fee arrangements, a new disciplinary system for financial advisers, and mandatory reporting of compliance concerns. This thesis aims to explore the impact of the Royal Commission’s recommendations on the operating model of financial advice firms in terms of advice products, processes, delivery models, and customer segments. This research also seeks to investigate whether the Royal Commission’s outcome has accelerated the use of enhanced technology solutions within the operating model of financial advice firms, and to identify the key challenges confronting financial advice firms whilst implementing the Commissioner’s recommendations across their operating models. In order to achieve the objectives of this thesis, a qualitative research design has been adopted through semi-structured in-depth interviews with 24 financial advisers and managers who are engaged in the operation of financial advice services. The study used the thematic analysis approach to interpret the qualitative data collected from the interviews. The findings of this thesis reveal that customer-centric operating models will become more prominent across the financial advice industry in response to the Commissioner’s final report, and that the Royal Commission’s outcome has accelerated the use of advice technology solutions within the operating model of financial advice firms. In addition, financial advice firms have started, more than before, to use simpler and more automated web-based advice services, which enable financial advisers to provide simple advice at a greater scale and to accelerate the use of robo-advice models and digital delivery to mass customers in the long term. Furthermore, the study identifies process and technology changes, along with technical and interpersonal skills development, as the key challenges encountered by financial advice firms whilst implementing the Commissioner’s recommendations across their operating models.
Keywords: hayne royal commission, financial planning advice, operating model, advice products, advice processes, delivery models, customer segments, digital advice solutions
Procedia PDF Downloads 91
2743 Adaptive Assemblies: A Scalable Solution for Atlanta's Affordable Housing Crisis
Authors: Claudia Aguilar, Amen Farooq
Abstract:
Among other cities in the United States, the city of Atlanta is experiencing levels of growth that surpass anything witnessed in the last century. With the surge in population influx, the available housing is practically bursting at the seams: supply is low and demand is high. As a result, the average one-bedroom apartment runs for 1,800 dollars per month. The city is desperately seeking new opportunities to provide affordable housing at an expeditious rate, as made evident by the recent updates to the city’s zoning. With the recent influx in the housing market, young professionals, in particular millennials, are desperately looking for alternatives that allow them to stay within the city. To remedy Atlanta’s affordable housing crisis, the city of Atlanta is planning to introduce 40 thousand new affordable housing units by 2026. To meet the urgent need for more affordable housing, the architectural response needs to adapt. A method that has proven successful in modern housing is modular development, a method that has been constrained to the dimensions of the maximum load of an eighteen-wheeler. This approach has diluted the architect’s ability to produce site-specific, informed design and instead contributes to the “cookie cutter” stigma with which the method has been labeled. This thesis explores the design methodology for modular housing by revisiting its constructability and adaptability. The research focuses on a modular housing type that could break away from the constraints of transport and deliver adaptive, reconfigurable assemblies. The adaptive assemblies represent an integrated design strategy for assembling the future of affordable dwelling units. The goal is to take advantage of a component-based system and explore a scalable solution to modular housing. This proposal aims specifically to design a kit of parts that is easy to transport and assemble but also gives the ability to customize the use of components to suit all unique conditions. The benefits of this concept could include decreased construction time, cost, on-site labor, and disruption while providing quality housing with affordable and flexible options.
Keywords: adaptive assemblies, modular architecture, adaptability, constructibility, kit of parts
Procedia PDF Downloads 89
2742 A Xenon Mass Gauging through Heat Transfer Modeling for Electric Propulsion Thrusters
Authors: A. Soria-Salinas, M.-P. Zorzano, J. Martín-Torres, J. Sánchez-García-Casarrubios, J.-L. Pérez-Díaz, A. Vakkada-Ramachandran
Abstract:
The current state-of-the-art methods for mass gauging of Electric Propulsion (EP) propellants in microgravity conditions rely on external measurements taken at the surface of the tank. The tanks are operated under a constant thermal duty cycle to store the propellant within a pre-defined temperature and pressure range. We demonstrate, using computational fluid dynamics (CFD) simulations, that the heat transfer within the pressurized propellant generates temperature and density anisotropies. This challenges the standard mass gauging methods that rely on the use of time-changing skin temperatures and pressures. We observe that the domes of the tanks are prone to overheating and that, a long time after the heaters of the thermal cycle are switched off, the system reaches a quasi-equilibrium state with a more uniform density. We propose a new gauging method, which we call the Improved PVT method, based on universal physics and thermodynamics principles, existing TRL-9 technology and telemetry data. This method only uses as inputs the temperature and pressure readings of sensors externally attached to the tank. These sensors can operate during the nominal thermal duty cycle. The Improved PVT method shows little sensitivity to pressure sensor drifts, which are critical towards the end of life of the missions, as well as little sensitivity to systematic temperature errors. The retrieval method has been validated experimentally with CO2 in gas and fluid states in a chamber that operates up to 82 bar within a nominal thermal cycle of 38 °C to 42 °C. The mass gauging error is shown to be lower than 1% of the mass at the beginning of life, assuming an initial tank load at 100 bar. In particular, for a pressure of about 70 bar, just below the critical pressure of CO2, the error of the mass gauging in the gas phase goes down to 0.1%, and for 77 bar, just above the critical point, the error of the mass gauging of the liquid phase is 0.6% of the initial tank load. This gauging method improves by a factor of 8 the accuracy of the standard PVT retrievals using look-up tables with tabulated data from the National Institute of Standards and Technology.
Keywords: electric propulsion, mass gauging, propellant, PVT, xenon
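For context, the baseline PVT retrieval that the Improved PVT method refines reduces to a real-gas equation of state evaluated with the tank pressure and temperature telemetry. The sketch below shows only that baseline calculation; the tank volume, temperature and compressibility factor are illustrative assumptions (in practice Z would come from tabulated real-gas data such as the NIST look-up tables mentioned in the abstract).

```python
R = 8.314462618  # J/(mol K), universal gas constant

def pvt_mass_kg(pressure_pa, temperature_k, volume_m3, molar_mass_kg_mol, z_factor):
    """Baseline PVT mass estimate from tank pressure and temperature readings:
    m = P V M / (Z R T), with Z taken from real-gas look-up tables."""
    return pressure_pa * volume_m3 * molar_mass_kg_mol / (z_factor * R * temperature_k)

# Illustrative numbers (not from the abstract): a 50-litre tank of CO2 at
# 70 bar and 313 K, with an assumed compressibility factor of roughly 0.55.
mass = pvt_mass_kg(70e5, 313.0, 0.050, 0.04401, z_factor=0.55)
print(f"estimated propellant mass: {mass:.2f} kg")
```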
Procedia PDF Downloads 348
2741 Critical Parameters of a Square-Well Fluid
Authors: Hamza Javar Magnier, Leslie V. Woodcock
Abstract:
We report extensive molecular dynamics (MD) computational investigations into the thermodynamic description of supercritical properties for a model fluid that is the simplest realistic representation of atoms or molecules. The pair potential is a hard-sphere repulsion of diameter σ with a very short attraction of range λσ. When λ = 1.005 the range is so short that the model atoms are referred to as “adhesive spheres”. Molecular dimers, trimers, etc., up to large clusters, or droplets, of many adhesive-sphere atoms are unambiguously defined. This then defines percolation transitions at the molecular level that bound the existence of gas and liquid phases at supercritical temperatures and define the existence of a supercritical mesophase. Both liquid and gas phases are seen to terminate at the loci of percolation transitions and, below a second characteristic temperature (Tc2), are separated by the supercritical mesophase. An analysis of the distribution of clusters in the gas, meso- and liquid phases confirms the colloidal nature of this mesophase. The general phase behaviour is compared both with experimental properties of the water-steam supercritical region and with the formally exact cluster theory of Mayer and Mayer. Both are found to be consistent with the present findings that, in this system, the supercritical mesophase narrows in density with increasing T > Tc and terminates at a higher Tc2 at a confluence of the primary percolation loci. The expanded plot of the MD data points in the mesophase for 7 critical and supercritical isotherms highlights this narrowing in density of the linear-slope region of the mesophase as the temperature is increased above the critical temperature. This linearity in the mesophase implies the existence of a linear combination rule between gas and liquid, which is an extension of the Lever rule in the subcritical region, and can be used to obtain critical parameters without resorting to experimental data in the two-phase region. Using this combination rule, the calculated critical parameters Tc = 0.2007 and Pc = 0.0278 are found to agree with the values found by Largo and coworkers. The properties of this supercritical mesophase are shown to be consistent with an alternative description of the phenomenon of critical opalescence seen in the supercritical region of both molecular and colloidal-protein supercritical fluids.
Keywords: critical opalescence, supercritical, square-well, percolation transition, critical parameters
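For reference, the square-well pair potential described in the opening sentences can be written out explicitly. The form below follows directly from the abstract (hard-sphere diameter σ, attractive range λσ); the well depth ε and the reduced units for the critical temperature and pressure are standard conventions for this model that the abstract does not state explicitly, so they are included here as an assumption.

```latex
u(r) =
\begin{cases}
\infty, & r < \sigma,\\
-\varepsilon, & \sigma \le r < \lambda\sigma,\\
0, & r \ge \lambda\sigma,
\end{cases}
\qquad
T^{*} = \frac{k_{B}T}{\varepsilon}, \qquad
p^{*} = \frac{p\,\sigma^{3}}{\varepsilon}.
```

With λ = 1.005 the attractive shell is only 0.005σ wide, the near-adhesive limit referred to in the abstract.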
Procedia PDF Downloads 531
2740 Experiences of Social Participation among Community Elderly with Mild Cognitive Impairment: A Qualitative Research
Abstract:
Mild cognitive impairment (MCI) is a clinical stage that occurs between normal aging and dementia. Although MCI increases the risk of developing dementia, individuals with MCI may maintain stable cognitive function and even recover to a typical cognitive state. An intervention to prevent or delay the progression to dementia in individuals with MCI may involve promoting social engagement. Social participation is engagement in socially relevant exchanges and meaningful activities. Older adults with MCI may encounter restricted cognitive abilities, mood changes, and behavioral difficulties during social participation, influencing their willingness to engage. Therefore, this study aims to employ qualitative research methods to gain an in-depth comprehension of the authentic social participation experiences of older adults with mild cognitive impairment, which will establish a foundation for designing appropriate intervention programs. A phenomenological study was conducted. The study participants were selected using the purposive sampling method in combination with the maximum differentiation sampling strategy. Face-to-face semi-structured interviews were conducted among 12 elderly individuals suffering from mild cognitive impairment in a community in Zhengzhou City from May to July 2023. Colaizzi's seven-step method was used to analyze the data and extract the themes. The real experience of social participation among older adults with mild cognitive impairment can be summarized into 3 themes: (1) a single social relationship but a strong desire to participate, (2) a dual experience of social participation with both positive and negative aspects, (3) multiple barriers to social participation, including impaired memory capacity, heavy family responsibilities and lack of infrastructure. The study found that elderly individuals with mild cognitive impairment, even those with only a single social relationship, display a strong desire to engage in society. To improve social participation levels and reduce cognitive function decline, healthcare providers should work with relevant government agencies and the community to create a comprehensive social participation system. It is important for healthcare providers to note the social participation status of the elderly with mild cognitive impairment.
Keywords: mild cognitive impairment, the elderly, social participation, qualitative research
Procedia PDF Downloads 99
2739 Utilization and Proximate Composition of Nile Tilapia, Common Carp and African Mudfish Polycultured in Fertilized Ponds
Authors: I. A. Yola
Abstract:
The impact of poultry droppings, cow dung and rumen content on the utilization and proximate composition of Oreochromis niloticus, Clarias gariepinus and Cyprinus carpio in a polyculture system was studied. The research was conducted over a period of 52 weeks. Poultry droppings (PD), cow dung (CD) and rumen content (RC) were applied at three levels: 30 g, 60 g and 120 g/m²/week; 25 g, 50 g and 100 g/m²/week; and 22 g, 44 g and 88 g/m²/week per treatment, respectively. The control received only conventional feed with 40% CP, without manure application. The physicochemical and biological properties measured were higher in the manured ponds than in the control, and the differences were statistically significant (P < 0.05) between and within treatments, with the exception of temperature, which had a combined mean of 27.90 °C. The water was consistently alkaline, with mean values for pH of 6.61, transparency 22.6 cm, conductivity 35.00 µmhos/cm, dissolved oxygen 4.6 mg/l, biological oxygen demand 2.8 mg/l, and nitrates and phosphates 0.9 mg/l and 0.35 mg/l, respectively. The three fish species increased in weight with increasing manure rate, with the highest values in the PD treatment: C. carpio recorded 340 g, O. niloticus 310 g and C. gariepinus 280 g over the experimental period. Fish fed the supplementary diet (control) grew bigger, with the highest value for C. carpio (685 g), then O. niloticus (620 g) and C. gariepinus (526 g). The differences were statistically significant (P < 0.05). The results of whole-body proximate analysis indicated that the various manures and rates had an irregular pattern of effect on the protein and ash gain per 100 g of fish body weight gain. The combined means for whole fish carcass protein, lipids, moisture, ash and gross energy were 11.84, 2.43, 74.63, 3.00 and 109.9, respectively. The notable exceptions were significant (P < 0.05) increases in body fat and gross energy gains in all fish species, accompanied by decreases in the percentage of moisture as manure rates increased. The survival percentage decreased from 80% to 70%. It is recommended to use poultry droppings as manure/feed at the rate of 120 kg/ha/week for good performance in polyculture.
Keywords: organic manure, Nile tilapia, African mud fish, common carp, proximate composition
Procedia PDF Downloads 559
2738 An Analysis of Pick Travel Distances for Non-Traditional Unit Load Warehouses with Multiple P/D Points
Authors: Subir S. Rao
Abstract:
Existing warehouse configurations use non-traditional aisle designs with a central P/D point in their models, which is mathematically simple but less practical. Many warehouses use multiple P/D points to avoid congestion for pickers, and different warehouses have different flow policies and infrastructure for using the P/D points. Many analytical models therefore combine multiple P/D points with non-traditional aisle designs. Standard warehouse models introduce multiple one-sided P/D points in a flying-V warehouse and minimize the pick distance for one-way travel between an active P/D point and a pick location, assuming uniform flow rates. A simulation of the mathematical model generally uses four fixed configurations of P/D points located on two different sides of the warehouse. It can easily be proved that if the source and destination P/D points are both chosen randomly, in a uniform way, then minimizing the one-way travel is the same as minimizing the two-way travel. Another warehouse configuration analytically models the warehouse for multiple one-sided P/D points while keeping the angles of the cross-aisles and picking aisles as decision variables. The minimization of the one-way pick travel distance from the P/D point to the pick location, by finding the optimal position/angle of the cross-aisle and picking aisles for warehouses with different numbers of P/D points and variable flow rates, is also one of the objectives. Most models of warehouses with multiple P/D points are one-way travel models; we extend these analytical models to minimize the two-way pick travel distance, wherein the destination P/D point is chosen optimally for the return route, which is not the same as minimizing the one-way travel. In most warehouse models the return P/D point is chosen randomly, but in our research the return-route P/D point is chosen optimally. Such warehouses are common in practice, where the flow rates at the P/D points are flexible and depend entirely on the positions of the picks. A good warehouse management system is efficient in consolidating orders over multiple P/D points in warehouses where the P/D points are flexible in function. In the latter arrangement, pickers and shrink-wrap processes are not assigned to particular P/D points, which ultimately makes the P/D points more flexible and easy to use interchangeably for picking and deposits. The number of P/D points considered in this research increases uniformly from a single central one to a maximum of one P/D point located symmetrically below each aisle.
Keywords: non-traditional warehouse, V cross-aisle, multiple P/D point, pick travel distance
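The difference between a randomly chosen and an optimally chosen return P/D point can be illustrated with a small Monte Carlo estimate. The sketch below uses a plain rectangular block with rectilinear travel and evenly spaced P/D points along the front wall; these layout assumptions are illustrative only and are not the flying-V or variable-angle aisle models analysed in the paper.

```python
import random

def two_way_distances(n_pd, n_picks=100_000, width=60.0, depth=30.0, seed=0):
    """Monte Carlo estimate of the mean two-way pick travel distance in a
    rectangular block with n_pd P/D points evenly spaced along the front wall,
    comparing a randomly chosen return P/D point with the optimal one."""
    rng = random.Random(seed)
    pds = [((i + 0.5) * width / n_pd, 0.0) for i in range(n_pd)]

    def dist(pd, pick):
        # rectilinear travel: along the front wall, then up into the block
        return abs(pd[0] - pick[0]) + pd[1] + pick[1]

    random_total = optimal_total = 0.0
    for _ in range(n_picks):
        pick = (rng.uniform(0, width), rng.uniform(0, depth))
        outbound = dist(rng.choice(pds), pick)
        random_total += outbound + dist(rng.choice(pds), pick)          # random return P/D
        optimal_total += outbound + min(dist(pd, pick) for pd in pds)   # optimal return P/D
    return random_total / n_picks, optimal_total / n_picks

print(two_way_distances(n_pd=4))
```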
Procedia PDF Downloads 48
2737 Equivalences and Contrasts in the Morphological Formation of Echo Words in Two Indo-Aryan Languages: Bengali and Odia
Authors: Subhanan Mandal, Bidisha Hore
Abstract:
The linguistic process whereby all or part of a base word is repeated, with or without internal change, before or after the base itself is regarded as reduplication. The reduplicated morphological construction carries with it a new grammatical category and meaning. Reduplication is a very frequent and abundant phenomenon in the eastern Indian languages of the states of West Bengal and Odisha, i.e. Bengali and Odia respectively. Bengali, an Indo-Aryan language and part of the Indo-European language family, is one of the most widely spoken languages in India and is the national language of Bangladesh. Despite this classification, Bengali shows certain influences in vocabulary and grammar due to its geographical proximity to Tibeto-Burman and Austro-Asiatic language-speaking communities. Bengali and Odia once belonged to a single linguistic branch, but with time and gradual linguistic changes due to various factors, Odia was the first to break away and develop as a separate, distinct language. However, fewer contrasts and more similarities still exist between these languages linguistically, leaving aside the script. This paper deals with the procedures of echo word formation in Bengali and Odia. The morphological study of reduplication in the two languages reveals several linguistic processes. The findings are based on information elicited from native speakers and on the analysis of echo words found in discourse and conversational patterns. For the purpose of partial reduplication analysis, prefixed-class and suffixed-class word formations are taken into consideration, which show specific rule-based changes. For example, in the suffixed-class categorization, both consonant and vowel alterations are found, following the rules: i) CVx → tVX, ii) CVCV → CVCi (a simple illustration of the first rule is sketched below). Further classifications were also found in sentential studies of both languages, which revealed complexities of complete reduplication in forming echo words where the head word loses its original meaning. Complexities based on onomatopoetic/phonetic imitation of natural phenomena, not following any rule-based pattern, were also found. Taking these aspects into consideration, which are very prevalent in both languages, inferences are drawn from the study which bring out many similarities between the two languages in this area, in spite of their having branched away from each other long ago.
Keywords: consonant alteration, onomatopoetic, partial reduplication and complete reduplication, reduplication, vowel alteration
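The sketch below applies the spirit of the CVx → tVX rule: the onset of the base word is replaced by a fixed consonant and the result is suffixed to the base. It works on a romanised transliteration only, and the example words and the choice of 't' as the replacement consonant are illustrative assumptions rather than data from the paper.

```python
VOWELS = set("aeiou")

def echo_word(base, replacement="t"):
    """Form a partial-reduplication echo word by replacing the onset (initial
    consonant cluster) of the base with a fixed consonant, in the spirit of
    the CVx -> tVX rule cited in the abstract.  Real Bengali/Odia phonology
    is richer than this ASCII approximation."""
    i = 0
    while i < len(base) and base[i] not in VOWELS:
        i += 1                      # skip the initial consonant cluster
    return f"{base}-{replacement}{base[i:]}"

# Illustrative transliterated examples (assumed romanisations, not from the paper)
for word in ["jol", "boi", "chabi"]:
    print(echo_word(word))   # jol-tol, boi-toi, chabi-tabi
```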
Procedia PDF Downloads 243
2736 Suitability of Satellite-Based Data for Groundwater Modelling in Southwest Nigeria
Authors: O. O. Aiyelokun, O. A. Agbede
Abstract:
Numerical modelling of groundwater flow can be susceptible to calibration errors due to the lack of adequate ground-based hydro-meteorological stations in river basins. Groundwater resources management in Southwest Nigeria is currently challenged by overexploitation, lack of planning and monitoring, urbanization and climate change; hence, for models to be adopted as decision support tools for sustainable management of groundwater, they must be adequately calibrated. Since river basins in Southwest Nigeria are characterized by missing data and a lack of adequate ground-based hydro-meteorological stations, the need to adopt satellite-based data for constructing distributed models is crucial. This study seeks to evaluate the suitability of satellite-based data as a substitute for ground-based data for computing boundary conditions, by determining whether ground- and satellite-based meteorological data fit well in the Ogun and Oshun River basins. The Climate Forecast System Reanalysis (CFSR) global meteorological dataset was first obtained in daily form and converted to monthly form for a period of 432 months (January 1979 to June 2014). Afterwards, ground-based meteorological data for Ikeja (1981-2010), Abeokuta (1983-2010), and Oshogbo (1981-2010) were compared with the CFSR data using Goodness of Fit (GOF) statistics. The study revealed that, based on the mean absolute error (MAE), coefficient of correlation (r) and coefficient of determination (R²), all meteorological variables except wind speed fit well. It was further revealed that maximum and minimum temperature, relative humidity and rainfall had a high index of agreement (d) and ratio of standard deviations (rSD), implying that the CFSR dataset could be used to compute boundary conditions such as groundwater recharge and potential evapotranspiration. The study concluded that satellite-based data such as CFSR should be used as input when constructing groundwater flow models in river basins in Southwest Nigeria, where the majority of the river basins are partially gauged and characterized by long gaps in hydro-meteorological data.
Keywords: boundary condition, goodness of fit, groundwater, satellite-based data
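The GOF statistics named above are straightforward to compute once the paired monthly series are assembled. The sketch below shows one common set of formulas (MAE, Pearson r, R² as r², Willmott's index of agreement d, and the ratio of standard deviations); the exact formulations used in the study are not given in the abstract, and the sample series is purely illustrative.

```python
import numpy as np

def goodness_of_fit(obs, sim):
    """Goodness-of-fit statistics for comparing a ground-based series (obs)
    with a reanalysis series (sim): mean absolute error, correlation,
    coefficient of determination, Willmott's index of agreement and the
    ratio of standard deviations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    mae = np.mean(np.abs(sim - obs))
    r = np.corrcoef(obs, sim)[0, 1]
    d = 1 - np.sum((sim - obs) ** 2) / np.sum(
        (np.abs(sim - obs.mean()) + np.abs(obs - obs.mean())) ** 2)
    rsd = sim.std() / obs.std()
    return {"MAE": mae, "r": r, "R2": r ** 2, "d": d, "rSD": rsd}

# Illustrative monthly rainfall series (mm), not actual station or CFSR values
ground = [12.0, 45.3, 98.1, 150.2, 210.4, 180.9]
cfsr = [15.2, 40.1, 105.0, 140.7, 205.3, 190.0]
print(goodness_of_fit(ground, cfsr))
```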
Procedia PDF Downloads 1322735 Visualization of Chinese Genealogies with Digital Technology: A Case of Genealogy of Wu Clan in the Village of Gaoqian
Authors: Huiling Feng, Jihong Liang, Xiaodong Gong, Yongjun Xu
Abstract:
Recording history is a tradition in ancient China. A record of a dynasty makes a dynastic history; a record of a locality makes a chorography; and a record of a clan makes a genealogy – the three combined depict a complete national history of China both macroscopically and microscopically, with genealogy serving as the foundation. Genealogy in ancient China traces back to the family trees and pedigrees of early and medieval historical times. After the Song Dynasty, civil society gradually emerged and the Emperor had to allow people of the same clan to live together and hold ancestor worship activities; thence the compilation of genealogies became popular in society. Since then, genealogies, regarded even today as being as important as ancestral and religious temples in traditional villages, have played a primary role in the identification of a clan and in maintaining local social order. Chinese genealogies are rich in documentary material. Take the Genealogy of Wu Clan in Gaoqian as an example. Gaoqian is a small village in Xianju County of Zhejiang Province. The Genealogy of Wu Clan in Gaoqian comprises a whole set of materials, from the Foreword to family trees, family rules, family rituals, family graces and glories, an ode to an ancestor's portrait, a manual for the ancestor temple, documents concerning great men in the clan, works written by learned men in the clan, contracts concerning landed property, and even notes on tombs. Literally speaking, the genealogy, with detailed information on every aspect recorded according to stylistic rules, is indeed the carrier of the entire culture of a clan. However, due to their scarcity and the difficulty of reading them, genealogies seldom come to the attention of ordinary people. This paper, focusing on the case of the Genealogy of Wu Clan in the Village of Gaoqian, intends to reproduce a digital genealogy through the use of ICTs, based on an in-depth interpretation of the literature and field investigation in Gaoqian Village. Building on this, the paper goes further to explore general methods of transferring physical genealogies to digital form and ways of visualizing the clanism culture embedded in the genealogies with a combination of digital technologies such as family-tree software, multimedia narratives, animation design, GIS applications, and e-book creators. Keywords: clanism culture, multimedia narratives, genealogy of Wu Clan, GIS
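As one possible way of structuring the family-tree portion of such a genealogy for digital visualization, the sketch below encodes a few generations as a directed graph; the names, generations, and relations are invented placeholders, not entries from the Wu clan genealogy.

```python
# Hypothetical sketch: a genealogy family tree as a directed graph for visualization.
# Nodes, attributes, and relations are invented placeholders.
import networkx as nx

tree = nx.DiGraph()
tree.add_node("Wu A", generation=1, note="founding ancestor (placeholder)")
tree.add_node("Wu B", generation=2)
tree.add_node("Wu C", generation=2)
tree.add_edge("Wu A", "Wu B", relation="father-son")
tree.add_edge("Wu A", "Wu C", relation="father-son")

# Walk the tree generation by generation, as a multimedia narrative might.
for person in nx.topological_sort(tree):
    children = list(tree.successors(person))
    print(person, "->", children)
```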
Procedia PDF Downloads 2262734 Solar Panel Design Aspects and Challenges for a Lunar Mission
Authors: Mannika Garg, N. Srinivas Murthy, Sunish Nair
Abstract:
TeamIndus was the only Indian team to participate in the Google Lunar X Prize (GLXP). The GLXP is an incentive-prize space competition organized by the XPrize Foundation and sponsored by Google. The main objective of the mission is to soft-land a rover on the lunar surface, travel a minimum displacement of 500 meters, and transmit HD and NRT videos and images to Earth. TeamIndus is designing a lunar lander that carries the rover and delivers it onto the surface of the moon with a soft landing. For the lander to survive throughout the mission, energy is required to operate all attitude control sensors, actuators, heaters, and other necessary components. Photovoltaic solar array systems are the most common and primary source of power generation for any spacecraft. The scope of this paper is to provide a system-level approach to designing the solar array systems of the lander to generate the power required to accomplish the mission. For this mission, the design effort is directed toward higher efficiency, high reliability, and high specific power. Toward this end, highly efficient multi-junction cells have been considered. The design is also influenced by other constraints, such as the mission profile, the chosen spacecraft attitude, the overall lander configuration, cost effectiveness, and sizing requirements. This paper also addresses various solar array design challenges, such as operating temperature, shadowing, the radiation environment, mission life, and the strategy for supporting the required power levels (peak and average). The challenge of generating sufficient power at the time of surface touchdown, due to the low sun elevation (El) and azimuth (Az) angles, which depend on the lunar landing site, is also showcased in this paper. To achieve this goal, an energy balance analysis has been carried out to study the impact of the above-mentioned factors and to meet the requirements, and it is discussed in this paper. Keywords: energy balance analysis, multi junction solar cells, photovoltaic, reliability, spacecraft attitude
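A minimal sketch of the kind of instantaneous power estimate that feeds such an energy balance analysis is given below; the panel area, cell efficiency, and sun elevation angles are assumed values for illustration, not TeamIndus design figures.

```python
import math

SOLAR_CONSTANT = 1361.0  # W/m^2, approximate solar irradiance at 1 AU

def array_power(area_m2, efficiency, sun_elevation_deg):
    """Instantaneous power from a flat panel given the sun elevation above its plane."""
    incidence_factor = max(0.0, math.sin(math.radians(sun_elevation_deg)))
    return SOLAR_CONSTANT * area_m2 * efficiency * incidence_factor

# Assumed values for illustration only (not TeamIndus design figures).
for elevation in (10, 30, 60, 90):   # low elevation near touchdown vs. lunar noon
    p = array_power(area_m2=1.0, efficiency=0.30, sun_elevation_deg=elevation)
    print(f"Sun elevation {elevation:2d} deg -> {p:6.1f} W per m^2 of array")
```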
Procedia PDF Downloads 2332733 Effective Learning and Testing Methods in School-Aged Children
Authors: Farzaneh Badinlou, Reza Kormi-Nouri, Monika Knopf, Kamal Kharrazi
Abstract:
When we teach, we have two critical elements at our disposal to help students: learning styles and testing styles. There are many different ways in which educators can effectively teach their students: verbal learning and experience-based learning. A lecture, as a form of the verbal learning style, is a traditional arrangement in which teachers are more active and share information verbally with students. In experience-based learning, students learn actively through hands-on learning materials and by observing teachers or others. Meanwhile, standard testing or assessment is the way to determine progress toward proficiency. Teachers and instructors mainly use essays (requiring written responses), multiple-choice questions (including the correct answer and several incorrect answers as distractors), or open-ended questions (which respondents answer in their own words). The current study focused on exploring effective teaching styles and testing methods as a function of age across the school years. In the present study, a total of 410 participants were selected randomly from four grades (2ⁿᵈ, 4ᵗʰ, 6ᵗʰ, and 8ᵗʰ). Each subject was tested individually in one session lasting around 50 minutes. In the learning tasks, the participants were presented with three different instructions for the learning materials (learning by doing, learning by observing, and learning by listening). They were then tested via different standard assessments: free recall, cued recall, and recognition tasks. The results revealed that, in general, students remember more of what they do and what they observe than what they hear. The age effect was more pronounced in learning by doing than in learning by observing and learning by listening, becoming progressively stronger across the free-recall, cued-recall, and recognition tasks. The findings of this study indicated that learning by doing and the free recall task are more age-sensitive, suggesting that both are more strategic and more affected by developmental differences. Pedagogically, these results indicate that learning by modeling and engagement in program activities play a special role in learning. Moreover, the findings indicated that multiple-choice questions can produce the best performance for school-aged children but are less age-sensitive. By contrast, the essay can produce the lowest performance but is more age-sensitive. It will be very helpful for educators to know what types of learning styles and test methods are most effective for students in each school grade. Keywords: experience-based learning, learning style, school-aged children, testing methods, verbal learning
Procedia PDF Downloads 2062732 Simplified Modelling of Visco-Elastic Fluids for Use in Recoil Damping Systems
Authors: Prasad Pokkunuri
Abstract:
Visco-elastic materials combine the stress response properties of both solids and fluids and have found use in a variety of damping applications – both vibrational and acoustic. Defense and automotive applications, in particular, are subject to high impact and shock loading – for example: aircraft landing gear, firearms, and shock absorbers. Field-responsive fluids – a class of smart materials – are the preferred choice of energy absorbents because of their controllability. These fluids' stress response can be controlled by the application of a magnetic or electric field, in a closed loop. Their rheological properties – elasticity, plasticity, and viscosity – can be varied all the way from those of a liquid such as water to those of a hard solid. This work presents a simplified model to study the impulse response behavior of such fluids for use in recoil damping systems. The well-known Burgers equation, in conjunction with various visco-elastic constitutive models, is used to represent the fluid behavior. The Kelvin-Voigt, Upper Convected Maxwell (UCM), and Oldroyd-B constitutive models are implemented in this study. Using these models in a one-dimensional framework eliminates additional complexities due to geometry, pressure, body forces, and other source terms. Using a finite difference formulation to numerically solve the governing equation(s), the response to an initial impulse is studied. The disturbance is confined within the problem domain with no-inflow, no-outflow boundary conditions, and its decay characteristics are studied. Visco-elastic fluids typically involve a time-dependent stress relaxation, which gives rise to interesting behavior when subjected to an impulsive load. For particular values of viscous damping and elastic modulus, the fluid settles into a stable oscillatory state, absorbing and releasing energy without much decay. The simplified formulation enables a comprehensive study of different modes of system response by varying the relevant parameters. Using the insights gained from this study, extension to a more detailed multi-dimensional model is considered. Keywords: Burgers Equation, Impulse Response, Recoil Damping Systems, Visco-elastic Fluids
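A minimal sketch of the one-dimensional finite-difference approach is shown below: it marches a viscous Burgers-type equation forward from an initial impulse with zero-velocity (no-inflow, no-outflow) boundaries; the grid, time step, and viscosity are illustrative, and the visco-elastic constitutive coupling (Kelvin-Voigt, UCM, Oldroyd-B) used in the study is not reproduced here.

```python
import numpy as np

# Explicit finite-difference march of the viscous Burgers equation
#   u_t + u * u_x = nu * u_xx
# from an initial impulse with zero-velocity (no-inflow, no-outflow) boundaries.
# Parameters are illustrative only; the study's visco-elastic stress coupling
# (Kelvin-Voigt, UCM, Oldroyd-B) is not reproduced in this sketch.
nx_pts, nt, nu = 201, 2000, 0.05
dx = 1.0 / (nx_pts - 1)
dt = 0.2 * dx ** 2 / nu                 # small step for explicit stability
x = np.linspace(0.0, 1.0, nx_pts)
u = np.exp(-((x - 0.5) / 0.05) ** 2)    # localized impulse at the domain center

for _ in range(nt):
    un = u.copy()
    u[1:-1] = (un[1:-1]
               - dt * un[1:-1] * (un[2:] - un[:-2]) / (2 * dx)            # convection
               + nu * dt * (un[2:] - 2 * un[1:-1] + un[:-2]) / dx ** 2)   # diffusion
    u[0] = u[-1] = 0.0                  # confine the disturbance within the domain

print("peak amplitude after decay:", u.max())
```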
Procedia PDF Downloads 2972731 Nitrification and Denitrification Kinetic Parameters of a Mature Sanitary Landfill Leachate
Authors: Tânia F. C. V. Silva, Eloísa S. S. Vieira, João Pinto da Costa, Rui A. R. Boaventura, Vitor J. P. Vilar
Abstract:
Sanitary landfill leachates are characterized as a complex mixture of diverse organic and inorganic contaminants, which are usually removed by combining different treatment processes. Because of its simplicity, reliability, and high cost-effectiveness, and because of the high nitrogen content (mostly in the ammonium form) inherent in this type of effluent, the activated sludge biological process is almost always applied in leachate treatment plants (LTPs). The purpose of this work is to assess the effect of the main nitrification and denitrification variables on the biological removal of nitrogen from mature leachates. The leachate samples were collected after an aerated lagoon, at an LTP near Porto, and presented a high amount of dissolved organic carbon (1.0-1.3 g DOC/L) and ammonium nitrogen (1.1-1.7 g NH4+-N/L). The experiments were carried out in a 1-L lab-scale batch reactor, equipped with a pH, temperature, and dissolved oxygen (DO) control system, in order to determine the reaction kinetic constants under unchanging conditions. The nitrification reaction rate was evaluated while varying (i) the operating temperature (15, 20, 25 and 30ºC), (ii) the DO concentration interval (0.5-1.0, 1.0-2.0 and 2.0-4.0 mg/L), and (iii) the solution pH (not controlled, 7.5-8.5 and 6.5-7.5). At the beginning of most assays, it was verified that ammonium stripping occurred simultaneously with nitrification, reaching up to 37% removal of total dissolved nitrogen. The denitrification kinetic constants and the methanol consumption were calculated for different values of (i) the volatile suspended solids (VSS) content (25, 50 and 100 mL of centrifuged sludge in 1 L of solution), (ii) the pH interval (6.5-7.0, 7.5-8.0 and 8.5-9.0), and (iii) the temperature (15, 20, 25 and 30ºC), using previously nitrified effluent. The maximum nitrification rate obtained was 38±2 mg NH4+-N/h/g VSS (25ºC, 0.5-1.0 mg O2/L, pH not controlled), consuming 4.4±0.3 mg CaCO3/mg NH4+-N. The highest denitrification rate achieved was 19±1 mg (NO2--N+NO3--N)/h/g VSS (30ºC, 50 mL of sludge, and pH between 7.5 and 8.0), with a C/N consumption ratio of 1.1±0.1 mg CH3OH/mg (NO2--N+NO3--N) and an overall alkalinity production of 3.7±0.3 mg CaCO3/mg (NO2--N+NO3--N). The denitrification process was shown to be sensitive to all of the studied parameters, while the nitrification reaction did not suffer significant change when the DO content was changed. Keywords: mature sanitary landfill leachate, nitrogen removal, nitrification and denitrification parameters, lab-scale activated sludge biological reactor
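As a sketch of how a specific rate of this kind is obtained from batch data, the snippet below converts an ammonium decline measured over time into a specific nitrification rate per gram of VSS and the corresponding alkalinity demand; the measured values are illustrative placeholders, and only the alkalinity ratio (4.4 mg CaCO3 per mg NH4+-N) is taken from the results above.

```python
# Sketch: specific nitrification rate and alkalinity demand from batch assay data.
# Measured values below are illustrative placeholders, not the assay results;
# the alkalinity ratio (4.4 mg CaCO3 per mg NH4+-N) is the value reported above.

nh4_start_mg_l = 1500.0       # NH4+-N at the start of the assay (illustrative)
nh4_end_mg_l = 1200.0         # NH4+-N after the elapsed time (illustrative)
elapsed_h = 4.0               # assay duration, hours (illustrative)
vss_g_l = 2.0                 # volatile suspended solids (illustrative)

specific_rate = (nh4_start_mg_l - nh4_end_mg_l) / elapsed_h / vss_g_l
alkalinity_demand = (nh4_start_mg_l - nh4_end_mg_l) * 4.4   # mg CaCO3 per liter

print(f"specific nitrification rate: {specific_rate:.1f} mg NH4+-N/h/g VSS")
print(f"alkalinity consumed:         {alkalinity_demand:.0f} mg CaCO3/L")
```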
Procedia PDF Downloads 2792730 The Estimation Method of Stress Distribution for Beam Structures Using the Terrestrial Laser Scanning
Authors: Sang Wook Park, Jun Su Park, Byung Kwan Oh, Yousok Kim, Hyo Seon Park
Abstract:
This study proposes a method for estimating the stress distribution of beam structures based on TLS (terrestrial laser scanning). The main components of the method are the creation of lattices from the raw TLS data that satisfy a suitable condition and the application of CSSI (cubic smoothing spline interpolation) for estimating the stress distribution. Estimation of the stress distribution of a structural member, or of the whole structure, is one of the important factors in the safety evaluation of a structure. Existing sensors, including the ESG (electric strain gauge) and LVDT (linear variable differential transformer), are contact-type sensors that must be installed on the structural members, and they have various limitations, such as the need for a separate space in which network cables are installed and the difficulty of access for sensor installation in real buildings. To overcome these problems inherent in contact-type sensors, the TLS system, a form of LiDAR (light detection and ranging) that can measure the displacement of a target over a long range without the influence of the surrounding environment and can also capture the whole shape of a structure, has been applied to the field of structural health monitoring. An important characteristic of TLS measurement is the formation of point clouds, which contain many points, each with local coordinates. Point clouds are not linearly distributed but dispersed; thus, interpolation is vital for analyzing them. Through the formation of averaged lattices and the application of CSSI to the raw data, a method that can estimate the displacement of a simple beam was developed. The developed method can also be extended to calculate the strain and, finally, to estimate the stress distribution of a structural member. To verify the validity of the method, a loading test on a simple beam was conducted and measured with TLS. Through a comparison of the estimated stress and the reference stress, the validity of the method was confirmed. Keywords: structural health monitoring, terrestrial laser scanning, estimation of stress distribution, coordinate transformation, cubic smoothing spline interpolation
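A minimal sketch of the CSSI step is given below: a noisy deflection profile, standing in for the averaged TLS lattice, is fitted with a smoothing cubic spline whose second derivative gives the curvature, from which bending strain and stress follow for an assumed section; the beam geometry, material properties, and noise level are illustrative assumptions.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Sketch of the CSSI step: fit a smoothing cubic spline to a noisy deflection
# profile (standing in for the averaged TLS lattice), then recover curvature,
# bending strain, and stress. Geometry, load, and material values are assumptions.
L, E, h = 3.0, 200e9, 0.10                       # span (m), Young's modulus (Pa), depth (m)
x = np.linspace(0.0, L, 60)
true_deflection = -0.002 * np.sin(np.pi * x / L)                 # idealized shape
measured = true_deflection + np.random.normal(0, 5e-5, x.size)   # TLS-like noise

spline = UnivariateSpline(x, measured, k=3, s=len(x) * (5e-5) ** 2)  # cubic smoothing spline
curvature = spline.derivative(n=2)(x)            # w''(x) ~ curvature for small deflections
strain = -curvature * (h / 2.0)                  # bending strain at the extreme fiber
stress = E * strain                              # bending stress (Pa)

print("max estimated bending stress (MPa):", abs(stress).max() / 1e6)
```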
Procedia PDF Downloads 4352729 Economics of Precision Mechanization in Wine and Table Grape Production
Authors: Dean A. McCorkle, Ed W. Hellman, Rebekka M. Dudensing, Dan D. Hanselka
Abstract:
The motivation for this study centers on the labor- and cost-intensive nature of wine and table grape production in the U.S., and the potential opportunities for precision mechanization using robotics to augment those production tasks that are labor-intensive. The objectives of this study are to evaluate the economic viability of grape production in five U.S. states under current operating conditions, identify common production challenges and tasks that could be augmented with new technology, and quantify the maximum price for new technology that growers would be able to pay. Wine and table grape production is primed for precision mechanization technology, as it faces a variety of production and labor issues. Methodology: Using a grower panel process, this project includes the development of a representative wine grape vineyard in each of five states and a representative table grape vineyard in California. The panels provided production, budget, and financial information that is typical of vineyards in their area. Labor costs for various production tasks are of particular interest. Using the data from the representative budgets, 10-year projected financial statements have been developed for each representative vineyard and evaluated using a stochastic simulation modeling approach. Labor costs for selected vineyard production tasks were evaluated for the potential of the new precision mechanization technology being developed. These tasks were selected based on a variety of factors, including input from the panel members and the extent to which the development of new technology was deemed to be feasible. The net present value (NPV) of the labor cost over seven years for each production task was derived. This allowed for the calculation of a maximum price for new technology at which the NPV of labor costs would equal the NPV of purchasing, owning, and operating the new technology. Expected Results: The results from the stochastic model will show the projected financial health of each representative vineyard over the 2015-2024 timeframe. The investigators have developed a preliminary list of production tasks that have the potential for precision mechanization. For each task, the labor requirements, labor costs, and the maximum price for new technology will be presented and discussed. Together, these results will allow technology developers to focus and prioritize their research and development efforts for wine and table grape vineyards, and suggest opportunities to strengthen vineyard profitability and long-term viability using precision mechanization. Keywords: net present value, robotic technology, stochastic simulation, wine and table grapes
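The break-even logic described above can be sketched as follows: the NPV of seven years of labor cost for a task, net of the technology's operating cost, is taken as the maximum purchase price a grower could pay; the labor cost, operating cost, and discount rate are illustrative assumptions, not panel data.

```python
# Sketch of the break-even calculation: the maximum price a grower could pay for
# new technology equals the NPV of the labor cost it replaces over seven years,
# net of the technology's operating cost. All dollar figures and the discount
# rate are illustrative assumptions, not panel data.

def npv(cash_flows, rate):
    """Net present value of year-end cash flows (year 1, 2, ...)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

annual_labor_cost = 20000.0          # $/year for the task (illustrative)
annual_operating_cost = 2000.0       # $/year to run the technology (illustrative)
discount_rate = 0.06                 # illustrative
years = 7

labor_npv = npv([annual_labor_cost] * years, discount_rate)
max_purchase_price = labor_npv - npv([annual_operating_cost] * years, discount_rate)

print(f"NPV of labor cost over {years} years: ${labor_npv:,.0f}")
print(f"Maximum price for new technology:    ${max_purchase_price:,.0f}")
```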
Procedia PDF Downloads 2642728 Urban Livelihoods and Climate Change: Adaptation Strategies for Urban Poor in Douala, Cameroon
Authors: Agbortoko Manyigbe Ayuk Nkem, Eno Cynthia Osuh
Abstract:
This paper sets out to examine the relationship between climate change and urban livelihoods through a vulnerability assessment of the urban poor in Douala. Urban development in Douala places priority on industrial and city-centre development, with little focus on the urban poor in terms of housing units and areas of sustenance. With the high rate of urbanisation and increased land prices, the urban poor are forced to occupy marginal lands, which are mainly wetlands, wastelands, and abandoned neighbourhoods prone to natural hazards. Due to climate change and its effects, these wetlands are constantly flooded, thereby destroying homes, properties, and crops. Also, most of these urban dwellers have found solace in urban agriculture as a means of survival. However, since agriculture in tropical regions like Cameroon depends largely on seasonal rainfall, the changes in the rainfall pattern have led to mistimed periods for crop planting and a huge wastage of resources, as rainfall becomes very unreliable with increased temperature levels. Data for the study were obtained from both primary and secondary sources. Secondary sources included published materials related to climate change and vulnerability. Primary data were obtained through focus-group discussions with some urban farmers, while a stratified sampling of residents within the marginal lands was carried out. Each stratum was randomly sampled to obtain information on different stressors related to climate change and their effect on livelihoods. The findings showed that the high rate of rural-urban migration into Douala has increased the prevalence of the urban poor and their vulnerability to climate change, as evidenced by their constant fight against flooding from unexpected sea-level rise and by the irregular rainfall pattern affecting urban agriculture. The study also showed that women were the most vulnerable, as they depended solely on urban agriculture and its related activities, such as retailing agricultural products in different urban markets, which serves as their main source of income for meeting the family's basic needs. Adaptation measures include the constant use of sandbags, raised makeshift structures, cultivation along streams, and planting only after evidence of consistent rainfall, which has become paramount for sustainability. Keywords: adaptation, Douala, Cameroon, climate change, development, livelihood, vulnerability
Procedia PDF Downloads 298